AI Sign Language Anchor Will Serve the Winter Olympics 2022

Nexdata
Dec 16, 2021

The Beijing 2022 Winter Olympics will kick off on Feb. 4, 2022. Among the audience of the event is a special group who can hardly hear the sound of the games: according to statistics, nearly 28 million people in China are hearing impaired, and about 430 million people worldwide suffer from hearing loss.

China Central Television (CCTV) announced the launch of its first AI sign language host on Nov. 24. The digital female host will interpret events into sign language throughout the 16-day Beijing 2022 Winter Olympics.

The sign language anchor is equipped with AI technologies, including speech recognition and natural language understanding, which together form an accurate sign language translation engine able to translate text, audio, and video into sign language. Its virtual figure is animated by a natural motion engine developed specifically for sign language. These technologies give the AI sign language anchor strong expressive skills and an accurate, coherent sign language presentation.
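The article does not describe how the engine is built internally, so the sketch below is only a rough illustration of how such a speech-to-sign pipeline is commonly structured. Every name in it (transcribe, text_to_gloss, gloss_to_keyframes, Keyframe) is a hypothetical placeholder, not part of the CCTV system.

```python
# Hypothetical sketch of a speech-to-sign-language pipeline.
# All names below are illustrative placeholders, not the actual broadcast engine.

from dataclasses import dataclass
from typing import List


@dataclass
class Keyframe:
    timestamp_ms: int          # when the pose should be reached
    joint_angles: List[float]  # skeletal pose driving the virtual anchor


def transcribe(audio: bytes) -> str:
    """Speech recognition: broadcast audio -> Chinese text."""
    raise NotImplementedError("plug in an ASR model here")


def text_to_gloss(text: str) -> List[str]:
    """Natural language understanding: text -> ordered sign-language glosses.

    Sign language has its own grammar, so this is a translation step,
    not a word-for-word lookup.
    """
    raise NotImplementedError("plug in a translation model here")


def gloss_to_keyframes(glosses: List[str]) -> List[Keyframe]:
    """Motion engine: glosses -> smooth, coherent pose keyframes."""
    raise NotImplementedError("plug in a motion-generation model here")


def broadcast_to_sign_language(audio: bytes) -> List[Keyframe]:
    """End-to-end: live commentary audio -> animation data for the avatar."""
    text = transcribe(audio)
    glosses = text_to_gloss(text)
    return gloss_to_keyframes(glosses)
```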

The launch of the CCTV AI sign language anchor is an example of artificial intelligence giving back to people, a moment of warmth brought by technological progress. As it continues to develop, AI technology is also becoming warmer.

As a world-leading AI data service provider, Datatang has developed a series of datasets that can quickly improve the expressive ability of AI sign language anchors and help more AI applications serve people.

Sign Language Gestures Recognition Data

Learning the National Sign Language Dictionary is not enough if AI anchors are to express sign language accurately and naturally. To shed the mechanical feel and come closer to the way real people sign, an AI anchor needs to learn from a large volume of sign language data recorded from real signers.
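As a rough illustration of how such recorded gesture data might be consumed, the sketch below (in Python, assuming PyTorch and a hypothetical dataset of 2D hand and body keypoint sequences) classifies a short clip into one sign from a vocabulary. The keypoint count, vocabulary size, and model choice are all assumptions for illustration, not Datatang's dataset format.

```python
# Minimal sketch: sequences of tracked keypoints -> predicted sign.
# Dataset layout and model are assumptions for illustration only.

import torch
from torch import nn


class SignClassifier(nn.Module):
    def __init__(self, num_keypoints: int = 42, num_signs: int = 500):
        super().__init__()
        # Each frame contributes (x, y) coordinates for every tracked keypoint.
        self.encoder = nn.LSTM(input_size=num_keypoints * 2,
                               hidden_size=256, batch_first=True)
        self.head = nn.Linear(256, num_signs)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, num_keypoints * 2)
        outputs, _ = self.encoder(frames)
        return self.head(outputs[:, -1])  # classify from the last time step


# Toy training step on random data, just to show the shape of the loop.
model = SignClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

frames = torch.randn(8, 60, 84)        # 8 clips, 60 frames each
labels = torch.randint(0, 500, (8,))   # which sign each clip shows
loss = loss_fn(model(frames), labels)
loss.backward()
optimizer.step()
```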

Lip Sync Multimodal Video Data

In addition to accurate sign language, the AI anchor's lip shape also needs to be accurate. If the anchor is not specifically trained on lip synchronization, its lip movements will fall out of step with its voice during a live broadcast.
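One concrete, if minimal, way to see what this requires of a lip-sync corpus is a basic data-quality check: the audio track and the video frames of each clip must cover the same time span, or the mouth shapes drift away from the voice. The helper below is a hypothetical sketch of such a check; the parameter names and the 40 ms tolerance are assumptions for illustration.

```python
# Hypothetical quality check for a lip-sync multimodal clip:
# audio and video durations must agree within a small tolerance.

def is_lip_sync_usable(num_video_frames: int, fps: float,
                       num_audio_samples: int, sample_rate: int,
                       tolerance_s: float = 0.04) -> bool:
    """Return True if the audio and video durations match within tolerance."""
    video_duration = num_video_frames / fps
    audio_duration = num_audio_samples / sample_rate
    return abs(video_duration - audio_duration) <= tolerance_s


# Example: a 10-second clip at 25 fps with 16 kHz audio.
print(is_lip_sync_usable(250, 25.0, 160_000, 16_000))  # True
```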

Speech Synthesis Corpus

If the speech synthesized by the AI anchor sounds closer to that of a real person and can express rich emotion, the audience will feel that it is not a cold machine but an expressive “person”.

With continued innovation in AI technology and 3D virtual scenes, the working space of AI anchors will keep expanding. Perhaps soon AI anchors will step out of the studio to better meet people's diverse needs and realize the vision of technology changing lives.

End

If you need data services, please feel free to contact us: info@datatang.com
