Meta develops AI translation system for a primarily spoken language

Meta has developed a new AI translator that converts speech in Hokkien into spoken English, in an effort to break down language barriers. Hokkien, a variety of Southern Min Chinese, is a primarily spoken language with no standard writing system, which makes building translation tools for it a huge challenge.
The open-source translation system, part of Meta’s Universal Speech Translator (UST) project, represents a breakthrough on that front. The company, formerly known as Facebook, hopes that this work, along with other AI methods under development, will eventually enable real-time speech-to-speech translation across hundreds of languages, including primarily spoken ones.
Languages like Hokkien are difficult to translate because machine translation tools require vast amounts of written text to train on, and such languages lack a widely used writing system. To work around this problem, Meta used Mandarin – another Chinese language, but one with an ample supply of readily available training data – as an intermediary between English and Hokkien.
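To make the pivot idea concrete, the sketch below shows one way an intermediary language could be wired into a cascaded speech-to-speech pipeline. The article does not spell out exactly where Mandarin sits in Meta’s system, so this is only an illustration of the general approach; the function names are hypothetical placeholders, not Meta’s actual models or API.

```python
# Minimal conceptual sketch of pivot ("cascaded") speech translation.
# All three component functions are hypothetical placeholders standing in
# for pre-trained models; they are not Meta's published interfaces.

def hokkien_speech_to_mandarin_text(audio: bytes) -> str:
    """Placeholder: maps Hokkien audio to Mandarin text (the pivot language)."""
    raise NotImplementedError

def mandarin_text_to_english_text(text: str) -> str:
    """Placeholder: text-to-text machine translation from Mandarin to English."""
    raise NotImplementedError

def english_text_to_speech(text: str) -> bytes:
    """Placeholder: synthesizes spoken English from English text."""
    raise NotImplementedError

def translate_hokkien_to_english_speech(audio: bytes) -> bytes:
    # Hokkien audio -> Mandarin text -> English text -> English audio.
    mandarin = hokkien_speech_to_mandarin_text(audio)
    english = mandarin_text_to_english_text(mandarin)
    return english_text_to_speech(english)
```

The value of the pivot is that the Mandarin-to-English step can be trained on abundant existing data, so only the Hokkien-to-Mandarin leg has to cope with the scarcity of Hokkien resources.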

Researchers on the project also worked closely with native Hokkien speakers to verify the accuracy of the AI translation models. Because Hokkien has no standard written form to transcribe into, a conventional speech-to-text pipeline was not an option, so Meta focused on speech-to-speech translation instead.
While the model is still a work in progress, it already allows a Hokkien speaker to converse with an English speaker. The catch is that, in its current state, it can translate only one full sentence at a time. Meta is encouraging other developers to build on its work, releasing technologies like SpeechMatrix to assist them in creating their own speech-to-speech translation systems. The company has also open-sourced its Hokkien translation models along with the associated research papers.
