Highlights

  • The social media giant will roll out the AI project in two parts: ‘No Language Left Behind’ and a ‘Universal Speech Translator.’
  • Meta aims to deploy these technologies in an app for future wearable devices and in its VR and AR projects.

During Meta’s latest ‘Inside the Lab: Building for the Metaverse with AI’ livestream event, CEO Mark Zuckerberg announced that the company’s research division is working on a “universal speech translation system” that will let people worldwide who speak different languages communicate with one another instantaneously. With this, the company aims to press ahead with its unwavering vision for the future, the Metaverse.

“The big goal here is to build a universal model that can incorporate knowledge across all modalities… all the information that is captured through rich sensors,” Zuckerberg said. “This will enable a vast scale of predictions, decisions, and generation, as well as whole new architectures, training methods, and algorithms that can learn from a vast and diverse range of different inputs.”

“This is going to be especially important when people begin teleporting across virtual worlds and experiencing things with people from different backgrounds,” he continued. “Now, we have the chance to improve the internet and set a new standard where we can all communicate with one another, no matter what language we speak or where we come from. And if we get this right, this is just one example of how AI can help bring people together on a global scale.”

Zuckerberg further stated that Meta is working on the project in two parts. The first initiative, called ‘No Language Left Behind,’ will build AI models capable of translating languages using fewer training examples. “We are creating a single model that can translate hundreds of languages with state-of-the-art results in most language pairs, everything from Asturian to Luganda to Urdu,” he explained.

As the project name suggests, it aims to close the gap between the advanced translation technologies available for major languages like English, Mandarin, and Spanish and the limited options available for less widely spoken languages. That second category still covers billions of people worldwide whom Meta doesn’t want to leave behind.

The second part of the project involves developing a Universal Speech Translator: a real-time system that translates speech directly from one language into another without relying on a written intermediary. This will be particularly useful for languages that lack a standard writing system.

“Eliminating language barriers would be profound, making it possible for billions of people to access information online in their native or preferred language,” Meta wrote in a blog post. “Advances in machine translation won’t just help those people who don’t speak one of the languages that dominate the internet today; they’ll also fundamentally change the way people in the world connect and share ideas.”

Once the technologies are ready to deploy, Meta envisions incorporating them into an app for future wearable devices and into its VR and AR projects.

Google and Apple are among the other tech giants developing similar technologies. Meta thus faces the challenge of building a system that is more reliable and user-friendly than its competitors’ while maintaining the trust of users whose conversations will flow through its servers.