AI and VR Will Revolutionize Communication at Events

Google’s Translatotron and systems like it will make simultaneous interpretation more available, while virtual reality will no longer be just for games.

The latest artificial intelligence innovation from Google could make translating your conference speakers cheaper, more accurate, and more personal. Nicknamed Translatotron, the new AI-powered system is still in testing but so far appears able to provide simultaneous translation in a close approximation of the speaker’s own voice, retaining intonation and emphasis. Google’s Translatotron blog post includes audio examples and an explanation of how the system works.

In the near future, meeting planners could use such a system to provide less costly simultaneous translation at international events that require multiple languages across several sessions. Human interpreters are typically restricted to two hours of simultaneous translation at a time, but artificial intelligence does not tire, and one program that works in multiple languages will ultimately be cheaper than several humans. Attendees who need translation would wear a headset, just as they do with a human interpreter; add a microphone, and the same software can interpret their questions for the speaker at the end of the session. Because the system acts like a personal interpreter, attendees could also use it at networking events and in small breakout sessions.

At the moment, the system faces the same challenge as human interpreters: learning the specialized language of a particular field. But once a program has been optimized for, say, aerospace engineering, it is available for as many events as required, its expertise is never lost to retirement, and it can pick up new terms and examples at each conference.

Sign Language

There are already several technologies that could help provide sign language interpretation for deaf attendees; one uses Microsoft HoloLens goggles to show deaf users a virtual figure that signs. But adaptations of this type of system could solve one of the main issues deaf people encounter when presentations are signed: to see the signing, audience members must take their eyes off the speaker and any presentation materials. VR technology already has the capability to overlay signing hands, visible only to attendees wearing goggles, alongside a conference speaker, whether those “hands” belong to a human ASL interpreter or to a software program.

Until now, VR and its sister technology, augmented reality, have mainly been used for entertainment or marketing, providing immersive environments for gaming or showing products to potential customers. But as applications such as HoloHear and sign language translation apps such as Augmented Reality Sign Language improve and become scalable, the meetings industry will be able to provide a more inclusive environment and a better experience for differently-abled attendees.
