Found in Translation
Found in Translation is an interactive, immersive exhibition installation. Using their own spoken sentences, visitors viscerally experience the process of machine translation. Visualizations show how the machine learning model clusters words from different languages by semantic similarity, and translations are presented typographically and aurally across 24 languages.
The Challenge
The team was invited to create an installation for the exhibition Understanding Misunderstanding at 21_21 DESIGN SIGHT in Tokyo.
On one hand, the installation should convey the magic of Google Translate and how it works today. On the other, it should present recent advances in machine learning research on translation: training on data sets from many languages, rather than just two, actually improves translation quality across all of them. These findings were to be communicated in an engaging, interactive way in the exhibition.
Project Vision
Once an answer is spoken into the microphone, a visualization shows the multilingual machine learning model that is used for translation. Word by word and sentence by sentence, it becomes apparent which words from which languages are clustered together.
The visualizations travel across the entire room as a spatialized arrangement of small diagrams. Each diagram animates the text version of the translation. The back panel provides background information on the technology behind the installation, explaining multilingual machine learning models, transfer learning, and other findings by machine translation researchers.
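To make the clustering idea concrete, the sketch below maps words from several languages into a shared multilingual embedding space and projects them to two dimensions, so semantically similar words land near each other regardless of language. The model name and libraries (sentence-transformers, scikit-learn) are illustrative assumptions, not the installation's actual pipeline.

```python
# Minimal sketch: embed words from different languages in one shared
# multilingual vector space and project them to 2D. Semantically similar
# words end up close together, which is the effect the installation
# visualizes. Model choice here is an assumption for illustration only.
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import PCA

phrases = {
    "en": ["dog", "cat", "good morning"],
    "ja": ["犬", "猫", "おはよう"],
    "de": ["Hund", "Katze", "guten Morgen"],
}

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

labels, texts = [], []
for lang, items in phrases.items():
    for text in items:
        labels.append(f"{lang}:{text}")
        texts.append(text)

embeddings = model.encode(texts)                          # one vector per word
points = PCA(n_components=2).fit_transform(embeddings)   # flatten to 2D layout

for label, (x, y) in zip(labels, points):
    print(f"{label:>18}  ({x:+.2f}, {y:+.2f})")
```

Plotting these coordinates shows "dog", "犬", and "Hund" clustering together while "good morning", "おはよう", and "guten Morgen" form a separate group, which is the kind of structure the room-scale diagrams make visible.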
Design + Execution
The solution was to distribute information across interactivity, time, and spatial composition. TheGreenEyl created a room with 24 screen panels, 24 speakers, and a microphone at its center.
When visitors enter the room, they are asked a question that they can answer verbally. As their sentence is being translated, they see the entire data set on a central screen, with language-pair-specific visualizations. These then resolve into the translations, displayed typographically across the different languages and writing systems, while each speaker plays back a voice for its sentence. Because viewers get a sense of the underlying data model, they understand which words and which languages sit closer together than others. They can also try out different sentences for further comparison. In addition, a text panel at the back of the gallery explains some of the underlying concepts.
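The interaction loop itself can be sketched in a few lines: take one recognized sentence, translate it into each target language, show the text, and synthesize a voice for each speaker. The Cloud Translation client and gTTS below are stand-ins chosen for illustration; the installation's actual services and its full set of 24 languages are not specified here.

```python
# Simplified sketch of the loop behind the installation: one spoken
# sentence fans out into many languages as text and synthesized audio.
# Services and the language subset are assumptions for illustration.
from google.cloud import translate_v2 as translate
from gtts import gTTS

TARGET_LANGUAGES = ["ja", "de", "fr", "es", "ko", "ar"]  # subset of the 24

def translate_and_speak(sentence: str, source_lang: str = "en") -> None:
    client = translate.Client()
    for lang in TARGET_LANGUAGES:
        result = client.translate(
            sentence, source_language=source_lang, target_language=lang
        )
        text = result["translatedText"]
        print(f"[{lang}] {text}")                          # typographic display
        gTTS(text=text, lang=lang).save(f"voice_{lang}.mp3")  # per-speaker audio

translate_and_speak("Where are you from?")
```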
Project Details
Design Team
Richard The (creative direction, concept, visual design)
Frédéric Eyl (concept)
Andreas Schmelas (software design)
Marian Mentrup (sound design)
Pam Anantrungroj (spatial design)
Calen Chung (visual design)
Collaborators
Google Creative Lab, Dominick Chen
Maco Film, Luftzug
Photo Credits
Taiyo Watanabe
Open Date
October 2020