Google’s AI Has Developed its Own Language



It is fair to assume that the majority of Internet users have come across Google Translate at least once, in order to translate text from one language to another. Now, the technology behind this software is able to identify shared structure hidden between languages, developing what is referred to as an ‘interlingua’.

According to an article published by WIRED, Google switched on its Google Neural Machine Translation system in September to automatically improve how it translates languages. By looking at entire sentences, rather than individual phrases or words, the machine learning system analyses and makes sense of languages.

This capability emerged after months of testing, when researchers saw that the system was able to translate between language pairs it had never been explicitly trained on.

“An example of this would be translations between Korean and Japanese where Korean⇄Japanese examples were not shown to the system,” Mike Schuster from Google Brain wrote in a blogpost.

Having been trained on Portuguese-to-English and English-to-Spanish examples, the system proved able to produce reasonable translations directly between Portuguese and Spanish, despite never having been taught that pair.
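The distinction the researchers are drawing can be pictured with a toy sketch (the language pairs below mirror the article's example; the function and data are purely illustrative, not Google's actual setup): a translation request is "zero-shot" when that exact pair never appeared in training.

```python
# Toy illustration of zero-shot translation (hypothetical data, not Google's real system).
# The model is trained on some language pairs, then asked about an unseen pair.

trained_pairs = {("pt", "en"), ("en", "es")}  # pairs shown during training

def is_zero_shot(src: str, tgt: str) -> bool:
    """A request is 'zero-shot' when that exact pair was never seen in training."""
    return (src, tgt) not in trained_pairs

print(is_zero_shot("pt", "en"))  # False: trained on this pair directly
print(is_zero_shot("pt", "es"))  # True: the system must bridge via what it learned elsewhere
```

The surprising finding reported here is that the single multilingual model handles the zero-shot case at all, rather than failing outright.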

“To our knowledge, this is the first demonstration of true multilingual zero-shot translation,” a research paper published alongside the blog said. To make the system more accurate, the computer scientists then fed it additional data about the languages.

The revolutionary news is not that the AI can learn to translate languages without being shown examples beforehand, but rather that it appears to create its own ‘language’. “Visual interpretation of the results shows that these models learn a form of interlingua representation for the multilingual model between all involved language pairs,” the paper reads.

“Using a 3-dimensional representation of internal network data, we were able to take a peek into the system as it translated a set of sentences between all possible pairs of the Japanese, Korean, and English languages,” the team’s blogpost read.

The data within the network allowed the team to interpret that the neural network was “encoding something” about the semantics of a sentence rather than comparing phrase-to-phrase translations, according to WIRED.
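The idea that the network is "encoding something" about meaning, rather than memorising phrase-to-phrase mappings, can be illustrated with toy vectors (the numbers below are invented for illustration; in the real system such representations come from the translation model's internal layers): sentences with the same meaning in different languages sit closer together than sentences with unrelated meanings.

```python
import math

# Hypothetical internal representations (made-up numbers, for illustration only).
vectors = {
    "en: the cat sleeps":    [0.90, 0.10, 0.20],
    "ja: 猫が眠る":           [0.85, 0.15, 0.25],  # same meaning, different language
    "en: stock prices fell": [0.10, 0.90, 0.70],  # unrelated meaning
}

def cosine(a, b):
    """Cosine similarity: 1.0 means pointing the same way, near 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

same_meaning = cosine(vectors["en: the cat sleeps"], vectors["ja: 猫が眠る"])
diff_meaning = cosine(vectors["en: the cat sleeps"], vectors["en: stock prices fell"])
print(same_meaning > diff_meaning)  # True in this toy example: meaning, not language, clusters
```

This clustering-by-meaning is what the team's 3-dimensional visualisation made visible across Japanese, Korean and English sentence pairs.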

This article was first published at:

Photo Credit: Google
