Editors' Choice

Google Translate Is Nearing Human-Level Accuracy

Steve Porter / November 8, 2016

On April 28, 2006, Google took its first step into machine translation with the launch of Google Translate. The initial algorithm was phrase-based: Phrase-Based Machine Translation (PBMT) breaks the subject sentence into individual words and phrases, translates each piece independently, and assembles the output from those fragments. Because each fragment is handled in isolation, PBMT tends to lose the context that ties a sentence together, so its output often reads as disjointed.
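
To get a feel for why translating pieces in isolation causes trouble, here's a toy phrase-based translator in Python. The phrase table and its English-to-French entries are invented for illustration; a real PBMT system scores millions of learned phrase pairs rather than doing one greedy lookup.

```python
# A toy phrase table; real PBMT systems learn millions of weighted entries.
phrase_table = {
    "the cat": "le chat",
    "sits on": "est assis sur",
    "the mat": "le tapis",
}

def pbmt_translate(sentence):
    """Greedily match the longest known phrase, translate it in isolation,
    and glue the pieces together -- no notion of the sentence as a whole."""
    words = sentence.lower().split()
    out, i = [], 0
    while i < len(words):
        # Try the longest span first, shrinking until a phrase matches.
        for j in range(len(words), i, -1):
            phrase = " ".join(words[i:j])
            if phrase in phrase_table:
                out.append(phrase_table[phrase])
                i = j
                break
        else:
            out.append(words[i])  # unknown word: pass it through untranslated
            i += 1
    return " ".join(out)

print(pbmt_translate("The cat sits on the mat"))  # -> "le chat est assis sur le tapis"
```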

To address these shortcomings of PBMT, Neural Machine Translation (NMT) emerged as an approach that uses Recurrent Neural Networks (RNNs) to consider entire input and output sentences. Two RNNs learn the mapping between the input sentence in one language and the output sentence in the other. Unlike PBMT, NMT translates based on the subject sentence as a whole.
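
To make the encoder-decoder idea concrete, here's a minimal sequence-to-sequence sketch in Python with NumPy. The vocabulary size, hidden dimension, and random weights are illustrative stand-ins for what a real system learns during training; this is a sketch of the general NMT idea, not Google's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes; production NMT systems use vastly larger vocabularies and states.
VOCAB, HIDDEN = 10, 8

# Random weights stand in for parameters learned during training.
E = rng.normal(size=(VOCAB, HIDDEN))       # word embeddings
W_enc = rng.normal(size=(HIDDEN, HIDDEN))  # encoder recurrence
W_dec = rng.normal(size=(HIDDEN, HIDDEN))  # decoder recurrence
W_out = rng.normal(size=(HIDDEN, VOCAB))   # projection to output vocabulary

def encode(src_ids):
    """Encoder RNN: read source word ids and return the final hidden state."""
    h = np.zeros(HIDDEN)
    for i in src_ids:
        h = np.tanh(E[i] + W_enc @ h)
    return h

def decode(h, max_len=5):
    """Decoder RNN: greedily emit target word ids from the encoder state."""
    out = []
    for _ in range(max_len):
        h = np.tanh(W_dec @ h)
        out.append(int(np.argmax(h @ W_out)))  # most likely next word
    return out

# With untrained weights the output is meaningless, but the data flow --
# whole sentence in, whole sentence out -- mirrors the NMT idea above.
print(decode(encode([3, 1, 4])))
```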

NMT, however, has its weaknesses. To be useful in a public tool like Google Translate, a system needs to be both fast and accurate, and early NMT systems fell short on both counts. They also struggled with very large datasets and with sentences containing rare words.

With the launch of the Google Neural Machine Translation (GNMT) system, Google has been able to overcome the traditional NMT shortcomings and deliver a significant jump in translation accuracy over PBMT systems. In Google's evaluations, GNMT reduced translation errors by roughly 60% across several language pairs, using sample sentences drawn from news sites and Wikipedia and rating the results against those of human translators.

[Figure: Translation quality of PBMT, GNMT, and human translations compared across several language pairs]

Source: googleblog

As you can see above, GNMT shows improved translation quality over PBMT. For pairs such as English to Spanish and French to English, GNMT's performance is nearing human levels of accuracy. Even with Chinese, an extremely difficult language to pair with English, GNMT delivers a clear boost in quality compared to PBMT.

How Does It Work?

The following example illustrates a Chinese-to-English translation.

[Animation: The GNMT model translating a Chinese sentence into English, step by step]

Source: googleblog

In the simplest terms, GNMT uses three components: an encoder, a decoder, and an attention layer that sits between the encoding and decoding steps. When GNMT receives an input sentence, the encoder reads the words one at a time and turns each into a vector of numbers representing that word in context. Once the whole sentence has been read, decoding begins: as the decoder produces each translated word, the attention layer tells it how much weight to give each of the encoded word vectors. The real process is far more complex than this sketch, and if you want to learn more, take a look at the technical report Google released alongside the announcement.
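
The attention step is easiest to see with numbers. The sketch below uses simple dot-product attention over made-up encoder vectors; GNMT's actual attention network is more elaborate, so treat this purely as an illustration of the weighting idea.

```python
import numpy as np

def softmax(x):
    """Normalize scores into weights that are positive and sum to 1."""
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical encoder output: one vector per source word (3 words, dim 4).
encoder_states = np.array([
    [0.1,  0.3, -0.2, 0.5],   # vector for source word 1
    [0.7, -0.1,  0.4, 0.0],   # vector for source word 2
    [-0.3, 0.2,  0.6, 0.1],   # vector for source word 3
])

# Decoder hidden state at the moment it is producing the next target word.
decoder_state = np.array([0.2, 0.4, -0.1, 0.3])

# Score each source word against the decoder state (dot-product attention)...
scores = encoder_states @ decoder_state
# ...turn the scores into attention weights...
weights = softmax(scores)
# ...and form the context vector the decoder actually conditions on.
context = weights @ encoder_states

print("attention weights:", weights.round(3))  # how much each source word matters
print("context vector:  ", context.round(3))
```

Each weight effectively says "pay this much attention to this source word right now," and the weights shift as each new target word is produced.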

The announcement of GNMT is also a big step for translating from Chinese to English. Google Translate for mobile and web is already using GNMT for this pair, which means no more PBMT translations for Chinese to English. That pair alone accounts for roughly 18 million translations each day. Google Translate supports more than 10,000 language pairs in total, and GNMT will be rolled out to the rest in the future.


Even so, GNMT is not perfect and can still make errors that would make a capable human translator cringe. While it’s a giant leap in the right direction, Google is by no means done.