Google's Transformer Decodes Machine Translation's Problem

By CIOReview | Friday, December 29, 2017

Google has recently explained a central problem in machine translation, along with its solution, in a post on its research blog. As the company's natural language processing researchers describe it, translation models tend to translate word by word, which can lead to severe errors.

Jakob Uszkoreit of Google's natural language team illustrates the problem with the word 'bank,' used in two different sentences that mean something different in each. In the first sentence, 'bank' refers to a commercial bank; in the second, it refers to a river bank. An algorithm may pick the wrong meaning because it cannot know which one is correct until it reaches the end of the sentence. This sort of ambiguity turns up everywhere once one starts looking for it.
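To make that failure mode concrete, here is a toy sketch (not Google's code, with invented vectors and sentences) of why a context-free, word-by-word lookup cannot tell the two senses of 'bank' apart:

```python
# A context-free embedding table assigns "bank" one vector no matter
# which sentence it appears in, so a word-by-word model has no way
# to distinguish the two senses.
embedding = {
    "bank":  [0.2, 0.7],   # single vector for every occurrence
    "river": [0.9, 0.1],
    "money": [0.1, 0.8],
}

sentence_a = "I deposited money at the bank".split()
sentence_b = "we rested on the river bank".split()

vec_a = embedding["bank"]  # "bank" in the financial sense
vec_b = embedding["bank"]  # "bank" in the riverside sense
print(vec_a == vec_b)      # True: identical representation for both senses
```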

A human could simply rewrite the sentence, but that is not an option for a translation system. Nor would it be very efficient to retrofit neural networks to re-interpret the full sentence just to check whether something has gone wrong.

Google's solution, the Transformer, compares each word in a sentence to every other word to see whether any of them influence one another, for instance whether a word like 'bank' takes on a different meaning. It performs the same comparison again as it constructs the translated sentence.
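As a rough illustration of that all-pairs comparison, below is a minimal self-attention sketch in NumPy. It is a simplification, not Google's implementation: the actual Transformer uses learned query, key, and value projections and multiple attention heads, which are omitted here.

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over a sentence.

    X has one row per word. Every word is compared to every other
    word; row i of the returned weight matrix says how strongly
    word i attends to each word in the sentence.
    """
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                   # all-pairs comparison
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # softmax per row
    return weights @ X, weights                     # context-mixed vectors, scores

# Four stand-in word vectors (random, in place of learned embeddings)
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
contextual, weights = self_attention(X)
print(weights.round(2))  # each row sums to 1
```

Because every output vector is a weighted mix of all the word vectors in the sentence, the representation of an ambiguous word like 'bank' is shaped by its neighbors rather than fixed in advance.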

A side effect of Google's approach is that it offers a window into the network's reasoning: the Transformer assigns each word a score relating it to every other word, so one can inspect which words the model judged relevant to which.
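Concretely, those per-word scores can be read off directly. The following is a hypothetical example (invented scores, not Google's output) of inspecting the attention row for 'bank' to see that the model tied it to 'river':

```python
import numpy as np

def explain(i, words, weights, top_k=3):
    """Print the words position i attends to most, given an
    attention-score matrix whose rows each sum to 1."""
    for j in np.argsort(weights[i])[::-1][:top_k]:
        print(f"{words[i]!r} -> {words[j]!r}: {weights[i, j]:.2f}")

# Invented scores for a toy sentence; in practice these would come
# from a trained model (e.g. the self_attention sketch above).
words = ["the", "river", "bank", "flooded"]
weights = np.array([
    [0.40, 0.20, 0.20, 0.20],
    [0.10, 0.40, 0.40, 0.10],
    [0.05, 0.60, 0.25, 0.10],   # "bank" attends most to "river"
    [0.10, 0.30, 0.30, 0.30],
])
explain(2, words, weights)
```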