Facebook says its prototype translation technique is nine times faster than rivals

Translation has always been one of the most important applications of Facebook’s AI research. After all, the social network’s overarching goal is to “make the world more open and connected,” so the language barrier is an obvious obstacle. To help leap this hurdle, Facebook today announced a novel method of machine learning translation that the company says is nine times faster than rival systems.

The work exists solely as research at the moment — it hasn’t yet been implemented in a Facebook product. But Facebook AI engineers Michael Auli and David Grangier tell The Verge this will likely happen further down the line. The social network already uses AI for things like automatically translating status updates to other languages, but making the transition from lab to app always requires more work.

“We’re currently talking with a product team to make this work in a Facebook environment,” says Grangier. “There are differences when moving from academic data to real environments in terms of language. The academic data is news-type data; while conversation on Facebook is much more colloquial.” Facebook has previously said it’s building a glossary of slang to make this process easier.

Auli and Grangier, though, are simply excited that Facebook’s “novel” approach to machine translation is paying off. They explain that usually, AI-powered translation relies on what are called recurrent neural networks, or RNNs, whereas this new research leverages convolutional neural networks, or CNNs, instead.

RNNs analyze data sequentially, working left to right through a sentence in order to translate it word by word. CNNs, by comparison, look at different aspects of the data simultaneously — a style of computation that is much better suited to the GPU hardware used to train most contemporary neural networks. GPUs were originally designed to render graphics in video games, and are best at making lots of small calculations in parallel.
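The difference can be sketched in a few lines of toy code. This is an illustration of the sequential-versus-parallel distinction only, not Facebook's actual model: the `rnn_style` function must compute each step from the previous one, while every window in `cnn_style` is independent, so all of them could run in parallel on a GPU.

```python
def rnn_style(embeddings):
    """Process tokens one at a time; each step depends on the previous state."""
    state = 0.0
    states = []
    for x in embeddings:
        state = 0.5 * state + x  # new state requires the old one first
        states.append(state)
    return states

def cnn_style(embeddings, width=3):
    """Each output looks only at a fixed window of the input, so every
    position can be computed independently (and hence in parallel)."""
    pad = width // 2
    padded = [0.0] * pad + list(embeddings) + [0.0] * pad
    return [sum(padded[i:i + width]) / width
            for i in range(len(embeddings))]

sentence = [1.0, 2.0, 3.0, 4.0]  # stand-in for word embeddings
print(rnn_style(sentence))
print(cnn_style(sentence))
```

In the RNN loop, step *n* cannot start until step *n − 1* has finished; in the CNN version, the list comprehension has no such dependency between positions.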

So translating with CNNs means tackling the problem more holistically, say Auli and Grangier, and examining the higher-level structure of sentences. “The [CNNs] build a logical structure, a bit like linguistics, on top of the text,” says Auli.
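That layered, "a bit like linguistics" structure comes from stacking convolutional layers: each layer widens the span of the sentence that a single output can see, so lower layers capture short phrases and higher layers capture whole clauses. A minimal sketch of the arithmetic (illustrative only, not tied to Facebook's exact architecture):

```python
def receptive_field(layers, kernel_width=3):
    """Span of input tokens visible to one output after stacking
    `layers` convolutional layers of the given kernel width."""
    return kernel_width + (layers - 1) * (kernel_width - 1)

# Each added layer extends the visible span by (kernel_width - 1) tokens.
for depth in (1, 2, 4, 8):
    print(depth, receptive_field(depth))
```

With a width-3 kernel, one layer sees 3 words, four layers see 9, and eight layers see 17 — enough to relate words across an entire sentence while still computing every position in parallel.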

As to why this approach isn’t used more widely, Grangier notes that the AI community has already sunk a lot of effort into using RNNs for translation, which people were happy to improve upon. He says: “The short answer is that people just hadn’t invested as much time in this, and we came up with some new developments that made it work better.”