Artificial Intelligence has inspired many Hollywood blockbusters, but it is no longer confined to fiction. In recent years, we’ve become accustomed to the presence of A.I. in our everyday lives, sometimes without even realizing it. And as we begin 2017, the stuff of Sci-Fi legend is truly becoming reality, with much of the action taking place within the translation industry. To understand how we got here and where we’re going, let’s start with a brief overview of the four different types of A.I.
Since the 1950s – Narrow A.I. describes specific technologies that are able to perform rule-based tasks as well as or better than humans. For example: the chess-playing program on the Ferranti Mark 1 or facial recognition on Facebook.
Since the 1990s – Machine Learning uses algorithms and large amounts of data to ‘train’ machines to identify the relevant subset of the data and then make an informed prediction based on it. In essence, it’s programming machines to learn rather than to follow a fixed set of rules. For example: Apple’s digital assistant, Siri.
Since the 2000s – Deep Learning is a cutting-edge branch of machine learning in which the algorithm’s structure is modeled on the human brain: a system of artificial neural networks that can learn from past actions to solve new problems without being specifically programmed to do so. For example: after watching 5,000 hours of TV, Google’s DeepMind A.I. was able to master lip-reading and outperform a professional human lip-reader in accuracy.
The not-too-distant future – General A.I. refers to complex machines that can independently perform ‘general’ intelligent action with the same characteristics as human intelligence. These are the kinds of intelligent machines you’ve seen in Sci-Fi characters such as Star Wars’ C-3PO, Pixar’s Wall-E and Samantha, the computer operating system voiced by Scarlett Johansson in Her.
What makes deep learning so cutting edge?
Deep learning enables machines to learn independently, much as human beings do. The human brain contains around 100 billion neurons, each connected to thousands of neighboring neurons in a massive biological neural network. When a neuron passes an electrical charge to its neighbors, it reinforces a pathway that can help or hinder the learning of new things. With life experience, the synaptic connections between pairs of neurons grow stronger or weaker through trial and error. This, for example, is how toddlers learn to walk and talk (not through a rule book or fancy algorithms).
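The synapse analogy above can be sketched as a single artificial neuron: a weighted sum of inputs passed through an activation function, where the weights play the role of synaptic strengths. This is a minimal illustration with hand-picked numbers, not a biological model:

```python
# A single artificial neuron: weights act like synaptic strengths.
# Larger weights let an input contribute more to the neuron's decision.

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, passed through a step activation."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0  # "fire" or stay silent

# With these hand-picked weights the neuron computes a logical AND:
# it only fires when both inputs are active.
weights = [0.6, 0.6]
bias = -1.0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], weights, bias))
```

Learning, in this picture, is nothing more than adjusting the weights until the neuron fires at the right moments.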
Over the past 16 years, artificial neural networks have revolutionized A.I. by learning from data in the world around them (as humans do) rather than following hand-written rules, as conventional programs do. When exposed to enough data, machines built with multiple layers of neural connections (like a brain) can, through trial and error, make classifications or predictions of their own. This process of learning through trial and error is called “training.” To be effective, training requires artificial neural networks to see hundreds of thousands, or even millions, of pieces of real-world data until the neurons are tuned so precisely that they pass on the correct information nearly every time.
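That training loop can be sketched with a tiny two-layer network learning the XOR function by trial and error: the network guesses, measures its error, and nudges its connection weights to shrink that error. This is a minimal numpy sketch; the layer sizes, learning rate and step count are illustrative choices, not anything a production system would use:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: XOR, a task a single neuron cannot solve on its own.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two layers of connections ("synapses"), initialised at random.
W1 = rng.normal(scale=0.5, size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))
b2 = np.zeros((1, 1))

lr = 0.1  # how far each weight is nudged per step
losses = []
for step in range(5000):
    # Forward pass: the network makes a guess.
    h = sigmoid(X @ W1 + b1)
    pred = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((pred - y) ** 2)))

    # Backward pass: nudge every connection in the direction that
    # reduces the error -- the "trial and error" of training.
    g_pred = 2 * (pred - y) * pred * (1 - pred)
    g_h = g_pred @ W2.T * h * (1 - h)
    W2 -= lr * (h.T @ g_pred)
    b2 -= lr * g_pred.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ g_h)
    b1 -= lr * g_h.sum(axis=0, keepdims=True)

print("loss at start:", round(losses[0], 4), "loss at end:", round(losses[-1], 4))
```

The error steadily shrinks as the weights are tuned; real networks do the same thing with millions of examples and millions of connections instead of four of each.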
Effective training is what’s new from the past decade and a half – not the concept of artificial neural networks as a whole. The idea itself dates back to the early 1940s, but until the last decade or so it was disregarded as impractical. The technology involved in building a neural network was thought to be too computationally complex and the mountains of data required for effective training were previously impossible to obtain. Today, we have both the technology and the data to implement artificial neural networks and we’ve learned that after sufficient training, they can extract patterns, detect valuable trends and even predict future events.
Breakthrough in machine translation
Before deep learning became possible, all machine translation services used Narrow A.I. and followed the same basic rules: (1) separate each sentence into fragments, (2) look up those fragments in a dictionary of statistically derived vocabulary terms, (3) apply post-processing rules to rearrange the translated fragments back into a meaningful sentence. As we all know, this system can result in some pretty awkward translations (e.g. “priest of farming” instead of “minister of agriculture”). Flawed machine translations happen because Narrow A.I. is restricted to specific rules, and languages tend to have as many exceptions as they have rules. Narrow A.I. can’t get a joke, pick up on sarcasm, identify uncommon names or understand cultural context. Deep learning, however, opens up a world of possibilities for genuinely useful machine translation, and Google Translate just kicked that door wide open.
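The three rule-based steps above can be sketched as a toy word-for-word translator. The dictionary here is invented for illustration, but it shows how literal lookup with no sense of context produces exactly the kind of “priest of farming” mistranslation mentioned above:

```python
# A toy rule-based translator: split, look up each fragment, reassemble.
# The glossary is deliberately naive: each word maps to one literal
# gloss, with no awareness of context or idiom.
GLOSSARY = {
    "ministre": "priest",        # 'ministre' can mean minister OR priest
    "de": "of",
    "l'agriculture": "farming",  # literal gloss, not idiomatic 'agriculture'
}

def translate(sentence):
    # Step 1: separate the sentence into fragments (here, single words).
    fragments = sentence.lower().split()
    # Step 2: look up each fragment in the dictionary.
    glosses = [GLOSSARY.get(f, f) for f in fragments]
    # Step 3: reassemble the translated fragments into an output sentence.
    return " ".join(glosses)

print(translate("ministre de l'agriculture"))  # -> "priest of farming"
```

No post-processing rule can rescue step 2’s wrong word choice, because the right choice depends on context the rules never see.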
Google Translate supports 103 different languages, and for the past 10 years it essentially consisted of 150 Narrow A.I. programs used to translate between all of these language pairs. Then, in September 2016, Google announced that it was switching to a single multilingual system based on artificial neural networks. It calls the system Google Neural Machine Translation (GNMT) because it continuously learns from millions of example sentences and allows a single system to translate between multiple languages, all while sounding much more natural and less awkward than Narrow A.I. translations.
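The way a single model can serve many language pairs can be pictured as a preprocessing step: the input sentence is prefixed with an artificial token naming the desired target language, and the one shared model does the rest. The token spelling below is an illustrative assumption, not Google’s exact format:

```python
# Sketch of multilingual preprocessing: one shared model handles every
# language pair, steered only by an artificial target-language token
# prepended to the source sentence. Token spellings are illustrative.

def prepare(sentence, target_lang):
    """Prefix the input with a token telling the model which language to emit."""
    return f"<2{target_lang}> {sentence}"

# The same model, the same sentence, three different target languages:
for lang in ("es", "ja", "ko"):
    print(prepare("Hello, world", lang))
```

Because the target language is just another piece of input, adding a new language pair doesn’t require building a new program, which is what made the old 150-program setup so unwieldy.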
The creators expected the new Google Translate to improve over time (that’s what neural networks do, after all), but even they were surprised by how much Neural Machine Translation could accomplish. After release, researchers on the Google Translate team asked whether the new system could translate between a language pair it had never seen before. For example, if the system is trained to translate Japanese to English and Korean to English, can it then generate a decent Japanese-to-Korean translation without additional training? The answer, to their surprise, was yes: “To the best of our knowledge,” Google writes in its Research Blog, “this is the first time this type of transfer learning has worked in Machine Translation.” Researchers are calling it zero-shot translation.
What does zero-shot translation mean for the future of A.I.?
To achieve zero-shot translation, Google researchers believe their A.I. system invented its own “interlingua”: a common language based on contextual concepts and sentence structure rather than on word-for-word translation equivalents. This means words and sentences with shared meaning are internally represented by the GNMT system in similar ways, regardless of the original language. Some are calling this a “secret” or “artificial” language because the GNMT system created it for the specific task of translation and it’s not readable by humans, but the point remains: it’s a powerful first glimpse of a future in which computers can generate their own original creations to help themselves complete tasks they were never trained to do. Once machines can learn from and replicate realistic human speech, they will be able to convincingly pass the Turing test, bringing us that much closer to a world of general A.I. where characters like C-3PO are no longer the stuff of fiction.
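The “similar internal representations” claim can be pictured with sentence vectors: if the system maps sentences with the same meaning to nearby points regardless of language, their cosine similarity is high. The vectors below are hand-made stand-ins for illustration; real GNMT representations are learned, high-dimensional and not human-readable:

```python
import math

def cosine(u, v):
    """Cosine similarity: close to 1.0 means the vectors point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hand-made stand-in vectors (not real GNMT embeddings).
english_hello  = [0.90, 0.10, 0.20]   # "hello" in English
japanese_hello = [0.85, 0.15, 0.25]   # the same greeting in Japanese
english_cat    = [0.10, 0.90, 0.30]   # an unrelated English word

same_meaning = cosine(english_hello, japanese_hello)
diff_meaning = cosine(english_hello, english_cat)
print("same meaning:", round(same_meaning, 3), "different meaning:", round(diff_meaning, 3))
```

In a shared space like this, Japanese-to-Korean translation without Japanese-Korean training data becomes plausible: both languages already map into, and out of, the same internal representation.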