It’s easy to think that machine translation appeared overnight. We see it in everyday life, punching through language barriers and helping to connect the world. In fact, machine translation, or at least Fully Automatic High Quality Translation (FAHQT), has been sought after for nearly a century, with the first patents for ‘machine translation devices’ filed in the early 1930s.
When machine translation was first envisioned, it came with some extraordinary aspirations. For some of the original pioneers, the problem was not the physical act of translating words. Rather, they delved into near-spiritual realms, asking whether there was a way to grasp pure meaning – not the meaning of words or of languages, but the very essence of human thought.
“For others it was a question of exploring the internal semantics of the human mind itself, with translation as an operation involving ‘atomic’ (perhaps universal) meaningful units, or of investigating the basic thought processes underlying communication”. (John Hutchins, 2000)
Even those who focused on the more mechanical aspects of automatic translation were not immune to romanticising its benefits. Victor Yngve, one of the most influential developers of machine translation, wrote, “If this task could be accomplished, the free flow of information between language communities would be expedited; the economic, cultural, and social advantages would be far-reaching.” (Victor Yngve, 1963) The idea was that if machines could shoulder the burden imposed by language barriers, the world would reach a new level of connectivity and build a new sense of unity across previously unbridgeable divides.
To achieve this romantic vision of a unified world, a far more nuanced translation was needed – one able to incorporate context and intention. Time after time, researchers ran into the roadblock of computers that were not powerful enough; they simply could not retain enough information to properly deduce context: “Computers were for a long time limited in storage and speed, expensive to use and not widely accessible” (John Hutchins, 2000). The culmination was Bar-Hillel’s devastating report (widely circulated in 1960), which declared machine translation a failure.
“Bar-Hillel’s critical report on the feasibility of MT (mainly the ‘nonfeasibility of FAHQT’) convinced many in the field and out of it that MT is a failure.” (John Hutchins, 2000)
This was followed by the most infamous moment in machine translation history: the ALPAC report. It concluded that machine translation had ‘no future’ and should receive no further support from the United States government. Funding was cut, and organisations created for the sole purpose of advancing machine translation branched into new fields, distancing themselves from the ‘failed experiment’. The industry that had promised a new world was forever tarred with the brush of ‘good enough for certain niche tasks’, a mentality that persists to this day.
The innovations produced by these pioneers in pursuit of perfect machine translation have drastically shaped the world as we know it. One early problem machine translation had to overcome was giving computers a way to ‘read’ natural languages. Turning language into 0s and 1s required a brand-new kind of programming language. In pursuit of this, Dr Victor Yngve developed COMIT, the first string-processing language. Thus computational linguistics was born, a field of study that is still going strong and has had far-reaching benefits, such as the development of search engines. It is therefore fair to say that Google is a direct descendant of machine translation.
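COMIT itself is long obsolete, but its central idea – a program as a set of rules that match and rewrite sequences of symbols – underpins string handling in every modern language. The Python sketch below is purely illustrative: the rules and word-for-word substitutions are invented for this example, and it reflects the spirit of rewrite-rule string processing rather than actual COMIT syntax or any real translation method.

```python
# A toy rewrite-rule engine in the spirit of early string-processing
# languages such as COMIT. The rules are invented for illustration;
# this is not COMIT syntax or a real translation system.

RULES = [
    # (pattern to match, replacement), tried in order at each position
    (["the", "house"], ["das", "Haus"]),
    (["is"], ["ist"]),
    (["red"], ["rot"]),
]

def rewrite(tokens):
    """Apply the first matching rule at each position, left to right."""
    output = []
    i = 0
    while i < len(tokens):
        for pattern, replacement in RULES:
            if tokens[i:i + len(pattern)] == pattern:
                output.extend(replacement)
                i += len(pattern)
                break
        else:
            output.append(tokens[i])  # no rule matched; copy unchanged
            i += 1
    return output

print(" ".join(rewrite("the house is red".split())))
# -> das Haus ist rot
```

Crude as it is, this pattern-match-and-replace loop is the same basic operation a search engine performs when it matches a query against indexed text, which is why computational linguistics and search are such close relatives.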
But the world’s most powerful search engine isn’t the only technological advance that owes its origins to machine translation. Before the 1950s, artificial intelligence had scarcely been conceptualised; the idea that a machine might think, let alone ‘learn’, was unheard of. But in 1956, artificial intelligence was officially founded as a field of study at Dartmouth College – in conjunction with machine translation research. By this point, researchers understood that the only way to achieve FAHQT would be to adopt a more nuanced approach to language.
“Overall design was trending towards a three (or more) stage approach involving largely independent processes of analysis, transfer, and synthesis (or generation)” (John Hutchins, 2000)
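In software terms, that three-stage design is a pipeline of largely independent components. The sketch below is a toy illustration of the idea only: the function names, the placeholder lexicon, and the word-level ‘transfer’ are hypothetical stand-ins, far simpler than the structural representations real transfer systems used.

```python
# Schematic skeleton of a transfer-based MT pipeline: analysis of the
# source sentence, transfer of the abstract representation, synthesis
# of target text. Representations and rules here are toy placeholders.

def analyse(source_sentence):
    """Analysis: parse source text into an abstract representation."""
    # Real systems built syntactic/semantic structures; we just tokenise.
    return source_sentence.lower().split()

def transfer(source_repr):
    """Transfer: map source-language structure to target-language structure."""
    lexicon = {"the": "le", "cat": "chat", "sleeps": "dort"}  # toy lexicon
    return [lexicon.get(word, word) for word in source_repr]

def synthesise(target_repr):
    """Synthesis (generation): produce fluent target-language text."""
    return " ".join(target_repr).capitalize() + "."

def translate(sentence):
    # The three stages are chained but independent of one another.
    return synthesise(transfer(analyse(sentence)))

print(translate("The cat sleeps"))  # -> Le chat dort.
```

The appeal of the separation was modularity: analysis knows nothing about the target language and synthesis knows nothing about the source, so each stage could be improved – or a new language pair added – without rebuilding the whole system.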
The fact that these pioneers held on to the dream of connecting the world through language, long before something like the internet was even imagined, shows just how far ahead of their time they were. They carved out new fields of study for computation, developed their own languages to open a dialogue between humankind and machine, and laid the foundations of artificial intelligence. These achievements bring us all closer to that long-sought goal of universal understanding.