Machine Translation: History, Concerns, and Prospects


As a translator, I would describe my relationship with Machine Translation (MT) as "It's complicated". While the latest MT technologies can provide valuable assistance to translators, many of us fear that future advances will eventually replace human translators entirely.

Google Translate entered the scene in April 2006, right in the middle of my college years, during which I studied translation. Back then, resorting to MT was considered somewhat shameful; it was strongly discouraged and easily detectable by our lecturers. Now that I have been translating for a living for almost fifteen years, I have witnessed MT establish its status in the translation market worldwide, albeit still side by side with the human element.

In 1949, American mathematician Warren Weaver was the first to propose the idea of decoding a human language using mathematics and computers and then re-encoding it into another. The first MT technology inspired by Weaver's idea, Rule-Based MT, emerged in the 1950s. Language experts and programmers worked closely to develop linguistic rules representing the structures of both the input and the output languages, which we translators call the Source Language (SL) and the Target Language (TL). The process involves a parser that analyses and decodes the SL, creates an intermediate coded representation of it, and finally encodes that representation into the TL to produce the translation.
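The three-stage pipeline described above can be illustrated with a toy sketch. The tiny English-to-French vocabulary and the single reordering rule below are invented for demonstration only; real rule-based systems encoded thousands of hand-written rules.

```python
# Toy illustration of the Rule-Based MT pipeline:
# parse the SL, build an intermediate representation, encode into the TL.

LEXICON = {"the": "le", "cat": "chat", "black": "noir"}  # invented mini-dictionary

def parse(sentence):
    """Decode the SL sentence into an intermediate representation (a token list)."""
    return sentence.lower().split()

def transfer(tokens):
    """Apply a linguistic rule: French adjectives usually follow the noun."""
    out = list(tokens)
    for i in range(len(out) - 1):
        if out[i] == "black" and out[i + 1] == "cat":  # adjective before noun
            out[i], out[i + 1] = out[i + 1], out[i]    # reorder for French
    return out

def encode(tokens):
    """Encode the intermediate representation into the TL word by word."""
    return " ".join(LEXICON.get(t, t) for t in tokens)

def translate(sentence):
    return encode(transfer(parse(sentence)))

print(translate("the black cat"))  # -> "le chat noir"
```

The sketch also hints at why this approach eventually hit a ceiling: every word and every reordering pattern had to be anticipated and written down by hand.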

As the capacity of modern computers increased, Statistical MT (SMT) appeared in the early 1990s; it was first adopted by IBM's Candide experimental machine translation system. The technology underwent continuous refinement and was adopted by Google Translate at its launch in 2006. SMT depends on a huge data set of approved translations, known as a corpus. From these corpora, the machine automatically deduces a statistical model, which it then uses to generate translations.
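The statistical idea can be sketched in a few lines: learn which target words tend to appear alongside which source words in a corpus of approved translations, then pick the most likely one. The three-sentence corpus below is invented, and the naive co-occurrence counting stands in for the far more sophisticated alignment models (such as IBM Model 1) that real SMT systems used.

```python
from collections import Counter, defaultdict

# Invented mini-corpus of approved English-French translation pairs.
CORPUS = [
    ("the cat", "le chat"),
    ("the dog", "le chien"),
    ("a cat", "un chat"),
    ("the cat sleeps", "le chat dort"),
]

# Count how often each target word co-occurs with each source word.
counts = defaultdict(Counter)
for src, tgt in CORPUS:
    for s in src.split():
        for t in tgt.split():
            counts[s][t] += 1

def most_likely(word):
    """Return the target word that co-occurs most often with `word`."""
    return counts[word].most_common(1)[0][0]

print(most_likely("cat"))  # -> "chat"
```

Even this crude version shows the key shift from the rule-based era: nothing about French was programmed in; the association between "cat" and "chat" was deduced from the data.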

In 2015, Neural Machine Translation (NMT) emerged; it uses complex computational models inspired by the human brain. These models consist of interconnected nodes, or artificial neurons, that process information through a series of mathematical operations. NMT is a data-driven approach based on machine learning: it translates a whole sequence or sentence at a time and relies on the broader context to determine the most relevant translation. NMT has since become the technology of choice across the industry and was adopted by Google Translate in 2016.
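A single "artificial neuron" of the kind mentioned above is mathematically simple: it combines its inputs through weights, adds a bias, and passes the sum through a non-linear activation function. The weights in this sketch are invented; a real NMT model chains millions of such neurons and learns all of their weights from translation data.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs, plus bias,
    squashed through a sigmoid activation into the range (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Example with made-up inputs and weights.
out = neuron([0.5, -1.0], [0.8, 0.3], bias=0.1)
print(round(out, 3))
```

The power of NMT comes not from any one neuron but from stacking huge numbers of them, which lets the model take a whole sentence's context into account at once.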

In response to these successive developments, Machine Translation Post-Editing (MTPE) services have become increasingly popular among Language Service Providers (LSPs) worldwide. In this method, segmented texts are translated by machine, and a human translator then fine-tunes the output and corrects its errors. It is believed to save these companies considerable time and money, since MTPE rates are lower than human translation rates.

David Čaněk, CEO of the leading translation software company Memsource, declared that 2020 was the first year in which MTPE was the dominant method of translation. However, most translators would rather translate without MT input than post-edit, mainly because MT stifles creativity.

MT technologies will certainly not stop here, given recently released generative AI chatbots such as Google Gemini and ChatGPT. Personally, I believe MT technologies can provide translators with valuable assistance, helping them perform and deliver their work much faster; but they still cannot work without, let alone replace, a skillful human translator.



About Us

SCIplanet is a bilingual edutainment science magazine published by the Bibliotheca Alexandrina Planetarium Science Center and developed by the Cultural Outreach Publications Unit ...


© 2024 | Bibliotheca Alexandrina