

A study has evaluated the quality of different methods of machine translation


Clara Ginovart, a researcher of the Research Group in Computational Linguistics at the Department of Translation and Language Sciences, has compared statistical and neural machine translation systems in a study published in Translating and the Computer 40.



Machine translation is the process whereby software converts a text from one language to another. Post-editing consists of correcting machine translations to ensure their quality, with the aim of achieving quality similar to that of a human translation. As more and more language service providers incorporate machine translation post-editing into their workflow, “we see how studies on the evaluation of the quality of machine translation are becoming increasingly important”, states Clara Ginovart, author of a study published recently in Translating and the Computer 40, which includes the papers presented at the congress of the International Association for Advancement in Language Technology (AsLing), held on 15 and 16 November in London (United Kingdom).

Studies on the evaluation of the quality of machine translation are becoming increasingly important

Clara Ginovart is doing an Industrial Doctorate with the company Datawords in Paris (France), under the supervision of Carme Colominas, a researcher of the research group in Computational Linguistics GLICOM (UR-LING), at the UPF Department of Translation and Language Sciences (DTCL), and of Antoni Oliver (UOC), within UPF’s PhD programme in Translation and Language Sciences.

The research by Clara Ginovart consisted of analysing the results of a case study that evaluates three machine translation engines from French into Spanish and Italian: two statistical systems and one neural system, a new technology in the world of machine translation based on artificial intelligence (deep learning, recurrent neural networks, etc.) that recent publications have shown to come closer to the quality of human translation. But does this hold in all cases?, Ginovart asks.

The recently published study describes the results of machine translation in two types of text from a website devoted to motorcycling that was translated by Datawords.

“We use task-based evaluation of post-editing and human evaluation through ranking, and we can thus establish which method requires less post-editing work”, says the author of the article.

In short, the research concludes that, for some language pairs and specialist fields, parallel corpora are insufficient for the new neural systems to match the quality of the statistical ones; therefore, in the light of the results, “we will have to be prudent and precise when speaking of the quality of machine translation so as not to create false hopes”, the author of the research points out.

Reference work:

Clara Ginovart Cid (2018), “Statistical & Neural MT Systems in the Motorcycling Domain for Less Frequent Language Pairs – How Do Professional Post-editors Perform?”, Translating and the Computer 40, Proceedings 15-16 November 2018, One Birdcage Walk, London, pp. 66-78.




