An open, standardized framework that categorizes translation errors by type and severity to produce consistent, comparable quality scores across any language or project.
Before MQM, the localization industry had no shared standard for measuring translation quality. Different companies, tools, and clients used incompatible scoring systems: what counted as a major error in one framework was minor in another, making quality scores meaningless outside the context in which they were produced. MQM was developed to solve this problem by providing a common vocabulary for describing translation errors that any team, tool, or LSP can use and compare.
MQM was originally developed through the EU-funded QTLaunchPad project and has since been adopted and maintained by the localization community through GALA and the W3C MQM Community Group. It applies to human translation, machine translation, and AI-generated translation, making it one of the few frameworks designed to work across all translation types in a single scoring model.
MQM is an analytic evaluation framework, meaning errors are identified and annotated at the segment level, associated with specific words or phrases in the translated text, rather than assessed holistically at the document level.
The process follows three stages:

1. Error identification: an evaluator marks each error in the translated text and assigns it an issue type from the MQM typology.
2. Severity assignment: each error is rated by severity (for example minor, major, or critical) according to its impact on the reader.
3. Score calculation: severity weights are summed into penalty points and normalized by sample size to produce a quality score.
MQM organizes errors into eight major dimensions, each covering a distinct aspect of translation quality:
Each dimension contains more granular issue types; MQM defines over 100 in total, but teams typically select only the dimensions and issue types relevant to their project type and content.
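The severity-weighted scoring at the heart of the framework can be sketched in a few lines. The weights and the 0-100 normalization below reflect one commonly used configuration, not a fixed part of the standard; real projects configure their own weights and pass thresholds.

```python
# Sketch of an MQM-style analytic score. The severity weights and the
# per-word normalization are illustrative assumptions; MQM leaves both
# configurable per project.
SEVERITY_WEIGHTS = {"neutral": 0, "minor": 1, "major": 5, "critical": 10}

def mqm_score(errors, word_count):
    """Return a 0-100 quality score from a list of annotated errors.

    errors: list of dicts, e.g.
        {"segment": 3, "dimension": "accuracy",
         "issue": "mistranslation", "severity": "major"}
    word_count: number of words in the evaluated sample.
    """
    penalty = sum(SEVERITY_WEIGHTS[e["severity"]] for e in errors)
    # Normalize penalty points per word, then map onto a 0-100 scale.
    per_word = penalty / word_count
    return max(0.0, 100.0 * (1.0 - per_word))

errors = [
    {"segment": 1, "dimension": "accuracy",
     "issue": "mistranslation", "severity": "major"},
    {"segment": 4, "dimension": "style",
     "issue": "awkward", "severity": "minor"},
]
print(round(mqm_score(errors, word_count=250), 2))  # 6 penalty points over 250 words
```

Because every error carries a segment reference, a dimension, an issue type, and a severity, two teams annotating the same text with the same configuration arrive at directly comparable scores, which is the point of the framework.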
MQM sits within the quality evaluation stage of a localization workflow: it is a retrospective tool applied after translation is complete. It complements quality estimation (QE), which predicts quality before human review, and linguistic QA (LQA), the broader process of checking translations for errors. MQM provides the structured scoring model that makes LQA results objective, consistent, and actionable. For a practical look at using this framework to validate AI output, see this piece published on Substack by our Lead AI Researcher, David Václavek.
Read more about the framework on the official website: https://themqm.org/