Is Translation Memory Yesterday’s Technology?

Fully automated translation, or machine translation (MT) as it's commonly known, is still an early adopters' game, limited largely to leading global enterprises such as the Global 2000. As a result, the older technology, translation memory (TM), has yet to feel any real impact from it. But we predict this will change in the very near future, and that translation memory will become a feeder technology for machine translation. That's because the benefits of machine translation are simply too good to ignore.

Without doubt, translation memory is a great strategy for reducing translation costs. However, unless you have only just started working with translation memory, the bad news is that you have likely realized most of those savings already.

This ceiling effect happens because translation memory works best when the content domain is well established, say with an existing product. But companies earn the majority of their revenue and profits from new products.

And it is here that machine translation shines. New products require new terminology: terms that have never been translated before and therefore have no place yet in your company's translation memory. MT delivers on its promise precisely when there is less content to recycle and more new content to translate.

As time passes, you can expect TMs to provide only incremental cost savings, while at the same time suffering incremental quality degradation. The dirty little secret of TMs is that many of them are, in fact, dirty: the old translations lose relevance over time, and may not even meet the company's own quality standards if they were ever examined (which they usually aren't).

On the other hand, the quality of machine translation improves over time, and the savings it delivers increase accordingly.

Maintaining two separate systems is simply not sustainable. Our prediction is that in the future TMs will not function as a stand-alone technology but will instead be used as input for the newer technology, MT. TMs will provide valuable training data for increasingly high-performance engines.

At LexWorks we are already there. When working with our Hybrid technology, or when training a statistical machine translation (SMT) engine, we use translation memories to build the language model. When post-editors send us their corrections, we feed that data back in as well, improving our systems in a virtuous circle of continual improvement.
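To make the "TM as training data" idea concrete: translation memories are commonly exchanged in the TMX format, which an engine-training pipeline can flatten into aligned source–target sentence pairs. The sketch below is our own illustration, not LexWorks code; the sample TMX snippet and function name are invented for the example, and a real pipeline would add cleaning and deduplication before training.

```python
import xml.etree.ElementTree as ET

# Minimal TMX (Translation Memory eXchange) sample; content is illustrative only.
TMX = """<?xml version="1.0"?>
<tmx version="1.4">
  <body>
    <tu>
      <tuv xml:lang="en"><seg>New product launch</seg></tuv>
      <tuv xml:lang="fr"><seg>Lancement du nouveau produit</seg></tuv>
    </tu>
    <tu>
      <tuv xml:lang="en"><seg>User guide</seg></tuv>
      <tuv xml:lang="fr"><seg>Guide de l'utilisateur</seg></tuv>
    </tu>
  </body>
</tmx>"""

# ElementTree exposes xml:lang under the full XML namespace URI.
XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

def tmx_to_pairs(tmx_text, src="en", tgt="fr"):
    """Extract (source, target) segment pairs from a TMX document."""
    root = ET.fromstring(tmx_text)
    pairs = []
    for tu in root.iter("tu"):          # one translation unit per segment
        segs = {}
        for tuv in tu.iter("tuv"):      # one variant per language
            lang = tuv.get(XML_LANG, "").split("-")[0].lower()
            seg = tuv.find("seg")
            if seg is not None and seg.text:
                segs[lang] = seg.text.strip()
        if src in segs and tgt in segs:
            pairs.append((segs[src], segs[tgt]))
    return pairs

for en, fr in tmx_to_pairs(TMX):
    print(f"{en}\t{fr}")
```

The resulting tab-separated pairs are the shape of corpus that SMT toolkits expect as training input, which is the sense in which a TM becomes feeder data rather than a stand-alone lookup system.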

Click here to email us to find out more.