
[Machine Translation Explained, Part 4] Trends in Neural Machine Translation

Published: 2021-12-28 02:00:00


      Last month, OpenAI launched the closed beta version of its GPT-3 (Generative Pre-trained Transformer 3) to show the potential of the model. As access to the program gradually widens, a select group of investors, experts, and journalists have shared the results of their experiments on social media.
      The principles that guide GPT-3 are simple, at least conceptually: a machine-learning algorithm analyzes the statistical patterns of a trillion words extrapolated from digitized books and web discussions. The result is fluent text, even if in the long run the software shows all its logical limits when subjected to complex reasoning, as is often the case with this kind of software. And although some experts tested GPT-3’s ability to translate and obtained impressive results with very small input, we’re still a far cry from the universal translator described in Murray Leinster’s First Contact or other fantastic devices found in more popular sci-fi books.
      So, it might be useful to take stock of the current state of the technology in the real world and, at the same time, get an overview of where things are going. For this reason, Wordbee organized a panel with four experts to discuss what we can expect from neural machine translation in the near future.
      Machine learning and neural machine translation
      Machine learning (ML) is a branch of computer science that can be considered a subfield of artificial intelligence. Defining the characteristics and applications of machine learning in simple terms is not always possible, since its field of application is wide and ML relies on a variety of approaches, techniques, and tools.
      But the question that interests us is more specific: how is machine learning applied in computational linguistics and natural language processing?
      One might say that there is no big difference between machine learning and neural machine translation (NMT). Problems like developing a machine-learning model, adapting an existing one, deploying it, and making sure it delivers high-quality results can also be found in the field of machine translation. On the other hand, machine translation deals with unstructured data, so we need specific models that can help find the structure (patterns) in a dataset.
      For many years, language service providers have tried to find the ideal use case for machine translation and make it work for customers and for themselves. Until roughly five years ago, the main discussion centered on the productivity of machine translation and the usefulness of post-editing. After many benchmarks, academic papers, and conferences on these topics, in 2020 the discussion finally moved forward.
      Our panel experts broadly agreed that an estimated 80% of the training data used for generic NMT is useful. As Maxim Khalilov, Head of R&D at Glovo, suggests, this means we are on the cusp of a new era, in which machine learning plays a new and important role in distinguishing good translations from bad ones.
      Quality Estimation: A game-changer?
      A new industry paradigm might emerge, with QA, QC, and QE as essential elements. By the way, if these acronyms are making your head spin, we’ve got you covered with this previous article.
      When it comes to the topic of quality and machine translation in 2020, what can we expect for the next few years?
      As a machine-learning technology, a quality estimation (QE) algorithm automatically assigns a quality indicator to a machine-translation output without having access to a human-generated reference translation. The technology itself has been around for a while, but only a few companies have the financial and human resources necessary to experiment with QE in a production environment. Yuka Nakasone, Intento Inc.’s Globalization and Localization Director, states that in 2020 the technology for QE of machine translation systems will be productized at scale, and we will probably see the rise of hybrid MT-QE systems.
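      To make the idea concrete, here is a minimal sketch of reference-free QE scoring. It assumes the open-source COMET library and one of its reference-free checkpoints; both are our illustrative choices, not tools named by the panel, and the exact model identifier varies across library versions.

```python
# Minimal sketch of reference-free quality estimation (QE).
# Assumes the open-source `unbabel-comet` package (pip install unbabel-comet)
# and its reference-free checkpoint "Unbabel/wmt20-comet-qe-da"; these are
# illustrative choices, and the identifier may differ in older releases.
from comet import download_model, load_from_checkpoint

model_path = download_model("Unbabel/wmt20-comet-qe-da")
model = load_from_checkpoint(model_path)

# QE scores a translation from the source and the MT output alone --
# no human-generated reference translation is needed.
data = [
    {"src": "Der Vertrag tritt am 1. Januar in Kraft.",
     "mt":  "The contract enters into force on January 1."},
]
prediction = model.predict(data, batch_size=8, gpus=0)
print(prediction.scores)   # one quality score per segment; higher is better
```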
      This development could prove particularly interesting for machine translation providers. When deploying an MT system, the main factors to be reckoned with are the usual ones: time, cost, and quality. QE technology can allow tech providers to play with quality boundaries while trying to strike the right balance between cost and time.
      According to Paula Reichenberg, CEO at Hieronymus, two other interesting uses of QE technology could be a) the assessment of the quality of the data used to train an NMT engine and b) the detection of the best NMT engine for the translation of a specific document. This would be particularly interesting in complex and highly specialized fields like law and pharmaceuticals. Google and Microsoft are already using this kind of QE technology; the innovation will then lie in making QE available to the public.
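      As a sketch of point b), QE scores can drive engine selection: translate the same text with several engines, score each output without a reference, and keep the best. Everything below (the engine table, the `qe_score` callable) is hypothetical glue code, not a real provider API.

```python
# Hypothetical sketch of QE-driven engine selection (point b above).
# `engines` maps a name to any callable that translates a string, and
# `qe_score(source, mt)` stands in for a real reference-free QE model.
from typing import Callable

def pick_best_engine(source: str,
                     engines: dict[str, Callable[[str], str]],
                     qe_score: Callable[[str, str], float]) -> tuple[str, str]:
    """Return (engine_name, translation) with the highest estimated quality."""
    candidates = {name: translate(source) for name, translate in engines.items()}
    best = max(candidates, key=lambda name: qe_score(source, candidates[name]))
    return best, candidates[best]

# Toy usage with stand-in engines and a crude length-ratio "QE" heuristic:
engines = {
    "engine_a": lambda s: s.upper(),
    "engine_b": lambda s: s.lower(),
}
name, translation = pick_best_engine(
    "Example sentence.", engines,
    qe_score=lambda src, mt: -abs(len(mt) - len(src)),  # toy stand-in score
)
print(name, translation)
```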
      Tighter integration and adaptive systems
      Samuel Läubli, CTO at TextShuttle, underlines another interesting development: the interplay between various tools, especially CAT tools, and NMT, in combination with translation memories and term bases. The current level of integration, which allows translators to post-edit the suggestions of the NMT system to which the CAT tool is connected through an API, will become even tighter.
      Just as for statistical machine translation (SMT) around 2015, there is talk now of adaptive NMT systems. Thanks to adaptive technology, an NMT system can “learn” on the fly and improve during post-editing. To this end, translation memories are essential: they need to be relevant, precise, and of good quality. The same goes for term bases, although terminology integration will probably remain a pain point for morphologically rich languages.
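      The adaptive loop can be pictured roughly as follows; `AdaptiveEngine` and its methods are invented names for this sketch, not any vendor's actual API.

```python
# Rough sketch of an adaptive NMT loop: every confirmed post-edit is fed back
# so later suggestions can improve. `AdaptiveEngine` is a hypothetical class.
class AdaptiveEngine:
    def __init__(self) -> None:
        # Confirmed (source, post-edit) pairs; a real system would also draw
        # on translation memories and term bases here.
        self.feedback: list[tuple[str, str]] = []

    def translate(self, source: str) -> str:
        # Stub: a real engine would run an NMT model biased by self.feedback.
        return f"<draft translation of: {source}>"

    def learn(self, source: str, post_edit: str) -> None:
        # Stub: a real adaptive system would update model parameters or an
        # on-the-fly cache; here we only record the pair.
        self.feedback.append((source, post_edit))

engine = AdaptiveEngine()
for source, post_edit in [("Satz eins.", "Sentence one."),
                          ("Satz zwei.", "Sentence two.")]:
    draft = engine.translate(source)   # suggestion shown in the CAT tool
    engine.learn(source, post_edit)    # translator's fix adapts the engine
```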
      Context-aware MT
      Traditionally, MT systems translated phrase by phrase, and the translation of isolated units brought about some obvious limitations. The effort now is to develop document-level machine translation systems, so that, in order to translate a sentence, the MT engine looks at the previous and following sentences. Google has reported some progress in this field.
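      In practical terms, document-level translation means the request carries neighboring sentences along with the sentence being translated. The stub below only illustrates that windowing; `translate_with_context` is an invented placeholder, not any engine's real API.

```python
# Illustrative stub of document-level (context-aware) MT: each sentence is
# translated together with a window of surrounding sentences, so that, e.g.,
# "It flew away." can disambiguate "bat" in the sentence before it.
def translate_with_context(sentence: str, before: list[str], after: list[str]) -> str:
    # Stub: a real document-level engine would condition on the context here.
    return f"<translation of {sentence!r} with {len(before) + len(after)} context sentences>"

def translate_document(sentences: list[str], window: int = 1) -> list[str]:
    out = []
    for i, sentence in enumerate(sentences):
        before = sentences[max(0, i - window):i]      # preceding context
        after = sentences[i + 1:i + 1 + window]       # following context
        out.append(translate_with_context(sentence, before, after))
    return out

print(translate_document(["He saw a bat.", "It flew away."]))
```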
      There are other potential trends emerging: How do you choose an NMT engine in terms of verticals and language pairs? Do you need various NMT engines to handle multilingual content? Is hyper-specialization of NMT engines for specific segments a possibility? And most importantly, how do you choose which trends to follow? It is, of course, important to stay up to date with technological developments, but each new “thing” needs to be evaluated based on the problems that your own company needs to solve, the scalability of the solution, the availability of open-source code, and much more.
      Wordbee integrates with a variety of MT engines and is ready to help you build these technological solutions into your translation workflow. Contact us for a free consultation.
