      Evaluating Automatic Metrics with Incremental Machine Translation Systems

      Preprint


          Abstract

          We introduce a dataset comprising commercial machine translations, gathered weekly over six years across 12 translation directions. Since commercial providers routinely refine their systems through human A/B testing, we assume these systems improve over time, which lets us evaluate machine translation (MT) metrics by their preference for more recent translations. Our study confirms several previous findings in MT metrics research and demonstrates the dataset's value as a testbed for metric evaluation. We release our code at https://github.com/gjwubyron/Evo.
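
          The evaluation idea described in the abstract can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (not the authors' released code; the function and variable names are invented here) of how a metric's preference for more recent translations could be quantified: for each pair of translations of the same source produced at different dates, count how often the metric scores the later one higher.

from itertools import combinations

def temporal_preference(scores_by_date: dict[str, float]) -> float:
    """Fraction of date pairs (earlier, later) in which the metric
    scores the later translation strictly higher."""
    dates = sorted(scores_by_date)        # ISO dates sort chronologically
    pairs = list(combinations(dates, 2))  # each pair is (earlier, later)
    wins = sum(scores_by_date[late] > scores_by_date[early]
               for early, late in pairs)
    return wins / len(pairs)

if __name__ == "__main__":
    # Toy metric scores for one source sentence translated at four dates.
    scores = {"2018-01-07": 0.61, "2019-01-06": 0.64,
              "2021-01-03": 0.70, "2023-01-01": 0.69}
    print(f"temporal preference: {temporal_preference(scores):.2f}")  # 0.83

          Under the paper's assumption that commercial systems only improve, a perfect metric would score 1.0 on such pairs; values below 1.0 reflect either metric errors or genuine regressions in the systems.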


          Author and article information

          Published: 03 July 2024
          arXiv: 2407.03277
          License: CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)
          Subject: cs.CL (Computation and Language)
          Discipline: Theoretical computer science
