NII Testbeds and Community for Information access Research

Automatic Evaluation Procedure


This page describes the tools and procedures used for the automatic evaluation.

For all subtasks

  • NIST's mteval-v13a.pl
  • NTT's RIBES.py version 1.01
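RIBES rewards translations that preserve the word order of the reference: it combines a rank correlation of aligned word positions with a precision and brevity term. The sketch below is a simplified illustration of that idea, not a port of RIBES.py version 1.01 — the official script aligns repeated words using their surrounding context, while this sketch only aligns words that occur exactly once in both sentences, and the function name is my own.

```python
from itertools import combinations
from math import exp

def ribes_sketch(reference, hypothesis, alpha=0.25, beta=0.10):
    """Simplified RIBES-style score: normalized Kendall's tau over the
    word-order alignment, weighted by unigram precision and a
    BLEU-style brevity penalty."""
    ref, hyp = reference.split(), hypothesis.split()
    # Align each hypothesis word to its reference position, but only
    # when the word occurs exactly once on both sides (the real RIBES
    # disambiguates duplicates by context).
    ranks = [ref.index(w) for w in hyp
             if hyp.count(w) == 1 and ref.count(w) == 1]
    if len(ranks) < 2:
        return 0.0
    # Normalized Kendall's tau = fraction of concordant rank pairs.
    pairs = list(combinations(range(len(ranks)), 2))
    concordant = sum(1 for i, j in pairs if ranks[i] < ranks[j])
    nkt = concordant / len(pairs)
    # Unigram precision of aligned words, and a brevity penalty.
    precision = len(ranks) / len(hyp)
    bp = min(1.0, exp(1.0 - len(ref) / len(hyp)))
    return nkt * precision**alpha * bp**beta
```

An identical hypothesis with no repeated words scores 1.0, and a fully reversed one scores 0.0, since no rank pair is concordant.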

For EJ subtask

  • perl module Lingua::JA::Regular::Unicode version 0.05
  • MeCab: version 0.98
  • Dictionary for MeCab: mecab-ipadic-2.7.0-20070801.tar.gz
  • nkf: version 2.1.1

For CE and JE subtasks

We used "mteval-v13a.pl" for tokenization and for calculating the BLEU and NIST scores, and "RIBES.py" for calculating the RIBES score; the input to "RIBES.py" was tokenized with the same "mteval-v13a.pl" tokenization function.
The scores are case-sensitive. Except for case sensitivity, the default parameters of the tools were used.
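For readers who want a feel for what the mteval tokenization does, here is a rough Python approximation of the punctuation-splitting rules in the default tokenizer of "mteval-v13a.pl". This is a sketch based on the script's documented substitution rules, not an exact port (it omits SGML-entity normalization), and the function name and `preserve_case` parameter are my own.

```python
import re

def mteval_like_tokenize(text, preserve_case=True):
    """Approximation of mteval-v13a.pl's default tokenizer:
    split punctuation into separate tokens, keeping decimal
    numbers like '3.14' intact."""
    if not preserve_case:
        text = text.lower()
    # Separate most punctuation (comma, period, and dash handled below).
    text = re.sub(r'([\{-\~\[-\` -\&\(-\+\:-\@\/])', r' \1 ', text)
    # Split period/comma unless both neighbors are digits.
    text = re.sub(r'([^0-9])([\.,])', r'\1 \2 ', text)
    text = re.sub(r'([\.,])([^0-9])', r' \1 \2', text)
    # Split a dash that follows a digit.
    text = re.sub(r'([0-9])(-)', r'\1 \2 ', text)
    return ' '.join(text.split())
```

For example, `mteval_like_tokenize("Hello, world.")` yields `"Hello , world ."`, while a decimal number such as `3.14` is left as a single token.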

For EJ subtask

We standardized and tokenized the Japanese sentences with the following procedure, then used "mteval-v13a.pl" to calculate the BLEU and NIST scores and "RIBES.py" to calculate the RIBES score. The default parameters of the tools were used.

  • Procedure of standardization and tokenization for Japanese sentences
  • cat j.txt |
    perl -pe 's/ +//g;' |   # delete ASCII space characters
    perl -MEncode -MLingua::JA::Regular::Unicode -ne 'if (s/[ \n\x80-\xff]+//){ print $&; } while (s/[\x00-\x7f]+//) { print Encode::encode_utf8(alnum_h2z($&)); if (s/[ \n\x80-\xff]+//) { print $&; } }' |   # convert halfwidth ASCII alphanumerics to fullwidth
    nkf -We |   # convert UTF-8 to EUC-JP for MeCab
    mecab -O wakati |   # segment into space-separated words
    nkf -Ew |   # convert EUC-JP back to UTF-8
    perl -Mencoding=utf8 -pe 'while(s/([0-9]) ([0-9])/$1$2/g){} s/ $//;' > j.tok.txt   # rejoin digits split by the tokenizer; strip the trailing space
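The non-MeCab stages of the pipeline above can be sketched in Python as follows. This is an illustrative re-implementation under stated assumptions: `h2z_alnum` stands in for Lingua::JA::Regular::Unicode's `alnum_h2z` using the standard 0xFEE0 halfwidth-to-fullwidth offset for ASCII alphanumerics, the MeCab segmentation step itself is omitted, and all function names are my own.

```python
import re

def h2z_alnum(text):
    """Widen halfwidth ASCII alphanumerics to their fullwidth forms
    (assumed equivalent of alnum_h2z; other characters pass through)."""
    return ''.join(chr(ord(c) + 0xFEE0) if c.isascii() and c.isalnum()
                   else c for c in text)

def normalize_japanese(line):
    """Pre-MeCab standardization: delete ASCII spaces, then widen
    the remaining ASCII alphanumerics."""
    return h2z_alnum(line.replace(' ', ''))

def postprocess_tokens(line):
    """Post-MeCab cleanup: repeatedly rejoin digits that segmentation
    split apart, then strip the trailing space left by 'mecab -O wakati'.
    (Shown with ASCII digits, matching the perl one-liner's [0-9].)"""
    while re.search(r'[0-9] [0-9]', line):
        line = re.sub(r'([0-9]) ([0-9])', r'\1\2', line)
    return re.sub(r' $', '', line)
```

For instance, `normalize_japanese("日本 語 abc")` gives `"日本語ａｂｃ"`, and `postprocess_tokens("1 2 3 yen ")` gives `"123 yen"`.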