First of all you need Moses itself (stating the obvious, I know), plus GIZA++ for word alignment (it is used when running train-model.perl) and IRSTLM for building the language model.
The overall steps are as follows:
1. Preparing the corpus:
First, find a parallel corpus that matches the translation system you want to build, for example an English-French news parallel corpus. The corpus then has to go through three processing steps before it can be used: tokenisation, truecasing and cleaning.
The first step is tokenisation, i.e. splitting the text into tokens (word segmentation). Use mosesdecoder/scripts/tokenizer/tokenizer.perl for this. It tokenises every sentence in the parallel corpus (which is really just two plain-text files).
Example:
~/mosesdecoder/scripts/tokenizer/tokenizer.perl -l en \
 < ~/corpus/training/news-commentary-v8.fr-en.en \
 > ~/corpus/news-commentary-v8.fr-en.tok.en
~/mosesdecoder/scripts/tokenizer/tokenizer.perl -l fr \
 < ~/corpus/training/news-commentary-v8.fr-en.fr \
 > ~/corpus/news-commentary-v8.fr-en.tok.fr
Notes: -l specifies the language of the text. Looking at the script, it only seems to support a handful of languages such as en, de, fr and it; tokenising Chinese needs a separate word segmenter (see the sketch after the usage listing below). You also have to specify the input file and the output location, and remember that the "<" and ">" redirections are required!
The full usage information is:
Usage: ./tokenizer.perl (-l [en|de|...]) (-threads 4) < textfile > tokenizedfile
Options:
  -q               ... quiet.
  -a               ... aggressive hyphen splitting.
  -b               ... disable Perl buffering.
  -time            ... enable processing time calculation.
  -penn            ... use Penn treebank-like tokenization.
  -protected FILE  ... specify file with patterns to be protected in tokenisation.
  -no-escape       ... don't perform HTML escaping on apostrophe, quotes, etc.
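For Chinese, tokenizer.perl will not do word segmentation for you, so the Chinese side of the corpus has to be segmented with a dedicated tool first. A minimal sketch, assuming the Python jieba package is installed and using made-up file names:

# segment the Chinese half with jieba's command-line mode (one sentence per line in and out);
# -d ' ' joins the resulting words with single spaces, which is what Moses expects
python -m jieba -d ' ' ~/corpus/training/my-corpus.zh-en.zh \
 > ~/corpus/my-corpus.zh-en.tok.zh
# the English half still goes through tokenizer.perl as shown above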
After that we also need truecasing, i.e. adjusting the upper/lower case of the words.
truecasing: The initial words in each sentence are converted to their most probable casing. This helps reduce data sparsity. The scripts used are:
mosesdecoder/scripts/recaser/train-truecaser.perl and mosesdecoder/scripts/recaser/truecase.perl. Example:
~/mosesdecoder/scripts/recaser/train-truecaser.perl \
 --model ~/corpus/truecase-model.en --corpus \
 ~/corpus/news-commentary-v8.fr-en.tok.en
~/mosesdecoder/scripts/recaser/train-truecaser.perl \
 --model ~/corpus/truecase-model.fr --corpus \
 ~/corpus/news-commentary-v8.fr-en.tok.fr
~/mosesdecoder/scripts/recaser/truecase.perl \
 --model ~/corpus/truecase-model.en \
 < ~/corpus/news-commentary-v8.fr-en.tok.en \
 > ~/corpus/news-commentary-v8.fr-en.true.en
~/mosesdecoder/scripts/recaser/truecase.perl \
 --model ~/corpus/truecase-model.fr \
 < ~/corpus/news-commentary-v8.fr-en.tok.fr \
 > ~/corpus/news-commentary-v8.fr-en.true.fr
The train-truecaser.perl script trains the truecasing model: its input is again your tokenised corpus file, and its output is a model that records, for each distinct word, its different surface forms and their frequencies. truecase.perl then uses this model to rewrite the first word of each sentence to its most frequent form.
The USER GUIDE section of the manual puts it this way:
Instead of lowercasing all training and test data, we may also want to keep words in their natural case, and only change the words at the beginning of their sentence to their most frequent form. This is what we mean by truecasing. Again, this requires first the training of a truecasing model, which is a list of words and the frequency of their different forms.
The last step is then to truecase the corpus with the model we just trained:
truecase.perl --model MODEL [-b] < in > out
-b means unbuffered: it disables Perl's output buffering so results are written out immediately, which mainly matters when the output is consumed by a pipe.
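To see what truecasing actually changed, compare the first few lines of the tokenised and truecased files (paths as written by the commands above):

head -3 ~/corpus/news-commentary-v8.fr-en.tok.en
head -3 ~/corpus/news-commentary-v8.fr-en.true.en
# typically only the sentence-initial words differ, e.g. "The" becomes "the"
# when the lowercase form is the more frequent one in the training data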
For Chinese, the truecasing step should not be needed, since Chinese has no letter case.
The last preprocessing step is cleaning, which removes empty lines, badly aligned sentence pairs and sentences that are too long.
Finally we clean, limiting sentence length to 80:
~/mosesdecoder/scripts/training/clean-corpus-n.perl \
 ~/corpus/news-commentary-v8.fr-en.true fr en \
 ~/corpus/news-commentary-v8.fr-en.clean 1 80
The arguments are: the input file prefix, the two language extensions (fr and en), the output file prefix, and the minimum and maximum sentence lengths in tokens (1 and 80 here).
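A quick sanity check after cleaning: both sides must still have exactly the same number of lines, because clean-corpus-n.perl drops whole sentence pairs:

wc -l ~/corpus/news-commentary-v8.fr-en.clean.fr \
      ~/corpus/news-commentary-v8.fr-en.clean.en
# the two counts should be identical, and somewhat lower than those of the .true files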
With the corpus finally sorted out, we can move on to a more interesting topic: training the language model.
The most basic job of the language model is to make the output more fluent, more like native text. For that it only needs monolingual text in the target language, not a parallel corpus: the baseline setup simply trains it on the (truecased) English side of the same corpus used for the translation model, although adding extra target-language monolingual data generally helps fluency (see the sketch below).
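If you do have extra target-language monolingual text, the simplest approach is to concatenate it with the English side of the parallel corpus before running the LM steps below (extra-mono.en is a made-up file name for illustration):

# run the same tokenisation/truecasing as above on extra-mono.en first, so the text matches
cat ~/corpus/news-commentary-v8.fr-en.true.en ~/corpus/extra-mono.en \
 > ~/corpus/lm-training.en
# then feed lm-training.en, instead of the .true.en file, to add-start-end.sh below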
Following the baseline system description in the manual, we need the following IRSTLM tools to train the language model.
First, add-start-end.sh:
~/irstlm/bin/add-start-end.sh \
 < ~/corpus/news-commentary-v8.fr-en.true.en \
 > news-commentary-v8.fr-en.sb.en
This wraps every sentence of the corpus with sentence-boundary markers (the <s> ... </s> tag pair).
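A quick way to see what add-start-end.sh does (it reads stdin and writes stdout):

echo "this is a test" | ~/irstlm/bin/add-start-end.sh
# should print something like: <s> this is a test </s>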
Then use build-lm.sh to build the language model itself, producing an intermediate LM file:
export IRSTLM=$HOME/irstlm; ~/irstlm/bin/build-lm.sh \
 -i news-commentary-v8.fr-en.sb.en \
 -t ./tmp -p -s improved-kneser-ney \
 -o news-commentary-v8.fr-en.lm.en
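build-lm.sh gzips its output, so the file that actually appears on disk is news-commentary-v8.fr-en.lm.en.gz. A quick peek confirms that the n-gram counts look sane:

zcat news-commentary-v8.fr-en.lm.en.gz | head -20
# the header of the (iARPA-format) file lists the 1-gram/2-gram/3-gram counts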
Finally, compile it into an ARPA file:
~/irstlm/bin/compile-lm \
 --text \
 news-commentary-v8.fr-en.lm.en.gz \
 news-commentary-v8.fr-en.arpa.en
Note that here you should NOT put "yes" after --text; the manual gets this wrong (at least in my setup). This produces an ARPA file, which you can query and which can be converted into a binary IRSTLM model or a binary KenLM model. I always go the KenLM route here, because for whatever reason the IRSTLM binary model did not work well for me.
You can directly create an IRSTLM binary LM (for faster loading in Moses) by replacing the last command with the following:
~/irstlm/bin/compile-lm news-commentary-v8.fr-en.lm.en.gz \
 news-commentary-v8.fr-en.blm.en
You can transform an arpa LM (*.arpa.en file) into an IRSTLM binary LM as follows:
~/irstlm/bin/compile-lm \
 news-commentary-v8.fr-en.arpa.en \
 news-commentary-v8.fr-en.blm.en
or viceversa, you can transform an IRSTLM binary LM into an arpa LM as follows:
~/irstlm/bin/compile-lm \
 --text yes \
 news-commentary-v8.fr-en.blm.en \
 news-commentary-v8.fr-en.arpa.en
This instead binarises (for faster loading) the *.arpa.en file using KenLM:
~/mosesdecoder/bin/build_binary \
 news-commentary-v8.fr-en.arpa.en \
 news-commentary-v8.fr-en.blm.en
You can check the language model by querying it, e.g.
$ echo "is this an English sentence ?" \
 | ~/mosesdecoder/bin/query news-commentary-v8.fr-en.blm.en
Next comes the actual translation-model training with train-model.perl. First, a look at its parameters:
--root-dir -- root directory, where output files are stored
--corpus -- corpus file name (full pathname), excluding extension
--e -- extension of the English corpus file
--f -- extension of the foreign corpus file
--lm -- language model: <factor>:<order>:<filename> (option can be repeated)
--first-step -- first step in the training process (default 1)
--last-step -- last step in the training process (default 7)
--parts -- break up corpus in smaller parts before GIZA++ training
--corpus-dir -- corpus directory (default $ROOT/corpus)
--lexical-dir -- lexical translation probability directory (default $ROOT/model)
--model-dir -- model directory (default $ROOT/model)
--extract-file -- extraction file (default $ROOT/model/extract)
--giza-f2e -- GIZA++ directory (default $ROOT/giza.$F-$E)
--giza-e2f -- inverse GIZA++ directory (default $ROOT/giza.$E-$F)
--alignment -- heuristic used for word alignment: intersect, union, grow, grow-final, grow-diag, grow-diag-final (default), grow-diag-final-and, srctotgt, tgttosrc
--max-phrase-length -- maximum length of phrases entered into phrase table (default 7)
--giza-option -- additional options for GIZA++ training
--verbose -- prints additional word alignment information
--no-lexical-weighting -- only use conditional probabilities for the phrase table, not lexical weighting
--parts -- prepare data for GIZA++ by running snt2cooc in parts
--direction -- run training step 2 only in direction 1 or 2 (for parallelization)
--reordering -- specifies which reordering models to train using a comma-separated list of config-strings, see FactoredTraining.BuildReorderingModel. (default distance)
--reordering-smooth -- specifies the smoothing constant to be used for training lexicalized reordering models. If the letter "u" follows the constant, smoothing is based on actual counts. (default 0.5)
--alignment-factors --
--translation-factors --
--reordering-factors --
--generation-factors --
--decoding-steps --
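To make these options concrete, here is roughly the command the baseline tutorial uses to kick off translation-model training. Treat it as a sketch rather than gospel: the -lm path assumes you keep the KenLM-binarised blm file from the previous section under ~/lm/, -external-bin-dir must point at wherever your GIZA++/mkcls binaries actually live, and the trailing :8 in the -lm spec selects KenLM as the LM implementation.

nohup nice ~/mosesdecoder/scripts/training/train-model.perl -root-dir train \
 -corpus ~/corpus/news-commentary-v8.fr-en.clean \
 -f fr -e en -alignment grow-diag-final-and -reordering msd-bidirectional-fe \
 -lm 0:3:$HOME/lm/news-commentary-v8.fr-en.blm.en:8 \
 -external-bin-dir ~/mosesdecoder/tools \
 >& training.out &
# when it finishes, the generated moses.ini should be under train/model/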