Desktop machine: i5-6500, GTX 950, Ubuntu 14.04 (beginners should avoid the newest Ubuntu releases; when something breaks, solutions are much harder to find online)
1. Install CUDA 7.5. Download the CUDA installer package from NVIDIA, then install some libraries that may be needed:
sudo apt-get install ppa-purge
sudo apt-add-repository ppa:xorg-edgers
sudo apt-get update
sudo apt-get install subversion automake autoconf libtool g++ zlib1g zlib1g-dev perl build-essential gfortran libatlas-dev libatlas-base-dev gawk \
  freeglut3-dev libx11-dev libxmu-dev libxi-dev libgl1-mesa-glx libglu1-mesa-dev libcheese-gtk23 libcheese7 libgl1-mesa-dri git
Installing this pile of packages may trigger Ubuntu's notorious dependency problems. If that happens, run sudo apt-get -f install and try again; if a package still refuses to install, skip it and move on to the next one.
Remove all pre-installed NVIDIA drivers: sudo apt-get remove --purge nvidia*
(While you still have a working graphical session, download the NVIDIA driver for your own GPU from the official site and keep it handy.)
Edit the module blacklist (this keeps the open-source nouveau driver and the legacy framebuffer drivers from loading, since they conflict with the NVIDIA driver):
sudo vim /etc/modprobe.d/blacklist.conf
Append at the end of the file:
blacklist vga16fb
blacklist nouveau
blacklist rivafb
blacklist nvidiafb
blacklist rivatv
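The entries above can also be appended non-interactively; a minimal sketch (it writes to a temp file by default so you can dry-run it — point BLACKLIST at /etc/modprobe.d/blacklist.conf, with sudo, on the real machine):

```shell
# Append the framebuffer/nouveau blacklist entries in one go.
# BLACKLIST defaults to a scratch file for a safe dry run.
BLACKLIST="${BLACKLIST:-/tmp/blacklist.conf}"
cat >> "$BLACKLIST" <<'EOF'
blacklist vga16fb
blacklist nouveau
blacklist rivafb
blacklist nvidiafb
blacklist rivatv
EOF
# On the real system, rebuild the initramfs afterwards so nouveau
# stays out of the boot image:  sudo update-initramfs -u
grep -c '^blacklist' "$BLACKLIST"
```

Note that blacklisting nouveau usually only takes full effect after `sudo update-initramfs -u` and a reboot.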
2. Switch out of the graphical interface: press Ctrl+Alt+F1 to get a text console.
sudo service lightdm stop   # stop the X server
chmod +x cuda_7.5.18_linux.run   (this is the CUDA 7.5 package downloaded earlier)
sudo ./cuda_7.5.18_linux.run
Accept the default paths where possible; answer yes to any prompt you are unsure about.
At the end, look at the summary:
==========
=Summary =
==========
If it shows driver: installed, the installation succeeded.
Set the CUDA path: at the end of /etc/profile add PATH="$PATH:/usr/local/cuda-7.5/bin"
Then configure the linker search path: at the end of /etc/ld.so.conf add the line /usr/local/cuda-7.5/lib64 (the directory path itself, on its own line).
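The same two one-line edits can be done non-interactively; a sketch that writes to scratch copies first so the result can be inspected (on the real machine, substitute /etc/profile and /etc/ld.so.conf and use sudo):

```shell
# Append the PATH export and the linker path to scratch copies.
PROFILE="${PROFILE:-/tmp/profile.test}"
LDCONF="${LDCONF:-/tmp/ld.so.conf.test}"
echo 'PATH="$PATH:/usr/local/cuda-7.5/bin"' >> "$PROFILE"
echo '/usr/local/cuda-7.5/lib64' >> "$LDCONF"
# Show what was appended:
tail -n 1 "$PROFILE"
tail -n 1 "$LDCONF"
```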
Then press Ctrl+Alt+F2 to open a new console and log in as root.
ldconfig   # rebuild the shared-library cache
Start the X server again: service lightdm start
If you now press Ctrl+Alt+F7 to return to the graphical interface, you will likely find the resolution is very low and the login screen loops endlessly.
This is when you install the NVIDIA driver you downloaded earlier: go back to the console and log in as root.
service lightdm stop
chmod +x NVIDIA-Linux-x86_64-375.66.run
./NVIDIA-Linux-x86_64-375.66.run --no-opengl-files
service lightdm start
Switching back to the graphical interface should now work normally.
Next, build and install Kaldi...
3. Download the Kaldi source code: svn co svn://svn.code.sf.net/p/kaldi/code/trunk kaldi-trunk
Install and configure GCC 4.6 and G++ 4.6 (search online for instructions for your distribution).
Make sure that in a fresh shell, gcc --version and g++ --version both report 4.6.
Change the default shell (Ubuntu's /bin/sh is dash by default, which trips up some Kaldi scripts):
sudo ln -s -f bash /bin/sh
Now build and install Kaldi (essentially the usual configure-and-make sequence):
cd ./kaldi-trunk/tools
make -j8
cd ../src
./configure
make depend -j8
make -j8
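Before kicking off the long src build, it is worth confirming that ./configure actually detected CUDA. A sketch of the check (the `CUDA = true` line and the `--use-cuda`/`--cudatk-dir` flags match Kaldi's src/configure of that era, but verify against your checkout; the demo runs on a stand-in kaldi.mk so it is self-contained):

```shell
# Check whether Kaldi's configure enabled CUDA by inspecting kaldi.mk.
check_cuda_enabled() {
  grep -q '^CUDA = true' "$1"
}

# Stand-in kaldi.mk so this sketch runs anywhere; on a real build,
# call check_cuda_enabled on kaldi-trunk/src/kaldi.mk instead.
printf 'CUDA = true\nCUDATKDIR = /usr/local/cuda-7.5\n' > /tmp/kaldi.mk

if check_cuda_enabled /tmp/kaldi.mk; then
  echo "CUDA enabled"
else
  echo "re-run: ./configure --use-cuda=yes --cudatk-dir=/usr/local/cuda-7.5"
fi
```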
Test CUDA computation:
cd kaldi-trunk/src/cudamatrix/
sudo vim Makefile
Change TESTFILES to BINFILES (so that make all builds the test programs as binaries).
make all -j8
./cu-vector-test
The output should show the GPU model and memory usage.
That completes the Kaldi installation.
Below is the TIMIT training log:
============================================================================
Data & Lexicon & Language Preparation
============================================================================
wav-to-duration scp:train_wav.scp ark,t:train_dur.ark
LOG (wav-to-duration:main():wav-to-duration.cc:68) Printed duration for 3696 audio files.
LOG (wav-to-duration:main():wav-to-duration.cc:70) Mean duration was 3.06336, min and max durations were 0.91525, 7.78881
wav-to-duration scp:dev_wav.scp ark,t:dev_dur.ark
LOG (wav-to-duration:main():wav-to-duration.cc:68) Printed duration for 400 audio files.
LOG (wav-to-duration:main():wav-to-duration.cc:70) Mean duration was 3.08212, min and max durations were 1.09444, 7.43681
wav-to-duration scp:test_wav.scp ark,t:test_dur.ark
LOG (wav-to-duration:main():wav-to-duration.cc:68) Printed duration for 192 audio files.
LOG (wav-to-duration:main():wav-to-duration.cc:70) Mean duration was 3.03646, min and max durations were 1.30562, 6.21444
Data preparation succeeded
/home/aderic/tmp/kaldi-trunk/egs/timit/s5/../../../tools/irstlm/bin//build-lm.sh
Temporary directory stat_20066 does not exist
creating stat_20066
Extracting dictionary from training corpus
Splitting dictionary into 3 lists
Extracting n-gram statistics for each word list
Important: dictionary must be ordered according to order of appearance of words in data
used to generate n-gram blocks, so that sub language model blocks results ordered too
dict.000
dict.001
dict.002
$bin/ngt -i="$inpfile" -n=$order -gooout=y -o="$gzip -c > $tmpdir/ngram.${sdict}.gz" -fd="$tmpdir/$sdict" $dictionary -iknstat="$tmpdir/ikn.stat.$sdict" >> $logfile 2>&1
Estimating language models for each word list
dict.000
dict.001
dict.002
$scr/build-sublm.pl $verbose $prune $smoothing --size $order --ngrams "$gunzip -c $tmpdir/ngram.${sdict}.gz" -sublm $tmpdir/lm.$sdict >> $logfile 2>&1
$scr/build-sublm.pl $verbose $prune $smoothing --size $order --ngrams "$gunzip -c $tmpdir/ngram.${sdict}.gz" -sublm $tmpdir/lm.$sdict >> $logfile 2>&1
Merging language models into data/local/lm_tmp/lm_phone_bg.ilm.gz
Cleaning temporary directory stat_20066
Removing temporary directory stat_20066
inpfile: data/local/lm_tmp/lm_phone_bg.ilm.gz
outfile: /dev/stdout
loading up to the LM level 1000 (if any)
dub: 10000000
Language Model Type of data/local/lm_tmp/lm_phone_bg.ilm.gz is 1
Language Model Type is 1
iARPA
loadtxt_ram()
1-grams: reading 51 entries
done level 1
2-grams: reading 1694 entries
done level 2
done
OOV code is 50
OOV code is 50
OOV code is 50
Saving in txt format to /dev/stdout
savetxt: /dev/stdout
save: 51 1-grams
save: 1694 2-grams
done
Dictionary & language model preparation succeeded
Checking data/local/dict/silence_phones.txt ...
--> reading data/local/dict/silence_phones.txt
--> data/local/dict/silence_phones.txt is OK
Checking data/local/dict/optional_silence.txt ...
--> reading data/local/dict/optional_silence.txt
--> data/local/dict/optional_silence.txt is OK
Checking data/local/dict/nonsilence_phones.txt ...
--> reading data/local/dict/nonsilence_phones.txt
--> data/local/dict/nonsilence_phones.txt is OK
Checking disjoint: silence_phones.txt, nonsilence_phones.txt
--> disjoint property is OK.
Checking data/local/dict/lexicon.txt
--> reading data/local/dict/lexicon.txt
--> data/local/dict/lexicon.txt is OK
Checking data/local/dict/lexiconp.txt
--> reading data/local/dict/lexiconp.txt
--> data/local/dict/lexiconp.txt is OK
Checking lexicon pair data/local/dict/lexicon.txt and data/local/dict/lexiconp.txt
--> lexicon pair data/local/dict/lexicon.txt and data/local/dict/lexiconp.txt match
Checking data/local/dict/extra_questions.txt ...
--> reading data/local/dict/extra_questions.txt
--> data/local/dict/extra_questions.txt is OK
--> SUCCESS [validating dictionary directory data/local/dict]
fstaddselfloops 'echo 49 |' 'echo 49 |'
prepare_lang.sh: validating output directory
Checking data/lang/phones.txt ...
--> data/lang/phones.txt is OK
Checking words.txt: #0 ...
--> data/lang/words.txt has "#0"
--> data/lang/words.txt is OK
Checking disjoint: silence.txt, nonsilence.txt, disambig.txt ...
--> silence.txt and nonsilence.txt are disjoint
--> silence.txt and disambig.txt are disjoint
--> disambig.txt and nonsilence.txt are disjoint
--> disjoint property is OK
Checking sumation: silence.txt, nonsilence.txt, disambig.txt ...
--> summation property is OK
Checking data/lang/phones/context_indep.{txt, int, csl} ...
--> 1 entry/entries in data/lang/phones/context_indep.txt
--> data/lang/phones/context_indep.int corresponds to data/lang/phones/context_indep.txt
--> data/lang/phones/context_indep.csl corresponds to data/lang/phones/context_indep.txt
--> data/lang/phones/context_indep.{txt, int, csl} are OK
Checking data/lang/phones/disambig.{txt, int, csl} ...
--> 2 entry/entries in data/lang/phones/disambig.txt
--> data/lang/phones/disambig.int corresponds to data/lang/phones/disambig.txt
--> data/lang/phones/disambig.csl corresponds to data/lang/phones/disambig.txt
--> data/lang/phones/disambig.{txt, int, csl} are OK
Checking data/lang/phones/nonsilence.{txt, int, csl} ...
--> 47 entry/entries in data/lang/phones/nonsilence.txt
--> data/lang/phones/nonsilence.int corresponds to data/lang/phones/nonsilence.txt
--> data/lang/phones/nonsilence.csl corresponds to data/lang/phones/nonsilence.txt
--> data/lang/phones/nonsilence.{txt, int, csl} are OK
Checking data/lang/phones/silence.{txt, int, csl} ...
--> 1 entry/entries in data/lang/phones/silence.txt
--> data/lang/phones/silence.int corresponds to data/lang/phones/silence.txt
--> data/lang/phones/silence.csl corresponds to data/lang/phones/silence.txt
--> data/lang/phones/silence.{txt, int, csl} are OK
Checking data/lang/phones/optional_silence.{txt, int, csl} ...
--> 1 entry/entries in data/lang/phones/optional_silence.txt
--> data/lang/phones/optional_silence.int corresponds to data/lang/phones/optional_silence.txt
--> data/lang/phones/optional_silence.csl corresponds to data/lang/phones/optional_silence.txt
--> data/lang/phones/optional_silence.{txt, int, csl} are OK
Checking data/lang/phones/roots.{txt, int} ...
--> 48 entry/entries in data/lang/phones/roots.txt
--> data/lang/phones/roots.int corresponds to data/lang/phones/roots.txt
--> data/lang/phones/roots.{txt, int} are OK
Checking data/lang/phones/sets.{txt, int} ...
--> 48 entry/entries in data/lang/phones/sets.txt
--> data/lang/phones/sets.int corresponds to data/lang/phones/sets.txt
--> data/lang/phones/sets.{txt, int} are OK
Checking data/lang/phones/extra_questions.{txt, int} ...
--> 2 entry/entries in data/lang/phones/extra_questions.txt
--> data/lang/phones/extra_questions.int corresponds to data/lang/phones/extra_questions.txt
--> data/lang/phones/extra_questions.{txt, int} are OK
Checking optional_silence.txt ...
--> reading data/lang/phones/optional_silence.txt
--> data/lang/phones/optional_silence.txt is OK
Checking disambiguation symbols: #0 and #1
--> data/lang/phones/disambig.txt has "#0" and "#1"
--> data/lang/phones/disambig.txt is OK
Checking topo ...
--> data/lang/topo's nonsilence section is OK
--> data/lang/topo's silence section is OK
--> data/lang/topo is OK
Checking data/lang/oov.{txt, int} ...
--> 1 entry/entries in data/lang/oov.txt
--> data/lang/oov.int corresponds to data/lang/oov.txt
--> data/lang/oov.{txt, int} are OK
--> data/lang/L.fst is olabel sorted
--> data/lang/L_disambig.fst is olabel sorted
--> SUCCESS [validating lang directory data/lang]
Preparing train, dev and test data
utils/validate_data_dir.sh: Successfully validated data-directory data/train
utils/validate_data_dir.sh: Successfully validated data-directory data/dev
utils/validate_data_dir.sh: Successfully validated data-directory data/test
Preparing language models for test
arpa2fst -
Processing 1-grams
Processing 2-grams
Connected 0 states without outgoing arcs.
fstisstochastic data/lang_test_bg/G.fst
0.0003667 -0.0763019
Checking data/lang_test_bg/phones.txt ...
--> data/lang_test_bg/phones.txt is OK
Checking words.txt: #0 ...
--> data/lang_test_bg/words.txt has "#0"
--> data/lang_test_bg/words.txt is OK
Checking disjoint: silence.txt, nonsilence.txt, disambig.txt ...
--> silence.txt and nonsilence.txt are disjoint
--> silence.txt and disambig.txt are disjoint
--> disambig.txt and nonsilence.txt are disjoint
--> disjoint property is OK
Checking sumation: silence.txt, nonsilence.txt, disambig.txt ...
--> summation property is OK
Checking data/lang_test_bg/phones/context_indep.{txt, int, csl} ...
--> 1 entry/entries in data/lang_test_bg/phones/context_indep.txt
--> data/lang_test_bg/phones/context_indep.int corresponds to data/lang_test_bg/phones/context_indep.txt
--> data/lang_test_bg/phones/context_indep.csl corresponds to data/lang_test_bg/phones/context_indep.txt
--> data/lang_test_bg/phones/context_indep.{txt, int, csl} are OK
Checking data/lang_test_bg/phones/disambig.{txt, int, csl} ...
--> 2 entry/entries in data/lang_test_bg/phones/disambig.txt
--> data/lang_test_bg/phones/disambig.int corresponds to data/lang_test_bg/phones/disambig.txt
--> data/lang_test_bg/phones/disambig.csl corresponds to data/lang_test_bg/phones/disambig.txt
--> data/lang_test_bg/phones/disambig.{txt, int, csl} are OK
Checking data/lang_test_bg/phones/nonsilence.{txt, int, csl} ...
--> 47 entry/entries in data/lang_test_bg/phones/nonsilence.txt
--> data/lang_test_bg/phones/nonsilence.int corresponds to data/lang_test_bg/phones/nonsilence.txt
--> data/lang_test_bg/phones/nonsilence.csl corresponds to data/lang_test_bg/phones/nonsilence.txt
--> data/lang_test_bg/phones/nonsilence.{txt, int, csl} are OK
Checking data/lang_test_bg/phones/silence.{txt, int, csl} ...
--> 1 entry/entries in data/lang_test_bg/phones/silence.txt
--> data/lang_test_bg/phones/silence.int corresponds to data/lang_test_bg/phones/silence.txt
--> data/lang_test_bg/phones/silence.csl corresponds to data/lang_test_bg/phones/silence.txt
--> data/lang_test_bg/phones/silence.{txt, int, csl} are OK
Checking data/lang_test_bg/phones/optional_silence.{txt, int, csl} ...
--> 1 entry/entries in data/lang_test_bg/phones/optional_silence.txt
--> data/lang_test_bg/phones/optional_silence.int corresponds to data/lang_test_bg/phones/optional_silence.txt
--> data/lang_test_bg/phones/optional_silence.csl corresponds to data/lang_test_bg/phones/optional_silence.txt
--> data/lang_test_bg/phones/optional_silence.{txt, int, csl} are OK
Checking data/lang_test_bg/phones/roots.{txt, int} ...
--> 48 entry/entries in data/lang_test_bg/phones/roots.txt
--> data/lang_test_bg/phones/roots.int corresponds to data/lang_test_bg/phones/roots.txt
--> data/lang_test_bg/phones/roots.{txt, int} are OK
Checking data/lang_test_bg/phones/sets.{txt, int} ...
--> 48 entry/entries in data/lang_test_bg/phones/sets.txt
--> data/lang_test_bg/phones/sets.int corresponds to data/lang_test_bg/phones/sets.txt
--> data/lang_test_bg/phones/sets.{txt, int} are OK
Checking data/lang_test_bg/phones/extra_questions.{txt, int} ...
--> 2 entry/entries in data/lang_test_bg/phones/extra_questions.txt
--> data/lang_test_bg/phones/extra_questions.int corresponds to data/lang_test_bg/phones/extra_questions.txt
--> data/lang_test_bg/phones/extra_questions.{txt, int} are OK
Checking optional_silence.txt ...
--> reading data/lang_test_bg/phones/optional_silence.txt
--> data/lang_test_bg/phones/optional_silence.txt is OK
Checking disambiguation symbols: #0 and #1
--> data/lang_test_bg/phones/disambig.txt has "#0" and "#1"
--> data/lang_test_bg/phones/disambig.txt is OK
Checking topo ...
--> data/lang_test_bg/topo's nonsilence section is OK
--> data/lang_test_bg/topo's silence section is OK
--> data/lang_test_bg/topo is OK
Checking data/lang_test_bg/oov.{txt, int} ...
--> 1 entry/entries in data/lang_test_bg/oov.txt
--> data/lang_test_bg/oov.int corresponds to data/lang_test_bg/oov.txt
--> data/lang_test_bg/oov.{txt, int} are OK
--> data/lang_test_bg/L.fst is olabel sorted
--> data/lang_test_bg/L_disambig.fst is olabel sorted
--> data/lang_test_bg/G.fst is ilabel sorted
--> data/lang_test_bg/G.fst has 50 states
fstdeterminizestar data/lang_test_bg/G.fst /dev/null
--> data/lang_test_bg/G.fst is determinizable
--> G.fst did not contain cycles with only disambig symbols or epsilon on the input, and did not contain
the forbidden symbols or (if present in vocab) on the input or output.
--> Testing determinizability of L_disambig . G
fsttablecompose data/lang_test_bg/L_disambig.fst data/lang_test_bg/G.fst
fstdeterminizestar
--> L_disambig . G is determinizable
--> SUCCESS [validating lang directory data/lang_test_bg]
Succeeded in formatting data.
============================================================================
MFCC Feature Extration & CMVN for Training and Test set
============================================================================
steps/make_mfcc.sh --cmd run.pl --nj 10 data/train exp/make_mfcc/train mfcc
steps/make_mfcc.sh: moving data/train/feats.scp to data/train/.backup
utils/validate_data_dir.sh: Successfully validated data-directory data/train
steps/make_mfcc.sh: [info]: no segments file exists: assuming wav.scp indexed by utterance.
Succeeded creating MFCC features for train
steps/compute_cmvn_stats.sh data/train exp/make_mfcc/train mfcc
Succeeded creating CMVN stats for train
steps/make_mfcc.sh --cmd run.pl --nj 10 data/dev exp/make_mfcc/dev mfcc
steps/make_mfcc.sh: moving data/dev/feats.scp to data/dev/.backup
utils/validate_data_dir.sh: Successfully validated data-directory data/dev
steps/make_mfcc.sh: [info]: no segments file exists: assuming wav.scp indexed by utterance.
Succeeded creating MFCC features for dev
steps/compute_cmvn_stats.sh data/dev exp/make_mfcc/dev mfcc
Succeeded creating CMVN stats for dev
steps/make_mfcc.sh --cmd run.pl --nj 10 data/test exp/make_mfcc/test mfcc
steps/make_mfcc.sh: moving data/test/feats.scp to data/test/.backup
utils/validate_data_dir.sh: Successfully validated data-directory data/test
steps/make_mfcc.sh: [info]: no segments file exists: assuming wav.scp indexed by utterance.
Succeeded creating MFCC features for test
steps/compute_cmvn_stats.sh data/test exp/make_mfcc/test mfcc
Succeeded creating CMVN stats for test
============================================================================
MonoPhone Training & Decoding
============================================================================
steps/train_mono.sh --nj 30 --cmd run.pl data/train data/lang exp/mono
steps/train_mono.sh: Initializing monophone system.
steps/train_mono.sh: Compiling training graphs
steps/train_mono.sh: Aligning data equally (pass 0)
steps/train_mono.sh: Pass 1
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 2
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 3
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 4
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 5
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 6
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 7
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 8
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 9
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 10
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 11
steps/train_mono.sh: Pass 12
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 13
steps/train_mono.sh: Pass 14
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 15
steps/train_mono.sh: Pass 16
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 17
steps/train_mono.sh: Pass 18
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 19
steps/train_mono.sh: Pass 20
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 21
steps/train_mono.sh: Pass 22
steps/train_mono.sh: Pass 23
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 24
steps/train_mono.sh: Pass 25
steps/train_mono.sh: Pass 26
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 27
steps/train_mono.sh: Pass 28
steps/train_mono.sh: Pass 29
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 30
steps/train_mono.sh: Pass 31
steps/train_mono.sh: Pass 32
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 33
steps/train_mono.sh: Pass 34
steps/train_mono.sh: Pass 35
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 36
steps/train_mono.sh: Pass 37
steps/train_mono.sh: Pass 38
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 39
2 warnings in exp/mono/log/align.*.*.log
Done
fstdeterminizestar --use-log=true
fsttablecompose data/lang_test_bg/L_disambig.fst data/lang_test_bg/G.fst
fstminimizeencoded
fstisstochastic data/lang_test_bg/tmp/LG.fst
0.000361025 -0.0763603
[info]: LG not stochastic.
fstcomposecontext --context-size=1 --central-position=0 --read-disambig-syms=data/lang_test_bg/phones/disambig.int --write-disambig-syms=data/lang_test_bg/tmp/disambig_ilabels_1_0.int data/lang_test_bg/tmp/ilabels_1_0
fstisstochastic data/lang_test_bg/tmp/CLG_1_0.fst
0.000360913 -0.0763603
[info]: CLG not stochastic.
make-h-transducer --disambig-syms-out=exp/mono/graph/disambig_tid.int --transition-scale=1.0 data/lang_test_bg/tmp/ilabels_1_0 exp/mono/tree exp/mono/final.mdl
fsttablecompose exp/mono/graph/Ha.fst data/lang_test_bg/tmp/CLG_1_0.fst
fstdeterminizestar --use-log=true
fstminimizeencoded
fstrmsymbols exp/mono/graph/disambig_tid.int
fstrmepslocal
fstisstochastic exp/mono/graph/HCLGa.fst
0.00039086 -0.0758928
HCLGa is not stochastic
add-self-loops --self-loop-scale=0.1 --reorder=true exp/mono/final.mdl
steps/decode.sh --nj 5 --cmd run.pl exp/mono/graph data/dev exp/mono/decode_dev
decode.sh: feature type is delta
steps/decode.sh --nj 5 --cmd run.pl exp/mono/graph data/test exp/mono/decode_test
decode.sh: feature type is delta
============================================================================
tri1 : Deltas + Delta-Deltas Training & Decoding
============================================================================
steps/align_si.sh --boost-silence 1.25 --nj 30 --cmd run.pl data/train data/lang exp/mono exp/mono_ali
steps/align_si.sh: feature type is delta
steps/align_si.sh: aligning data in data/train using model from exp/mono, putting alignments in exp/mono_ali
steps/align_si.sh: done aligning data.
steps/train_deltas.sh --cmd run.pl 2500 15000 data/train data/lang exp/mono_ali exp/tri1
steps/train_deltas.sh: accumulating tree stats
steps/train_deltas.sh: getting questions for tree-building, via clustering
steps/train_deltas.sh: building the tree
steps/train_deltas.sh: converting alignments from exp/mono_ali to use current tree
steps/train_deltas.sh: compiling graphs of transcripts
steps/train_deltas.sh: training pass 1
steps/train_deltas.sh: training pass 2
steps/train_deltas.sh: training pass 3
steps/train_deltas.sh: training pass 4
steps/train_deltas.sh: training pass 5
steps/train_deltas.sh: training pass 6
steps/train_deltas.sh: training pass 7
steps/train_deltas.sh: training pass 8
steps/train_deltas.sh: training pass 9
steps/train_deltas.sh: training pass 10
steps/train_deltas.sh: aligning data
steps/train_deltas.sh: training pass 11
steps/train_deltas.sh: training pass 12
steps/train_deltas.sh: training pass 13
steps/train_deltas.sh: training pass 14
steps/train_deltas.sh: training pass 15
steps/train_deltas.sh: training pass 16
steps/train_deltas.sh: training pass 17
steps/train_deltas.sh: training pass 18
steps/train_deltas.sh: training pass 19
steps/train_deltas.sh: training pass 20
steps/train_deltas.sh: aligning data
steps/train_deltas.sh: training pass 21
steps/train_deltas.sh: training pass 22
steps/train_deltas.sh: training pass 23
steps/train_deltas.sh: training pass 24
steps/train_deltas.sh: training pass 25
steps/train_deltas.sh: training pass 26
steps/train_deltas.sh: training pass 27
steps/train_deltas.sh: training pass 28
steps/train_deltas.sh: training pass 29
steps/train_deltas.sh: training pass 30
steps/train_deltas.sh: aligning data
steps/train_deltas.sh: training pass 31
steps/train_deltas.sh: training pass 32
steps/train_deltas.sh: training pass 33
steps/train_deltas.sh: training pass 34
1 warnings in exp/tri1/log/compile_questions.log
69 warnings in exp/tri1/log/init_model.log
43 warnings in exp/tri1/log/update.*.log
steps/train_deltas.sh: Done training system with delta+delta-delta features in exp/tri1
fstcomposecontext --context-size=3 --central-position=1 --read-disambig-syms=data/lang_test_bg/phones/disambig.int --write-disambig-syms=data/lang_test_bg/tmp/disambig_ilabels_3_1.int data/lang_test_bg/tmp/ilabels_3_1
fstisstochastic data/lang_test_bg/tmp/CLG_3_1.fst
0.000361405 -0.0763602
[info]: CLG not stochastic.
make-h-transducer --disambig-syms-out=exp/tri1/graph/disambig_tid.int --transition-scale=1.0 data/lang_test_bg/tmp/ilabels_3_1 exp/tri1/tree exp/tri1/final.mdl
fsttablecompose exp/tri1/graph/Ha.fst data/lang_test_bg/tmp/CLG_3_1.fst
fstdeterminizestar --use-log=true
fstrmsymbols exp/tri1/graph/disambig_tid.int
fstrmepslocal
fstminimizeencoded
fstisstochastic exp/tri1/graph/HCLGa.fst
0.000847995 -0.0761719
HCLGa is not stochastic
add-self-loops --self-loop-scale=0.1 --reorder=true exp/tri1/final.mdl
steps/decode.sh --nj 5 --cmd run.pl exp/tri1/graph data/dev exp/tri1/decode_dev
decode.sh: feature type is delta
steps/decode.sh --nj 5 --cmd run.pl exp/tri1/graph data/test exp/tri1/decode_test
decode.sh: feature type is delta
============================================================================
tri2 : LDA + MLLT Training & Decoding
============================================================================
steps/align_si.sh --nj 30 --cmd run.pl data/train data/lang exp/tri1 exp/tri1_ali
steps/align_si.sh: feature type is delta
steps/align_si.sh: aligning data in data/train using model from exp/tri1, putting alignments in exp/tri1_ali
steps/align_si.sh: done aligning data.
steps/train_lda_mllt.sh --cmd run.pl --splice-opts --left-context=3 --right-context=3 2500 15000 data/train data/lang exp/tri1_ali exp/tri2
Accumulating LDA statistics.
rm: cannot remove 'exp/tri2/lda.*.acc': No such file or directory
Accumulating tree stats
Getting questions for tree clustering.
Building the tree
steps/train_lda_mllt.sh: Initializing the model
Converting alignments from exp/tri1_ali to use current tree
Compiling graphs of transcripts
Training pass 1
Training pass 2
Estimating MLLT
Training pass 3
Training pass 4
Estimating MLLT
Training pass 5
Training pass 6
Estimating MLLT
Training pass 7
Training pass 8
Training pass 9
Training pass 10
Aligning data
Training pass 11
Training pass 12
Estimating MLLT
Training pass 13
Training pass 14
Training pass 15
Training pass 16
Training pass 17
Training pass 18
Training pass 19
Training pass 20
Aligning data
Training pass 21
Training pass 22
Training pass 23
Training pass 24
Training pass 25
Training pass 26
Training pass 27
Training pass 28
Training pass 29
Training pass 30
Aligning data
Training pass 31
Training pass 32
Training pass 33
Training pass 34
145 warnings in exp/tri2/log/update.*.log
96 warnings in exp/tri2/log/init_model.log
1 warnings in exp/tri2/log/compile_questions.log
Done training system with LDA+MLLT features in exp/tri2
make-h-transducer --disambig-syms-out=exp/tri2/graph/disambig_tid.int --transition-scale=1.0 data/lang_test_bg/tmp/ilabels_3_1 exp/tri2/tree exp/tri2/final.mdl
fsttablecompose exp/tri2/graph/Ha.fst data/lang_test_bg/tmp/CLG_3_1.fst
fstdeterminizestar --use-log=true
fstrmepslocal
fstrmsymbols exp/tri2/graph/disambig_tid.int
fstminimizeencoded
fstisstochastic exp/tri2/graph/HCLGa.fst
0.000844985 -0.0761719
HCLGa is not stochastic
add-self-loops --self-loop-scale=0.1 --reorder=true exp/tri2/final.mdl
steps/decode.sh --nj 5 --cmd run.pl exp/tri2/graph data/dev exp/tri2/decode_dev
decode.sh: feature type is lda
steps/decode.sh --nj 5 --cmd run.pl exp/tri2/graph data/test exp/tri2/decode_test
decode.sh: feature type is lda
============================================================================
tri3 : LDA + MLLT + SAT Training & Decoding
============================================================================
steps/align_si.sh --nj 30 --cmd run.pl --use-graphs true data/train data/lang exp/tri2 exp/tri2_ali
steps/align_si.sh: feature type is lda
steps/align_si.sh: aligning data in data/train using model from exp/tri2, putting alignments in exp/tri2_ali
steps/align_si.sh: done aligning data.
steps/train_sat.sh --cmd run.pl 2500 15000 data/train data/lang exp/tri2_ali exp/tri3
steps/train_sat.sh: feature type is lda
steps/train_sat.sh: obtaining initial fMLLR transforms since not present in exp/tri2_ali
steps/train_sat.sh: Accumulating tree stats
steps/train_sat.sh: Getting questions for tree clustering.
steps/train_sat.sh: Building the tree
steps/train_sat.sh: Initializing the model
steps/train_sat.sh: Converting alignments from exp/tri2_ali to use current tree
steps/train_sat.sh: Compiling graphs of transcripts
Pass 1
Pass 2
Estimating fMLLR transforms
Pass 3
Pass 4
Estimating fMLLR transforms
Pass 5
Pass 6
Estimating fMLLR transforms
Pass 7
Pass 8
Pass 9
Pass 10
Aligning data
Pass 11
Pass 12
Estimating fMLLR transforms
Pass 13
Pass 14
Pass 15
Pass 16
Pass 17
Pass 18
Pass 19
Pass 20
Aligning data
Pass 21
Pass 22
Pass 23
Pass 24
Pass 25
Pass 26
Pass 27
Pass 28
Pass 29
Pass 30
Aligning data
Pass 31
Pass 32
Pass 33
Pass 34
1 warnings in exp/tri3/log/est_alimdl.log
14 warnings in exp/tri3/log/update.*.log
1 warnings in exp/tri3/log/compile_questions.log
45 warnings in exp/tri3/log/init_model.log
steps/train_sat.sh: Likelihood evolution:
-50.2793 -49.3994 -49.194 -48.9933 -48.3106 -47.5621 -47.1024 -46.8379 -46.609 -46.0832 -45.8164 -45.4892 -45.3029 -45.1602 -45.0367 -44.9235 -44.8142 -44.7061 -44.6028 -44.4413 -44.3024 -44.2116 -44.1279 -44.0478 -43.9695 -43.8932 -43.8189 -43.7456 -43.6737 -43.5781 -43.5036 -43.4785 -43.4626 -43.4505
Done
make-h-transducer --disambig-syms-out=exp/tri3/graph/disambig_tid.int --transition-scale=1.0 data/lang_test_bg/tmp/ilabels_3_1 exp/tri3/tree exp/tri3/final.mdl
fstrmepslocal
fstminimizeencoded
fstdeterminizestar --use-log=true
fstrmsymbols exp/tri3/graph/disambig_tid.int
fsttablecompose exp/tri3/graph/Ha.fst data/lang_test_bg/tmp/CLG_3_1.fst
fstisstochastic exp/tri3/graph/HCLGa.fst
0.000848651 -0.0488869
HCLGa is not stochastic
add-self-loops --self-loop-scale=0.1 --reorder=true exp/tri3/final.mdl
steps/decode_fmllr.sh --nj 5 --cmd run.pl exp/tri3/graph data/dev exp/tri3/decode_dev
steps/decode.sh --scoring-opts --num-threads 1 --skip-scoring false --acwt 0.083333 --nj 5 --cmd run.pl --beam 10.0 --model exp/tri3/final.alimdl --max-active 2000 exp/tri3/graph data/dev exp/tri3/decode_dev.si
decode.sh: feature type is lda
steps/decode_fmllr.sh: feature type is lda
steps/decode_fmllr.sh: getting first-pass fMLLR transforms.
steps/decode_fmllr.sh: doing main lattice generation phase
steps/decode_fmllr.sh: estimating fMLLR transforms a second time.
steps/decode_fmllr.sh: doing a final pass of acoustic rescoring.
steps/decode_fmllr.sh --nj 5 --cmd run.pl exp/tri3/graph data/test exp/tri3/decode_test
steps/decode.sh --scoring-opts --num-threads 1 --skip-scoring false --acwt 0.083333 --nj 5 --cmd run.pl --beam 10.0 --model exp/tri3/final.alimdl --max-active 2000 exp/tri3/graph data/test exp/tri3/decode_test.si
decode.sh: feature type is lda
steps/decode_fmllr.sh: feature type is lda
steps/decode_fmllr.sh: getting first-pass fMLLR transforms.
steps/decode_fmllr.sh: doing main lattice generation phase
steps/decode_fmllr.sh: estimating fMLLR transforms a second time.
steps/decode_fmllr.sh: doing a final pass of acoustic rescoring.
============================================================================
SGMM2 Training & Decoding
============================================================================
steps/align_fmllr.sh --nj 30 --cmd run.pl data/train data/lang exp/tri3 exp/tri3_ali
steps/align_fmllr.sh: feature type is lda
steps/align_fmllr.sh: compiling training graphs
steps/align_fmllr.sh: aligning data in data/train using exp/tri3/final.alimdl and speaker-independent features.
steps/align_fmllr.sh: computing fMLLR transforms
steps/align_fmllr.sh: doing final alignment.
steps/align_fmllr.sh: done aligning data.
steps/train_ubm.sh --cmd run.pl 400 data/train data/lang exp/tri3_ali exp/ubm4
steps/train_ubm.sh: feature type is lda
steps/train_ubm.sh: using transforms from exp/tri3_ali
steps/train_ubm.sh: clustering model exp/tri3_ali/final.mdl to get initial UBM
steps/train_ubm.sh: doing Gaussian selection
Pass 0
Pass 1
Pass 2
steps/train_sgmm2.sh --cmd run.pl 7000 9000 data/train data/lang exp/tri3_ali exp/ubm4/final.ubm exp/sgmm2_4
steps/train_sgmm2.sh: feature type is lda
steps/train_sgmm2.sh: using transforms from exp/tri3_ali
steps/train_sgmm2.sh: accumulating tree stats
steps/train_sgmm2.sh: Getting questions for tree clustering.
steps/train_sgmm2.sh: Building the tree
steps/train_sgmm2.sh: Initializing the model
steps/train_sgmm2.sh: doing Gaussian selection
steps/train_sgmm2.sh: compiling training graphs
steps/train_sgmm2.sh: converting alignments
steps/train_sgmm2.sh: training pass 0 ...
steps/train_sgmm2.sh: training pass 1 ...
steps/train_sgmm2.sh: training pass 2 ...
steps/train_sgmm2.sh: training pass 3 ...
steps/train_sgmm2.sh: training pass 4 ...
steps/train_sgmm2.sh: training pass 5 ...
steps/train_sgmm2.sh: re-aligning data
steps/train_sgmm2.sh: training pass 6 ...
steps/train_sgmm2.sh: training pass 7 ...
steps/train_sgmm2.sh: training pass 8 ...
steps/train_sgmm2.sh: training pass 9 ...
steps/train_sgmm2.sh: training pass 10 ...
steps/train_sgmm2.sh: re-aligning data
steps/train_sgmm2.sh: training pass 11 ...
steps/train_sgmm2.sh: training pass 12 ...
steps/train_sgmm2.sh: training pass 13 ...
steps/train_sgmm2.sh: training pass 14 ...
steps/train_sgmm2.sh: training pass 15 ...
steps/train_sgmm2.sh: re-aligning data
steps/train_sgmm2.sh: training pass 16 ...
steps/train_sgmm2.sh: training pass 17 ...
steps/train_sgmm2.sh: training pass 18 ...
steps/train_sgmm2.sh: training pass 19 ...
steps/train_sgmm2.sh: training pass 20 ...
steps/train_sgmm2.sh: training pass 21 ...
steps/train_sgmm2.sh: training pass 22 ...
steps/train_sgmm2.sh: training pass 23 ...
steps/train_sgmm2.sh: training pass 24 ...
steps/train_sgmm2.sh: building alignment model (pass 25)
steps/train_sgmm2.sh: building alignment model (pass 26)
steps/train_sgmm2.sh: building alignment model (pass 27)
223 warnings in exp/sgmm2_4/log/update_ali.*.log
1905 warnings in exp/sgmm2_4/log/update.*.log
1 warnings in exp/sgmm2_4/log/compile_questions.log
Done
make-h-transducer --disambig-syms-out=exp/sgmm2_4/graph/disambig_tid.int --transition-scale=1.0 data/lang_test_bg/tmp/ilabels_3_1 exp/sgmm2_4/tree exp/sgmm2_4/final.mdl
fsttablecompose exp/sgmm2_4/graph/Ha.fst data/lang_test_bg/tmp/CLG_3_1.fst
fstdeterminizestar --use-log=true
fstrmsymbols exp/sgmm2_4/graph/disambig_tid.int
fstrmepslocal
fstminimizeencoded
fstisstochastic exp/sgmm2_4/graph/HCLGa.fst
0.000836893 -0.0766049
HCLGa is not stochastic
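The two numbers printed by `fstisstochastic` above are (as far as I understand it; treat the exact semantics as an assumption) the minimum and maximum deviation, in the log semiring, of each state's outgoing weight sum from 1. The FST counts as stochastic only when both deviations fall inside a small tolerance; for `HCLGa.fst` this check routinely fails at this stage of graph compilation and is generally harmless. A minimal sketch of the decision:

```python
# Hypothetical re-implementation of the stochasticity verdict, NOT Kaldi code:
# the FST is "stochastic" when both printed deviations are within a tolerance.
def is_stochastic(min_dev: float, max_dev: float, delta: float = 0.01) -> bool:
    return abs(min_dev) <= delta and abs(max_dev) <= delta

# Values printed above for HCLGa.fst:
print(is_stochastic(0.000836893, -0.0766049))  # False -> "HCLGa is not stochastic"
```

By contrast, the `LG.fst` check later in the log prints `1.2886e-05 1.2886e-05`, which passes under the same rule.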
add-self-loops --self-loop-scale=0.1 --reorder=true exp/sgmm2_4/final.mdl
steps/decode_sgmm2.sh --nj 5 --cmd run.pl --transform-dir exp/tri3/decode_dev exp/sgmm2_4/graph data/dev exp/sgmm2_4/decode_dev
steps/decode_sgmm2.sh: feature type is lda
steps/decode_sgmm2.sh: using transforms from exp/tri3/decode_dev
steps/decode_sgmm2.sh --nj 5 --cmd run.pl --transform-dir exp/tri3/decode_test exp/sgmm2_4/graph data/test exp/sgmm2_4/decode_test
steps/decode_sgmm2.sh: feature type is lda
steps/decode_sgmm2.sh: using transforms from exp/tri3/decode_test
============================================================================
MMI + SGMM2 Training & Decoding
============================================================================
steps/align_sgmm2.sh --nj 30 --cmd run.pl --transform-dir exp/tri3_ali --use-graphs true --use-gselect true data/train data/lang exp/sgmm2_4 exp/sgmm2_4_ali
steps/align_sgmm2.sh: feature type is lda
steps/align_sgmm2.sh: using transforms from exp/tri3_ali
steps/align_sgmm2.sh: aligning data in data/train using model exp/sgmm2_4/final.alimdl
steps/align_sgmm2.sh: computing speaker vectors (1st pass)
steps/align_sgmm2.sh: computing speaker vectors (2nd pass)
steps/align_sgmm2.sh: doing final alignment.
steps/align_sgmm2.sh: done aligning data.
steps/make_denlats_sgmm2.sh --nj 30 --sub-split 30 --acwt 0.2 --lattice-beam 10.0 --beam 18.0 --cmd run.pl --transform-dir exp/tri3_ali data/train data/lang exp/sgmm2_4_ali exp/sgmm2_4_denlats
steps/make_denlats_sgmm2.sh: Making unigram grammar FST in exp/sgmm2_4_denlats/lang
steps/make_denlats_sgmm2.sh: Compiling decoding graph in exp/sgmm2_4_denlats/dengraph
fsttablecompose exp/sgmm2_4_denlats/lang/L_disambig.fst exp/sgmm2_4_denlats/lang/G.fst
fstdeterminizestar --use-log=true
fstminimizeencoded
fstisstochastic exp/sgmm2_4_denlats/lang/tmp/LG.fst
1.2886e-05 1.2886e-05
fstcomposecontext --context-size=3 --central-position=1 --read-disambig-syms=exp/sgmm2_4_denlats/lang/phones/disambig.int --write-disambig-syms=exp/sgmm2_4_denlats/lang/tmp/disambig_ilabels_3_1.int exp/sgmm2_4_denlats/lang/tmp/ilabels_3_1
fstisstochastic exp/sgmm2_4_denlats/lang/tmp/CLG_3_1.fst
1.28131e-05 0
make-h-transducer --disambig-syms-out=exp/sgmm2_4_denlats/dengraph/disambig_tid.int --transition-scale=1.0 exp/sgmm2_4_denlats/lang/tmp/ilabels_3_1 exp/sgmm2_4_ali/tree exp/sgmm2_4_ali/final.mdl
fsttablecompose exp/sgmm2_4_denlats/dengraph/Ha.fst exp/sgmm2_4_denlats/lang/tmp/CLG_3_1.fst
fstdeterminizestar --use-log=true
fstminimizeencoded
fstrmsymbols exp/sgmm2_4_denlats/dengraph/disambig_tid.int
fstrmepslocal
fstisstochastic exp/sgmm2_4_denlats/dengraph/HCLGa.fst
0.000481233 -0.000484407
add-self-loops --self-loop-scale=0.1 --reorder=true exp/sgmm2_4_ali/final.mdl
steps/make_denlats_sgmm2.sh: feature type is lda
steps/make_denlats_sgmm2.sh: using fMLLR transforms from exp/tri3_ali
steps/make_denlats_sgmm2.sh: Merging archives for data subset 1
steps/make_denlats_sgmm2.sh: Merging archives for data subset 2
steps/make_denlats_sgmm2.sh: Merging archives for data subset 3
steps/make_denlats_sgmm2.sh: Merging archives for data subset 4
steps/make_denlats_sgmm2.sh: Merging archives for data subset 5
steps/make_denlats_sgmm2.sh: Merging archives for data subset 6
steps/make_denlats_sgmm2.sh: Merging archives for data subset 7
steps/make_denlats_sgmm2.sh: Merging archives for data subset 8
steps/make_denlats_sgmm2.sh: Merging archives for data subset 9
steps/make_denlats_sgmm2.sh: Merging archives for data subset 10
steps/make_denlats_sgmm2.sh: Merging archives for data subset 11
steps/make_denlats_sgmm2.sh: Merging archives for data subset 12
steps/make_denlats_sgmm2.sh: Merging archives for data subset 13
steps/make_denlats_sgmm2.sh: Merging archives for data subset 14
steps/make_denlats_sgmm2.sh: Merging archives for data subset 15
steps/make_denlats_sgmm2.sh: Merging archives for data subset 16
steps/make_denlats_sgmm2.sh: Merging archives for data subset 17
steps/make_denlats_sgmm2.sh: Merging archives for data subset 18
steps/make_denlats_sgmm2.sh: Merging archives for data subset 19
steps/make_denlats_sgmm2.sh: Merging archives for data subset 20
steps/make_denlats_sgmm2.sh: Merging archives for data subset 21
steps/make_denlats_sgmm2.sh: Merging archives for data subset 22
steps/make_denlats_sgmm2.sh: Merging archives for data subset 23
steps/make_denlats_sgmm2.sh: Merging archives for data subset 24
steps/make_denlats_sgmm2.sh: Merging archives for data subset 25
steps/make_denlats_sgmm2.sh: Merging archives for data subset 26
steps/make_denlats_sgmm2.sh: Merging archives for data subset 27
steps/make_denlats_sgmm2.sh: Merging archives for data subset 28
steps/make_denlats_sgmm2.sh: Merging archives for data subset 29
steps/make_denlats_sgmm2.sh: Merging archives for data subset 30
steps/make_denlats_sgmm2.sh: done generating denominator lattices with SGMMs.
steps/train_mmi_sgmm2.sh --acwt 0.2 --cmd run.pl --transform-dir exp/tri3_ali --boost 0.1 --drop-frames true data/train data/lang exp/sgmm2_4_ali exp/sgmm2_4_denlats exp/sgmm2_4_mmi_b0.1
steps/train_mmi_sgmm2.sh: feature type is lda
steps/train_mmi_sgmm2.sh: using transforms from exp/tri3_ali
steps/train_mmi_sgmm2.sh: using speaker vectors from exp/sgmm2_4_ali
steps/train_mmi_sgmm2.sh: using Gaussian-selection info from exp/sgmm2_4_ali
Iteration 0 of MMI training
Iteration 0: objf was 0.501044281003688, MMI auxf change was 0.0160348072541191
Iteration 1 of MMI training
Iteration 1: objf was 0.515518450061245, MMI auxf change was 0.00245487512257484
Iteration 2 of MMI training
Iteration 2: objf was 0.518200455824607, MMI auxf change was 0.000697398612937324
Iteration 3 of MMI training
Iteration 3: objf was 0.519178111852977, MMI auxf change was 0.000478041434074517
MMI training finished
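The four boosted-MMI iterations above show the typical convergence pattern: the objective rises monotonically while the auxiliary-function change shrinks by roughly an order of magnitude over the run, which is why training stops after iteration 3. Tabulating the logged values makes this explicit:

```python
# Per-iteration values copied from the training log above.
objf = [0.501044281003688, 0.515518450061245,
        0.518200455824607, 0.519178111852977]
auxf_change = [0.0160348072541191, 0.00245487512257484,
               0.000697398612937324, 0.000478041434074517]

for it, (o, d) in enumerate(zip(objf, auxf_change)):
    print(f"iter {it}: objf={o:.4f}  auxf_change={d:.6f}")

# Objective improves every iteration, and each improvement is smaller
# than the last -- the usual sign that MMI has converged.
objf_rises = all(b > a for a, b in zip(objf, objf[1:]))
change_shrinks = all(b < a for a, b in zip(auxf_change, auxf_change[1:]))
print(objf_rises, change_shrinks)  # True True
```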
steps/decode_sgmm2_rescore.sh --cmd run.pl --iter 1 --transform-dir exp/tri3/decode_dev data/lang_test_bg data/dev exp/sgmm2_4/decode_dev exp/sgmm2_4_mmi_b0.1/decode_dev_it1
steps/decode_sgmm2_rescore.sh: using speaker vectors from exp/sgmm2_4/decode_dev
steps/decode_sgmm2_rescore.sh: feature type is lda
steps/decode_sgmm2_rescore.sh: using transforms from exp/tri3/decode_dev
steps/decode_sgmm2_rescore.sh: rescoring lattices with SGMM model in exp/sgmm2_4_mmi_b0.1/1.mdl
steps/decode_sgmm2_rescore.sh --cmd run.pl --iter 1 --transform-dir exp/tri3/decode_test data/lang_test_bg data/test exp/sgmm2_4/decode_test exp/sgmm2_4_mmi_b0.1/decode_test_it1
steps/decode_sgmm2_rescore.sh: using speaker vectors from exp/sgmm2_4/decode_test
steps/decode_sgmm2_rescore.sh: feature type is lda
steps/decode_sgmm2_rescore.sh: using transforms from exp/tri3/decode_test
steps/decode_sgmm2_rescore.sh: rescoring lattices with SGMM model in exp/sgmm2_4_mmi_b0.1/1.mdl
steps/decode_sgmm2_rescore.sh --cmd run.pl --iter 2 --transform-dir exp/tri3/decode_dev data/lang_test_bg data/dev exp/sgmm2_4/decode_dev exp/sgmm2_4_mmi_b0.1/decode_dev_it2
steps/decode_sgmm2_rescore.sh: using speaker vectors from exp/sgmm2_4/decode_dev
steps/decode_sgmm2_rescore.sh: feature type is lda
steps/decode_sgmm2_rescore.sh: using transforms from exp/tri3/decode_dev
steps/decode_sgmm2_rescore.sh: rescoring lattices with SGMM model in exp/sgmm2_4_mmi_b0.1/2.mdl
steps/decode_sgmm2_rescore.sh --cmd run.pl --iter 2 --transform-dir exp/tri3/decode_test data/lang_test_bg data/test exp/sgmm2_4/decode_test exp/sgmm2_4_mmi_b0.1/decode_test_it2
steps/decode_sgmm2_rescore.sh: using speaker vectors from exp/sgmm2_4/decode_test
steps/decode_sgmm2_rescore.sh: feature type is lda
steps/decode_sgmm2_rescore.sh: using transforms from exp/tri3/decode_test
steps/decode_sgmm2_rescore.sh: rescoring lattices with SGMM model in exp/sgmm2_4_mmi_b0.1/2.mdl
steps/decode_sgmm2_rescore.sh --cmd run.pl --iter 3 --transform-dir exp/tri3/decode_dev data/lang_test_bg data/dev exp/sgmm2_4/decode_dev exp/sgmm2_4_mmi_b0.1/decode_dev_it3
steps/decode_sgmm2_rescore.sh: using speaker vectors from exp/sgmm2_4/decode_dev
steps/decode_sgmm2_rescore.sh: feature type is lda
steps/decode_sgmm2_rescore.sh: using transforms from exp/tri3/decode_dev
steps/decode_sgmm2_rescore.sh: rescoring lattices with SGMM model in exp/sgmm2_4_mmi_b0.1/3.mdl
steps/decode_sgmm2_rescore.sh --cmd run.pl --iter 3 --transform-dir exp/tri3/decode_test data/lang_test_bg data/test exp/sgmm2_4/decode_test exp/sgmm2_4_mmi_b0.1/decode_test_it3
steps/decode_sgmm2_rescore.sh: using speaker vectors from exp/sgmm2_4/decode_test
steps/decode_sgmm2_rescore.sh: feature type is lda
steps/decode_sgmm2_rescore.sh: using transforms from exp/tri3/decode_test
steps/decode_sgmm2_rescore.sh: rescoring lattices with SGMM model in exp/sgmm2_4_mmi_b0.1/3.mdl
steps/decode_sgmm2_rescore.sh --cmd run.pl --iter 4 --transform-dir exp/tri3/decode_dev data/lang_test_bg data/dev exp/sgmm2_4/decode_dev exp/sgmm2_4_mmi_b0.1/decode_dev_it4
steps/decode_sgmm2_rescore.sh: using speaker vectors from exp/sgmm2_4/decode_dev
steps/decode_sgmm2_rescore.sh: feature type is lda
steps/decode_sgmm2_rescore.sh: using transforms from exp/tri3/decode_dev
steps/decode_sgmm2_rescore.sh: rescoring lattices with SGMM model in exp/sgmm2_4_mmi_b0.1/4.mdl
steps/decode_sgmm2_rescore.sh --cmd run.pl --iter 4 --transform-dir exp/tri3/decode_test data/lang_test_bg data/test exp/sgmm2_4/decode_test exp/sgmm2_4_mmi_b0.1/decode_test_it4
steps/decode_sgmm2_rescore.sh: using speaker vectors from exp/sgmm2_4/decode_test
steps/decode_sgmm2_rescore.sh: feature type is lda
steps/decode_sgmm2_rescore.sh: using transforms from exp/tri3/decode_test
steps/decode_sgmm2_rescore.sh: rescoring lattices with SGMM model in exp/sgmm2_4_mmi_b0.1/4.mdl
============================================================================
DNN Hybrid Training & Decoding
============================================================================
steps/nnet2/train_tanh.sh --mix-up 5000 --initial-learning-rate 0.015 --final-learning-rate 0.002 --num-hidden-layers 2 --num-jobs-nnet 30 --cmd run.pl data/train data/lang exp/tri3_ali exp/tri4_nnet
steps/nnet2/train_tanh.sh: calling get_lda.sh
steps/nnet2/get_lda.sh --transform-dir exp/tri3_ali --splice-width 4 --cmd run.pl data/train data/lang exp/tri3_ali exp/tri4_nnet
steps/nnet2/get_lda.sh: feature type is lda
steps/nnet2/get_lda.sh: using transforms from exp/tri3_ali
feat-to-dim 'ark,s,cs:utils/subset_scp.pl --quiet 333 data/train/split30/1/feats.scp | apply-cmvn --utt2spk=ark:data/train/split30/1/utt2spk scp:data/train/split30/1/cmvn.scp scp:- ark:- | splice-feats --left-context=3 --right-context=3 ark:- ark:- | transform-feats exp/tri4_nnet/final.mat ark:- ark:- | transform-feats --utt2spk=ark:data/train/split30/1/utt2spk ark:exp/tri3_ali/trans.1 ark:- ark:- |' -
transform-feats exp/tri4_nnet/final.mat ark:- ark:-
apply-cmvn --utt2spk=ark:data/train/split30/1/utt2spk scp:data/train/split30/1/cmvn.scp scp:- ark:-
transform-feats --utt2spk=ark:data/train/split30/1/utt2spk ark:exp/tri3_ali/trans.1 ark:- ark:-
splice-feats --left-context=3 --right-context=3 ark:- ark:-
WARNING (feat-to-dim:Close():kaldi-io.cc:465) Pipe utils/subset_scp.pl --quiet 333 data/train/split30/1/feats.scp | apply-cmvn --utt2spk=ark:data/train/split30/1/utt2spk scp:data/train/split30/1/cmvn.scp scp:- ark:- | splice-feats --left-context=3 --right-context=3 ark:- ark:- | transform-feats exp/tri4_nnet/final.mat ark:- ark:- | transform-feats --utt2spk=ark:data/train/split30/1/utt2spk ark:exp/tri3_ali/trans.1 ark:- ark:- | had nonzero return status 36096
feat-to-dim 'ark,s,cs:utils/subset_scp.pl --quiet 333 data/train/split30/1/feats.scp | apply-cmvn --utt2spk=ark:data/train/split30/1/utt2spk scp:data/train/split30/1/cmvn.scp scp:- ark:- | splice-feats --left-context=3 --right-context=3 ark:- ark:- | transform-feats exp/tri4_nnet/final.mat ark:- ark:- | transform-feats --utt2spk=ark:data/train/split30/1/utt2spk ark:exp/tri3_ali/trans.1 ark:- ark:- | splice-feats --left-context=4 --right-context=4 ark:- ark:- |' -
apply-cmvn --utt2spk=ark:data/train/split30/1/utt2spk scp:data/train/split30/1/cmvn.scp scp:- ark:-
splice-feats --left-context=4 --right-context=4 ark:- ark:-
transform-feats --utt2spk=ark:data/train/split30/1/utt2spk ark:exp/tri3_ali/trans.1 ark:- ark:-
splice-feats --left-context=3 --right-context=3 ark:- ark:-
transform-feats exp/tri4_nnet/final.mat ark:- ark:-
WARNING (feat-to-dim:Close():kaldi-io.cc:465) Pipe utils/subset_scp.pl --quiet 333 data/train/split30/1/feats.scp | apply-cmvn --utt2spk=ark:data/train/split30/1/utt2spk scp:data/train/split30/1/cmvn.scp scp:- ark:- | splice-feats --left-context=3 --right-context=3 ark:- ark:- | transform-feats exp/tri4_nnet/final.mat ark:- ark:- | transform-feats --utt2spk=ark:data/train/split30/1/utt2spk ark:exp/tri3_ali/trans.1 ark:- ark:- | splice-feats --left-context=4 --right-context=4 ark:- ark:- | had nonzero return status 36096
steps/nnet2/get_lda.sh: Accumulating LDA statistics.
steps/nnet2/get_lda.sh: Finished estimating LDA
steps/nnet2/train_tanh.sh: calling get_egs.sh
steps/nnet2/get_egs.sh --transform-dir exp/tri3_ali --splice-width 4 --samples-per-iter 200000 --num-jobs-nnet 30 --stage 0 --cmd run.pl --io-opts -tc 5 data/train data/lang exp/tri3_ali exp/tri4_nnet
steps/nnet2/get_egs.sh: feature type is lda
steps/nnet2/get_egs.sh: using transforms from exp/tri3_ali
steps/nnet2/get_egs.sh: working out number of frames of training data
steps/nnet2/get_egs.sh: Every epoch, splitting the data up into 1 iterations,
steps/nnet2/get_egs.sh: giving samples-per-iteration of 37308 (you requested 200000).
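The bookkeeping above can be sketched roughly as follows (the exact formula inside `get_egs.sh` is an assumption here): the training frames are divided among the 30 parallel nnet jobs, and only when one job's share exceeds `--samples-per-iter` does an epoch get split into multiple iterations. With TIMIT's small training set, one iteration per epoch suffices:

```python
import math

# Assumed reconstruction of the get_egs.sh arithmetic, not the script itself.
num_jobs_nnet = 30
requested = 200000                      # --samples-per-iter
total_frames = 37308 * num_jobs_nnet    # consistent with the logged 37308/job

iters_per_epoch = max(1, math.ceil(total_frames / (num_jobs_nnet * requested)))
samples_per_iter = total_frames // (num_jobs_nnet * iters_per_epoch)
print(iters_per_epoch, samples_per_iter)  # 1 37308
```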
Getting validation and training subset examples.
steps/nnet2/get_egs.sh: extracting validation and training-subset alignments.
copy-int-vector ark:- ark,t:-
LOG (copy-int-vector:main():copy-int-vector.cc:83) Copied 3696 vectors of int32.
Getting subsets of validation examples for diagnostics and combination.
Creating training examples
Generating training examples on disk
steps/nnet2/get_egs.sh: rearranging examples into parts for different parallel jobs
steps/nnet2/get_egs.sh: Since iters-per-epoch == 1, just concatenating the data.
Shuffling the order of training examples
(in order to avoid stressing the disk, these won't all run at once).
steps/nnet2/get_egs.sh: Finished preparing training examples
steps/nnet2/train_tanh.sh: initializing neural net
Training transition probabilities and setting priors
steps/nnet2/train_tanh.sh: Will train for 15 + 5 epochs, equalling
steps/nnet2/train_tanh.sh: 15 + 5 = 20 iterations,
steps/nnet2/train_tanh.sh: (while reducing learning rate) + (with constant learning rate).
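The "15 + 5 = 20 iterations" schedule above corresponds (to my understanding; the exact decay formula is an assumption) to a geometric decay from `--initial-learning-rate 0.015` down to `--final-learning-rate 0.002` over the first 15 iterations, then holding the final rate constant for the last 5:

```python
# Sketch of the assumed geometric learning-rate schedule in train_tanh.sh.
initial, final = 0.015, 0.002
reduce_iters, constant_iters = 15, 5

# Iterations 0..15 decay geometrically; the remainder stay at `final`.
lrates = [initial * (final / initial) ** (i / reduce_iters)
          for i in range(reduce_iters + 1)]
lrates += [final] * (constant_iters - 1)

print(len(lrates), round(lrates[0], 4), round(lrates[15], 4))  # 20 0.015 0.002
```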
Training neural net (pass 0)
Training neural net (pass 1)
Training neural net (pass 2)
Training neural net (pass 3)
Training neural net (pass 4)
Training neural net (pass 5)
Training neural net (pass 6)
Training neural net (pass 7)
Training neural net (pass 8)
Training neural net (pass 9)
Training neural net (pass 10)
Training neural net (pass 11)
Training neural net (pass 12)
Mixing up from 1943 to 5000 components
Training neural net (pass 13)
Training neural net (pass 14)
Training neural net (pass 15)
Training neural net (pass 16)
Training neural net (pass 17)
Training neural net (pass 18)
Training neural net (pass 19)
Setting num_iters_final=5
Getting average posterior for purposes of adjusting the priors.
Re-adjusting priors based on computed posteriors
Done
Cleaning up data
steps/nnet2/remove_egs.sh: Finished deleting examples in exp/tri4_nnet/egs
Removing most of the models
steps/nnet2/decode.sh --cmd run.pl --nj 5 --num-threads 6 --transform-dir exp/tri3/decode_dev exp/tri3/graph data/dev exp/tri4_nnet/decode_dev
steps/nnet2/decode.sh: feature type is lda
steps/nnet2/decode.sh: using transforms from exp/tri3/decode_dev
score best paths
score confidence and timing with sclite
Decoding done.
steps/nnet2/decode.sh --cmd run.pl --nj 5 --num-threads 6 --transform-dir exp/tri3/decode_test exp/tri3/graph data/test exp/tri4_nnet/decode_test
steps/nnet2/decode.sh: feature type is lda
steps/nnet2/decode.sh: using transforms from exp/tri3/decode_test
score best paths
score confidence and timing with sclite
Decoding done.
============================================================================
System Combination (DNN+SGMM)
============================================================================
============================================================================
DNN Hybrid Training & Decoding (Karel's recipe)
============================================================================
steps/nnet/make_fmllr_feats.sh --nj 10 --cmd run.pl --transform-dir exp/tri3/decode_test data-fmllr-tri3/test data/test exp/tri3 data-fmllr-tri3/test/log data-fmllr-tri3/test/data
steps/nnet/make_fmllr_feats.sh: feature type is lda_fmllr
utils/copy_data_dir.sh: copied data from data/test to data-fmllr-tri3/test
utils/validate_data_dir.sh: Successfully validated data-directory data-fmllr-tri3/test
steps/nnet/make_fmllr_feats.sh: Done!, type lda_fmllr, data/test --> data-fmllr-tri3/test, using : raw-trans None, gmm exp/tri3, trans exp/tri3/decode_test
steps/nnet/make_fmllr_feats.sh --nj 10 --cmd run.pl --transform-dir exp/tri3/decode_dev data-fmllr-tri3/dev data/dev exp/tri3 data-fmllr-tri3/dev/log data-fmllr-tri3/dev/data
steps/nnet/make_fmllr_feats.sh: feature type is lda_fmllr
utils/copy_data_dir.sh: copied data from data/dev to data-fmllr-tri3/dev
utils/validate_data_dir.sh: Successfully validated data-directory data-fmllr-tri3/dev
steps/nnet/make_fmllr_feats.sh: Done!, type lda_fmllr, data/dev --> data-fmllr-tri3/dev, using : raw-trans None, gmm exp/tri3, trans exp/tri3/decode_dev
steps/nnet/make_fmllr_feats.sh --nj 10 --cmd run.pl --transform-dir exp/tri3_ali data-fmllr-tri3/train data/train exp/tri3 data-fmllr-tri3/train/log data-fmllr-tri3/train/data
steps/nnet/make_fmllr_feats.sh: feature type is lda_fmllr
utils/copy_data_dir.sh: copied data from data/train to data-fmllr-tri3/train
utils/validate_data_dir.sh: Successfully validated data-directory data-fmllr-tri3/train
steps/nnet/make_fmllr_feats.sh: Done!, type lda_fmllr, data/train --> data-fmllr-tri3/train, using : raw-trans None, gmm exp/tri3, trans exp/tri3_ali
utils/subset_data_dir_tr_cv.sh data-fmllr-tri3/train data-fmllr-tri3/train_tr90 data-fmllr-tri3/train_cv10
/home/aderic/tmp/kaldi-trunk/egs/timit/s5/utils/subset_data_dir.sh: reducing #utt from 3696 to 3320
/home/aderic/tmp/kaldi-trunk/egs/timit/s5/utils/subset_data_dir.sh: reducing #utt from 3696 to 376
# steps/nnet/pretrain_dbn.sh --hid-dim 1024 --rbm-iter 20 data-fmllr-tri3/train exp/dnn4_pretrain-dbn
# Started at Sun Jun 18 20:57:05 CST 2017
#
steps/nnet/pretrain_dbn.sh --hid-dim 1024 --rbm-iter 20 data-fmllr-tri3/train exp/dnn4_pretrain-dbn
# INFO
steps/nnet/pretrain_dbn.sh : Pre-training Deep Belief Network as a stack of RBMs
dir : exp/dnn4_pretrain-dbn
Train-set : data-fmllr-tri3/train
### IS CUDA GPU AVAILABLE? 'aderic-To-be-filled-by-O-E-M' ###
LOG (SelectGpuIdAuto():cu-device.cc:277) Selecting from 1 GPUs
LOG (SelectGpuIdAuto():cu-device.cc:292) cudaSetDevice(0): GeForce GTX 950 free:1844M, used:150M, total:1995M, free/total:0.924571
LOG (SelectGpuIdAuto():cu-device.cc:341) Trying to select device: 0 (automatically), mem_ratio: 0.924571
LOG (SelectGpuIdAuto():cu-device.cc:360) Success selecting device 0 free mem ratio: 0.924571
LOG (FinalizeActiveGpu():cu-device.cc:199) The active GPU is [0]: GeForce GTX 950 free:1825M, used:169M, total:1995M, free/total:0.915048 version 5.2
LOG (PrintMemoryUsage():cu-device.cc:376) Memory used: 0 bytes.
### HURRAY, WE GOT A CUDA GPU FOR COMPUTATION!!! ###
# PREPARING FEATURES
Preparing train/cv lists
3696 exp/dnn4_pretrain-dbn/train.scp
copy-feats scp:exp/dnn4_pretrain-dbn/train.scp_non_local ark,scp:/tmp/tmp.mUpNmkQuzN/train.ark,exp/dnn4_pretrain-dbn/train.scp
LOG (copy-feats:main():copy-feats.cc:100) Copied 3696 feature matrices.
apply-cmvn not used
Getting feature dim : copy-feats scp:exp/dnn4_pretrain-dbn/train.scp ark:-
WARNING (feat-to-dim:Close():kaldi-io.cc:465) Pipe copy-feats scp:exp/dnn4_pretrain-dbn/train.scp ark:- | had nonzero return status 13
40
Using splice +/- 5 , step 1
Renormalizing MLP input features into exp/dnn4_pretrain-dbn/tr_splice5-1_cmvn-g.nnet
compute-cmvn-stats ark:- -
nnet-concat --binary=false exp/dnn4_pretrain-dbn/tr_splice5-1.nnet - exp/dnn4_pretrain-dbn/tr_splice5-1_cmvn-g.nnet
LOG (nnet-concat:main():nnet-concat.cc:53) Reading exp/dnn4_pretrain-dbn/tr_splice5-1.nnet
cmvn-to-nnet - -
LOG (nnet-concat:main():nnet-concat.cc:65) Concatenating -
LOG (compute-cmvn-stats:main():compute-cmvn-stats.cc:168) Wrote global CMVN stats to standard output
LOG (compute-cmvn-stats:main():compute-cmvn-stats.cc:171) Done accumulating CMVN stats for 3696 utterances; 0 had errors.
LOG (cmvn-to-nnet:main():cmvn-to-nnet.cc:144) Written model to -
LOG (nnet-concat:main():nnet-concat.cc:82) Written model to exp/dnn4_pretrain-dbn/tr_splice5-1_cmvn-g.nnet
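The feature dimension logged above is 40, and the splice setting is +/- 5 with step 1, so the actual input dimensionality seen by the first RBM works out to 11 stacked frames of 40 dimensions each:

```python
# DBN input dimension after splicing: 40-dim fMLLR features,
# context of +/-5 frames with step 1, as logged above.
feat_dim = 40
left, right, step = 5, 5, 1

n_frames = (left + right) // step + 1   # 11 frames total
input_dim = feat_dim * n_frames
print(input_dim)  # 440
```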
# PRE-TRAINING RBM LAYER 1
Initializing 'exp/dnn4_pretrain-dbn/1.rbm.init'
Pretraining 'exp/dnn4_pretrain-dbn/1.rbm' (input gauss, lrate 0.01, iters 40)
rbm-convert-to-nnet --binary=true exp/dnn4_pretrain-dbn/1.rbm exp/dnn4_pretrain-dbn/1.dbn
LOG (rbm-convert-to-nnet:main():rbm-convert-to-nnet.cc:69) Written model to exp/dnn4_pretrain-dbn/1.dbn
# PRE-TRAINING RBM LAYER 2
Computing cmvn stats 'exp/dnn4_pretrain-dbn/2.cmvn' for RBM initialization
cmvn-to-nnet - exp/dnn4_pretrain-dbn/2.cmvn
LOG (cmvn-to-nnet:main():cmvn-to-nnet.cc:144) Written model to exp/dnn4_pretrain-dbn/2.cmvn
Initializing 'exp/dnn4_pretrain-dbn/2.rbm.init'
Pretraining 'exp/dnn4_pretrain-dbn/2.rbm' (lrate 0.4, iters 20)
rbm-convert-to-nnet --binary=true exp/dnn4_pretrain-dbn/2.rbm -
nnet-concat exp/dnn4_pretrain-dbn/1.dbn - exp/dnn4_pretrain-dbn/2.dbn
LOG (nnet-concat:main():nnet-concat.cc:53) Reading exp/dnn4_pretrain-dbn/1.dbn
LOG (nnet-concat:main():nnet-concat.cc:65) Concatenating -
LOG (rbm-convert-to-nnet:main():rbm-convert-to-nnet.cc:69) Written model to -
LOG (nnet-concat:main():nnet-concat.cc:82) Written model to exp/dnn4_pretrain-dbn/2.dbn
# PRE-TRAINING RBM LAYER 3
Computing cmvn stats 'exp/dnn4_pretrain-dbn/3.cmvn' for RBM initialization
cmvn-to-nnet - exp/dnn4_pretrain-dbn/3.cmvn
LOG (cmvn-to-nnet:main():cmvn-to-nnet.cc:144) Written model to exp/dnn4_pretrain-dbn/3.cmvn
Initializing 'exp/dnn4_pretrain-dbn/3.rbm.init'
Pretraining 'exp/dnn4_pretrain-dbn/3.rbm' (lrate 0.4, iters 20)
rbm-convert-to-nnet --binary=true exp/dnn4_pretrain-dbn/3.rbm -
nnet-concat exp/dnn4_pretrain-dbn/2.dbn - exp/dnn4_pretrain-dbn/3.dbn
LOG (nnet-concat:main():nnet-concat.cc:53) Reading exp/dnn4_pretrain-dbn/2.dbn
LOG (nnet-concat:main():nnet-concat.cc:65) Concatenating -
LOG (rbm-convert-to-nnet:main():rbm-convert-to-nnet.cc:69) Written model to -
LOG (nnet-concat:main():nnet-concat.cc:82) Written model to exp/dnn4_pretrain-dbn/3.dbn
# PRE-TRAINING RBM LAYER 4
Computing cmvn stats 'exp/dnn4_pretrain-dbn/4.cmvn' for RBM initialization
cmvn-to-nnet - exp/dnn4_pretrain-dbn/4.cmvn
LOG (cmvn-to-nnet:main():cmvn-to-nnet.cc:144) Written model to exp/dnn4_pretrain-dbn/4.cmvn
Initializing 'exp/dnn4_pretrain-dbn/4.rbm.init'
Pretraining 'exp/dnn4_pretrain-dbn/4.rbm' (lrate 0.4, iters 20)
rbm-convert-to-nnet --binary=true exp/dnn4_pretrain-dbn/4.rbm -
nnet-concat exp/dnn4_pretrain-dbn/3.dbn - exp/dnn4_pretrain-dbn/4.dbn
LOG (nnet-concat:main():nnet-concat.cc:53) Reading exp/dnn4_pretrain-dbn/3.dbn
LOG (nnet-concat:main():nnet-concat.cc:65) Concatenating -
LOG (rbm-convert-to-nnet:main():rbm-convert-to-nnet.cc:69) Written model to -
LOG (nnet-concat:main():nnet-concat.cc:82) Written model to exp/dnn4_pretrain-dbn/4.dbn
# PRE-TRAINING RBM LAYER 5
Computing cmvn stats 'exp/dnn4_pretrain-dbn/5.cmvn' for RBM initialization
cmvn-to-nnet - exp/dnn4_pretrain-dbn/5.cmvn
LOG (cmvn-to-nnet:main():cmvn-to-nnet.cc:144) Written model to exp/dnn4_pretrain-dbn/5.cmvn
Initializing 'exp/dnn4_pretrain-dbn/5.rbm.init'
Pretraining 'exp/dnn4_pretrain-dbn/5.rbm' (lrate 0.4, iters 20)
rbm-convert-to-nnet --binary=true exp/dnn4_pretrain-dbn/5.rbm -
nnet-concat exp/dnn4_pretrain-dbn/4.dbn - exp/dnn4_pretrain-dbn/5.dbn
LOG (nnet-concat:main():nnet-concat.cc:53) Reading exp/dnn4_pretrain-dbn/4.dbn
LOG (nnet-concat:main():nnet-concat.cc:65) Concatenating -
LOG (rbm-convert-to-nnet:main():rbm-convert-to-nnet.cc:69) Written model to -
LOG (nnet-concat:main():nnet-concat.cc:82) Written model to exp/dnn4_pretrain-dbn/5.dbn
# PRE-TRAINING RBM LAYER 6
Computing cmvn stats 'exp/dnn4_pretrain-dbn/6.cmvn' for RBM initialization
cmvn-to-nnet - exp/dnn4_pretrain-dbn/6.cmvn
LOG (cmvn-to-nnet:main():cmvn-to-nnet.cc:144) Written model to exp/dnn4_pretrain-dbn/6.cmvn
Initializing 'exp/dnn4_pretrain-dbn/6.rbm.init'
Pretraining 'exp/dnn4_pretrain-dbn/6.rbm' (lrate 0.4, iters 20)
rbm-convert-to-nnet --binary=true exp/dnn4_pretrain-dbn/6.rbm -
nnet-concat exp/dnn4_pretrain-dbn/5.dbn - exp/dnn4_pretrain-dbn/6.dbn
LOG (nnet-concat:main():nnet-concat.cc:53) Reading exp/dnn4_pretrain-dbn/5.dbn
LOG (nnet-concat:main():nnet-concat.cc:65) Concatenating -
LOG (rbm-convert-to-nnet:main():rbm-convert-to-nnet.cc:69) Written model to -
LOG (nnet-concat:main():nnet-concat.cc:82) Written model to exp/dnn4_pretrain-dbn/6.dbn
# REPORT
# RBM pre-training progress (line per-layer)
exp/dnn4_pretrain-dbn/log/rbm.1.log:progress: [69.4924 60.5308 57.9892 56.3941 55.3507 54.5784 54.0204 53.6018 53.2884 53.0793 52.8296 52.6876 52.4442 52.3074 52.2223 52.0072 51.9417 51.8653 51.6941 51.6856 51.5972 51.4874 51.4558 51.3756 51.3817 51.3327 51.2511 51.2724 51.196 51.1951 51.1366 51.1352 51.0948 51.0865 51.0832 51.0311 51.0304 50.9692 50.9465 51.0071 50.9248 50.9344 50.9512 50.8667 50.9195 50.9001 50.8517 50.8503 50.8292 50.859 50.8428 50.7932 50.8228 50.781 50.7816 50.7584 50.7418 50.7807 50.7676 50.7764 50.7599 50.8135 50.7436 50.7483 50.808 50.7366 50.7679 50.8278 50.7373 50.8027 50.8045 50.7794 50.7957 50.7664 50.8095 50.795 50.7524 50.8209 50.7901 50.7944 50.7858 50.7704 50.8217 50.7866 50.804 50.7979 50.7966 50.7793 50.7724 50.8137 50.7382 50.7839 50.8056 50.7236 50.7955 50.8011 50.7579 50.7742 50.7386 50.79 50.8001 50.7246 50.7948 50.7826 50.7574 50.7579 50.7578 50.7868 50.7498 50.7769 50.7642 50.7646 50.7447 50.7384 50.7968 50.7298 50.7622 50.7856 50.7274 50.7758 50.7685 50.727 50.7406 50.7577 ]
exp/dnn4_pretrain-dbn/log/rbm.2.log:progress: [9.3892 6.65086 6.00733 5.81395 5.70808 5.63795 5.59898 5.57326 5.53494 5.52794 5.50997 5.49249 5.48005 5.47046 5.45423 5.43849 5.4367 5.41862 5.40145 5.40679 5.3898 5.37223 5.36703 5.35522 5.34978 5.34081 5.32688 5.32188 5.31088 5.30494 5.28892 5.28056 5.27587 5.25638 5.25674 5.24594 5.23185 5.22602 5.2202 5.20871 5.19642 5.19577 5.18307 5.16642 5.17069 5.16093 5.14704 5.14118 5.13035 5.12656 5.11814 5.10704 5.10269 5.09545 5.08268 5.07387 5.06953 5.0785 5.06971 5.07304 5.07605 5.07497 ]
exp/dnn4_pretrain-dbn/log/rbm.3.log:progress: [9.12826 5.95837 5.20456 4.89933 4.74706 4.65457 4.60378 4.57749 4.54495 4.53851 4.52111 4.50254 4.49842 4.48848 4.47513 4.46767 4.46829 4.45448 4.44438 4.4437 4.43368 4.42602 4.42223 4.41105 4.41033 4.40145 4.39426 4.38684 4.38315 4.37653 4.36601 4.36422 4.3596 4.34571 4.34775 4.33977 4.33073 4.32286 4.32142 4.31039 4.30746 4.30657 4.29447 4.28389 4.28774 4.27966 4.27309 4.26888 4.26189 4.25736 4.25373 4.24528 4.24263 4.24074 4.23018 4.22503 4.22377 4.22984 4.22107 4.22696 4.22824 4.22279 ]
exp/dnn4_pretrain-dbn/log/rbm.4.log:progress: [6.57292 4.44239 3.98525 3.79373 3.68828 3.6188 3.58301 3.55984 3.53938 3.53422 3.52048 3.50818 3.50383 3.49857 3.48877 3.48479 3.48297 3.47152 3.46652 3.46409 3.45984 3.45524 3.44733 3.4441 3.44462 3.43582 3.43135 3.42902 3.42419 3.4209 3.41433 3.41177 3.40893 3.4009 3.40304 3.39746 3.39092 3.39072 3.38791 3.38009 3.37791 3.37563 3.36927 3.36574 3.36519 3.36163 3.36013 3.35251 3.34941 3.35049 3.34509 3.34098 3.33945 3.33863 3.32971 3.32987 3.32749 3.33384 3.32717 3.33254 3.33102 3.32982 ]
exp/dnn4_pretrain-dbn/log/rbm.5.log:progress: [6.34282 4.17784 3.66524 3.4095 3.27171 3.19132 3.14777 3.12674 3.10719 3.10297 3.09142 3.08171 3.08141 3.07143 3.0647 3.06519 3.06392 3.05491 3.05454 3.04961 3.04696 3.04536 3.03846 3.0335 3.03574 3.02528 3.02376 3.02226 3.02176 3.01756 3.01275 3.01245 3.00936 2.99978 3.00448 2.99968 2.99366 2.9956 2.9905 2.98383 2.98642 2.98377 2.97594 2.97607 2.97484 2.97289 2.9725 2.96455 2.96336 2.96538 2.9579 2.95666 2.95438 2.95394 2.94887 2.94531 2.9452 2.94869 2.94448 2.94993 2.94944 2.94647 ]
exp/dnn4_pretrain-dbn/log/rbm.6.log:progress: [4.63436 3.24115 2.90844 2.73384 2.64223 2.59398 2.56696 2.54933 2.53224 2.52928 2.51966 2.51179 2.5134 2.50207 2.49677 2.4977 2.49448 2.48704 2.48676 2.48468 2.48196 2.48212 2.47233 2.47284 2.47623 2.46761 2.46774 2.46403 2.46168 2.4594 2.45538 2.45388 2.45367 2.44582 2.45117 2.44544 2.44233 2.44448 2.43764 2.4355 2.43781 2.43377 2.42964 2.43024 2.42806 2.42748 2.42641 2.42094 2.42036 2.42224 2.41523 2.41443 2.41335 2.4134 2.40887 2.40803 2.40637 2.41036 2.40671 2.41057 2.4078 2.4077 ]
Pre-training finished.
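The `rbm.N.log:progress:` lines above list the per-minibatch reconstruction MSE for each RBM layer; in every layer it falls steeply at first and then flattens out, which is what successful pre-training looks like. A small helper (my own, not a Kaldi tool) for pulling those numbers out of a log line:

```python
import re

# Hypothetical helper to parse an "rbm.N.log:progress: [ ... ]" line
# like the ones printed in the report above.
def parse_progress(line: str) -> list[float]:
    inner = re.search(r"\[([^\]]*)\]", line).group(1)
    return [float(x) for x in inner.split()]

# Abbreviated excerpt of the layer-6 line from the log:
line = "exp/dnn4_pretrain-dbn/log/rbm.6.log:progress: [4.63436 3.24115 2.90844 2.4077 ]"
mse = parse_progress(line)
print(f"start={mse[0]}, end={mse[-1]}")  # reconstruction error falls during pre-training
```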
Removing features tmpdir /tmp/tmp.mUpNmkQuzN @ aderic-To-be-filled-by-O-E-M
train.ark
# Accounting: time=6619 threads=1
# Ended (code 0) at Sun Jun 18 22:47:24 CST 2017, elapsed time 6619 seconds
# steps/nnet/train.sh --feature-transform exp/dnn4_pretrain-dbn/final.feature_transform --dbn exp/dnn4_pretrain-dbn/6.dbn --hid-layers 0 --learn-rate 0.008 data-fmllr-tri3/train_tr90 data-fmllr-tri3/train_cv10 data/lang exp/tri3_ali exp/tri3_ali exp/dnn4_pretrain-dbn_dnn
# Started at Sun Jun 18 22:47:24 CST 2017
#
steps/nnet/train.sh --feature-transform exp/dnn4_pretrain-dbn/final.feature_transform --dbn exp/dnn4_pretrain-dbn/6.dbn --hid-layers 0 --learn-rate 0.008 data-fmllr-tri3/train_tr90 data-fmllr-tri3/train_cv10 data/lang exp/tri3_ali exp/tri3_ali exp/dnn4_pretrain-dbn_dnn
# INFO
steps/nnet/train.sh : Training Neural Network
dir : exp/dnn4_pretrain-dbn_dnn
Train-set : data-fmllr-tri3/train_tr90 exp/tri3_ali
CV-set : data-fmllr-tri3/train_cv10 exp/tri3_ali
### IS CUDA GPU AVAILABLE? 'aderic-To-be-filled-by-O-E-M' ###
LOG (SelectGpuIdAuto():cu-device.cc:277) Selecting from 1 GPUs
LOG (SelectGpuIdAuto():cu-device.cc:292) cudaSetDevice(0): GeForce GTX 950 free:1844M, used:150M, total:1995M, free/total:0.924571
LOG (SelectGpuIdAuto():cu-device.cc:341) Trying to select device: 0 (automatically), mem_ratio: 0.924571
LOG (SelectGpuIdAuto():cu-device.cc:360) Success selecting device 0 free mem ratio: 0.924571
LOG (FinalizeActiveGpu():cu-device.cc:199) The active GPU is [0]: GeForce GTX 950 free:1825M, used:169M, total:1995M, free/total:0.915048 version 5.2
LOG (PrintMemoryUsage():cu-device.cc:376) Memory used: 0 bytes.
### HURRAY, WE GOT A CUDA GPU FOR COMPUTATION!!! ###
# PREPARING ALIGNMENTS
Using PDF targets from dirs 'exp/tri3_ali' 'exp/tri3_ali'
copy-transition-model --binary=false exp/tri3_ali/final.mdl exp/dnn4_pretrain-dbn_dnn/final.mdl
LOG (copy-transition-model:main():copy-transition-model.cc:62) Copied transition model.
# PREPARING FEATURES
Preparing train/cv lists :
3320 exp/dnn4_pretrain-dbn_dnn/train.scp
376 exp/dnn4_pretrain-dbn_dnn/cv.scp
3696 total
copy-feats scp:exp/dnn4_pretrain-dbn_dnn/train.scp_non_local ark,scp:/tmp/tmp.8cWHfij2mN/train.ark,exp/dnn4_pretrain-dbn_dnn/train.scp
LOG (copy-feats:main():copy-feats.cc:100) Copied 3320 feature matrices.
copy-feats scp:exp/dnn4_pretrain-dbn_dnn/cv.scp_non_local ark,scp:/tmp/tmp.8cWHfij2mN/cv.ark,exp/dnn4_pretrain-dbn_dnn/cv.scp
LOG (copy-feats:main():copy-feats.cc:100) Copied 376 feature matrices.
Imported config : cmvn_opts='' delta_opts=''
apply-cmvn is not used
Getting feature dim :
copy-feats scp:exp/dnn4_pretrain-dbn_dnn/train.scp ark:-
WARNING (feat-to-dim:Close():kaldi-io.cc:465) Pipe copy-feats scp:exp/dnn4_pretrain-dbn_dnn/train.scp ark:- | had nonzero return status 13
Feature dim is : 40
Using pre-computed feature-transform : 'exp/dnn4_pretrain-dbn/final.feature_transform'
# NN-INITIALIZATION
Getting input/output dims :
feat-to-dim 'ark:copy-feats scp:exp/dnn4_pretrain-dbn_dnn/train.scp ark:- | nnet-forward exp/dnn4_pretrain-dbn_dnn/final.feature_transform ark:- ark:- |' -
copy-feats scp:exp/dnn4_pretrain-dbn_dnn/train.scp ark:-
nnet-forward exp/dnn4_pretrain-dbn_dnn/final.feature_transform ark:- ark:-
LOG (nnet-forward:SelectGpuId():cu-device.cc:80) Manually selected to compute on CPU.
WARNING (feat-to-dim:Close():kaldi-io.cc:465) Pipe copy-feats scp:exp/dnn4_pretrain-dbn_dnn/train.scp ark:- | nnet-forward exp/dnn4_pretrain-dbn_dnn/final.feature_transform ark:- ark:- | had nonzero return status 36096
feat-to-dim ark:- -
nnet-forward 'nnet-concat exp/dnn4_pretrain-dbn_dnn/final.feature_transform exp/dnn4_pretrain-dbn/6.dbn -|' 'ark:copy-feats scp:exp/dnn4_pretrain-dbn_dnn/train.scp ark:- |' ark:-
LOG (nnet-forward:SelectGpuId():cu-device.cc:80) Manually selected to compute on CPU.
nnet-concat exp/dnn4_pretrain-dbn_dnn/final.feature_transform exp/dnn4_pretrain-dbn/6.dbn -
LOG (nnet-concat:main():nnet-concat.cc:53) Reading exp/dnn4_pretrain-dbn_dnn/final.feature_transform
LOG (nnet-concat:main():nnet-concat.cc:65) Concatenating exp/dnn4_pretrain-dbn/6.dbn
LOG (nnet-concat:main():nnet-concat.cc:82) Written model to -
copy-feats scp:exp/dnn4_pretrain-dbn_dnn/train.scp ark:-
Generating network prototype exp/dnn4_pretrain-dbn_dnn/nnet.proto
Initializing exp/dnn4_pretrain-dbn_dnn/nnet.proto -> exp/dnn4_pretrain-dbn_dnn/nnet.init
nnet-concat exp/dnn4_pretrain-dbn/6.dbn exp/dnn4_pretrain-dbn_dnn/nnet.init exp/dnn4_pretrain-dbn_dnn/nnet_6.dbn_dnn.init
LOG (nnet-concat:main():nnet-concat.cc:53) Reading exp/dnn4_pretrain-dbn/6.dbn
LOG (nnet-concat:main():nnet-concat.cc:65) Concatenating exp/dnn4_pretrain-dbn_dnn/nnet.init
LOG (nnet-concat:main():nnet-concat.cc:82) Written model to exp/dnn4_pretrain-dbn_dnn/nnet_6.dbn_dnn.init
# RUNNING THE NN-TRAINING SCHEDULER
steps/nnet/train_scheduler.sh --feature-transform exp/dnn4_pretrain-dbn_dnn/final.feature_transform --learn-rate 0.008 --randomizer-seed 777 exp/dnn4_pretrain-dbn_dnn/nnet_6.dbn_dnn.init ark:copy-feats scp:exp/dnn4_pretrain-dbn_dnn/train.scp ark:- | ark:copy-feats scp:exp/dnn4_pretrain-dbn_dnn/cv.scp ark:- | ark:ali-to-pdf exp/tri3_ali/final.mdl "ark:gunzip -c exp/tri3_ali/ali.*.gz |" ark:- | ali-to-post ark:- ark:- | ark:ali-to-pdf exp/tri3_ali/final.mdl "ark:gunzip -c exp/tri3_ali/ali.*.gz |" ark:- | ali-to-post ark:- ark:- | exp/dnn4_pretrain-dbn_dnn
CROSSVAL PRERUN AVG.LOSS 7.7541 (Xent),
ITERATION 01: TRAIN AVG.LOSS 2.1114, (lrate0.008), CROSSVAL AVG.LOSS 1.9414, nnet accepted (nnet_6.dbn_dnn_iter01_learnrate0.008_tr2.1114_cv1.9414)
ITERATION 02: TRAIN AVG.LOSS 1.4023, (lrate0.008), CROSSVAL AVG.LOSS 1.8121, nnet accepted (nnet_6.dbn_dnn_iter02_learnrate0.008_tr1.4023_cv1.8121)
ITERATION 03: TRAIN AVG.LOSS 1.1973, (lrate0.008), CROSSVAL AVG.LOSS 1.7814, nnet accepted (nnet_6.dbn_dnn_iter03_learnrate0.008_tr1.1973_cv1.7814)
ITERATION 04: TRAIN AVG.LOSS 1.0502, (lrate0.008), CROSSVAL AVG.LOSS 1.7840, nnet rejected (nnet_6.dbn_dnn_iter04_learnrate0.008_tr1.0502_cv1.7840_rejected)
ITERATION 05: TRAIN AVG.LOSS 1.0070, (lrate0.004), CROSSVAL AVG.LOSS 1.6611, nnet accepted (nnet_6.dbn_dnn_iter05_learnrate0.004_tr1.0070_cv1.6611)
ITERATION 06: TRAIN AVG.LOSS 0.9168, (lrate0.002), CROSSVAL AVG.LOSS 1.5843, nnet accepted (nnet_6.dbn_dnn_iter06_learnrate0.002_tr0.9168_cv1.5843)
ITERATION 07: TRAIN AVG.LOSS 0.8765, (lrate0.001), CROSSVAL AVG.LOSS 1.5317, nnet accepted (nnet_6.dbn_dnn_iter07_learnrate0.001_tr0.8765_cv1.5317)
ITERATION 08: TRAIN AVG.LOSS 0.8581, (lrate0.0005), CROSSVAL AVG.LOSS 1.5001, nnet accepted (nnet_6.dbn_dnn_iter08_learnrate0.0005_tr0.8581_cv1.5001)
ITERATION 09: TRAIN AVG.LOSS 0.8490, (lrate0.00025), CROSSVAL AVG.LOSS 1.4816, nnet accepted (nnet_6.dbn_dnn_iter09_learnrate0.00025_tr0.8490_cv1.4816)
ITERATION 10: TRAIN AVG.LOSS 0.8438, (lrate0.000125), CROSSVAL AVG.LOSS 1.4715, nnet accepted (nnet_6.dbn_dnn_iter10_learnrate0.000125_tr0.8438_cv1.4715)
ITERATION 11: TRAIN AVG.LOSS 0.8406, (lrate6.25e-05), CROSSVAL AVG.LOSS 1.4670, nnet accepted (nnet_6.dbn_dnn_iter11_learnrate6.25e-05_tr0.8406_cv1.4670)
ITERATION 12: TRAIN AVG.LOSS 0.8385, (lrate3.125e-05), CROSSVAL AVG.LOSS 1.4653, nnet accepted (nnet_6.dbn_dnn_iter12_learnrate3.125e-05_tr0.8385_cv1.4653)
ITERATION 13: TRAIN AVG.LOSS 0.8372, (lrate1.5625e-05), CROSSVAL AVG.LOSS 1.4646, nnet accepted (nnet_6.dbn_dnn_iter13_learnrate1.5625e-05_tr0.8372_cv1.4646)
finished, too small rel. improvement .0004367825
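The scheduler above halves the learning rate once cross-validation improvement drops below a start threshold, and stops when the relative improvement falls under a final threshold (0.001 by default in nnet1's steps/nnet/train_scheduler.sh). A minimal sketch of that stopping check, using the iteration 12 and 13 CV losses above; the log's .0004367825 was computed from un-rounded losses, so this recomputation from the printed 4-decimal values lands nearby but not identically:

```shell
# Sketch of the stopping rule in steps/nnet/train_scheduler.sh;
# 0.001 is its default end_halving_impr threshold.
end_halving_impr=0.001

# Relative CV-loss improvement between two iterations.
rel_impr() {
  awk -v prev="$1" -v cur="$2" 'BEGIN { printf "%.7f\n", (prev - cur) / prev }'
}

impr=$(rel_impr 1.4653 1.4646)   # iteration 12 -> 13 CV losses
echo "rel. improvement: $impr"
awk -v i="$impr" -v t="$end_halving_impr" 'BEGIN { exit !(i < t) }' \
  && echo "stop: improvement below $end_halving_impr"
```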
Succeeded training the Neural Network : exp/dnn4_pretrain-dbn_dnn/final.nnet
Preparing feature transform with CNN layers for RBM pre-training.
steps/nnet/train.sh successfully finished.. exp/dnn4_pretrain-dbn_dnn
Removing features tmpdir /tmp/tmp.8cWHfij2mN @ aderic-To-be-filled-by-O-E-M
cv.ark
train.ark
# Accounting: time=872 threads=1
# Ended (code 0) at Sun Jun 18 23:01:56 CST 2017, elapsed time 872 seconds
steps/nnet/decode.sh --nj 20 --cmd run.pl --acwt 0.2 exp/tri3/graph data-fmllr-tri3/test exp/dnn4_pretrain-dbn_dnn/decode_test
steps/nnet/decode.sh --nj 20 --cmd run.pl --acwt 0.2 exp/tri3/graph data-fmllr-tri3/dev exp/dnn4_pretrain-dbn_dnn/decode_dev
steps/nnet/align.sh --nj 20 --cmd run.pl data-fmllr-tri3/train data/lang exp/dnn4_pretrain-dbn_dnn exp/dnn4_pretrain-dbn_dnn_ali
steps/nnet/align.sh: aligning data 'data-fmllr-tri3/train' using nnet/model 'exp/dnn4_pretrain-dbn_dnn', putting alignments in 'exp/dnn4_pretrain-dbn_dnn_ali'
steps/nnet/align.sh: done aligning data.
steps/nnet/make_denlats.sh --nj 20 --cmd run.pl --acwt 0.2 --lattice-beam 10.0 --beam 18.0 data-fmllr-tri3/train data/lang exp/dnn4_pretrain-dbn_dnn exp/dnn4_pretrain-dbn_dnn_denlats
Making unigram grammar FST in exp/dnn4_pretrain-dbn_dnn_denlats/lang
Compiling decoding graph in exp/dnn4_pretrain-dbn_dnn_denlats/dengraph
fsttablecompose exp/dnn4_pretrain-dbn_dnn_denlats/lang/L_disambig.fst exp/dnn4_pretrain-dbn_dnn_denlats/lang/G.fst
fstdeterminizestar --use-log=true
fstminimizeencoded
fstisstochastic exp/dnn4_pretrain-dbn_dnn_denlats/lang/tmp/LG.fst
1.2886e-05 1.2886e-05
fstcomposecontext --context-size=3 --central-position=1 --read-disambig-syms=exp/dnn4_pretrain-dbn_dnn_denlats/lang/phones/disambig.int --write-disambig-syms=exp/dnn4_pretrain-dbn_dnn_denlats/lang/tmp/disambig_ilabels_3_1.int exp/dnn4_pretrain-dbn_dnn_denlats/lang/tmp/ilabels_3_1
fstisstochastic exp/dnn4_pretrain-dbn_dnn_denlats/lang/tmp/CLG_3_1.fst
1.28131e-05 0
make-h-transducer --disambig-syms-out=exp/dnn4_pretrain-dbn_dnn_denlats/dengraph/disambig_tid.int --transition-scale=1.0 exp/dnn4_pretrain-dbn_dnn_denlats/lang/tmp/ilabels_3_1 exp/dnn4_pretrain-dbn_dnn/tree exp/dnn4_pretrain-dbn_dnn/final.mdl
fsttablecompose exp/dnn4_pretrain-dbn_dnn_denlats/dengraph/Ha.fst exp/dnn4_pretrain-dbn_dnn_denlats/lang/tmp/CLG_3_1.fst
fstdeterminizestar --use-log=true
fstrmepslocal
fstminimizeencoded
fstrmsymbols exp/dnn4_pretrain-dbn_dnn_denlats/dengraph/disambig_tid.int
fstisstochastic exp/dnn4_pretrain-dbn_dnn_denlats/dengraph/HCLGa.fst
0.000459552 -0.000485808
add-self-loops --self-loop-scale=0.1 --reorder=true exp/dnn4_pretrain-dbn_dnn/final.mdl
steps/nnet/make_denlats.sh: generating denlats from data 'data-fmllr-tri3/train', putting lattices in 'exp/dnn4_pretrain-dbn_dnn_denlats'
steps/nnet/make_denlats.sh: done generating denominator lattices.
steps/nnet/train_mpe.sh --cmd run.pl --num-iters 6 --acwt 0.2 --do-smbr true data-fmllr-tri3/train data/lang exp/dnn4_pretrain-dbn_dnn exp/dnn4_pretrain-dbn_dnn_ali exp/dnn4_pretrain-dbn_dnn_denlats exp/dnn4_pretrain-dbn_dnn_smbr
Pass 1 (learnrate 0.00001)
TRAINING FINISHED; Time taken = 4.47292 min; processed 4191.23 frames per second.
Done 3696 files, 0 with no reference alignments, 0 with no lattices, 0 with other errors.
Overall average frame-accuracy is 0.854151 over 1124823 frames.
Pass 2 (learnrate 1e-05)
TRAINING FINISHED; Time taken = 4.44345 min; processed 4219.03 frames per second.
Done 3696 files, 0 with no reference alignments, 0 with no lattices, 0 with other errors.
Overall average frame-accuracy is 0.861744 over 1124823 frames.
Pass 3 (learnrate 1e-05)
TRAINING FINISHED; Time taken = 4.4547 min; processed 4208.37 frames per second.
Done 3696 files, 0 with no reference alignments, 0 with no lattices, 0 with other errors.
Overall average frame-accuracy is 0.866113 over 1124823 frames.
Pass 4 (learnrate 1e-05)
TRAINING FINISHED; Time taken = 4.43732 min; processed 4224.86 frames per second.
Done 3696 files, 0 with no reference alignments, 0 with no lattices, 0 with other errors.
Overall average frame-accuracy is 0.869213 over 1124823 frames.
Pass 5 (learnrate 1e-05)
TRAINING FINISHED; Time taken = 4.44124 min; processed 4221.13 frames per second.
Done 3696 files, 0 with no reference alignments, 0 with no lattices, 0 with other errors.
Overall average frame-accuracy is 0.871653 over 1124823 frames.
Pass 6 (learnrate 1e-05)
TRAINING FINISHED; Time taken = 4.47654 min; processed 4187.85 frames per second.
Done 3696 files, 0 with no reference alignments, 0 with no lattices, 0 with other errors.
Overall average frame-accuracy is 0.873699 over 1124823 frames.
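A quick consistency check on these sMBR passes: elapsed minutes times the reported frames-per-second should land on the frame count. A sketch using the Pass 1 numbers; rounding in the printed figures explains the tiny discrepancy:

```shell
# Pass 1: 4.47292 min at 4191.23 frames/sec vs. the reported 1124823 frames.
awk 'BEGIN {
  frames = 4.47292 * 60 * 4191.23
  printf "estimated frames: %.0f (reported: 1124823)\n", frames
}'
```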
MPE/sMBR training finished
Re-estimating priors by forwarding the training set.
steps/nnet/make_priors.sh --cmd run.pl --nj 20 data-fmllr-tri3/train exp/dnn4_pretrain-dbn_dnn_smbr
Accumulating prior stats by forwarding 'data-fmllr-tri3/train' with 'exp/dnn4_pretrain-dbn_dnn_smbr'
Succeeded creating prior counts 'exp/dnn4_pretrain-dbn_dnn_smbr/prior_counts' from 'data-fmllr-tri3/train'
steps/nnet/decode.sh --nj 20 --cmd run.pl --nnet exp/dnn4_pretrain-dbn_dnn_smbr/1.nnet --acwt 0.2 exp/tri3/graph data-fmllr-tri3/test exp/dnn4_pretrain-dbn_dnn_smbr/decode_test_it1
steps/nnet/decode.sh --nj 20 --cmd run.pl --nnet exp/dnn4_pretrain-dbn_dnn_smbr/1.nnet --acwt 0.2 exp/tri3/graph data-fmllr-tri3/dev exp/dnn4_pretrain-dbn_dnn_smbr/decode_dev_it1
steps/nnet/decode.sh --nj 20 --cmd run.pl --nnet exp/dnn4_pretrain-dbn_dnn_smbr/6.nnet --acwt 0.2 exp/tri3/graph data-fmllr-tri3/test exp/dnn4_pretrain-dbn_dnn_smbr/decode_test_it6
steps/nnet/decode.sh --nj 20 --cmd run.pl --nnet exp/dnn4_pretrain-dbn_dnn_smbr/6.nnet --acwt 0.2 exp/tri3/graph data-fmllr-tri3/dev exp/dnn4_pretrain-dbn_dnn_smbr/decode_dev_it6
Success
============================================================================
Getting Results [see RESULTS file]
============================================================================
%WER 31.7 | 400 15057 | 72.0 19.3 8.7 3.6 31.7 100.0 | -0.462 | exp/mono/decode_dev/score_5/ctm_39phn.filt.sys
%WER 24.5 | 400 15057 | 79.4 15.5 5.1 3.9 24.5 99.8 | -0.165 | exp/tri1/decode_dev/score_10/ctm_39phn.filt.sys
%WER 23.3 | 400 15057 | 80.8 14.6 4.6 4.1 23.3 99.8 | -0.416 | exp/tri2/decode_dev/score_9/ctm_39phn.filt.sys
%WER 20.6 | 400 15057 | 82.3 12.9 4.8 2.9 20.6 99.8 | -0.596 | exp/tri3/decode_dev/score_10/ctm_39phn.filt.sys
%WER 23.6 | 400 15057 | 80.3 14.9 4.8 3.9 23.6 99.8 | -0.457 | exp/tri3/decode_dev.si/score_8/ctm_39phn.filt.sys
%WER 21.1 | 400 15057 | 81.9 12.7 5.4 3.1 21.1 99.8 | -0.572 | exp/tri4_nnet/decode_dev/score_5/ctm_39phn.filt.sys
%WER 18.3 | 400 15057 | 84.1 11.2 4.7 2.4 18.3 99.5 | -0.159 | exp/sgmm2_4/decode_dev/score_10/ctm_39phn.filt.sys
%WER 18.3 | 400 15057 | 84.8 11.3 3.9 3.1 18.3 99.0 | -0.322 | exp/sgmm2_4_mmi_b0.1/decode_dev_it1/score_8/ctm_39phn.filt.sys
%WER 18.3 | 400 15057 | 85.2 11.2 3.6 3.5 18.3 99.3 | -0.442 | exp/sgmm2_4_mmi_b0.1/decode_dev_it2/score_7/ctm_39phn.filt.sys
%WER 18.4 | 400 15057 | 85.2 11.3 3.5 3.6 18.4 99.3 | -0.475 | exp/sgmm2_4_mmi_b0.1/decode_dev_it3/score_7/ctm_39phn.filt.sys
%WER 18.4 | 400 15057 | 84.9 11.3 3.8 3.3 18.4 99.0 | -0.361 | exp/sgmm2_4_mmi_b0.1/decode_dev_it4/score_8/ctm_39phn.filt.sys
%WER 17.7 | 400 15057 | 85.0 10.7 4.3 2.7 17.7 98.8 | -1.060 | exp/dnn4_pretrain-dbn_dnn/decode_dev/score_4/ctm_39phn.filt.sys
%WER 17.5 | 400 15057 | 85.3 10.7 4.0 2.8 17.5 98.8 | -1.040 | exp/dnn4_pretrain-dbn_dnn_smbr/decode_dev_it1/score_4/ctm_39phn.filt.sys
%WER 17.5 | 400 15057 | 85.9 10.7 3.4 3.4 17.5 98.8 | -1.065 | exp/dnn4_pretrain-dbn_dnn_smbr/decode_dev_it6/score_4/ctm_39phn.filt.sys
%WER 16.9 | 400 15057 | 86.2 11.0 2.9 3.1 16.9 99.0 | -0.263 | exp/combine_2/decode_dev_it1/score_5/ctm_39phn.filt.sys
%WER 16.9 | 400 15057 | 85.9 11.0 3.1 2.8 16.9 99.0 | -0.134 | exp/combine_2/decode_dev_it2/score_6/ctm_39phn.filt.sys
%WER 17.0 | 400 15057 | 85.8 11.1 3.1 2.9 17.0 99.0 | -0.139 | exp/combine_2/decode_dev_it3/score_6/ctm_39phn.filt.sys
%WER 16.9 | 400 15057 | 85.6 11.1 3.3 2.6 16.9 99.3 | -0.051 | exp/combine_2/decode_dev_it4/score_7/ctm_39phn.filt.sys
%WER 32.4 | 192 7215 | 70.0 19.1 11.0 2.3 32.4 100.0 | -0.123 | exp/mono/decode_test/score_7/ctm_39phn.filt.sys
%WER 25.7 | 192 7215 | 78.1 16.6 5.3 3.8 25.7 100.0 | -0.172 | exp/tri1/decode_test/score_10/ctm_39phn.filt.sys
%WER 23.8 | 192 7215 | 79.5 15.0 5.4 3.3 23.8 99.0 | -0.332 | exp/tri2/decode_test/score_10/ctm_39phn.filt.sys
%WER 21.7 | 192 7215 | 81.3 13.7 4.9 3.0 21.7 99.5 | -0.516 | exp/tri3/decode_test/score_10/ctm_39phn.filt.sys
%WER 24.3 | 192 7215 | 78.9 15.6 5.5 3.2 24.3 99.5 | -0.232 | exp/tri3/decode_test.si/score_10/ctm_39phn.filt.sys
%WER 22.6 | 192 7215 | 80.9 13.3 5.8 3.4 22.6 99.5 | -0.822 | exp/tri4_nnet/decode_test/score_4/ctm_39phn.filt.sys
%WER 19.9 | 192 7215 | 83.5 12.2 4.3 3.5 19.9 99.5 | -0.421 | exp/sgmm2_4/decode_test/score_7/ctm_39phn.filt.sys
%WER 20.0 | 192 7215 | 84.1 12.3 3.7 4.1 20.0 99.5 | -0.627 | exp/sgmm2_4_mmi_b0.1/decode_test_it1/score_6/ctm_39phn.filt.sys
%WER 20.0 | 192 7215 | 82.9 12.5 4.6 2.9 20.0 99.5 | -0.153 | exp/sgmm2_4_mmi_b0.1/decode_test_it2/score_10/ctm_39phn.filt.sys
%WER 20.1 | 192 7215 | 82.9 12.5 4.7 3.0 20.1 99.5 | -0.173 | exp/sgmm2_4_mmi_b0.1/decode_test_it3/score_10/ctm_39phn.filt.sys
%WER 20.0 | 192 7215 | 83.4 12.4 4.3 3.4 20.0 99.5 | -0.353 | exp/sgmm2_4_mmi_b0.1/decode_test_it4/score_8/ctm_39phn.filt.sys
%WER 18.5 | 192 7215 | 83.8 11.2 5.0 2.3 18.5 99.0 | -0.817 | exp/dnn4_pretrain-dbn_dnn/decode_test/score_5/ctm_39phn.filt.sys
%WER 18.7 | 192 7215 | 84.1 11.6 4.3 2.8 18.7 99.0 | -1.081 | exp/dnn4_pretrain-dbn_dnn_smbr/decode_test_it1/score_4/ctm_39phn.filt.sys
%WER 18.5 | 192 7215 | 84.6 11.4 4.0 3.1 18.5 99.0 | -0.848 | exp/dnn4_pretrain-dbn_dnn_smbr/decode_test_it6/score_5/ctm_39phn.filt.sys
%WER 18.4 | 192 7215 | 84.5 12.0 3.5 3.0 18.4 99.5 | -0.097 | exp/combine_2/decode_test_it1/score_6/ctm_39phn.filt.sys
%WER 18.4 | 192 7215 | 85.2 11.8 3.0 3.6 18.4 99.5 | -0.439 | exp/combine_2/decode_test_it2/score_4/ctm_39phn.filt.sys
%WER 18.5 | 192 7215 | 85.1 11.9 3.0 3.7 18.5 99.5 | -0.453 | exp/combine_2/decode_test_it3/score_4/ctm_39phn.filt.sys
%WER 18.4 | 192 7215 | 84.9 11.8 3.3 3.3 18.4 99.5 | -0.238 | exp/combine_2/decode_test_it4/score_5/ctm_39phn.filt.sys
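The tables above are the standard TIMIT scoring output, where %WER is the second field of each line. A minimal sketch of picking a system's best (lowest-WER) score from such lines, in the spirit of what the recipe's utils/best_wer.sh does; the helper name and the elided "..." fields here are illustrative, not the recipe's exact script:

```shell
# Given %WER lines in the format above, print the one with the lowest WER
# (field 2, sorted numerically).
best_wer_line() {
  sort -n -k2,2 | head -n 1
}

printf '%s\n' \
  '%WER 18.5 | 192 7215 | ... | exp/dnn4_pretrain-dbn_dnn/decode_test/...' \
  '%WER 17.5 | 400 15057 | ... | exp/dnn4_pretrain-dbn_dnn_smbr/decode_dev_it6/...' \
  | best_wer_line
```

In the actual recipe, tables like the one above come from looping over the decode directories and feeding the scoring files through a best-WER filter of this kind.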
============================================================================
Finished successfully on Sun Jun 18 23:54:00 CST 2017
============================================================================