20180629 qzd
1. Preparing the project directory
Everything below is based on the thchs30 recipe.
- s5
  ├─── conf
  ├─── local
  ├─── steps
  ├─── utils
  └─── data/data_thchs30/ (the corpus data)
       ├─── train/*.wav
       ├─── dev/*.wav
       └─── test/*.wav
2. Data preparation
Create a train folder inside data/; this folder needs three files: wav.scp, utt2spk and spk2utt.
- data/
  ├─── train
  │    ├─── wav.scp (utterance ID, wav file path)
  │    ├─── utt2spk (utterance ID, speaker ID)
  │    └─── spk2utt (speaker ID, list of utterance IDs)
  └─── dev, test (same layout as train)
In the thchs30 recipe these three files are generated by the data-preparation script, called from run.sh (where $H is the project directory and $thchs points to the downloaded THCHS-30 corpus):
local/thchs-30_data_prep.sh $H $thchs/data_thchs30 || exit 1;
The formats are as follows:
wav.scp:
test1 data/test/test1.wav
test2 data/test/test2.wav
test3 data/test/test3.wav

utt2spk:
test1 global
test2 global
test3 global

spk2utt:
global test1 test2 test3
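spk2utt does not have to be written by hand: Kaldi ships helper scripts for deriving and checking these files. A minimal sketch, assuming the wav.scp and utt2spk shown above have already been placed under data/train:

# Kaldi expects the files to be sorted in the C locale
export LC_ALL=C
sort -o data/train/wav.scp data/train/wav.scp
sort -o data/train/utt2spk data/train/utt2spk
# derive spk2utt from utt2spk
utils/utt2spk_to_spk2utt.pl data/train/utt2spk > data/train/spk2utt
# sanity-check the directory (no transcripts or features yet)
utils/validate_data_dir.sh --no-text --no-feats data/train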
At this point the data directories are ready.
3. Feature extraction
rm -rf data/mfcc && mkdir -p data/mfcc && cp -R data/{train,dev,test,test_phone} data/mfcc || exit 1;

# produce MFCC features
for x in train dev test; do
  # make mfcc
  steps/make_mfcc.sh --nj $n --cmd "$train_cmd" data/mfcc/$x exp/make_mfcc/$x mfcc/$x || exit 1;
  ## compute cmvn: mean/variance statistics of the features, used for normalization
  #steps/compute_cmvn_stats.sh data/mfcc/$x exp/mfcc_cmvn/$x mfcc/$x || exit 1;
done
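Once the loop has finished, data/mfcc/train/feats.scp points at the generated archives. A quick sanity check (a sketch, assuming path.sh has been sourced so the featbin tools are on the PATH and the default 13-dimensional MFCC config is used) is to print the feature dimension and the number of frames per utterance:

# dimension of the feature vectors (13 for plain MFCCs)
feat-to-dim scp:data/mfcc/train/feats.scp -
# number of frames per utterance
feat-to-len scp:data/mfcc/train/feats.scp ark,t:- | head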
To look at the feature data:
/wd/qzd/kaldi-master/src/bin/copy-matrix ark:raw_mfcc_train.1.ark ark,t:- | less
or:
/wd/qzd/kaldi-master/src/featbin/copy-feats ark:raw_mfcc_train.1.ark ark,t:- | head
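To look at just one utterance, the feats.scp index can be filtered first. A sketch; test1 stands in for a real utterance ID from the corpus:

grep '^test1 ' data/mfcc/train/feats.scp | /wd/qzd/kaldi-master/src/featbin/copy-feats scp:- ark,t:- | less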
4. ark --> txt
How to get the feature data out of Kaldi, i.e. how to convert the binary .ark file into a .txt file:
/wd/qzd/kaldi-master/src/featbin/copy-feats --binary=false ark:raw_mfcc_train.1.ark ark,t:1.txt
ark: (location of the binary file); ark,t: (location where the text file should be written);
After this, the converted file 1.txt can be found in the project root directory; its content is plain text, with each utterance ID followed by its feature matrix.
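The conversion also works in the other direction: a text archive can be packed back into a binary archive plus an index. A sketch, with hypothetical output names:

/wd/qzd/kaldi-master/src/featbin/copy-feats ark,t:1.txt ark,scp:1_copy.ark,1_copy.scp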
Appendix: the feature-extraction code
From the process above we can see what happens to a piece of audio: it is first split into frames (e.g. a 25 ms window taken every 10 ms); each frame is smoothed with a window function (e.g. a Hamming window); an FFT then converts the windowed frame from the time domain into the frequency domain; the spectrum is passed through a Mel filterbank and the log of the filterbank energies is taken; finally a discrete cosine transform (DCT) produces a vector of coefficients, of which the first 13 are kept as the MFCC. Repeating this for every frame gives one feature vector per frame for the whole utterance.
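These choices are driven by the config file handed to compute-mfcc-feats (conf/mfcc.conf in the recipe). A minimal sketch of such a config: the sampling rate matches the 16 kHz THCHS-30 audio, and the remaining values are simply the tool's defaults written out for illustration.

--use-energy=false       # use C0 rather than raw energy
--sample-frequency=16000 # THCHS-30 wav files are 16 kHz
--frame-length=25        # window length in milliseconds
--frame-shift=10         # frame shift in milliseconds
--num-ceps=13            # keep the first 13 cepstral coefficients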
steps/make_mfcc.sh --nj $n --cmd "$train_cmd" data/mfcc/$x exp/make_mfcc/$x mfcc/$x || exit 1;
- The code of steps/make_mfcc.sh is as follows:
#!/bin/bash
# Copyright 2012-2016 Johns Hopkins University (Author: Daniel Povey)
# Apache 2.0
# To be run from .. (one directory up from here)
# see ../run.sh for example
# Begin configuration section.
nj=4
cmd=run.pl
mfcc_config=conf/mfcc.conf
compress=true
write_utt2num_frames=false # if true writes utt2num_frames
# End configuration section.
echo "$0 $@" # Print the command line for logging
if [ -f path.sh ]; then . ./path.sh; fi
. parse_options.sh || exit 1;
if [ $# -lt 1 ] || [ $# -gt 3 ]; then
  echo "Usage: $0 [options] <data-dir> [<log-dir> [<mfcc-dir>] ]";
  echo "e.g.: $0 data/train exp/make_mfcc/train mfcc"
  echo "Note: <log-dir> defaults to <data-dir>/log, and <mfcc-dir> defaults to <data-dir>/data"
  echo "Options: "
  echo "  --mfcc-config <config-file>                      # config passed to compute-mfcc-feats "
  echo "  --nj <nj>                                        # number of parallel jobs"
  echo "  --cmd (utils/run.pl|utils/queue.pl <queue opts>) # how to run jobs."
  echo "  --write-utt2num-frames <true|false>     # If true, write utt2num_frames file."
  exit 1;
fi
data=$1
if [ $# -ge 2 ]; then
  logdir=$2
else
  logdir=$data/log
fi
if [ $# -ge 3 ]; then
  mfccdir=$3
else
  mfccdir=$data/data
fi

# make $mfccdir an absolute pathname.
mfccdir=`perl -e '($dir,$pwd)= @ARGV; if($dir!~m:^/:) { $dir = "$pwd/$dir"; } print $dir; ' $mfccdir ${PWD}`

# use "name" as part of name of the archive.
name=`basename $data`

mkdir -p $mfccdir || exit 1;
mkdir -p $logdir || exit 1;

if [ -f $data/feats.scp ]; then
  mkdir -p $data/.backup
  echo "$0: moving $data/feats.scp to $data/.backup"
  mv $data/feats.scp $data/.backup
fi

scp=$data/wav.scp
required="$scp $mfcc_config"

for f in $required; do
  if [ ! -f $f ]; then
    echo "make_mfcc.sh: no such file $f"
    exit 1;
  fi
done

utils/validate_data_dir.sh --no-text --no-feats $data || exit 1;

if [ -f $data/spk2warp ]; then
  echo "$0 [info]: using VTLN warp factors from $data/spk2warp"
  vtln_opts="--vtln-map=ark:$data/spk2warp --utt2spk=ark:$data/utt2spk"
elif [ -f $data/utt2warp ]; then
  echo "$0 [info]: using VTLN warp factors from $data/utt2warp"
  vtln_opts="--vtln-map=ark:$data/utt2warp"
fi

for n in $(seq $nj); do
  # the next command does nothing unless $mfccdir/storage/ exists, see
  # utils/create_data_link.pl for more info.
  utils/create_data_link.pl $mfccdir/raw_mfcc_$name.$n.ark
done

if $write_utt2num_frames; then
  write_num_frames_opt="--write-num-frames=ark,t:$logdir/utt2num_frames.JOB"
else
  write_num_frames_opt=
fi

if [ -f $data/segments ]; then
  echo "$0 [info]: segments file exists: using that."

  split_segments=""
  for n in $(seq $nj); do
    split_segments="$split_segments $logdir/segments.$n"
  done

  utils/split_scp.pl $data/segments $split_segments || exit 1;
  rm $logdir/.error 2>/dev/null

  $cmd JOB=1:$nj $logdir/make_mfcc_${name}.JOB.log \
    extract-segments scp,p:$scp $logdir/segments.JOB ark:- \| \
    compute-mfcc-feats $vtln_opts --verbose=2 --config=$mfcc_config ark:- ark:- \| \
    copy-feats --compress=$compress $write_num_frames_opt ark:- \
      ark,scp:$mfccdir/raw_mfcc_$name.JOB.ark,$mfccdir/raw_mfcc_$name.JOB.scp \
      || exit 1;
else
  echo "$0: [info]: no segments file exists: assuming wav.scp indexed by utterance."
  split_scps=""
  for n in $(seq $nj); do
    split_scps="$split_scps $logdir/wav_${name}.$n.scp"
  done

  utils/split_scp.pl $scp $split_scps || exit 1;

  # add ,p to the input rspecifier so that we can just skip over
  # utterances that have bad wave data.
  $cmd JOB=1:$nj $logdir/make_mfcc_${name}.JOB.log \
    compute-mfcc-feats $vtln_opts --verbose=2 --config=$mfcc_config \
      scp,p:$logdir/wav_${name}.JOB.scp ark:- \| \
    copy-feats $write_num_frames_opt --compress=$compress ark:- \
      ark,scp:$mfccdir/raw_mfcc_$name.JOB.ark,$mfccdir/raw_mfcc_$name.JOB.scp \
      || exit 1;
fi

if [ -f $logdir/.error.$name ]; then
  echo "Error producing mfcc features for $name:"
  tail $logdir/make_mfcc_${name}.1.log
  exit 1;
fi

# concatenate the .scp files together.
for n in $(seq $nj); do
  cat $mfccdir/raw_mfcc_$name.$n.scp || exit 1;
done > $data/feats.scp || exit 1

if $write_utt2num_frames; then
  for n in $(seq $nj); do
    cat $logdir/utt2num_frames.$n || exit 1;
  done > $data/utt2num_frames || exit 1
  rm $logdir/utt2num_frames.*
fi

rm $logdir/wav_${name}.*.scp $logdir/segments.* 2>/dev/null

nf=`cat $data/feats.scp | wc -l`
nu=`cat $data/utt2spk | wc -l`
if [ $nf -ne $nu ]; then
  echo "It seems not all of the feature files were successfully processed ($nf != $nu);"
  echo "consider using utils/fix_data_dir.sh $data"
fi

if [ $nf -lt $[$nu - ($nu/20)] ]; then
  echo "Less than 95% the features were successfully generated. Probably a serious error."
  exit 1;
fi

echo "Succeeded creating MFCC features for $name"
Next post: [Kaldi] Feature Extraction: MFCC (Part 2)