Isolated-Word Speech Recognition (2): Speech Segmentation with webrtcvad

Algorithm overview

The WebRTC VAD models speech and noise with a GMM (Gaussian Mixture Model) and decides between the two from the corresponding probabilities. The advantage of this algorithm is that it is unsupervised and needs no real training. It does make mistakes when the speech rate is high, though: when I generated speech with the Baidu AIP I had to set the speed to '1', i.e. the slowest setting.
The GMM noise and speech models are as follows:

p(x_k \mid z, r_k) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{(x_k - \mu_z)^2}{2\sigma^2} \right)

x_k is the selected feature; in the WebRTC VAD it is specifically a subband energy. r_k is the parameter set consisting of the mean μ_z and the variance σ². z = 0 denotes noise and z = 1 denotes speech.
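
As a minimal NumPy sketch of this formula (the values of mu and sigma below are made up purely for illustration; the real WebRTC code keeps separate fixed-point GMM parameters per subband):

import numpy as np

def gaussian_likelihood(x, mu, sigma):
    # p(x | z, r): the single-Gaussian density from the formula above.
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)

# Toy hypothesis test on one subband energy value x:
# z = 0 (noise) and z = 1 (speech) each have their own mean and variance.
x = 12.5
p_noise = gaussian_likelihood(x, mu=8.0, sigma=3.0)    # H0
p_speech = gaussian_likelihood(x, mu=15.0, sigma=5.0)  # H1
is_speech = p_speech > p_noise  # the real code compares a weighted likelihood ratio to a threshold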

The detailed steps in the C implementation of the WebRTC VAD are as follows:

1. Set the mode

Based on the hangover, the individual decision thresholds and the global decision threshold, VAD detection is divided into the following four modes (how the mode is selected in the Python wrapper is sketched right after this list):

  1. 0-quality mode
  2. 1-low bitrate mode
  3. 2-aggressive mode
  4. 3-very aggressive mode
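
In the Python wrapper (webrtcvad) used in the code section below, the mode is selected like this:

import webrtcvad

vad = webrtcvad.Vad()  # defaults to mode 0 (quality)
vad.set_mode(3)        # 0-3; higher is more aggressive about filtering out non-speech
# or, equivalently, pass the mode to the constructor:
vad = webrtcvad.Vad(3)
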
2. Frame length

The WebRTC VAD only supports frame lengths of 10 ms, 20 ms and 30 ms, so this has to be checked beforehand; frames that do not match return -1.
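
For 16-bit mono PCM this translates into a fixed number of bytes per frame. A small illustrative helper (frame_bytes is not part of webrtcvad, just a sketch):

def frame_bytes(sample_rate, frame_duration_ms):
    return int(sample_rate * frame_duration_ms / 1000) * 2  # 2 bytes per 16-bit sample

# Only 10, 20 and 30 ms frames are accepted, e.g. at 8 kHz:
# 10 ms -> 160 bytes, 20 ms -> 320 bytes, 30 ms -> 480 bytes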

3. Sample rate

The core computation of the WebRTC VAD only supports an 8 kHz sample rate, so 32 kHz and 16 kHz input signals are first downsampled to 8 kHz.
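
The C code performs this downsampling internally with its own fixed-point filters; the sketch below only illustrates the idea using scipy, which is an assumption of this write-up rather than what WebRTC actually calls:

import numpy as np
from scipy.signal import resample_poly

def downsample_to_8k(samples, sample_rate):
    # Illustrative only: bring 16 kHz or 32 kHz int16 audio (as a NumPy array) down to 8 kHz.
    if sample_rate == 8000:
        return samples
    factor = sample_rate // 8000  # 2 for 16 kHz, 4 for 32 kHz
    out = resample_poly(samples.astype(np.float64), 1, factor)
    return out.astype(np.int16)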

4. VAD decision (core computation)

The computation at the 8 kHz rate consists of two steps:

  1. Compute the subband energies
The subbands are 80-250 Hz, 250-500 Hz, 500-1000 Hz, 1000-2000 Hz, 2000-3000 Hz and 3000-4000 Hz;
the energy of each subband is computed and collected into feature_vector.
  2. Compute the speech and non-speech probabilities with the Gaussian mixture model and decide the signal type with a hypothesis test
First the Gaussian models give H0 and H1 of the hypothesis test (h0_test and h1_test in the C code), and vadflag is decided against a threshold.

Then the quantities needed for the probability computation are updated: the speech means (speech_means), noise means (noise_means), speech standard deviations (speech_stds) and noise standard deviations (noise_stds).
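
The following sketch shows what feature_vector represents. It uses an FFT for clarity, whereas the real C code computes the subband energies with a cascade of fixed-point half-band filters, so treat it as an illustration rather than the actual implementation:

import numpy as np

# The six subbands used by the WebRTC VAD, in Hz.
SUBBANDS = [(80, 250), (250, 500), (500, 1000),
            (1000, 2000), (2000, 3000), (3000, 4000)]

def subband_energies(frame, sample_rate=8000):
    # frame: one 10/20/30 ms window of 8 kHz samples as a NumPy array.
    spectrum = np.abs(np.fft.rfft(frame.astype(np.float64))) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in SUBBANDS])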

Code

import collections
import contextlib
import os
import shutil
import sys
import wave

import webrtcvad

def read_wave(path):
    """Read a mono 16-bit PCM wav file; return (PCM bytes, sample rate)."""
    with contextlib.closing(wave.open(path, 'rb')) as wf:
        num_channels = wf.getnchannels()
        assert num_channels == 1
        sample_width = wf.getsampwidth()
        assert sample_width == 2
        sample_rate = wf.getframerate()
        assert sample_rate in (8000, 16000, 32000, 48000)
        pcm_data = wf.readframes(wf.getnframes())
        return pcm_data, sample_rate

def write_wave(path, audio, sample_rate):
    """Write PCM bytes to a mono 16-bit wav file at the given sample rate."""
    with contextlib.closing(wave.open(path, 'wb')) as wf:
        wf.setnchannels(1)
        wf.setsampwidth(2)
        wf.setframerate(sample_rate)
        wf.writeframes(audio)

class Frame(object):
    """Represents a "frame" of audio data."""
    def __init__(self, bytes, timestamp, duration):
        self.bytes = bytes
        self.timestamp = timestamp
        self.duration = duration

def frame_generator(frame_duration_ms, audio, sample_rate):
    """Yield successive Frames of frame_duration_ms from the PCM audio."""
    n = int(sample_rate * (frame_duration_ms / 1000.0) * 2)  # 2 bytes per 16-bit sample
    offset = 0
    timestamp = 0.0
    duration = (float(n) / sample_rate) / 2.0
    while offset + n < len(audio):
        yield Frame(audio[offset:offset + n], timestamp, duration)
        timestamp += duration
        offset += n

def vad_collector(sample_rate, frame_duration_ms, padding_duration_ms, vad, frames):
    """Filter out non-voiced frames and yield each voiced segment as PCM bytes."""
    num_padding_frames = int(padding_duration_ms / frame_duration_ms)
    # We use a deque for our sliding window/ring buffer.
    ring_buffer = collections.deque(maxlen=num_padding_frames)
    # We have two states: TRIGGERED and NOTTRIGGERED. We start in the
    # NOTTRIGGERED state.
    triggered = False

    voiced_frames = []
    for frame in frames:
        is_speech = vad.is_speech(frame.bytes, sample_rate)

        if not triggered:
            ring_buffer.append((frame, is_speech))
            num_voiced = len([f for f, speech in ring_buffer if speech])
            # If we're NOTTRIGGERED and more than 90% of the frames in
            # the ring buffer are voiced frames, then enter the
            # TRIGGERED state.
            if num_voiced > 0.9 * ring_buffer.maxlen:
                triggered = True
                # We want to yield all the audio we see from now until
                # we are NOTTRIGGERED, but we have to start with the
                # audio that's already in the ring buffer.
                for f, s in ring_buffer:
                    voiced_frames.append(f)
                ring_buffer.clear()
        else:
            # We're in the TRIGGERED state, so collect the audio data
            # and add it to the ring buffer.
            voiced_frames.append(frame)
            ring_buffer.append((frame, is_speech))
            num_unvoiced = len([f for f, speech in ring_buffer if not speech])
            # If more than 90% of the frames in the ring buffer are
            # unvoiced, then enter NOTTRIGGERED and yield whatever
            # audio we've collected.
            if num_unvoiced > 0.9 * ring_buffer.maxlen:
                triggered = False
                yield b''.join([f.bytes for f in voiced_frames])
                ring_buffer.clear()
                voiced_frames = []
    if voiced_frames:
        yield b''.join([f.bytes for f in voiced_frames])

if __name__ == '__main__':
    # 3. Split the recording into isolated words.
    filepath = sys.argv[1]  # path to the input wav file, assumed here to come from the command line
    audio, fs = read_wave(filepath)
    vad = webrtcvad.Vad(3)
    frames = list(frame_generator(30, audio, fs))
    segments = vad_collector(fs, 30, 300, vad, frames)
    # The 'test' folder holds the segmented single-word audio; remove it
    # (and everything in it) if it already exists, then recreate it.
    if os.path.exists('.\\test\\'):
        shutil.rmtree('.\\test\\')
    os.mkdir('.\\test\\')
    for i, segment in enumerate(segments):
        chunk_path = '.\\test\\chunk-%0003d.wav' % (i,)
        write_wave(chunk_path, segment, fs)
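
Assuming the script is saved as vad_split.py (a hypothetical name) and the input path is passed on the command line as in the main block above, it can be run as, for example, python vad_split.py input.wav. The segmented words are written to the test folder as chunk-000.wav, chunk-001.wav, and so on.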

References

  1. benhuo931115: "webrtcvad python: speech endpoint detection",
    https://blog.csdn.net/benhuo931115/article/details/54909228
  2. u012123989: "Speech endpoint detection with Python's webrtc library",
    https://blog.csdn.net/u012123989/article/details/72771667
