Audio classification with Keras in the time and frequency domains (with detailed comments)

Most of this article follows https://www.kaggle.com/fizzbuzz/beginner-s-guide-to-audio-data/notebook. It organizes my notes from working through that notebook and modifies the code to match my own learning flow.

Introduction

The task in this article comes from the Kaggle competition
Freesound General-Purpose Audio Tagging Challenge: Can you automatically recognize sounds from a wide range of real-world environments?
Much like image classification, the goal is to decide whether an audio clip is a drum, a baby's laughter, a blower, and so on. Unlike images, audio is less intuitive to inspect and is a sequence that changes over time, so let's learn how audio data should be handled.

Downloading the dataset

The dataset is downloaded from https://www.kaggle.com/c/freesound-audio-tagging/data. It is fairly large (about 7 GB) and requires a Kaggle account, so it is easiest to start the download in a browser, copy the link, and hand it to a download manager. The data consists of:

  • train.csv
    lists each wav file's name, its label, and whether the label was manually verified, roughly as follows:

fname,label,manually_verified
00044347.wav,Hi-hat,0
001ca53d.wav,Saxophone,1
002d256b.wav,Trumpet,0
0033e230.wav,Glockenspiel,1
00353774.wav,Cello,1
003b91e8.wav,Cello,0
003da8e5.wav,Knock,1
0048fd00.wav,Gunshot_or_gunfire,1
004ad66f.wav,Clarinet,0

  • sample_submission.csv
    since the final evaluation scores the top-3 predicted classes, each submitted row contains three predicted labels.

fname,label
00063640.wav,Laughter Hi-Hat Flute
0013a1db.wav,Laughter Hi-Hat Flute
002bb878.wav,Laughter Hi-Hat Flute
002d392d.wav,Laughter Hi-Hat Flute
00326aa9.wav,Laughter Hi-Hat Flute
0038a046.wav,Laughter Hi-Hat Flute
003995fa.wav,Laughter Hi-Hat Flute
005ae625.wav,Laughter Hi-Hat Flute
007759c4.wav,Laughter Hi-Hat Flute

  • train.zip, test.zip
    the wav files corresponding to the two csv files.

Version 0: A classification CNN built in the time domain

Since the audio-processing pipeline is unfamiliar territory, the focus here is on how the code is put together.
First, import the required libraries.

import librosa
import numpy as np
import scipy
from keras import losses, models, optimizers
from keras.activations import relu, softmax
from keras.callbacks import (EarlyStopping, LearningRateScheduler,
                             ModelCheckpoint, TensorBoard, ReduceLROnPlateau)
from keras.layers import (Convolution1D, Dense, Dropout, GlobalAveragePooling1D, 
                          GlobalMaxPool1D, Input, MaxPool1D, concatenate)
from keras.utils import Sequence, to_categorical

First, we need to settle the following:

Network architecture

Building from the time domain means the input is a one-dimensional sequence, so a 1D CNN is used. The input is a fixed 2-second clip sampled at 16000 Hz, but our recordings vary in length, so we have to decide how to turn them into inputs of exactly that length.


Data construction

Let's first think about which constants we need to configure:

  • sampling rate
  • duration of the audio fed to the network
  • number of output classes
  • maximum number of epochs
  • learning rate

So we write a configuration class as follows:
class Config(object):
    def __init__(self, sampling_rate=16000, audio_duration=2, n_classes=41,
                 learning_rate=0.0001, max_epochs=50):
        self.sampling_rate = sampling_rate
        self.audio_duration = audio_duration
        self.n_classes = n_classes
        self.learning_rate = learning_rate
        self.max_epochs = max_epochs
        # number of samples fed to the network
        self.audio_length = self.sampling_rate * self.audio_duration
        # shape of the network Input
        self.dim = (self.audio_length, 1)
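
With the defaults above, a quick sanity check of what these fields evaluate to:

config = Config()
print(config.audio_length)   # 16000 * 2 = 32000 samples
print(config.dim)            # (32000, 1)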

Next comes generating data from the wav files; the following draws on https://blog.csdn.net/m0_37477175/article/details/79716312.
When training a Keras model one usually loads all the training data into memory and feeds it to the network, but when memory is limited and the dataset is large this is no longer feasible, so we build a data generator instead. Quoting the Keras API docs:

Every Sequence must implement the __getitem__ and the __len__ methods. If you want to modify your dataset between epochs you may implement on_epoch_end. The method __getitem__ should return a complete batch.

class DataGenerator(Sequence):
    def __init__(self, config, data_dir, list_IDs, labels=None, 
                 batch_size=64, preprocessing_fn=lambda x: x):
        self.config = config
        self.data_dir = data_dir
        # Slightly modified from the original notebook; see the pandas discussion later for why list() is added
        self.list_IDs = list(list_IDs)
        self.labels = None if labels is None else list(labels)
        self.batch_size = batch_size
        self.preprocessing_fn = preprocessing_fn
        # on_epoch_end builds the array of positions used to map batches to files;
        # Keras also calls it after every training epoch
        self.on_epoch_end()
        self.dim = self.config.dim

    # The following two methods must be implemented by every Sequence subclass
    # Return the number of batches per epoch
    def __len__(self):
        return int(np.ceil(len(self.list_IDs) / self.batch_size))

    # Return the contents of the index-th batch
    def __getitem__(self, index):
        # Positions belonging to this batch, e.g. [a, a+1, ..., a+batch_size-1]
        indexes = self.indexes[index*self.batch_size:(index+1)*self.batch_size]
        # Map positions to IDs, giving the list of files to process
        # Note: if list_IDs were still a pandas Series, [k] would select by index label, not by position
        list_IDs_temp = [self.list_IDs[k] for k in indexes]
        return self.__data_generation(indexes, list_IDs_temp)

    def on_epoch_end(self):
        self.indexes = np.arange(len(self.list_IDs))

    def __data_generation(self, indexes, list_IDs_temp):
        cur_batch_size = len(list_IDs_temp)
        # self.dim is a shape-like tuple, so * unpacks it into the np.empty shape
        X = np.empty((cur_batch_size, *self.dim))
        input_length = self.config.audio_length
        for i, ID in enumerate(list_IDs_temp):
            file_path = self.data_dir + ID
            # Read and resample the audio
            data, _ = librosa.core.load(file_path, sr=self.config.sampling_rate,
                                        res_type='kaiser_fast')

            # Random offset / padding
            # If the clip is too long, take a random crop of the required length
            # e.g. len(data)=5000, input_length=1000
            if len(data) > input_length:
                # max_offset=4000
                max_offset = len(data) - input_length
                # offset=3214
                offset = np.random.randint(max_offset)
                # data[3214:1000+3214]
                data = data[offset:(input_length+offset)]
            else:
                # If the clip is too short, pad it to the required length
                if input_length > len(data):
                    max_offset = input_length - len(data)
                    offset = np.random.randint(max_offset)
                else:
                    offset = 0
                data = np.pad(data, (offset, input_length - len(data) - offset), "constant")

            # Normalization + other preprocessing
            # preprocessing_fn normalizes the waveform, see audio_norm below
            # [:, np.newaxis] adds a channel dimension to match the network Input shape
            data = self.preprocessing_fn(data)[:, np.newaxis]
            # Store the processed clip as the i-th sample of the batch
            X[i,] = data

        # With labels (training data)
        if self.labels is not None:
            y = np.empty(cur_batch_size, dtype=int)
            for i, k in enumerate(indexes):
                y[i] = self.labels[k]
            return X, to_categorical(y, num_classes=self.config.n_classes)
        # Without labels (test data)
        else:
            return X
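
To make the random offset / padding step above more concrete, here is a standalone toy example of the same logic (the lengths are made up purely for illustration):

import numpy as np

input_length = 8
data = np.arange(5)                      # a clip shorter than input_length
max_offset = input_length - len(data)
offset = np.random.randint(max_offset)   # say offset = 1
padded = np.pad(data, (offset, input_length - len(data) - offset), "constant")
print(padded)                            # e.g. [0 0 1 2 3 4 0 0]

data = np.arange(20)                     # a clip longer than input_length
max_offset = len(data) - input_length
offset = np.random.randint(max_offset)
cropped = data[offset:offset + input_length]
print(cropped)                           # 8 consecutive samples starting at offset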

We also need a function that normalizes the audio:

def audio_norm(data):
    max_data = np.max(data)
    min_data = np.min(data)
    data = (data-min_data)/(max_data-min_data+1e-6)
    return data-0.5
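
audio_norm min-max scales each waveform and then centers it, so the output lies roughly in [-0.5, 0.5]. A quick check:

x = np.array([-0.2, 0.0, 0.6])
print(audio_norm(x))   # approximately [-0.5, -0.25, 0.5]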

Building the model

def get_1d_conv_model(config):
    
    nclass = config.n_classes
    input_length = config.audio_length
    
    inp = Input(shape=(input_length,1))
    x = Convolution1D(16, 9, activation=relu, padding="valid")(inp)
    x = Convolution1D(16, 9, activation=relu, padding="valid")(x)
    x = MaxPool1D(16)(x)
    x = Dropout(rate=0.1)(x)
    
    x = Convolution1D(32, 3, activation=relu, padding="valid")(x)
    x = Convolution1D(32, 3, activation=relu, padding="valid")(x)
    x = MaxPool1D(4)(x)
    x = Dropout(rate=0.1)(x)
    
    x = Convolution1D(32, 3, activation=relu, padding="valid")(x)
    x = Convolution1D(32, 3, activation=relu, padding="valid")(x)
    x = MaxPool1D(4)(x)
    x = Dropout(rate=0.1)(x)
    
    x = Convolution1D(256, 3, activation=relu, padding="valid")(x)
    x = Convolution1D(256, 3, activation=relu, padding="valid")(x)
    x = GlobalMaxPool1D()(x)
    x = Dropout(rate=0.2)(x)

    x = Dense(64, activation=relu)(x)
    x = Dense(1028, activation=relu)(x)
    out = Dense(nclass, activation=softmax)(x)

    model = models.Model(inputs=inp, outputs=out)
    opt = optimizers.Adam(config.learning_rate)

    model.compile(optimizer=opt, loss=losses.categorical_crossentropy, metrics=['acc'])
    return model

Training

# Read the csv files
import pandas as pd
train = pd.read_csv('train.csv')
test = pd.read_csv('sample_submission.csv')
# Use the wav filename as the index
train.set_index("fname", inplace=True)
test.set_index("fname", inplace=True)

Then find out how many samples each wav file contains:

import wave
# fname is now the index, so iterate over the index rather than a column
train['nframes'] = train.index.map(lambda f:
                                   wave.open('audio_train/' + f).getnframes())
test['nframes'] = test.index.map(lambda f:
                                 wave.open('audio_test/' + f).getnframes())
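
The code below also refers to LABELS (the list of class names) and a label_idx column that maps each class name to an integer; following the referenced notebook, they can be built like this:

LABELS = list(train.label.unique())
label_idx = {label: i for i, label in enumerate(LABELS)}
train['label_idx'] = train.label.apply(lambda x: label_idx[x])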

Then instantiate the configuration and the data generator we defined earlier:

config = Config(sampling_rate=16000, audio_duration=2, max_epochs=5)
train_generator = DataGenerator(config=config, data_dir='audio_train/',
                                list_IDs=train.index, labels=train.label_idx,
                                batch_size=64, preprocessing_fn=audio_norm)

Build the model and start training:

model = get_1d_conv_model(config)
history = model.fit_generator(train_generator, epochs=config.max_epochs, use_multiprocessing=True, workers=6, max_queue_size=20)

Then we can produce predictions. Let's see what the final output looks like:

# Save test predictions
test_generator = DataGenerator(config, 'audio_test/', test.index, batch_size=128,
                               preprocessing_fn=audio_norm)
predictions = model.predict_generator(test_generator, use_multiprocessing=True, 
                                      workers=6, max_queue_size=20, verbose=1)
# Make a submission file
# predictions.shape = (len(test), 41)
top_3 = np.array(LABELS)[np.argsort(-predictions, axis=1)[:, :3]]
predicted_labels = [' '.join(list(x)) for x in top_3]
test['label'] = predicted_labels
# Double brackets return a DataFrame (with the label header and the fname index)
test[['label']].to_csv("predictions.csv")
test[['label']].head()
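
To see what the argsort line does, here is a toy example with four made-up class names (placeholders, not the real LABELS):

LABELS_toy = np.array(['Cello', 'Knock', 'Hi-hat', 'Flute'])
p = np.array([[0.1, 0.5, 0.3, 0.05]])              # predicted probabilities for one clip
top_3 = LABELS_toy[np.argsort(-p, axis=1)[:, :3]]
print(top_3)                                       # [['Knock' 'Hi-hat' 'Cello']]
print(' '.join(top_3[0]))                          # Knock Hi-hat Cello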

The program above is still quite crude: there is only a training set and a test set, and the test predictions go straight to Kaggle. Without a local scoring mechanism we have no idea how good the model really is, so let's add a validation set and a few other measures.

Version 1: Adding checkpointing and cross-validation

To make it easier to pick out the best model, Version 0 is extended by splitting the training data into 10 folds and training a model on each, plus a few Keras Callbacks whose roles are as follows:

Callbacks

We use some Keras callbacks to monitor the training.

  • ModelCheckpoint saves the best weights of our model (selected on the validation data). We use these weights to make test predictions.
  • EarlyStopping stops the training once validation loss ceases to decrease
  • TensorBoard helps us visualize training and validation loss and accuracy.

A pandas pitfall worth noting

a=pd.DataFrame({'A':['a','b','c'],'B':[11,12,13],'C':['h','e','h']})
print(a.head())

which prints:

   A   B  C
0  a  11  h
1  b  12  e
2  c  13  h

Now take a slice of the table and read the second element of that slice.

b=a[1:]
b['B'][1]

You might expect the second element of b to be 13, yet the result is 12. That is because slicing does not produce an independent table with a fresh index: b still carries a's original integer index, so b['B'][1] selects by that index label and returns the element whose index is 1 in the original table a.

  • The original Kaggle notebook solves this by setting a different, non-numeric index, which removes the table's default integer index; integer lookups then fall back to positional indexing and stay correct.
a=pd.DataFrame({'A':['a','b','c'],'B':[11,12,13],'C':['h','e','h']})
a.set_index('A',inplace=True)
print(a.head())
    B  C
A       
a  11  h
b  12  e
c  13  h

  • Alternatively, stop b['B'] from carrying a's index at all by wrapping it in list(), i.e.
list(b['B'])[1]

which likewise returns the expected result, 13.
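
A third option, not used in this article, is explicit positional indexing: b['B'].iloc[1] also returns 13.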

Data splitting

Back to the main thread: why do we care about the issue above? Version 0 used the entire dataset, whereas now cross-validation pulls subsets out of it; without the fix, looking data up by index inside DataGenerator would raise a KeyError.
For an explanation of KFold and related methods see
https://blog.csdn.net/FontThrone/article/details/79220. StratifiedKFold is used here: it keeps the label distribution of each fold the same as in the full dataset. For example, if the original data contains ten 0s and five 1s, every fold preserves the same 2:1 ratio. The code is as follows:

from sklearn.model_selection import StratifiedKFold
import os
import shutil

PREDICTION_FOLDER = "predictions_1d_conv"
if not os.path.exists(PREDICTION_FOLDER):
    os.mkdir(PREDICTION_FOLDER)
if os.path.exists('logs/' + PREDICTION_FOLDER):
    shutil.rmtree('logs/' + PREDICTION_FOLDER)

# The original notebook wrote skf = StratifiedKFold(train.label_idx, n_splits=config.n_folds),
# which no longer works: in the current scikit-learn API the labels are passed to split() instead
skf = StratifiedKFold(n_splits=10)
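
As a quick standalone sanity check (a toy example, not part of the pipeline), StratifiedKFold really does keep the class ratio in every fold:

y = np.array([0]*10 + [1]*5)              # ten 0s, five 1s (2:1)
for tr, va in StratifiedKFold(n_splits=5).split(np.zeros(len(y)), y):
    print(np.bincount(y[va]))             # each validation fold: [2 1]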

Then, for each fold, train a model and save its predictions (note that this yields 10 models and 10 sets of test predictions):

for i, (train_split, val_split) in enumerate(skf.split(train.index, train['label_idx'])):
    # train_split and val_split are arrays of positional indices
    train_set = train.iloc[train_split]
    val_set = train.iloc[val_split]
    checkpoint = ModelCheckpoint('best_%d.h5'%i, monitor='val_loss', verbose=1, save_best_only=True)
    early = EarlyStopping(monitor="val_loss", mode="min", patience=5)
    tb = TensorBoard(log_dir='./logs/' + PREDICTION_FOLDER + '/fold_%d'%i, 
                     write_graph=True)
    callbacks_list = [checkpoint, early, tb]
    print("Fold: ", i)
    print("#"*50)
    model = get_1d_conv_model(config)

    train_generator = DataGenerator(config, 'audio_train/', train_set.index, 
                                    train_set['label_idx'], batch_size=64,
                                    preprocessing_fn=audio_norm)
    val_generator = DataGenerator(config, 'audio_train/', val_set.index, 
                                  val_set['label_idx'], batch_size=64,
                                  preprocessing_fn=audio_norm)

    history = model.fit_generator(train_generator, callbacks=callbacks_list, 
                                  validation_data=val_generator,
                                  epochs=config.max_epochs, use_multiprocessing=True, 
                                  workers=6, max_queue_size=20)

    # Load the weights of the best model found in this fold (lowest val_loss)
    model.load_weights('best_%d.h5'%i)
    # Save train predictions
    # Each model's predictions on the full training set are kept for later analysis
    train_generator = DataGenerator(config, 'audio_train/', train.index, batch_size=128,
                                    preprocessing_fn=audio_norm)
    predictions = model.predict_generator(train_generator, use_multiprocessing=True, 
                                          workers=6, max_queue_size=20, verbose=1)
    np.save(PREDICTION_FOLDER + "/train_predictions_%d.npy"%i, predictions)

    # Save test predictions
    # Each model predicts the test set; the 10 prediction sets are combined later
    test_generator = DataGenerator(config, 'audio_test/', test.index, batch_size=128,
                                   preprocessing_fn=audio_norm)
    predictions = model.predict_generator(test_generator, use_multiprocessing=True, 
                                          workers=6, max_queue_size=20, verbose=1)
    np.save(PREDICTION_FOLDER + "/test_predictions_%d.npy"%i, predictions)
    # Make a submission file
    top_3 = np.array(LABELS)[np.argsort(-predictions, axis=1)[:, :3]]
    predicted_labels = [' '.join(list(x)) for x in top_3]
    test['label'] = predicted_labels
    # Double brackets return a DataFrame (with the label header and the fname index)
    test[['label']].to_csv(PREDICTION_FOLDER + "/predictions_%d.csv"%i)

Combining the predictions

Finally, the 10 sets of test predictions are combined with a geometric mean and submitted as the final result.

pred_list = []
for i in range(10):
    pred_list.append(np.load("predictions_1d_conv/test_predictions_%d.npy"%i))
# Start from an array of ones with the same shape as one prediction matrix
prediction = np.ones_like(pred_list[0])
# Multiply the predicted probabilities of all folds together
for pred in pred_list:
    prediction = prediction*pred
# Take the geometric mean
prediction = prediction**(1./len(pred_list))
# Make a submission file
top_3 = np.array(LABELS)[np.argsort(-prediction, axis=1)[:, :3]]
predicted_labels = [' '.join(list(x)) for x in top_3]
test = pd.read_csv('sample_submission.csv')
test['label'] = predicted_labels
test[['fname', 'label']].to_csv("1d_conv_ensembled_submission.csv", index=False)
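
Taking the geometric mean of the fold probabilities is equivalent to averaging their logarithms and exponentiating, so a fold that assigns a class near-zero probability pulls the ensemble strongly away from that class. Up to a small epsilon added for numerical safety, the same result could be computed as:

prediction = np.exp(np.mean(np.log(np.stack(pred_list) + 1e-12), axis=0))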

Version 2: Classifying in the frequency domain with MFCC features

With the two versions above in place, the remaining changes are easy. Slightly extend the Config class with the MFCC-related settings:

class Config(object):
    def __init__(self, sampling_rate=16000, audio_duration=2, n_classes=41,
                 n_folds=10, learning_rate=0.0001, max_epochs=50,
                 use_mfcc=False, n_mfcc=20):
        self.sampling_rate = sampling_rate
        self.audio_duration = audio_duration
        self.n_classes = n_classes
        self.n_folds = n_folds
        self.learning_rate = learning_rate
        self.max_epochs = max_epochs
        self.n_mfcc = n_mfcc
        self.use_mfcc = use_mfcc
        # number of samples fed to the network
        self.audio_length = self.sampling_rate * self.audio_duration
        if use_mfcc:
            # MFCCs are not computed for every sample; librosa's default hop length is 512,
            # so the number of frames is 1 + floor(audio_length / 512)
            self.dim = (self.n_mfcc, 1 + int(np.floor(self.audio_length/512)), 1)
        else:
            self.dim = (self.audio_length, 1)
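
For instance, with the settings used later in this section (sampling_rate=44100, audio_duration=2, n_mfcc=40), and given librosa's default hop length of 512 samples:

config = Config(sampling_rate=44100, audio_duration=2, use_mfcc=True, n_mfcc=40)
print(config.audio_length)   # 44100 * 2 = 88200 samples
print(config.dim)            # (40, 1 + 88200 // 512, 1) = (40, 173, 1)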

DataGenerator also gains a branch for use_mfcc=True:

class DataGenerator(Sequence):
    def __init__(self, config, data_dir, list_IDs, labels=None, 
                 batch_size=64, preprocessing_fn=lambda x: x):
        
        self.config = config
        self.data_dir = data_dir
        # see the pandas discussion above for why list() is used
        self.list_IDs = list(list_IDs)
        self.labels = None if labels is None else list(labels)
        self.batch_size = batch_size
        self.preprocessing_fn = preprocessing_fn
        # on_epoch_end builds the array of positions used to map batches to files;
        # Keras also calls it after every training epoch
        self.on_epoch_end()
        self.dim = self.config.dim

    # The following two methods must be implemented by every Sequence subclass
    # Return the number of batches per epoch
    def __len__(self):
        return int(np.ceil(len(self.list_IDs) / self.batch_size))

    # Return the contents of the index-th batch
    def __getitem__(self, index):
        # Positions belonging to this batch, e.g. [a, a+1, ..., a+batch_size-1]
        indexes = self.indexes[index*self.batch_size:(index+1)*self.batch_size]
        # Map positions to IDs, giving the list of files to process
        # Note: if list_IDs were still a pandas Series, [k] would select by index label, not by position
        list_IDs_temp = [self.list_IDs[k] for k in indexes]
        return self.__data_generation(indexes, list_IDs_temp)

    def on_epoch_end(self):
        self.indexes = np.arange(len(self.list_IDs))

    def __data_generation(self, indexes, list_IDs_temp):
        cur_batch_size = len(list_IDs_temp)
        # self.dim is a shape-like tuple, so * unpacks it into the np.empty shape
        X = np.empty((cur_batch_size, *self.dim))
        input_length = self.config.audio_length
        for i, ID in enumerate(list_IDs_temp):
            file_path = self.data_dir + ID
            # Read and resample the audio
            data, _ = librosa.core.load(file_path, sr=self.config.sampling_rate,
                                        res_type='kaiser_fast')

            # Random offset / padding
            # If the clip is too long, take a random crop of the required length
            # e.g. len(data)=5000, input_length=1000
            if len(data) > input_length:
                # max_offset=4000
                max_offset = len(data) - input_length
                # offset=3214
                offset = np.random.randint(max_offset)
                # data[3214:1000+3214]
                data = data[offset:(input_length+offset)]
            else:
                # If the clip is too short, pad it to the required length
                if input_length > len(data):
                    max_offset = input_length - len(data)
                    offset = np.random.randint(max_offset)
                else:
                    offset = 0
                data = np.pad(data, (offset, input_length - len(data) - offset), "constant")

            # Normalization + other preprocessing
            if self.config.use_mfcc:
                # The original notebook does not normalize the MFCCs here, which is a bit puzzling,
                # since MFCCs carry level information as well
                # (it is also unclear how this relates to the energy features Kaldi appends)
                data = librosa.feature.mfcc(data, sr=self.config.sampling_rate,
                                            n_mfcc=self.config.n_mfcc)
                # Add a channel dimension
                data = np.reshape(data, (*data.shape, 1))
            else:
                # preprocessing_fn normalizes the waveform (see audio_norm above)
                # [:, np.newaxis] adds a channel dimension to match the network Input shape
                data = self.preprocessing_fn(data)[:, np.newaxis]
            # Store the processed clip as the i-th sample of the batch
            X[i,] = data

        # With labels (training data)
        if self.labels is not None:
            y = np.empty(cur_batch_size, dtype=int)
            for i, k in enumerate(indexes):
                y[i] = self.labels[k]
            return X, to_categorical(y, num_classes=self.config.n_classes)
        # Without labels (test data)
        else:
            return X
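
A quick standalone check (assuming the config above) that the MFCC matrix produced inside the generator matches config.dim:

data = np.zeros(config.audio_length)
mfcc = librosa.feature.mfcc(data, sr=config.sampling_rate, n_mfcc=config.n_mfcc)
print(mfcc.shape)            # (40, 173), i.e. config.dim without the channel axis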

Finally, the 2D convolution model:

from keras.layers import (Activation, BatchNormalization, Convolution2D,
                          Flatten, MaxPool2D)

def get_2d_conv_model(config):
    
    nclass = config.n_classes
    # config.dim = (n_mfcc, n_frames, 1)
    inp = Input(shape=config.dim)
    x = Convolution2D(32, (4,10), padding="same")(inp)
    x = BatchNormalization()(x)
    x = Activation("relu")(x)
    x = MaxPool2D()(x)
    
    x = Convolution2D(32, (4,10), padding="same")(x)
    x = BatchNormalization()(x)
    x = Activation("relu")(x)
    x = MaxPool2D()(x)
    
    x = Convolution2D(32, (4,10), padding="same")(x)
    x = BatchNormalization()(x)
    x = Activation("relu")(x)
    x = MaxPool2D()(x)
    
    x = Convolution2D(32, (4,10), padding="same")(x)
    x = BatchNormalization()(x)
    x = Activation("relu")(x)
    x = MaxPool2D()(x)

    x = Flatten()(x)
    x = Dense(64)(x)
    x = BatchNormalization()(x)
    x = Activation("relu")(x)
    out = Dense(nclass, activation=softmax)(x)

    model = models.Model(inputs=inp, outputs=out)
    opt = optimizers.Adam(config.learning_rate)

    model.compile(optimizer=opt, loss=losses.categorical_crossentropy, metrics=['acc'])
    return model

Training proceeds just as in Version 1, except that the Config now includes the MFCC-related settings.

config = Config(sampling_rate=44100, audio_duration=2, n_folds=10, 
                learning_rate=0.001, use_mfcc=True, n_mfcc=40)

MFCC normalization

Note that the original notebook loads the whole training set and fits on it directly; I am not sure the memory holds up. The earlier time-domain preprocessing normalized each clip's amplitude individually, whereas here the original author first computes MFCC features for the entire X_train and then normalizes them globally before training. To stay consistent with the rest of this article I keep training through DataGenerator, so I did not apply the global normalization.

def prepare_data(df, config, data_dir):
    X = np.empty(shape=(df.shape[0], config.dim[0], config.dim[1], 1))
    input_length = config.audio_length
    for i, fname in enumerate(df.index):
        print(fname)
        file_path = data_dir + fname
        data, _ = librosa.core.load(file_path, sr=config.sampling_rate, res_type="kaiser_fast")

        # Random offset / Padding
        if len(data) > input_length:
            max_offset = len(data) - input_length
            offset = np.random.randint(max_offset)
            data = data[offset:(input_length+offset)]
        else:
            if input_length > len(data):
                max_offset = input_length - len(data)
                offset = np.random.randint(max_offset)
            else:
                offset = 0
            data = np.pad(data, (offset, input_length - len(data) - offset), "constant")

        data = librosa.feature.mfcc(data, sr=config.sampling_rate, n_mfcc=config.n_mfcc)
        data = np.expand_dims(data, axis=-1)
        X[i,] = data
    return X
X_train = prepare_data(train, config, '../input/freesound-audio-tagging/audio_train/')
X_test = prepare_data(test, config, '../input/freesound-audio-tagging/audio_test/')
y_train = to_categorical(train.label_idx, num_classes=config.n_classes)
mean = np.mean(X_train, axis=0)
std = np.std(X_train, axis=0)

X_train = (X_train - mean)/std
X_test = (X_test - mean)/std
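
If global normalization is still wanted while training through DataGenerator, one possible compromise (my own sketch, not from the original notebook) is to compute mean and std once, as above, and apply them inside the MFCC branch of __data_generation; mfcc_norm below is a hypothetical helper:

def mfcc_norm(data, mean=mean, std=std):
    # data has shape (n_mfcc, n_frames, 1), i.e. one sample of X_train
    return (data - mean) / (std + 1e-6)

# ...then, inside DataGenerator.__data_generation, right after the reshape step:
#     data = mfcc_norm(data)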

Results and outlook

The original author's final runs show that combining the time-domain and frequency-domain models works best:



In speech recognition, MFCC features are usually augmented with first- and second-order deltas, energy, and so on; it would be worth testing whether adding them helps here. Also, the frame length (typically 20 ms to 30 ms, long enough to capture periodicity yet short enough to capture dynamics) is chosen with human speech in mind; I suspect that choice also affects this classification task and deserves further experiments.
