Google AudioSet: How to Parse Google's Audio Dataset

Google AudioSet is an audio dataset released by Google that plays an important role in audio-related AI learning and research.

Since Google's site is not reachable from behind the Great Firewall, this post mirrors the dataset's introduction, download instructions, and parsing format from the official Google page.


Dataset overview


AudioSet consists of an expanding ontology of 632 audio event classes and 2,084,320 human-labeled 10-second sound clips drawn from YouTube videos. It covers a wide range of human and animal sounds, musical instruments and genres, and common everyday environmental sounds.

2.1 million annotated videos
5,800 hours of audio
527 classes of annotated sounds

For download instructions, see https://blog.csdn.net/qq_39437746/article/details/80793476

The concrete format of the tfrecord files is described below.


Features dataset
Frame-level features are stored as tensorflow.SequenceExample protocol buffers. A tensorflow.SequenceExample proto is reproduced here in text format:

context: {
  feature: {
    key  : "video_id"
    value: {
      bytes_list: {
        value: [YouTube video id string]
      }
    }
  }
  feature: {
    key  : "start_time_seconds"
    value: {
      float_list: {
        value: 6.0
      }
    }
  }
  feature: {
    key  : "end_time_seconds"
    value: {
      float_list: {
        value: 16.0
      }
    }
  }
  feature: {
    key  : "labels"
    value: {
      int64_list: {
        value: [1, 522, 11, 172] # The meaning of the labels can be found here.
      }
    }
  }
}
feature_lists: {
  feature_list: {
    key  : "audio_embedding"
    value: {
      feature: {
        bytes_list: {
          value: [128 8bit quantized features]
        }
      }
      feature: {
        bytes_list: {
          value: [128 8bit quantized features]
        }
      }
    }
    ... # Repeated for every second of the segment
  }

}
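To make the layout concrete, the following sketch builds a tensorflow.SequenceExample with exactly this structure and serializes it the way a record is stored inside a tfrecord file. The values are toy placeholders (a fake video id, all-zero embedding bytes), not real AudioSet data:

```python
import tensorflow as tf

# Context: one value per 10-second segment (toy values, not real data).
context = tf.train.Features(feature={
    "video_id": tf.train.Feature(
        bytes_list=tf.train.BytesList(value=[b"toy_video_id"])),
    "start_time_seconds": tf.train.Feature(
        float_list=tf.train.FloatList(value=[6.0])),
    "end_time_seconds": tf.train.Feature(
        float_list=tf.train.FloatList(value=[16.0])),
    "labels": tf.train.Feature(
        int64_list=tf.train.Int64List(value=[1, 522, 11, 172])),
})

# Feature list: one 128-byte quantized embedding per second of the segment.
frames = [tf.train.Feature(bytes_list=tf.train.BytesList(value=[bytes(128)]))
          for _ in range(10)]
feature_lists = tf.train.FeatureLists(feature_list={
    "audio_embedding": tf.train.FeatureList(feature=frames)})

example = tf.train.SequenceExample(context=context, feature_lists=feature_lists)
serialized = example.SerializeToString()  # what one tfrecord record contains
```

Printing `example` reproduces the text format shown above, which is a handy way to sanity-check the field names before writing a parser.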

TFRecord parsing code

import tensorflow as tf

def getParseData(filenames):
    # e.g. filenames = 'audioset_v1_embeddings/bal_train/5v.tfrecord'
    raw_dataset = tf.data.TFRecordDataset(filenames)

    # To inspect a raw record, uncomment:
    # for raw_record in raw_dataset.take(1):
    #     example = tf.train.SequenceExample()
    #     example.ParseFromString(raw_record.numpy())
    #     print(example)

    # Context features: one value per 10-second segment.
    context_feature = {
        "video_id": tf.io.FixedLenFeature([], tf.string),
        "labels": tf.io.VarLenFeature(tf.int64),
        "end_time_seconds": tf.io.FixedLenFeature([], tf.float32),
        "start_time_seconds": tf.io.FixedLenFeature([], tf.float32),
    }

    # Sequence features: one 128-byte embedding per second of audio.
    sequence_feature = {
        "audio_embedding": tf.io.FixedLenSequenceFeature(
            shape=[], dtype=tf.string, allow_missing=True)
    }

    def _parse_function(example_proto):
        return tf.io.parse_single_sequence_example(
            example_proto, context_feature, sequence_feature)

    # Apply the parser to every record; each element of the returned
    # dataset is a (context, sequence) pair of feature dicts.
    return raw_dataset.map(_parse_function)
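Each record parsed by tf.io.parse_single_sequence_example yields a (context, sequence) pair, where `sequence["audio_embedding"]` is a string tensor with one raw byte string per second, each holding 128 quantized 8-bit values. Below is a minimal sketch of decoding those bytes into a [num_frames, 128] uint8 array; the helper name `decode_embeddings` is my own, not part of the AudioSet tooling:

```python
import tensorflow as tf

def decode_embeddings(sequence):
    # `sequence["audio_embedding"]` has shape [num_frames], dtype string;
    # each element is 128 bytes of 8-bit quantized features.
    raw = sequence["audio_embedding"]
    decoded = tf.io.decode_raw(raw, tf.uint8)  # -> [num_frames, 128]
    return tf.reshape(decoded, [-1, 128])

# Toy usage with fake frames (three seconds of all-zero embeddings):
fake = {"audio_embedding": tf.constant([bytes(128)] * 3)}
emb = decode_embeddings(fake)
```

The resulting uint8 values can be dequantized back to floats if needed; the official VGGish post-processing maps them linearly from [0, 255] into the embedding's original value range.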

 
