This article uses an H.264 video stream as an example and shows how to insert and extract SEI (Supplemental Enhancement Information) with GStreamer in order to measure the transmission latency of the video stream.
The first key element is funnel, literally a "funnel": it merges two data streams at GstBuffer granularity. Picture it this way: appsrc produces green beans and x264enc produces red beans; the green and red beans pass through the funnel in single file, and each bean is one GstBuffer, i.e. NAL data at alignment=au[2] granularity.
The second key element is appsrc, which injects the SEI into the pipeline via the need-data signal[3].
The pipeline in gst-launch syntax:
gst-launch-1.0 funnel name=f \
appsrc name=appsrc-h264-sei do-timestamp=true block=true is-live=true ! video/x-h264, stream-format=byte-stream, alignment=au ! queue ! f. \
videotestsrc is-live=true ! x264enc ! video/x-h264, stream-format=byte-stream, alignment=au, profile=baseline ! queue ! f. \
f. ! queue ! h264parse ! video/x-h264, stream-format=byte-stream, alignment=au ! rtph264pay ! udpsink sync=false clients=127.0.0.1:5004
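gst-launch-1.0 cannot attach signal handlers, so in practice the sender pipeline would be built from application code and the appsrc looked up by name before connecting need-data. A minimal sketch under that assumption (error handling trimmed, names illustrative):

```c
#include <gst/gst.h>

/* Defined further below in this article. */
static void need_data_callback(GstElement *appsrc, guint unused, gpointer udata);

/* Sketch: build the sender pipeline with gst_parse_launch() and connect
 * need_data_callback to the need-data signal of the named appsrc. */
static GstElement *build_sender_pipeline(void) {
  GError *error = NULL;
  GstElement *pipeline = gst_parse_launch(
      "funnel name=f "
      "appsrc name=appsrc-h264-sei do-timestamp=true block=true is-live=true ! "
      "video/x-h264, stream-format=byte-stream, alignment=au ! queue ! f. "
      "videotestsrc is-live=true ! x264enc ! "
      "video/x-h264, stream-format=byte-stream, alignment=au, profile=baseline ! queue ! f. "
      "f. ! queue ! h264parse ! video/x-h264, stream-format=byte-stream, alignment=au ! "
      "rtph264pay ! udpsink sync=false clients=127.0.0.1:5004",
      &error);
  if (NULL == pipeline) {
    g_printerr("gst_parse_launch failed: %s\n", error->message);
    g_clear_error(&error);
    return NULL;
  }
  GstElement *appsrc = gst_bin_get_by_name(GST_BIN(pipeline), "appsrc-h264-sei");
  g_signal_connect(appsrc, "need-data", G_CALLBACK(need_data_callback), NULL);
  gst_object_unref(appsrc);
  return pipeline;
}
```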
The need_data_callback registered on the need-data signal:
static void need_data_callback(GstElement *appsrc, guint unused,
                               gpointer udata) {
  GST_LOG("need_data_callback");
  GstBuffer *buffer;
  GstFlowReturn ret;
  static uint64_t next_ms_time_insert_sei = 0;
  struct timespec one_ms;
  struct timespec rem;
  uint8_t *h264_sei = NULL;
  size_t length = 0;

  one_ms.tv_sec = 0;
  one_ms.tv_nsec = 1000000;
  // Insert one SEI per second: sleep in 1 ms steps until the next trigger time.
  while (now_ms() <= next_ms_time_insert_sei) {
    GST_TRACE("sleep to wait time trigger");
    nanosleep(&one_ms, &rem);
  }

  if (!h264_sei_ntp_new(&h264_sei, &length)) {
    GST_ERROR("h264_sei_ntp_new failed");
    return;
  }

  if (NULL != h264_sei && length > 0) {
    buffer =
        gst_buffer_new_allocate(NULL, START_CODE_PREFIX_BYTES + length, NULL);
    if (NULL != buffer) {
      // fill start_code_prefix: 0x00000001
      uint8_t start_code_prefix[] = START_CODE_PREFIX;
      gst_buffer_fill(buffer, 0, start_code_prefix, START_CODE_PREFIX_BYTES);
      // fill H.264 SEI
      size_t bytes_copied =
          gst_buffer_fill(buffer, START_CODE_PREFIX_BYTES, h264_sei, length);
      if (bytes_copied == length) {
        // The push-buffer action signal does not take ownership,
        // so the buffer is unreffed below.
        g_signal_emit_by_name(appsrc, "push-buffer", buffer, &ret);
        GST_DEBUG("H264 SEI NTP timestamp inserted");
      } else {
        GST_ERROR("GstBuffer.fill without all bytes copied");
      }
      gst_buffer_unref(buffer);
    } else {
      GST_ERROR("GstBuffer.new_allocate failed");
    }
  }

  next_ms_time_insert_sei = now_ms() + 1000;
  free(h264_sei);
}
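The callback relies on now_ms(), START_CODE_PREFIX and START_CODE_PREFIX_BYTES, which are not shown above. A plausible definition of these helpers, assumed here rather than taken from the original source:

```c
#include <stdint.h>
#include <time.h>

/* Annex B start code prefix placed in front of every NAL unit (assumed macros). */
#define START_CODE_PREFIX { 0x00, 0x00, 0x00, 0x01 }
#define START_CODE_PREFIX_BYTES 4

/* Wall-clock time in milliseconds, used to pace SEI insertion to once per second. */
static uint64_t now_ms(void) {
  struct timespec ts;
  clock_gettime(CLOCK_REALTIME, &ts);
  return (uint64_t)ts.tv_sec * 1000u + (uint64_t)(ts.tv_nsec / 1000000);
}
```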
On the receiving side the key element is identity, which fires the handoff signal for every GstBuffer that passes through; the actual detection and parsing work is then handed over to handoff_callback.
The pipeline in gst-launch syntax:
gst-launch-1.0 udpsrc uri=udp://127.0.0.1:5004 caps="application/x-rtp, media=video, encoding-name=H264" ! rtph264depay ! video/x-h264, stream-format=byte-stream, alignment=nal ! identity name=identity ! fakesink
Note that alignment=nal here differs from the sender, because rtph264depay drops the SEI when its output is alignment=au[4].
The handoff_callback registered on the handoff signal:
static void handoff_callback(GstElement *identity, GstBuffer *buffer,
                             gpointer user_data) {
  GST_TRACE("handoff_callback");
  GstMapInfo info = GST_MAP_INFO_INIT;
  GstH264NalParser *nalparser = NULL;
  GstH264NalUnit nalu;

  if (gst_buffer_map(buffer, &info, GST_MAP_READ)) {
    nalparser = gst_h264_nal_parser_new();
    if (NULL != nalparser) {
      if (GST_H264_PARSER_OK ==
          gst_h264_parser_identify_nalu_unchecked(nalparser, info.data, 0,
                                                  info.size, &nalu)) {
        // Only SEI NAL units carry the inserted timestamp.
        if (GST_H264_NAL_SEI == nalu.type) {
          GST_LOG(
              "identify sei nalu with size = %d, offset = %d, sc_offset = %d",
              nalu.size, nalu.offset, nalu.sc_offset);
          int64_t delay = -1;
          if (TRUE ==
              h264_sei_ntp_parse(nalu.data + nalu.offset, nalu.size, &delay)) {
            GST_LOG("delay = %" G_GINT64_FORMAT " ms", delay);
          }
        }
      } else {
        GST_WARNING("gst_h264_parser_identify_nalu_unchecked failed");
      }
      gst_h264_nal_parser_free(nalparser);
    } else {
      GST_WARNING("gst_h264_nal_parser_new failed");
    }
    gst_buffer_unmap(buffer, &info);
  } else {
    GST_WARNING("gst_buffer_map failed");
  }
}
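As on the sender side, the handoff signal has to be connected from application code rather than from gst-launch-1.0. A minimal sketch, assuming the same receiver pipeline string as above:

```c
#include <gst/gst.h>

/* Sketch: build the receiver pipeline and hook handoff_callback onto identity.
 * identity emits handoff for every buffer (signal-handoffs defaults to TRUE). */
static GstElement *build_receiver_pipeline(void) {
  GError *error = NULL;
  GstElement *pipeline = gst_parse_launch(
      "udpsrc uri=udp://127.0.0.1:5004 "
      "caps=\"application/x-rtp, media=video, encoding-name=H264\" ! "
      "rtph264depay ! video/x-h264, stream-format=byte-stream, alignment=nal ! "
      "identity name=identity ! fakesink",
      &error);
  if (NULL == pipeline) {
    g_printerr("gst_parse_launch failed: %s\n", error->message);
    g_clear_error(&error);
    return NULL;
  }
  GstElement *identity = gst_bin_get_by_name(GST_BIN(pipeline), "identity");
  g_signal_connect(identity, "handoff", G_CALLBACK(handoff_callback), NULL);
  gst_object_unref(identity);
  return pipeline;
}
```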
The SEI construction and parsing are implemented on top of aizvorski/h264bitstream.
For the H.264 SEI data structures, refer to: FFmpeg从入门到精通——进阶篇, "SEI那些事儿".
// TODO: show me the code
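In the meantime, here is a simplified, hand-rolled sketch of what h264_sei_ntp_new()/h264_sei_ntp_parse() might look like: a user_data_unregistered SEI (payload type 5) carrying a 16-byte UUID followed by an 8-byte timestamp. The UUID value, the use of the millisecond wall clock from now_ms() instead of a true NTP timestamp, and the omission of emulation prevention bytes are all simplifications of this sketch, not the behavior of the actual h264bitstream-based implementation:

```c
#include <glib.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Assumed 16-byte UUID marking "our" user_data_unregistered SEI payload. */
static const uint8_t k_sei_uuid[16] = {
    0x4e, 0x54, 0x50, 0x54, 0x49, 0x4d, 0x45, 0x30,
    0x4e, 0x54, 0x50, 0x54, 0x49, 0x4d, 0x45, 0x30};

/* Build one SEI NAL unit (without the Annex B start code):
 * nal header (type 6) | payload type 5 | payload size | UUID | 8-byte timestamp | trailing bits.
 * Emulation prevention (0x000003 insertion) is omitted in this sketch. */
static gboolean h264_sei_ntp_new(uint8_t **sei, size_t *length) {
  const size_t payload_size = sizeof(k_sei_uuid) + 8;
  const size_t total = 3 + payload_size + 1;
  uint8_t *p = malloc(total);
  if (NULL == p) return FALSE;
  uint64_t ms = now_ms();       /* ms wall clock; the real code encodes NTP time */
  p[0] = 0x06;                  /* nal_unit_type = 6 (SEI) */
  p[1] = 0x05;                  /* payload type 5: user_data_unregistered */
  p[2] = (uint8_t)payload_size; /* payload size fits in one byte here */
  memcpy(p + 3, k_sei_uuid, sizeof(k_sei_uuid));
  for (int i = 0; i < 8; i++)   /* timestamp, big-endian */
    p[3 + sizeof(k_sei_uuid) + i] = (uint8_t)(ms >> (8 * (7 - i)));
  p[total - 1] = 0x80;          /* rbsp_trailing_bits */
  *sei = p;
  *length = total;
  return TRUE;
}

/* Parse a NAL built above and report the transmission delay in milliseconds. */
static gboolean h264_sei_ntp_parse(const uint8_t *data, size_t size,
                                   int64_t *delay) {
  const size_t header = 3 + sizeof(k_sei_uuid);
  if (size < header + 8 || (data[0] & 0x1F) != 6 || data[1] != 0x05)
    return FALSE;
  if (memcmp(data + 3, k_sei_uuid, sizeof(k_sei_uuid)) != 0) return FALSE;
  uint64_t ms = 0;
  for (int i = 0; i < 8; i++) ms = (ms << 8) | data[header + i];
  *delay = (int64_t)(now_ms() - ms);
  return TRUE;
}
```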
1. 播放器技术分享(5):延时优化
2. What is the alignment capability in video/x-h264
3. appsrc-stream2.c
4. gstrtph264depay.c