Publishing a Live Stream from the Client with librtmp

Live-stream publishing generally uses RTMP, an application-layer protocol that runs on top of TCP. librtmp is an implementation of the RTMP protocol; it wraps the protocol details behind a simple API.
This article focuses on how to publish (push) audio and video with librtmp; it does not dissect the RTMP protocol itself.

Establishing the Connection

First, include the rtmp.h header and establish the connection:

    // Allocate the RTMP object
    _rtmp = RTMP_Alloc();
    // Initialize it
    RTMP_Init(_rtmp);
    // Set the connection timeout, in seconds
    _rtmp->Link.timeout = 30;
    // Set the publish URL
    RTMP_SetupURL(_rtmp, "rtmp://xxx");
    // Enable write mode: required for publishing, make sure this is set
    RTMP_EnableWrite(_rtmp);
    // Connect to the server
    if (!RTMP_Connect(_rtmp, NULL)) {
        return -1;
    }
    // Create the stream
    if (!RTMP_ConnectStream(_rtmp, 0)) {
        return -1;
    }
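
Putting these steps together, a minimal connect helper might look like the sketch below. It only uses the librtmp calls shown above plus RTMP_Close/RTMP_Free for cleanup on failure; the function name openRtmp and the url argument are placeholders, and rtmp.h is assumed to be included.

    // Sketch of a connect helper; openRtmp and url are placeholders
    static RTMP *openRtmp(const char *url) {
        RTMP *rtmp = RTMP_Alloc();
        RTMP_Init(rtmp);
        rtmp->Link.timeout = 30;                  // connection timeout, in seconds
        if (!RTMP_SetupURL(rtmp, (char *)url)) {  // librtmp takes a non-const char *
            RTMP_Free(rtmp);
            return NULL;
        }
        RTMP_EnableWrite(rtmp);                   // must be enabled before connecting to publish
        if (!RTMP_Connect(rtmp, NULL) || !RTMP_ConnectStream(rtmp, 0)) {
            RTMP_Close(rtmp);
            RTMP_Free(rtmp);
            return NULL;
        }
        return rtmp;
    }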

Sending the Audio/Video Codec Configuration

Before sending any audio or video data packets, we first need to send the codec configuration of each stream. This configuration is essential: without it the decoder cannot decode the stream.

Video Configuration

For the common case of an H.264 video stream, the configuration is called the AVCDecoderConfigurationRecord; its layout is described in detail in "ISO/IEC 14496-15 (AVC file format)".
Note that if we are pushing a video file, ffmpeg can read the record directly from the file, where it is stored in the stream's extradata (a sketch of that case follows the construction code below). If the data comes from an encoder, the AVCDecoderConfigurationRecord has to be rebuilt from the SPS and PPS the encoder outputs. Example construction code:

- (void)sendVideoConfig:(char *)sps spsLen:(int)sLen pps:(char *)pps ppsLen:(int)plen {
    char *body = malloc(1024);
    int i = 0;
    /* AVCDecoderConfigurationRecord */
    body[i++] = 0x01;               // configurationVersion
    body[i++] = sps[1];             // AVCProfileIndication
    body[i++] = sps[2];             // profile_compatibility
    body[i++] = sps[3];             // AVCLevelIndication
    body[i++] = 0xff;               // reserved bits + lengthSizeMinusOne (4-byte NALU length)
    /* SPS */
    body[i++] = 0xe1;               // reserved bits + numOfSequenceParameterSets = 1
    body[i++] = (sLen >> 8) & 0xff; // sequenceParameterSetLength, 16 bits
    body[i++] = sLen & 0xff;
    memcpy(&body[i], sps, sLen);
    i += sLen;
    /* PPS */
    body[i++] = 0x01;               // numOfPictureParameterSets = 1
    body[i++] = (plen >> 8) & 0xff; // pictureParameterSetLength, 16 bits
    body[i++] = plen & 0xff;
    memcpy(&body[i], pps, plen);
    i += plen;
    // Hand the finished record to the packaging method below
    [self sendVideoInfo:body size:i];
    free(body);
}
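
If the source is a video file, the record does not need to be rebuilt by hand; it is already stored in the stream's extradata. A minimal sketch using ffmpeg's libavformat (assuming it is available in the project, the file name is a placeholder, and error handling is omitted):

    #include <libavformat/avformat.h>

    // Read the AVCDecoderConfigurationRecord from a file's video stream (sketch)
    AVFormatContext *fmt = NULL;
    if (avformat_open_input(&fmt, "input.mp4", NULL, NULL) == 0 &&
        avformat_find_stream_info(fmt, NULL) >= 0) {
        for (unsigned int n = 0; n < fmt->nb_streams; n++) {
            AVCodecParameters *par = fmt->streams[n]->codecpar;
            if (par->codec_type == AVMEDIA_TYPE_VIDEO && par->extradata_size > 0) {
                // For MP4/FLV input, extradata already holds the record;
                // it can be handed straight to the packaging method below.
                [self sendVideoInfo:(void *)par->extradata size:par->extradata_size];
                break;
            }
        }
    }
    avformat_close_input(&fmt);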

Once the video configuration record has been constructed, it can be packaged and sent. The RTMP packet body uses the FLV tag data format, so the FLV video tag header bytes have to be prepended as well. Example code:

- (void)sendVideoInfo:(void*)data size:(int)size {
    RTMPPacket *packet = (RTMPPacket *)malloc(RTMP_HEAD_SIZE + size + 5);
    memset(packet, 0, RTMP_HEAD_SIZE);
    packet->m_body = (char *)packet + RTMP_HEAD_SIZE;
    packet->m_nBodySize = size + 5;
    char *body = (char *)packet->m_body;
    memset(body, 0, size + 5);
    int i = 0;
    // High 4 bits: 1 = keyframe; low 4 bits: 7 = AVC
    body[i++] = 0x17;
    // AVCPacketType: 0x00 = AVC sequence header
    body[i++] = 0x00;
    // CompositionTime: 3 bytes, 0 for the sequence header
    body[i++] = 0x00;
    body[i++] = 0x00;
    body[i++] = 0x00;
    // Copy the AVCDecoderConfigurationRecord
    memcpy(&body[i], data, size);

    packet->m_packetType = RTMP_PACKET_TYPE_VIDEO;
    packet->m_nChannel = 0x04;
    packet->m_nTimeStamp = 0;
    packet->m_hasAbsTimestamp = 0;
    packet->m_headerType = RTMP_PACKET_SIZE_LARGE;
    packet->m_nInfoField2 = _rtmp->m_stream_id;
    RTMP_SendPacket(_rtmp, packet, true);
    free(packet);
}
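
One detail worth noting: RTMP_HEAD_SIZE is not a macro provided by librtmp. In code like the above it is usually defined by the caller so that the RTMPPacket struct, the largest possible chunk header, and the body fit in a single allocation, roughly like this:

    // Commonly defined next to this kind of code; not part of librtmp itself
    #define RTMP_HEAD_SIZE (sizeof(RTMPPacket) + RTMP_MAX_HEADER_SIZE)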

Audio Configuration

For an AAC-encoded audio stream, the configuration is called the AudioSpecificConfig; its layout is described in detail in "ISO/IEC 14496-3 (Audio)".
As with video, if we are pushing a file, ffmpeg can read it directly. If the data comes from an encoder, the AudioSpecificConfig has to be rebuilt from the sample rate, channel count, and so on (a minimal sketch follows).
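
For the common 2-byte case (AAC-LC with no extensions), the record packs the audio object type (5 bits), the sampling-frequency index (4 bits) and the channel configuration (4 bits), followed by three zero flag bits. A minimal sketch, assuming AAC-LC at 44.1 kHz (frequency index 4) with 2 channels:

    // Build a 2-byte AudioSpecificConfig; the parameter values are example assumptions
    uint8_t aot     = 2;   // audio object type: 2 = AAC-LC
    uint8_t freqIdx = 4;   // sampling frequency index: 4 = 44100 Hz
    uint8_t chanCfg = 2;   // channel configuration: 2 = stereo
    char asc[2];
    asc[0] = (char)((aot << 3) | (freqIdx >> 1));               // 5 bits AOT + top 3 bits of freqIdx
    asc[1] = (char)(((freqIdx & 0x01) << 7) | (chanCfg << 3));  // last bit of freqIdx + 4 bits chanCfg + 3 zero flags
    [self sendAudioInfo:asc size:2];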

The next step is to package and send it. Example code:

- (void)sendAudioInfo:(void*)data size:(int)size {
    RTMPPacket *packet = (RTMPPacket *)malloc(RTMP_HEAD_SIZE + size + 2);
    memset(packet, 0, RTMP_HEAD_SIZE);
    packet->m_body = (char *)packet + RTMP_HEAD_SIZE;
    packet->m_nBodySize = size + 2;
    char *body = (char *)packet->m_body;
    memset(body, 0, size + 2);
    int i = 0;
    // 0xAF: 1010 = AAC, 11 = 44 kHz, 1 = 16-bit samples, 1 = stereo
    body[i++] = 0xAF;
    // AACPacketType: 0x00 = AAC sequence header
    body[i++] = 0x00;
    // Copy the AudioSpecificConfig
    memcpy(&body[i], data, size);

    packet->m_packetType = RTMP_PACKET_TYPE_AUDIO;
    packet->m_nChannel = 0x04;
    packet->m_nTimeStamp = 0;
    packet->m_hasAbsTimestamp = 0;
    packet->m_headerType = RTMP_PACKET_SIZE_LARGE;
    packet->m_nInfoField2 = _rtmp->m_stream_id;
    RTMP_SendPacket(_rtmp, packet, true);
    free(packet);
}

Sending Audio/Video Data Packets

Video Data Packets

One thing to be aware of when sending video data: encoders typically emit each video packet with its NALU size already in the first four bytes. If that is the case, those four bytes need to be skipped, because the code below writes the length field itself.

-(void)sendVideoData:(char*)data size:(int)size pts:(long)pts key:(BOOL)isKey {
    // If the buffer already starts with a 4-byte NALU size, skip it:
//    data += 4;
//    size -= 4;
    RTMPPacket *packet = (RTMPPacket *)malloc(RTMP_HEAD_SIZE + size + 9);
    memset(packet, 0, RTMP_HEAD_SIZE);
    packet->m_body = (char *)packet + RTMP_HEAD_SIZE;
    packet->m_nBodySize = size + 9;
    char *body = (char *)packet->m_body;
    memset(body, 0, size + 9);
    int i = 0;
    // FLV video tag header
    // High 4 bits: 1 = keyframe, 2 = inter frame; low 4 bits: 7 = AVC
    body[i++] = isKey ? 0x17 : 0x27;
    // AVCPacketType: 0x01 = AVC NALU
    body[i++] = 0x01;
    // CompositionTime: 3 bytes, 0 here
    body[i++] = 0x00;
    body[i++] = 0x00;
    body[i++] = 0x00;
    // NALU size, 4 bytes big-endian
    body[i++] = (size >> 24) & 0xff;
    body[i++] = (size >> 16) & 0xff;
    body[i++] = (size >> 8) & 0xff;
    body[i++] = size & 0xff;
    // NALU payload
    memcpy(&body[i], data, size);

    packet->m_hasAbsTimestamp = 0;
    packet->m_packetType = RTMP_PACKET_TYPE_VIDEO;
    packet->m_nInfoField2 = _rtmp->m_stream_id;
    packet->m_nChannel = 0x04;
    packet->m_headerType = RTMP_PACKET_SIZE_LARGE;
    packet->m_nTimeStamp = (uint32_t)pts;   // milliseconds
    RTMP_SendPacket(_rtmp, packet, true);
    free(packet);
}
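
Note that m_nTimeStamp is expressed in milliseconds. If the encoder reports pts in another time base (a 90 kHz clock is common), convert it before filling in the packet; a hypothetical conversion, assuming a 90 kHz pts named pts90k:

    // Hypothetical conversion from a 90 kHz pts to the milliseconds RTMP expects
    uint32_t timestampMs = (uint32_t)(pts90k / 90);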

Audio Data Packets

Sending an audio data packet works much like sending the audio configuration; the AACPacketType byte becomes 0x01 (raw AAC frame data) and the frame's pts is used as the timestamp. Example code:

- (void)sendAudioData:(char*)data size:(int)size pts:(long)pts {
    RTMPPacket *packet = (RTMPPacket *)malloc(RTMP_HEAD_SIZE + size + 2);
    memset(packet, 0, RTMP_HEAD_SIZE);
    packet->m_body = (char *)packet + RTMP_HEAD_SIZE;
    packet->m_nBodySize = size + 2;
    char *body = (char *)packet->m_body;
    memset(body, 0, size + 2);
    int i = 0;
    // 0xAF: AAC, 44 kHz, 16-bit, stereo
    body[i++] = 0xAF;
    // AACPacketType: 0x01 = raw AAC frame
    body[i++] = 0x01;
    memcpy(&body[i], data, size);

    packet->m_packetType = RTMP_PACKET_TYPE_AUDIO;
    packet->m_nChannel = 0x04;
    packet->m_nTimeStamp = (uint32_t)pts;   // milliseconds
    packet->m_hasAbsTimestamp = 0;
    packet->m_headerType = RTMP_PACKET_SIZE_LARGE;
    packet->m_nInfoField2 = _rtmp->m_stream_id;
    RTMP_SendPacket(_rtmp, packet, true);
    free(packet);
}
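
Putting everything together, a typical publishing session sends the two configuration packets once, right after the connection is established, and then keeps feeding encoded frames with their timestamps. A rough sketch using the methods above (the encoding loop and the frame fields are placeholders):

    // Rough call order for one publishing session; frame fields are placeholders
    [self sendVideoConfig:sps spsLen:spsLen pps:pps ppsLen:ppsLen]; // AVC sequence header
    [self sendAudioInfo:asc size:2];                                // AAC sequence header
    while (encoding) {
        if (frame.isVideo) {
            [self sendVideoData:frame.data size:frame.size pts:frame.ptsMs key:frame.isKey];
        } else {
            [self sendAudioData:frame.data size:frame.size pts:frame.ptsMs];
        }
    }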

Closing the Connection

When publishing is finished, close the connection and release the RTMP object:

    // Close the connection
    RTMP_Close(_rtmp);
    // Free the RTMP object
    RTMP_Free(_rtmp);
    _rtmp = NULL;
