First, why build this at all?
At my previous company, working as a client developer, I had to demo our app development progress to leadership once a month. The approach back then was to record the screen to an mp4 with a screen-recording tool and play the file over a projector. The problems: preparing the mp4 took time in advance, and whenever leadership wanted to see something beyond what was in the mp4, the experience was poor. I had some knowledge of streaming media, so I decided to build an app that live-streams the Android screen. That is the motivation for this project.
Project repository (GitHub):
https://github.com/sszhangpengfei/AndroidShow
Why RTSP?
My first thought was to use RTMP as the streaming protocol: the phone would act as the publisher and push the video to an RTMP server, which clients such as VLC could then play from. But that setup requires a separate streaming-media server, so I chose RTSP instead.
The idea: the phone acts as the RTSP server, VLC acts as the client, and the video stream is carried over RTP. This removes the need to stand up a streaming server. The approach is modeled on the open-source spydroid project on GitHub. All functionality in this project is implemented in Java.
Architecture overview
The app acts as an RTSP server and VLC as the client. VLC negotiates with the app's server over RTSP, and once the SETUP step succeeds the server starts pushing the stream over RTP. The rough structure is shown in the figure below.
Code walkthrough
Capturing screen video on Android
This step is covered in an earlier post of mine, linked here:
Android: recording the screen with MediaProjectionManager and feeding it to MediaCodec to get H.264 data
Building the RTSP server
The RTSP exchange
A quick walk through the RTSP exchange, using this app as the example:
1. VLC sends an OPTIONS request; the server replies with 200 OK or an error.
2. VLC sends a DESCRIBE request; the server replies with a description of the stream. Interested readers can look up the meaning of the individual fields.
3. VLC sends a SETUP request; the server replies, and on success it starts preparing to send RTP. This step is the important one: the server tells the client whether the Transport is RTP over UDP or RTP over TCP, along with the relevant port information.
4. VLC sends a PLAY request; the server replies.
5. When playback is finished, the client can send a TEARDOWN request, which completes one full RTSP exchange.
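All of these messages are plain text, much like HTTP. As a minimal sketch of what the server has to do first with any incoming message, here is a parser for the RTSP request line (the RtspRequestLine class is a hypothetical helper written for illustration, not a class from this project or from spydroid):

```java
// Hypothetical helper: parse the first line of an RTSP request, e.g.
// "SETUP rtsp://192.168.60.120:8086/trackID=0 RTSP/1.0"
// into its method, URI, and protocol version.
class RtspRequestLine {
    public final String method;   // OPTIONS, DESCRIBE, SETUP, PLAY, TEARDOWN...
    public final String uri;
    public final String version;

    private RtspRequestLine(String method, String uri, String version) {
        this.method = method;
        this.uri = uri;
        this.version = version;
    }

    public static RtspRequestLine parse(String line) {
        String[] parts = line.trim().split("\\s+");
        if (parts.length != 3 || !parts[2].startsWith("RTSP/"))
            throw new IllegalArgumentException("Not an RTSP request line: " + line);
        return new RtspRequestLine(parts[0], parts[1], parts[2]);
    }
}
```

The server dispatches on the method name to build the matching response, appending the CSeq header it received so the client can pair requests with replies.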
Implementation notes
The RTSP server code used in this project is taken from the open-source GitHub project spydroid. Interested readers should take a look at that project; the code is well written.
The RTSP-related files in this project are as follows:
RtspServer extends the Android Service class and is the entry point for the whole RTSP feature.
The stream-handling class hierarchy is ScreenStream -> VideoStream -> MediaStream -> Stream. ScreenStream is my own implementation; VideoStream and MediaStream are abstract classes that define basic stream properties and methods, such as setting the SPS/PPS and the ports.
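MediaCodec emits Annex-B NAL units, each preceded by the start code 0x00 0x00 0x00 0x01, and the SPS (NAL type 7) and PPS (NAL type 8) are what get stored for the SDP description. As a small illustration (NalUtil is a hypothetical helper, not project code), the NAL unit type can be read from the byte right after the start code:

```java
// Hypothetical helper: read the NAL unit type from an Annex-B H.264
// buffer, where each NAL unit is preceded by 0x00 0x00 0x00 0x01.
// Type 7 is an SPS, type 8 a PPS.
class NalUtil {
    /** Returns the NAL unit type (low 5 bits of the first NAL byte),
     *  or -1 if the buffer does not start with a 4-byte start code. */
    public static int nalType(byte[] buf) {
        if (buf.length < 5 || buf[0] != 0 || buf[1] != 0 || buf[2] != 0 || buf[3] != 1)
            return -1;
        return buf[4] & 0x1F;
    }
}
```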
The H.264-to-RTP packetizer hierarchy is H264Packetizer -> AbstractPacketizer. The core function below packs H.264 frames into RTP packets:
```java
private void send() throws IOException, InterruptedException {
    int sum = 1, len = 0, type;
    if (streamType == 0) {
        // NAL units are preceded by their length; parse the length
        fill(header, 0, 5);
        ts += delay;
        naluLength = header[3]&0xFF | (header[2]&0xFF)<<8 | (header[1]&0xFF)<<16 | (header[0]&0xFF)<<24;
        if (naluLength > 100000 || naluLength < 0) resync();
    } else if (streamType == 1) {
        // NAL units are preceded by 0x00000001
        fill(header, 0, 5);
        ts = ((ScreenInputStream) is).getLastts();
        //ts += delay;
        naluLength = is.available();
        Log.d(TAG, "header is "+header[0]+" "+header[1]+" "+header[2]+" "+header[3]+" "+header[4]+" ts = "+ts+" nalu len = "+naluLength);
        if (!(header[0]==0 && header[1]==0 && header[2]==0)) {
            // Turns out the NAL units are not preceded by 0x00000001
            Log.e(TAG, "NAL units are not preceded by 0x00000001");
            streamType = 2;
            return;
        }
    } else {
        // Nothing precedes the NAL units
        fill(header, 0, 1);
        header[4] = header[0];
        ts = ((ScreenInputStream) is).getLastts();
        //ts += delay;
        naluLength = is.available() + 1;
    }
    // Parses the NAL unit type
    type = header[4] & 0x1F;
    Log.d(TAG, "NAL type is " + type);
    // The stream already contains NAL unit types 7 and 8; we don't need
    // to add them to the stream ourselves
    if (type == 7 || type == 8) {
        Log.v(TAG, "SPS or PPS present in the stream.");
        count++;
        if (count > 4) {
            sps = null;
            pps = null;
        }
    }
    //Log.d(TAG,"- Nal unit length: " + naluLength + " delay: "+delay/1000000+" type: "+type);
    // Small NAL unit => Single NAL unit packet
    if (naluLength <= MAXPACKETSIZE-rtphl-2) {
        buffer = socket.requestBuffer();
        buffer[rtphl] = header[4];
        len = fill(buffer, rtphl+1, naluLength-1);
        socket.updateTimestamp(ts);
        socket.markNextPacket();
        super.send(naluLength + rtphl);
        //Log.d(TAG,"----- Single NAL unit - len:"+len+" delay: "+delay);
    }
    // Large NAL unit => Split into FU-A fragments
    else {
        // Set FU-A header
        header[1] = (byte) (header[4] & 0x1F); // FU header type
        header[1] += 0x80;                     // Start bit
        // Set FU-A indicator
        header[0] = (byte) ((header[4] & 0x60) & 0xFF); // FU indicator NRI
        header[0] += 28;
        while (sum < naluLength) {
            buffer = socket.requestBuffer();
            buffer[rtphl] = header[0];
            buffer[rtphl+1] = header[1];
            socket.updateTimestamp(ts);
            len = fill(buffer, rtphl+2, naluLength-sum > MAXPACKETSIZE-rtphl-2 ? MAXPACKETSIZE-rtphl-2 : naluLength-sum);
            if (len < 0) return;
            sum += len;
            // Last packet before the next NAL unit
            if (sum >= naluLength) {
                // End bit on
                buffer[rtphl+1] += 0x40;
                socket.markNextPacket();
            }
            super.send(len + rtphl + 2);
            // Clear the start bit
            header[1] = (byte) (header[1] & 0x7F);
            //Log.d(TAG,"----- FU-A unit, sum:"+sum);
        }
    }
}
```
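The bit manipulation in the FU-A branch of send() above follows RFC 6184: the FU indicator keeps the NRI bits of the original NAL header and sets type 28 (FU-A), while the FU header carries the original NAL type plus start and end bits. A small standalone sketch of the same arithmetic (FuA is an illustrative helper, not project code):

```java
// Sketch of the FU-A header math from send() (RFC 6184).
class FuA {
    /** FU indicator: NRI bits of the original NAL header, type 28. */
    public static byte indicator(byte nalHeader) {
        return (byte) ((nalHeader & 0x60) | 28);
    }
    /** FU header: original NAL type plus Start (0x80) / End (0x40) bits. */
    public static byte header(byte nalHeader, boolean start, boolean end) {
        int h = nalHeader & 0x1F;
        if (start) h |= 0x80;
        if (end)   h |= 0x40;
        return (byte) h;
    }
}
```

For an IDR NAL header 0x65 (NRI = 3, type = 5), the indicator is 0x7C and the first fragment's FU header is 0x85, matching what the loop above produces byte by byte.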
The RtpSocket class sends the packed RTP packets over the network; it uses a multicast UDP socket.
The class implements Runnable, and its thread does the actual sending, including the RTCP reports:
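Each packet RtpSocket sends starts with the 12-byte RTP fixed header defined in RFC 3550. The project's RtpSocket fills these fields into its own reused buffers; the hypothetical helper below just illustrates the header layout:

```java
import java.nio.ByteBuffer;

// Illustrative sketch (not project code): the 12-byte RTP fixed header.
class RtpHeader {
    public static byte[] build(int payloadType, boolean marker,
                               int seq, long timestamp, int ssrc) {
        ByteBuffer b = ByteBuffer.allocate(12);
        b.put((byte) 0x80);                                        // V=2, P=0, X=0, CC=0
        b.put((byte) ((marker ? 0x80 : 0) | (payloadType & 0x7F))); // M bit + payload type
        b.putShort((short) seq);                                   // sequence number
        b.putInt((int) timestamp);                                 // RTP timestamp
        b.putInt(ssrc);                                            // stream source id
        return b.array();
    }
}
```

The marker bit (set by markNextPacket() in the code above) flags the last packet of a frame, which is how the receiver knows a complete access unit has arrived.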
```java
/** The thread sends the packets in the FIFO one by one at a constant rate. */
@Override
public void run() {
    Statistics stats = new Statistics(50, 3000);
    try {
        // Caches mCacheSize milliseconds of the stream in the FIFO.
        Thread.sleep(mCacheSize);
        long delta = 0;
        while (mBufferCommitted.tryAcquire(4, TimeUnit.SECONDS)) {
            if (mOldTimestamp != 0) {
                // We use our knowledge of the clock rate of the stream and the difference
                // between two timestamps to compute the time lapse that the packet represents.
                if ((mTimestamps[mBufferOut] - mOldTimestamp) > 0) {
                    stats.push(mTimestamps[mBufferOut] - mOldTimestamp);
                    long d = stats.average() / 1000000;
                    //Log.d(TAG,"delay: "+d+" d: "+(mTimestamps[mBufferOut]-mOldTimestamp)/1000000);
                    // We ensure that packets are sent at a constant and suitable rate
                    // no matter how the RtpSocket is used.
                    if (mCacheSize > 0) Thread.sleep(d);
                } else if ((mTimestamps[mBufferOut] - mOldTimestamp) < 0) {
                    Log.e(TAG, "TS: " + mTimestamps[mBufferOut] + " OLD: " + mOldTimestamp);
                }
                delta += mTimestamps[mBufferOut] - mOldTimestamp;
                if (delta > 500000000 || delta < 0) {
                    //Log.d(TAG,"permits: "+mBufferCommitted.availablePermits());
                    delta = 0;
                }
            }
            mReport.update(mPackets[mBufferOut].getLength(), System.nanoTime(),
                    (mTimestamps[mBufferOut] / 100L) * (mClock / 1000L) / 10000L);
            mOldTimestamp = mTimestamps[mBufferOut];
            if (mCount++ > 30) mSocket.send(mPackets[mBufferOut]);
            if (++mBufferOut >= mBufferCount) mBufferOut = 0;
            mBufferRequested.release();
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    mThread = null;
    resetFifo();
}
```
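One detail worth calling out in the run() loop above is the expression passed to mReport.update(): the stream's timestamps are in nanoseconds, but RTP timestamps for H.264 use a 90 kHz media clock, so the code scales by clockRate / 1e9 in stages rather than multiplying first, which could overflow a long for large nanosecond values. The same arithmetic, isolated (RtpClock is an illustrative helper, not project code):

```java
// Sketch of the timestamp scaling in run() above: convert a nanosecond
// timestamp into RTP clock units. Dividing in stages,
// (ts/100) * (clock/1000) / 10000, keeps intermediate values small.
class RtpClock {
    public static long toRtpUnits(long tsNanos, long clockRate) {
        return (tsNanos / 100L) * (clockRate / 1000L) / 10000L;
    }
}
```

With the 90 kHz video clock, one second of nanoseconds maps to 90000 RTP units, as expected.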
Session and SessionBuilder are the session-management classes; each client connection is represented by a Session object.
SendReport manages sending the RTCP reports.
Only video is implemented for now; audio is not yet supported.
After building and running the project, the app's UI is very simple: the top shows the RTSP address, e.g. rtsp://192.168.60.120:8086, and in the middle are two buttons, start recording and stop recording. Tapping start recording launches the RTSP server, and MediaCodec encodes the screen's YUV frames to H.264. You can then enter the RTSP address in VLC and play the stream.
————————————————
Copyright notice: this is an original article by CSDN blogger "sszpf", licensed under CC 4.0 BY-SA; please include the original link and this notice when republishing.
Original link: https://blog.csdn.net/ss182172633/article/details/79578372