I. Setting up the WebRTC Android client
1.libjingle_peerconnection_so.so
2.libjingle_peerconnection.jar
3. A copy of the client source code (can be downloaded from GitHub)
II. Related Concepts
1. P2P: peer-to-peer communication.
2. STUN: provides the reflexive (public) address so the two peers can communicate directly (P2P).
3. TURN: the fallback when the reflexive-address approach fails; a relay server is used so the two peers can always communicate.
4. ICE: combines STUN and TURN and picks the most reasonable, cheapest working path.
5. SIP/SDP: SIP is a signaling protocol for audio/video sessions; SDP is the description of the audio/video media within it.
6. PeerConnectionFactory/PeerConnection: the core classes of WebRTC; all other audio/video operations are obtained through them.
7. MediaStream: a media stream containing audio and video.
8. Track: a track within a media stream; it wraps the media data, such as the channels of the audio data or the YUV frames of the video data.
9. Session Management: the abstract session layer.
10. RTP: the media transport protocol.
11. iSAC: an audio (speech) codec.
12. iLBC: an audio (speech) codec.
13. VP8: a video codec.
14. Room: the web client generates a random room number; the Android client must enter this room number to connect and communicate with it.
15. Observer: observes the remote peer's data; essentially the callback interface for data received after the connection is made.
III. Related Classes
1. AppRTCAudioManager: audio management; uses only existing Android SDK methods.
2. AppRTCClient: a custom interface for talking to the server and receiving message callbacks.
3. AppRTCProximitySensor: the phone's proximity sensor; disables input when the phone is held close to the ear.
4. CallActivity: establishes the call connection and displays the audio/video UI.
5. CallFragment: the in-call UI, a child view of CallActivity.
6. CaptureQualityController: controls the capture resolution/quality, shown with a SeekBar.
7. ConnectActivity: initializes the camera and connection parameters; tapping the call button jumps to CallActivity to place the call.
8. CpuMonitor: reports the current CPU usage statistics.
9. HudFragment: displays statistics such as upstream bandwidth and packet loss rate.
10. PeerConnectionClient: the PeerConnection implementation; all audio/video data communication goes through this class.
11. PercentFrameLayout: a FrameLayout wrapper.
12. RoomParametersFetcher: parses the server's JSON data and forwards the room parameters and addresses.
13. SettingsActivity: the settings Activity.
14. SettingsFragment: the settings Fragment, holding the audio/video call parameter settings.
15. UnhandledExceptionHandler: catches unhandled exceptions on crash instead of letting them be thrown directly.
16. WebSocketChannelClient: the WebSocket wrapper.
17. WebSocketRTCClient: uses WebSocketChannelClient to send data over WebSocket.
Utility classes:
1. AppRTCUtils: small static helper methods (assertions, device-info logging).
2. AsyncHttpURLConnection: performs HTTP requests asynchronously.
3. LooperExecutor: runs submitted tasks on a dedicated Looper thread.
IV. WebSocket
If the downloaded code contains the autobanh.jar package, you can be sure that this Android client talks to the WebRTC server over WebSocket. The implementation lives mainly in the WebSocketChannelClient and WebSocketRTCClient classes; the callbacks are the three callback methods of WebSocketConnectionObserver, a class wrapped inside WebSocketChannelClient. (The details of WebSocket itself are not covered here.)
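As a rough illustration, once the socket is open (see the onOpen() callback in section VI) the client registers itself to a room. A minimal sketch of that register step follows; the "cmd"/"roomid"/"clientid" field names are an assumption based on the AppRTC demo protocol, not confirmed by this article:

// Registers this client to a room once the WebSocket is connected.
private void register(String roomID, String clientID) {
    JSONObject json = new JSONObject();
    try {
        json.put("cmd", "register");
        json.put("roomid", roomID);
        json.put("clientid", clientID);
        ws.sendTextMessage(json.toString()); // ws: the underlying autobanh WebSocketConnection
        state = WebSocketConnectionState.REGISTERED;
    } catch (JSONException e) {
        reportError("WebSocket register JSON error: " + e.getMessage());
    }
}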
V. WebRTC Call Flow (reprinted)
In the sequence diagram above, the scenario shown is ClientA initiating a call to ClientB; the steps are as follows:
· ClientA first creates a PeerConnection object, opens the local audio/video devices, wraps the audio/video data into a MediaStream, and adds it to the PeerConnection.
· ClientA calls PeerConnection's CreateOffer method to create an offer SDP object, which holds the current audio/video parameters. ClientA stores this SDP object via PeerConnection's SetLocalDescription method and sends it to ClientB through the signaling server.
· ClientB receives the offer SDP object from ClientA, stores it via PeerConnection's SetRemoteDescription method, and calls PeerConnection's CreateAnswer method to create an answer SDP object. It stores that answer via SetLocalDescription and sends it to ClientA through the signaling server.
· ClientA receives the answer SDP object from ClientB and stores it via PeerConnection's SetRemoteDescription method.
· During the SDP offer/answer exchange, ClientA and ClientB create the corresponding audio and video channels based on the SDP and start gathering Candidate data. Candidate data can simply be understood as the client's address information (the local IP address, the public IP address, and the address allocated by the relay server).
· When ClientA has gathered a Candidate, the PeerConnection notifies ClientA through the OnIceCandidate callback. ClientA sends the Candidate to ClientB through the signaling server, and ClientB stores it via PeerConnection's AddIceCandidate method. ClientB then does the same towards ClientA.
· At this point ClientA and ClientB have established a P2P channel for audio/video transport. When ClientB receives the audio/video stream from ClientA, the PeerConnection returns a MediaStream object identifying ClientA's stream through the OnAddStream callback, which ClientB simply renders. The same applies to the audio/video stream from ClientB to ClientA.
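As a rough sketch of what the caller side of this flow looks like with the Java API described below (sendOfferSdp() is a hypothetical helper standing in for whatever the app uses to post messages to the signaling server):

// Caller side: create the offer, store it locally, then send it to the peer.
MediaConstraints sdpConstraints = new MediaConstraints();
sdpConstraints.mandatory.add(new MediaConstraints.KeyValuePair("OfferToReceiveAudio", "true"));
sdpConstraints.mandatory.add(new MediaConstraints.KeyValuePair("OfferToReceiveVideo", "true"));
peerConnection.createOffer(new SdpObserver() {
    @Override public void onCreateSuccess(SessionDescription sdp) {
        // Store the offer locally, then forward it over the signaling channel.
        peerConnection.setLocalDescription(this, sdp);
        sendOfferSdp(sdp); // hypothetical helper that posts the offer to the signaling server
    }
    @Override public void onSetSuccess() {}
    @Override public void onCreateFailure(String error) {}
    @Override public void onSetFailure(String error) {}
}, sdpConstraints);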
VI. Callback Functions
1. WebSocket callback interface and main message handling
Four main message types: candidate, answer, offer, and bye
private class WebSocketObserver implements WebSocketConnectionObserver {
@Override
public void onOpen() {
Log.d(TAG, "WebSocket connection opened to: " + wsServerUrl);
executor.execute(new Runnable() {
@Override
public void run() {
state = WebSocketConnectionState.CONNECTED;
// Check if we have pending register request.
if (roomID != null && clientID != null) {
register(roomID, clientID);
}
}
});
}
@Override
public void onClose(WebSocketCloseNotification code, String reason) {
Log.d(TAG, "WebSocket connection closed. Code: " + code
+ ". Reason: " + reason + ". State: " + state);
synchronized (closeEventLock) {
closeEvent = true;
closeEventLock.notify();
}
executor.execute(new Runnable() {
@Override
public void run() {
if (state != WebSocketConnectionState.CLOSED) {
state = WebSocketConnectionState.CLOSED;
events.onWebSocketClose();
}
}
});
}
@Override
public void onTextMessage(String payload) {
Log.d(TAG, "WSS->C: " + payload);
final String message = payload;
executor.execute(new Runnable() {
@Override
public void run() {
if (state == WebSocketConnectionState.CONNECTED
|| state == WebSocketConnectionState.REGISTERED) {
events.onWebSocketMessage(message);
}
}
});
}
@Override
public void onRawTextMessage(byte[] payload) {
}
@Override
public void onBinaryMessage(byte[] payload) {
}
}
@Override
public void onWebSocketMessage(final String msg) {
if (wsClient.getState() != WebSocketConnectionState.REGISTERED) {
Log.e(TAG, "Got WebSocket message in non registered state.");
return;
}
try {
JSONObject json = new JSONObject(msg);
String msgText = json.getString("msg");
String errorText = json.optString("error");
if (msgText.length() > 0) {
json = new JSONObject(msgText);
String type = json.optString("type");
if (type.equals("candidate")) {
IceCandidate candidate = new IceCandidate(
json.getString("id"),
json.getInt("label"),
json.getString("candidate"));
events.onRemoteIceCandidate(candidate);
} else if (type.equals("answer")) {
if (initiator) {
SessionDescription sdp = new SessionDescription(
SessionDescription.Type.fromCanonicalForm(type),
json.getString("sdp"));
events.onRemoteDescription(sdp);
} else {
reportError("Received answer for call initiator: " + msg);
}
} else if (type.equals("offer")) {
if (!initiator) {
SessionDescription sdp = new SessionDescription(
SessionDescription.Type.fromCanonicalForm(type),
json.getString("sdp"));
events.onRemoteDescription(sdp);
} else {
reportError("Received offer for call receiver: " + msg);
}
} else if (type.equals("bye")) {
events.onChannelClose();
} else {
reportError("Unexpected WebSocket message: " + msg);
}
} else {
if (errorText != null && errorText.length() > 0) {
reportError("WebSocket error message: " + errorText);
} else {
reportError("Unexpected WebSocket message: " + msg);
}
}
} catch (JSONException e) {
reportError("WebSocket message JSON parsing error: " + e.toString());
}
}
2. Observer interface
These callbacks are triggered after the connection is set up, by ICE state changes and by changes in the media streams.
public static interface Observer {
/** Triggered when the SignalingState changes. */
public void onSignalingChange(SignalingState newState);
/** Triggered when the IceConnectionState changes. */
public void onIceConnectionChange(IceConnectionState newState);
/** Triggered when the ICE connection receiving status changes. */
public void onIceConnectionReceivingChange(boolean receiving);
/** Triggered when the IceGatheringState changes. */
public void onIceGatheringChange(IceGatheringState newState);
/** Triggered when a new ICE candidate has been found. */
public void onIceCandidate(IceCandidate candidate);
/** Triggered when media is received on a new stream from remote peer. */
public void onAddStream(MediaStream stream);
/** Triggered when a remote peer closes a stream. */
public void onRemoveStream(MediaStream stream);
/** Triggered when a remote peer opens a DataChannel. */
public void onDataChannel(DataChannel dataChannel);
/** Triggered when renegotiation is necessary. */
public void onRenegotiationNeeded();
}
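A minimal sketch of how a PCObserver implementation typically handles the two most important callbacks; the executor/events/remoteRenderer fields mirror the ones used in PeerConnectionClient and are assumptions here:

// Sketch of a PeerConnection.Observer: forward local candidates to the signaling layer
// and render the remote stream when it arrives.
private class PCObserver implements PeerConnection.Observer {
    @Override public void onIceCandidate(final IceCandidate candidate) {
        executor.execute(new Runnable() {
            @Override public void run() {
                events.onIceCandidate(candidate); // eventually sent to the peer via the signaling server
            }
        });
    }
    @Override public void onAddStream(final MediaStream stream) {
        executor.execute(new Runnable() {
            @Override public void run() {
                if (stream.videoTracks.size() == 1) {
                    // Attach the remote video track to a renderer so it appears on screen.
                    stream.videoTracks.get(0).addRenderer(remoteRenderer);
                }
            }
        });
    }
    // The remaining callbacks (state changes, onRemoveStream, onDataChannel,
    // onRenegotiationNeeded) are typically just logged.
    @Override public void onSignalingChange(PeerConnection.SignalingState newState) {}
    @Override public void onIceConnectionChange(PeerConnection.IceConnectionState newState) {}
    @Override public void onIceConnectionReceivingChange(boolean receiving) {}
    @Override public void onIceGatheringChange(PeerConnection.IceGatheringState newState) {}
    @Override public void onRemoveStream(MediaStream stream) {}
    @Override public void onDataChannel(DataChannel dataChannel) {}
    @Override public void onRenegotiationNeeded() {}
}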
3. SDP interface
These callbacks are triggered while the connection is being established.
/** Interface for observing SDP-related events. */
public interface SdpObserver {
/** Called on success of Create{Offer,Answer}(). */
public void onCreateSuccess(SessionDescription sdp);
/** Called on success of Set{Local,Remote}Description(). */
public void onSetSuccess();
/** Called on error of Create{Offer,Answer}(). */
public void onCreateFailure(String error);
/** Called on error of Set{Local,Remote}Description(). */
public void onSetFailure(String error);
}
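As a rough sketch of the callee side that matches the call flow in section V (sendAnswerSdp() is a hypothetical helper standing in for whatever posts the message to the signaling server):

// Callee side: store the remote offer, then create and send back the answer.
final SdpObserver sdpObserver = new SdpObserver() {
    @Override public void onCreateSuccess(SessionDescription sdp) {
        // The answer has been created: store it locally, then return it to the caller.
        peerConnection.setLocalDescription(this, sdp);
        sendAnswerSdp(sdp); // hypothetical helper that sends the answer via the signaling server
    }
    @Override public void onSetSuccess() {}
    @Override public void onCreateFailure(String error) {}
    @Override public void onSetFailure(String error) {}
};
// Offer received from the remote peer via the signaling server:
peerConnection.setRemoteDescription(sdpObserver, remoteOfferSdp);
// In real code createAnswer is usually issued once setRemoteDescription has reported success.
peerConnection.createAnswer(sdpObserver, sdpConstraints);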
4.PeerConnectionClient
Creates the PeerConnection, implements the related callbacks, and carries out the whole business logic.
private final PCObserver pcObserver = new PCObserver(); // implements PeerConnection.Observer
private final SDPObserver sdpObserver = new SDPObserver(); // implements SdpObserver
private PeerConnectionFactory factory;
private PeerConnection peerConnection;
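A minimal sketch of how PeerConnectionClient wires these together once the factory exists; the STUN URI and the empty constraints are example values, the real ones come from the room parameters:

// Build the PeerConnection from the factory, passing the ICE servers and the observer.
List<PeerConnection.IceServer> iceServers = new ArrayList<PeerConnection.IceServer>();
iceServers.add(new PeerConnection.IceServer("stun:stun.l.google.com:19302")); // example STUN server
PeerConnection.RTCConfiguration rtcConfig = new PeerConnection.RTCConfiguration(iceServers);
MediaConstraints pcConstraints = new MediaConstraints();
peerConnection = factory.createPeerConnection(rtcConfig, pcConstraints, pcObserver);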
5.CallActivity
private PeerConnectionClient peerConnectionClient = null;
private AppRTCClient appRtcClient;
static {
System.loadLibrary("jingle_peerconnection_so");
}
public static class Options {
// Keep in sync with webrtc/base/network.h!
static final int ADAPTER_TYPE_UNKNOWN = 0;
static final int ADAPTER_TYPE_ETHERNET = 1 << 0;
static final int ADAPTER_TYPE_WIFI = 1 << 1;
static final int ADAPTER_TYPE_CELLULAR = 1 << 2;
static final int ADAPTER_TYPE_VPN = 1 << 3;
static final int ADAPTER_TYPE_LOOPBACK = 1 << 4;
public int networkIgnoreMask;
public boolean disableEncryption;
}
// |context| is an android.content.Context object, but we keep it untyped here
// to allow building on non-Android platforms.
// Callers may specify either |initializeAudio| or |initializeVideo| as false
// to skip initializing the respective engine (and avoid the need for the
// respective permissions).
// |renderEGLContext| can be provided to support HW video decoding to
// texture and will be used to create a shared EGL context on video
// decoding thread.
public static native boolean initializeAndroidGlobals(Object context, boolean initializeAudio, boolean initializeVideo, boolean videoHwAcceleration);
Context: usually the ApplicationContext, or any other Context.
initializeAudio: whether to initialize the audio engine. (boolean)
initializeVideo: whether to initialize the video engine. (boolean)
videoHwAcceleration: whether to enable hardware video acceleration. (boolean)
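A typical call, sketched below; it is made once before the factory is created (the three true values are just the common defaults, not a requirement):

// Called once, e.g. from PeerConnectionClient, before creating the PeerConnectionFactory.
PeerConnectionFactory.initializeAndroidGlobals(
        getApplicationContext(), // context: the application Context is the safest choice
        true,                    // initializeAudio
        true,                    // initializeVideo
        true);                   // videoHwAcceleration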
private static final String FIELD_TRIAL_VP9 = "WebRTC-SupportVP9/Enabled/";
// Field trial initialization. Must be called before PeerConnectionFactory
// is created.
public static native void initializeFieldTrials(String fieldTrialsInitString);
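For example, enabling the VP9 field trial defined above; a small sketch:

// Must run before the first PeerConnectionFactory instance is constructed.
PeerConnectionFactory.initializeFieldTrials(FIELD_TRIAL_VP9); // "WebRTC-SupportVP9/Enabled/"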
// Create the factory
private static native long nativeCreatePeerConnectionFactory();
// Create the native observer wrapper (connection event callbacks used while interacting with the ICE layer)
private static native long nativeCreateObserver(PeerConnection.Observer observer);
// Create the PeerConnection
private static native long nativeCreatePeerConnection(long nativeFactory, PeerConnection.RTCConfiguration rtcConfig, MediaConstraints constraints, long nativeObserver);
// Create a local media stream
private static native long nativeCreateLocalMediaStream(long nativeFactory, String label);
// Create a local video source
private static native long nativeCreateVideoSource(long nativeFactory, long nativeVideoCapturer, MediaConstraints constraints);
// Create a video track
private static native long nativeCreateVideoTrack(long nativeFactory, String id, long nativeVideoSource);
// Create a local audio source
private static native long nativeCreateAudioSource(long nativeFactory, MediaConstraints constraints);
// Create an audio track
private static native long nativeCreateAudioTrack(long nativeFactory, String id, long nativeSource);
// Set factory options (network ignore mask, encryption)
public native void nativeSetOptions(long nativeFactory, Options options);
// Set hardware video acceleration options
private static native void nativeSetVideoHwAccelerationOptions(long nativeFactory, Object renderEGLContext);
// Free the PeerConnectionFactory
private static native void freeFactory(long nativeFactory);
// State of local candidate gathering: just created, gathering, or finished gathering
/** Tracks PeerConnectionInterface::IceGatheringState */
public enum IceGatheringState { NEW, GATHERING, COMPLETE };
// State of the ICE connection with the remote peer
/** Tracks PeerConnectionInterface::IceConnectionState */
public enum IceConnectionState {
NEW, CHECKING, CONNECTED, COMPLETED, FAILED, DISCONNECTED, CLOSED
};
// State of the SDP offer/answer signaling exchange (not the connection to the signaling server)
/** Tracks PeerConnectionInterface::SignalingState */
public enum SignalingState {
STABLE, HAVE_LOCAL_OFFER, HAVE_LOCAL_PRANSWER, HAVE_REMOTE_OFFER,
HAVE_REMOTE_PRANSWER, CLOSED
};
// Get the local SDP description
public native SessionDescription getLocalDescription();
// Get the remote SDP description
public native SessionDescription getRemoteDescription();
// Create a data channel
public native DataChannel createDataChannel(String label, DataChannel.Init init);
// Create the offer message
public native void createOffer(SdpObserver observer, MediaConstraints constraints);
// Create the answer message
public native void createAnswer(SdpObserver observer, MediaConstraints constraints);
// Set the local SDP
public native void setLocalDescription(SdpObserver observer, SessionDescription sdp);
// Set the remote SDP
public native void setRemoteDescription(SdpObserver observer, SessionDescription sdp);
// Update the ICE servers
public native boolean updateIce(List<IceServer> iceServers, MediaConstraints constraints);
// Get the signaling state
public native SignalingState signalingState();
// Get the ICE connection state
public native IceConnectionState iceConnectionState();
// Get the ICE candidate gathering state
public native IceGatheringState iceGatheringState();
// Close the PeerConnection
public native void close();
// Free the PeerConnection
private static native void freePeerConnection(long nativePeerConnection);
// Free the observer
private static native void freeObserver(long nativeObserver);
// Add a new remote candidate
private native boolean nativeAddIceCandidate(String sdpMid, int sdpMLineIndex, String iceCandidateSdp);
// Add a local stream
private native boolean nativeAddLocalStream(long nativeStream);
// Remove a local stream
private native void nativeRemoveLocalStream(long nativeStream);
// Request stats via a StatsObserver
private native boolean nativeGetStats(StatsObserver observer, long nativeTrack);
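For example, a remote candidate received over the signaling channel (see onWebSocketMessage above) is handed to addIceCandidate, the public wrapper around nativeAddIceCandidate, and a data channel can be opened with createDataChannel; a rough sketch (the "chat" label is arbitrary):

// Remote candidate arriving from the signaling server is stored on the connection:
peerConnection.addIceCandidate(candidate);
// A data channel can be opened on the same connection:
DataChannel dataChannel = peerConnection.createDataChannel("chat", new DataChannel.Init());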
Once you have a PeerConnectionFactory instance, you can capture audio and video from the device and finally render them on screen. This involves VideoCapturerAndroid, VideoSource, VideoTrack and VideoRenderer, and it all starts with VideoCapturerAndroid.
The VideoCapturerAndroid class is a wrapper around the camera that provides convenient methods for accessing the device's camera stream. It lets you query the number of camera devices and get the front or back camera by name.
// Returns the number of camera devices
VideoCapturerAndroid.getDeviceCount();
// Returns the front face device name
VideoCapturerAndroid.getNameOfFrontFacingDevice();
// Returns the back facing device name
VideoCapturerAndroid.getNameOfBackFacingDevice();
// Creates a VideoCapturerAndroid instance for the device name
VideoCapturerAndroid.create(name);
With a VideoCapturerAndroid instance you can create a MediaStream containing the camera's video stream, which you can then send to the other side.
A VideoSource lets you start and stop capturing on the device. Stopping capture when it is not needed helps extend battery life.
A VideoTrack is a wrapper used to add a VideoSource to a MediaStream object.
AudioSource/AudioTrack work much like VideoSource/VideoTrack, except that no AudioCapturer is needed to get microphone data. audioConstraints is an instance of MediaConstraints.
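Putting this together, creating the local video and audio tracks from the capturer could look like the following sketch (the track IDs are arbitrary strings; videoConstraints/audioConstraints are MediaConstraints instances):

// Open the front camera and wrap it into a video source and a video track.
VideoCapturerAndroid capturer =
        VideoCapturerAndroid.create(VideoCapturerAndroid.getNameOfFrontFacingDevice());
VideoSource videoSource = peerConnectionFactory.createVideoSource(capturer, videoConstraints);
VideoTrack localVideoTrack = peerConnectionFactory.createVideoTrack("ARDAMSv0", videoSource);
// Audio needs no capturer; the source reads from the microphone directly.
AudioSource audioSource = peerConnectionFactory.createAudioSource(audioConstraints);
AudioTrack localAudioTrack = peerConnectionFactory.createAudioTrack("ARDAMSa0", audioSource);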
VideoRendererGui is a GLSurfaceView on which video streams can be displayed; we add our renderer to the VideoTrack.
// To create our VideoRenderer, we can use the
// included VideoRendererGui for simplicity.
// First we need to set the GLSurfaceView that it should render to
GLSurfaceView videoView = (GLSurfaceView) findViewById(R.id.glview_call);
// Then we set that view, and pass a Runnable
// to run once the surface is ready
VideoRendererGui.setView(videoView, runnable);
// Now that VideoRendererGui is ready, we can get our VideoRenderer
VideoRenderer renderer = VideoRendererGui.createGui(x, y, width, height);
// And finally, with our VideoRenderer ready, we
// can add our renderer to the VideoTrack.
localVideoTrack.addRenderer(renderer);
MediaConstraints is how WebRTC specifies what goes into a MediaStream for audio and video; most methods that need them take a MediaConstraints instance (see the specification for what is supported).
On the web, getUserMedia directly returns a MediaStream that can be added to an RTCPeerConnection and sent to the peer; on Android the stream is assembled through the factory instead, as shown below.
// We start out with an empty MediaStream object,
// created with help from our PeerConnectionFactory
// Note that LOCAL_MEDIA_STREAM_ID can be any string
MediaStream mediaStream = peerConnectionFactory.createLocalMediaStream(LOCAL_MEDIA_STREAM_ID);
// Now we can add our tracks.
mediaStream.addTrack(localVideoTrack);
mediaStream.addTrack(localAudioTrack);
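Finally, the local stream has to be attached to the PeerConnection so its tracks are actually sent to the remote peer; a one-line sketch, assuming the single-argument addStream wrapper of this API revision:

// Attach the local stream to the connection; its tracks are then sent to the remote peer.
peerConnection.addStream(mediaStream);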