[When reposting, please credit the source: http://blog.csdn.net/leytton/article/details/76838194]
PS: If this article helped you, please give it a like to let me know~
The "WebRTC Real-Time Communication Tutorial Series" is translated from "Real time communication with WebRTC".
Sample code download: http://download.csdn.net/detail/leytton/9923708
WebRTC Real-Time Communication Tutorial 1: Introduction
WebRTC Real-Time Communication Tutorial 2: Overview
WebRTC Real-Time Communication Tutorial 3: Get the sample code
WebRTC Real-Time Communication Tutorial 4: Stream video from your webcam
WebRTC Real-Time Communication Tutorial 5: Stream video with RTCPeerConnection
WebRTC Real-Time Communication Tutorial 6: Exchange data with RTCDataChannel
WebRTC Real-Time Communication Tutorial 7: Set up a signaling server with Socket.IO to exchange messages
WebRTC Real-Time Communication Tutorial 8: Combine peer connection and signaling
WebRTC Real-Time Communication Tutorial 9: Share photos via a data channel
WebRTC Real-Time Communication Tutorial 10: Congratulations on completing the series
I. Translation
1. What you'll learn
- Take a photo and get its data using the canvas element.
- Exchange the image data with a remote user.
The complete code for this step is in the step-06 folder.
2. How it works
Previously you learned how to exchange text messages with RTCDataChannel. This step shows how to share an entire file: in this example, a photo captured via getUserMedia().
The core steps are as follows:
- Establish a data channel. Note that you don't add any media streams to the peer connection in this step (a minimal sketch follows this item).
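The codelab's main.js sets this channel up for you; as a rough, illustrative sketch (the variable names here are assumptions, not the codelab's exact code), the caller creates the channel on its RTCPeerConnection and the callee picks it up from the ondatachannel event:
var peerConnection = new RTCPeerConnection(null);
// Caller side: create the channel explicitly.
var dataChannel = peerConnection.createDataChannel('photos');
dataChannel.binaryType = 'arraybuffer';
// Callee side: the channel arrives via the ondatachannel event.
peerConnection.ondatachannel = function(event) {
  dataChannel = event.channel;
  dataChannel.binaryType = 'arraybuffer';
};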
- Capture the user's webcam video stream with getUserMedia():
var video = document.getElementById('video');
function grabWebCamVideo() {
console.log('Getting user media (video) ...');
navigator.mediaDevices.getUserMedia({
audio: false,
video: true
})
.then(gotStream)
.catch(function(e) {
alert('getUserMedia() error: ' + e.name);
});
}
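The gotStream success callback isn't shown in the snippet above; a minimal version (an assumption, not the codelab's exact code) simply attaches the stream to the video element:
function gotStream(stream) {
  // Show the local webcam stream in the video element declared in index.html.
  video.srcObject = stream;
}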
- When the user clicks the Snap button, grab a snapshot (a single video frame) from the video stream and display it in a canvas element:
var photo = document.getElementById('photo');
var photoContext = photo.getContext('2d');
function snapPhoto() {
photoContext.drawImage(video, 0, 0, photo.width, photo.height);
show(photo, sendBtn);
}
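sendPhoto() and renderPhoto() below also rely on photoContextW and photoContextH, which are defined elsewhere in the codelab's main.js. A reasonable stand-in (an assumption, not the codelab's actual definition) ties them to the canvas dimensions:
// Dimensions used when reading and recreating the image data.
var photoContextW = photo.width;   // 300 for a default-sized canvas
var photoContextH = photo.height;  // 150 for a default-sized canvas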
- When the user clicks the Send button, convert the image to bytes and send them over the data channel:
function sendPhoto() {
// Split data channel message in chunks of this byte length.
var CHUNK_LEN = 64000;
var img = photoContext.getImageData(0, 0, photoContextW, photoContextH),
len = img.data.byteLength,
n = len / CHUNK_LEN | 0;
console.log('Sending a total of ' + len + ' byte(s)');
dataChannel.send(len);
// split the photo and send in chunks of about 64KB
for (var i = 0; i < n; i++) {
var start = i * CHUNK_LEN,
end = (i + 1) * CHUNK_LEN;
console.log(start + ' - ' + (end - 1));
dataChannel.send(img.data.subarray(start, end));
}
// send the remainder, if any
if (len % CHUNK_LEN) {
console.log('last ' + len % CHUNK_LEN + ' byte(s)');
dataChannel.send(img.data.subarray(n * CHUNK_LEN));
}
}
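To make the chunking concrete: assuming the canvas keeps its default 300×150 size, getImageData() returns 300 × 150 × 4 = 180,000 bytes of RGBA data, so the loop sends two full 64,000-byte chunks and the final if sends the remaining 52,000 bytes. Keeping each message around 64 KB stays below the per-message size limits that browser data channel implementations have historically imposed.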
- The receiving side converts the incoming bytes back into an image and displays it:
function receiveDataChromeFactory() {
var buf, count;
return function onmessage(event) {
if (typeof event.data === 'string') {
buf = window.buf = new Uint8ClampedArray(parseInt(event.data));
count = 0;
console.log('Expecting a total of ' + buf.byteLength + ' bytes');
return;
}
var data = new Uint8ClampedArray(event.data);
buf.set(data, count);
count += data.byteLength;
console.log('count: ' + count);
if (count === buf.byteLength) {
// we're done: all data chunks have been received
console.log('Done. Rendering photo.');
renderPhoto(buf);
}
};
}
function renderPhoto(data) {
var canvas = document.createElement('canvas');
canvas.width = photoContextW;
canvas.height = photoContextH;
canvas.classList.add('incomingPhoto');
// trail is the element holding the incoming images
trail.insertBefore(canvas, trail.firstChild);
var context = canvas.getContext('2d');
var img = context.createImageData(photoContextW, photoContextH);
img.data.set(data);
context.putImageData(img, 0, 0);
}
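The factory above returns the actual onmessage handler: the sender's initial dataChannel.send(len) arrives as a string, which is why a string message is treated as the "expected length" announcement and every other message as a binary chunk. A minimal sketch of the wiring (an assumption, not the codelab's exact code):
dataChannel.onmessage = receiveDataChromeFactory();
The "Chrome" in the name reflects that Chrome delivers binary data channel messages as ArrayBuffer by default, whereas Firefox historically delivered Blob, so a Firefox path would need to read the Blob (for example with a FileReader) before accumulating it.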
3. Get the code
Replace the index.html in your work directory with the one from step-06; it should look roughly like this:
<!DOCTYPE html>
<html>
<head>
  <title>Realtime communication with WebRTC</title>
  <link rel="stylesheet" href="/css/main.css" />
</head>
<body>
  <h1>Realtime communication with WebRTC</h1>
  <h2><span>Room URL: </span><span id="url">...</span></h2>
  <div id="videoCanvas">
    <video id="video" autoplay></video>
    <canvas id="photo"></canvas>
  </div>
  <div id="buttons">
    <button id="snap">Snap</button> <span>then</span>
    <button id="send">Send</button> <span>or</span>
    <button id="snapAndSend">Snap &amp; Send</button>
  </div>
  <div id="incoming">
    <h2>Incoming photos</h2>
    <div id="trail"></div>
  </div>
  <script src="/socket.io/socket.io.js"></script>
  <script src="js/main.js"></script>
</body>
</html>
Run the Node server from the work directory with the following command:
node index.js
(Make sure the index.js you are using is the version from the previous step that implements Socket.IO.)
If necessary, click Allow so the page can access your webcam.
The app creates a random room ID and appends it to the URL. Open that URL in a new browser tab.
Click the Snap & Send button, then look at the incoming area at the bottom of the page in the other tab.
You should see something like this:
4. Bonus points
- How could you change the code to share any type of file? (A sketch follows below.)
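One possible approach (a sketch, not the codelab's solution; sendFile and the JSON header format are illustrative): read the file into an ArrayBuffer and reuse the same length-then-chunks protocol, announcing the file's name and MIME type in the initial string message so the receiver knows how to reassemble it.
function sendFile(file) {
  // Same chunk size as sendPhoto() above.
  var CHUNK_LEN = 64000;
  var reader = new FileReader();
  reader.onload = function() {
    var bytes = new Uint8Array(reader.result);
    // Announce the transfer: total size, file name and MIME type as one JSON string.
    dataChannel.send(JSON.stringify({
      byteLength: bytes.byteLength,
      name: file.name,
      type: file.type
    }));
    // Send the payload in ~64 KB chunks; subarray() clamps the final chunk automatically.
    for (var start = 0; start < bytes.byteLength; start += CHUNK_LEN) {
      dataChannel.send(bytes.subarray(start, start + CHUNK_LEN));
    }
  };
  reader.readAsArrayBuffer(file);
}
On the receiving side, parse the JSON header, accumulate the chunks into a buffer as receiveDataChromeFactory() does, then wrap the result in a Blob of the announced type (for example new Blob([buf], {type: header.type})) so it can be displayed or offered for download.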
5. Find out more
- The MediaStream Image Capture API: an API for taking photos and controlling cameras.
- The MediaRecorder API, for recording audio and video: demo, documentation.
II. Original text
Excerpted from https://codelabs.developers.google.com/codelabs/webrtc-web/#8
9. Take a photo and share it via a data channel
What you'll learn
In this step you'll learn how to:
- Take a photo and get the data from it using the canvas element.
- Exchange image data with a remote user.
A complete version of this step is in the step-06 folder.
How it works
Previously you learned how to exchange text messages using RTCDataChannel.
This step makes it possible to share entire files: in this example, photos captured via getUserMedia().
The core parts of this step are as follows:
- Establish a data channel. Note that you don't add any media streams to the peer connection in this step.
- Capture the user's webcam video stream with getUserMedia():
var video = document.getElementById('video');
function grabWebCamVideo() {
console.log('Getting user media (video) ...');
navigator.mediaDevices.getUserMedia({
audio: false,
video: true
})
.then(gotStream)
.catch(function(e) {
alert('getUserMedia() error: ' + e.name);
});
}
- When the user clicks the Snap button, get a snapshot (a video frame) from the video stream and display it in a canvas element:
var photo = document.getElementById('photo');
var photoContext = photo.getContext('2d');
function snapPhoto() {
photoContext.drawImage(video, 0, 0, photo.width, photo.height);
show(photo, sendBtn);
}
- When the user clicks the Send button, convert the image to bytes and send them via a data channel:
function sendPhoto() {
// Split data channel message in chunks of this byte length.
var CHUNK_LEN = 64000;
var img = photoContext.getImageData(0, 0, photoContextW, photoContextH),
len = img.data.byteLength,
n = len / CHUNK_LEN | 0;
console.log('Sending a total of ' + len + ' byte(s)');
dataChannel.send(len);
// split the photo and send in chunks of about 64KB
for (var i = 0; i < n; i++) {
var start = i * CHUNK_LEN,
end = (i + 1) * CHUNK_LEN;
console.log(start + ' - ' + (end - 1));
dataChannel.send(img.data.subarray(start, end));
}
// send the remainder, if any
if (len % CHUNK_LEN) {
console.log('last ' + len % CHUNK_LEN + ' byte(s)');
dataChannel.send(img.data.subarray(n * CHUNK_LEN));
}
}
- The receiving side converts data channel message bytes back to an image and displays the image to the user:
function receiveDataChromeFactory() {
var buf, count;
return function onmessage(event) {
if (typeof event.data === 'string') {
buf = window.buf = new Uint8ClampedArray(parseInt(event.data));
count = 0;
console.log('Expecting a total of ' + buf.byteLength + ' bytes');
return;
}
var data = new Uint8ClampedArray(event.data);
buf.set(data, count);
count += data.byteLength;
console.log('count: ' + count);
if (count === buf.byteLength) {
// we're done: all data chunks have been received
console.log('Done. Rendering photo.');
renderPhoto(buf);
}
};
}
function renderPhoto(data) {
var canvas = document.createElement('canvas');
canvas.width = photoContextW;
canvas.height = photoContextH;
canvas.classList.add('incomingPhoto');
// trail is the element holding the incoming images
trail.insertBefore(canvas, trail.firstChild);
var context = canvas.getContext('2d');
var img = context.createImageData(photoContextW, photoContextH);
img.data.set(data);
context.putImageData(img, 0, 0);
}
Get the code
Replace the contents of your work folder with the contents of step-06. Your index.html file in work should now look like this:
<!DOCTYPE html>
<html>
<head>
  <title>Realtime communication with WebRTC</title>
  <link rel="stylesheet" href="/css/main.css" />
</head>
<body>
  <h1>Realtime communication with WebRTC</h1>
  <h2><span>Room URL: </span><span id="url">...</span></h2>
  <div id="videoCanvas">
    <video id="video" autoplay></video>
    <canvas id="photo"></canvas>
  </div>
  <div id="buttons">
    <button id="snap">Snap</button> <span>then</span>
    <button id="send">Send</button> <span>or</span>
    <button id="snapAndSend">Snap &amp; Send</button>
  </div>
  <div id="incoming">
    <h2>Incoming photos</h2>
    <div id="trail"></div>
  </div>
  <script src="/socket.io/socket.io.js"></script>
  <script src="js/main.js"></script>
</body>
</html>
If your Node server is not running, start it by calling the following command from your work directory:
node index.js
(Make sure you're using the version of index.js that implements Socket.IO — and remember to restart your Node server if you make changes.)
If necessary, click on the Allow button to allow the app to use your webcam.
The app will create a random room ID and add that ID to the URL. Open the URL from the address bar in a new browser tab or window.
Click the Snap & Send button and then look at the Incoming area in the other tab at the bottom of the page. The app transfers photos between tabs.
You should see something like this:
Bonus points
- How can you change the code to make it possible to share any file type?
Find out more
- The MediaStream Image Capture API: an API for taking photographs and controlling cameras — coming soon to a browser near you!
- The MediaRecorder API, for recording audio and video: demo, documentation.
What you learned
- How to take a photo and get the data from it using the canvas element.
- How to exchange that data with a remote user.
A complete version of this step is in the step-06 folder.