Reposted from: https://blog.csdn.net/Leytton/article/details/76704342
The WebRTC Real-Time Communication tutorial series is translated from "Real time communication with WebRTC".
Sample code download: http://download.csdn.net/detail/leytton/9923708
WebRTC Real-Time Communication Tutorial 1: Introduction
WebRTC Real-Time Communication Tutorial 2: Overview
WebRTC Real-Time Communication Tutorial 3: Get the sample code
WebRTC Real-Time Communication Tutorial 4: Get a video stream from your webcam
WebRTC Real-Time Communication Tutorial 5: Stream video with RTCPeerConnection
WebRTC Real-Time Communication Tutorial 6: Exchange data with RTCDataChannel
WebRTC Real-Time Communication Tutorial 7: Set up a signaling server with Socket.IO to exchange messages
WebRTC Real-Time Communication Tutorial 8: Combine peer connection and signaling
WebRTC Real-Time Communication Tutorial 9: Transfer a photo over a data channel
WebRTC Real-Time Communication Tutorial 10: Congratulations, you've completed the series
In this step you'll learn how to get a video stream from your webcam.
A complete version of this step is in the step-01 folder.
Add a video element and a script element to index.html in your work directory, with the page title and heading "Realtime communication with WebRTC".
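A minimal index.html along these lines should work for this step (this is a sketch rather than the codelab's exact markup; note the autoplay attribute on the video element, which the tips at the end of this step call out):

<!DOCTYPE html>
<html>
<head>
  <title>Realtime communication with WebRTC</title>
</head>
<body>
  <h1>Realtime communication with WebRTC</h1>
  <!-- autoplay keeps the stream playing; without it you only see a single frame -->
  <video autoplay></video>
  <script src="js/main.js"></script>
</body>
</html>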
Add the following to main.js in your js folder:
'use strict';

navigator.getUserMedia = navigator.getUserMedia ||
  navigator.webkitGetUserMedia || navigator.mozGetUserMedia;

var constraints = {
  audio: false,
  video: true
};

var video = document.querySelector('video');

function successCallback(stream) {
  window.stream = stream; // stream available to console
  if (window.URL) {
    video.src = window.URL.createObjectURL(stream);
  } else {
    video.src = stream;
  }
}

function errorCallback(error) {
  console.log('navigator.getUserMedia error: ', error);
}

navigator.getUserMedia(constraints, successCallback, errorCallback);
All the JavaScript examples here use 'use strict'; to avoid common coding gotchas. Find out more about what that means in ECMAScript 5 Strict Mode, JSON, and More.
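As one illustration of the kind of gotcha strict mode catches (a generic example, not code from the tutorial): assigning to a variable that was never declared silently creates a global in sloppy mode, but throws in strict mode.

'use strict';

function incrementCounter() {
  // 'counter' was never declared with var, so without strict mode this line
  // would quietly create a global variable; strict mode throws instead.
  counter = 1;
}

incrementCounter(); // ReferenceError: counter is not defined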
Open index.html in your browser and you should see the view from your webcam playing on the page.
A better API for getUserMedia
If you think this code looks a little old fashioned, you're right: we're using the callback version of getUserMedia() for compatibility with current browsers. Check out the demo at github.com/webrtc/samples for the Promise-based version, which uses the MediaDevices API and has better error handling. Much nicer! We'll be using it later in the series.
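For reference, the Promise-based form looks roughly like this (a sketch assuming a browser that supports navigator.mediaDevices, not the exact code from the webrtc/samples demo):

'use strict';

var constraints = {
  audio: false,
  video: true
};

var video = document.querySelector('video');

// Promise-based MediaDevices API: no vendor prefixes, and errors arrive in catch().
navigator.mediaDevices.getUserMedia(constraints)
  .then(function(stream) {
    window.stream = stream;   // keep the stream reachable from the console
    video.srcObject = stream; // newer browsers attach a MediaStream directly
  })
  .catch(function(error) {
    console.log('navigator.mediaDevices.getUserMedia error: ', error);
  });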
getUserMedia() is called like this:
navigator.getUserMedia(constraints, successCallback, errorCallback);
This technology is still relatively new, so browsers still use prefixed names for getUserMedia; hence the shim code at the top of main.js.
The constraints argument lets you specify which media to get. In this example, we request video but not audio:
var constraints = {
  audio: false,
  video: true
};
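Constraints can do more than switch media on and off. With the MediaDevices form sketched earlier, the video property can itself be an object that asks for a particular resolution (the property names follow the MediaTrackConstraints dictionary; the values and the hdConstraints name here are just an illustration, and the constraints demo linked at the end of this step shows many more options):

var video = document.querySelector('video');

var hdConstraints = {
  audio: false,
  video: {
    width: { ideal: 1280 },  // prefer roughly 1280x720 if the camera supports it
    height: { ideal: 720 }
  }
};

navigator.mediaDevices.getUserMedia(hdConstraints)
  .then(function(stream) {
    video.srcObject = stream;
  })
  .catch(function(error) {
    console.log('getUserMedia error: ', error);
  });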
If getUserMedia() succeeds, the video stream from the webcam is set as the source of the video element:
function successCallback(stream) {
  window.stream = stream; // stream available to console
  if (window.URL) {
    video.src = window.URL.createObjectURL(stream);
  } else {
    video.src = stream;
  }
}
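Newer browsers prefer attaching the stream via the video element's srcObject property rather than createObjectURL(); a more defensive version of the callback might look like this (an adaptation, not the codelab's code):

function successCallback(stream) {
  window.stream = stream; // stream available to console
  if ('srcObject' in video) {
    // Preferred in newer browsers: attach the MediaStream directly.
    video.srcObject = stream;
  } else if (window.URL) {
    video.src = window.URL.createObjectURL(stream);
  } else {
    video.src = stream;
  }
}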
The stream object passed to getUserMedia() is in global scope, so you can inspect it from the browser console: open the console, type stream and press Return. (To view the console in Chrome, press Ctrl-Shift-J, or Command-Option-J if you're on a Mac.) What does stream.getVideoTracks() return? Try calling stream.getVideoTracks()[0].stop(), and see what happens when you change the constraints object to {audio: true, video: true}; a console sketch follows the CSS below. You can also try adding CSS filters to the video element:

video {
  -webkit-filter: blur(4px) invert(1) opacity(0.5);
}

or other filter functions:

video {
  filter: hue-rotate(180deg) saturate(200%);
  -moz-filter: hue-rotate(180deg) saturate(200%);
  -webkit-filter: hue-rotate(180deg) saturate(200%);
}
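Roughly what you'd see when poking at the stream from the console (a sketch; the exact output depends on your browser and camera):

// Type these in the browser console while index.html is open:
stream
// -> a MediaStream object (the one saved on window.stream by successCallback)

stream.getVideoTracks()
// -> an array containing one MediaStreamTrack for the camera

stream.getVideoTracks()[0].stop();
// -> stops the track: the video freezes and the browser releases the camera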
In this step you learned how to get a video stream from your webcam and display it in a video element.
A complete version of this step is in the step-01 folder.
A few tips: don't forget the autoplay attribute on the video element; without it you'll only see a single frame. There are many more options for getUserMedia() constraints; take a look at the demo at webrtc.github.io/samples/src/content/peerconnection/constraints (as you'll see, there are lots of interesting WebRTC samples on that site). Finally, use width and max-width to set a preferred size and a maximum size for the video; the browser will calculate the height automatically:

video {
  max-width: 100%;
  width: 320px;
}
You've got video, but how do you stream it? Find out in the next step!
Original English text: https://codelabs.developers.google.com/codelabs/webrtc-web/#3