Running mediasoup-demo in Practice

mediasoup is a powerful WebRTC SFU. mediasoup-demo is a nice introductory demo application for mediasoup. This article records the process of getting mediasoup-demo up and running, using Ubuntu 20.04 as the operating system. mediasoup is developed mainly in JavaScript and runs on Node.js; it is normally used as a Node.js module inside a Node.js application.

Preparing the environment

The mediasoup v3 installation guide lists the following requirements:

  • node version >= v12.0.0
  • python version >= 3.6 with PIP
  • GNU make

For Linux, OSX and any other *NIX system, there are additional requirements:

  • gcc and g++ >= 4.9, or clang (with C++11 support)
  • cc and c++ commands (symlinks) pointing to the corresponding gcc/g++ or clang/clang++ executables.

When the node version in the system package repositories does not meet these requirements, you need to install a suitable version of Node yourself. With an unsuitable Node version, the demo very likely will not run at all. I tried several versions without getting mediasoup to run, including v13.1.0, v13.10.0 and the latest LTS release v16.13.1. In sequelize's GitHub issue 12419, someone mentioned that v12.18.3 solved some of the problems I was hitting, so I went with v12.18.3 as well.

Download the prebuilt node tarball from:

https://nodejs.org/dist/

If another version of node is already installed on the system, remove it first before installing:

sudo rm -rf /usr/local/bin/npm /usr/local/share/man/man1/node* ~/.npm
sudo rm -rf /usr/local/lib/node*
sudo rm -rf /usr/local/bin/node*
sudo rm -rf /usr/local/include/node*

sudo apt-get purge nodejs npm
sudo apt autoremove

Extract and install node:

tar -xf node-v12.18.3-linux-x64.tar.xz
sudo mv node-v12.18.3-linux-x64/bin/* /usr/local/bin/
sudo mv node-v12.18.3-linux-x64/lib/node_modules/ /usr/local/lib/

Installing mediasoup-demo

Download mediasoup-demo: clone the project and check out the v3 branch:

$ git clone https://github.com/versatica/mediasoup-demo.git
$ cd mediasoup-demo
$ git checkout v3

Set up the mediasoup-demo server:

mediasoup-demo$ cd server
server$ npm install

Copy config.example.js to config.js and make some custom changes to it:

$ cp config.example.js config.js

This step is mandatory; otherwise mediasoup-demo will fail at runtime. The settings that need attention include the domain name, the listening IP address, and the paths to the HTTPS certificate and private key.

Domain name, HTTPS listening IP/port, and certificate/private key paths:

    domain : process.env.DOMAIN || 'localhost',
    // Signaling settings (protoo WebSocket server and HTTP API server).
    https  :
    {
        listenIp   : '0.0.0.0',
        // NOTE: Don't change listenPort (client app assumes 4443).
        listenPort : process.env.PROTOO_LISTEN_PORT || 4443,
        // NOTE: Set your own valid certificate files.
        tls        :
        {
            cert : process.env.HTTPS_CERT_FULLCHAIN || `${__dirname}/certs/fullchain.pem`,
            key  : process.env.HTTPS_CERT_PRIVKEY || `${__dirname}/certs/privkey.pem`
        }
    },

These settings can be changed either by editing config.js or by setting environment variables. Here we leave config.js untouched, put the TLS certificate and private key into mediasoup-demo/server/certs/, and rename them to match the configuration above. If you already have a domain with HTTPS enabled and plan to run mediasoup-demo on the same machine as that website, you can copy the certificate and private key over, or point the environment variables HTTPS_CERT_FULLCHAIN and HTTPS_CERT_PRIVKEY at their paths:

export HTTPS_CERT_FULLCHAIN="XXX"
export HTTPS_CERT_PRIVKEY="YYY"

Otherwise, you can use the script https://github.com/aggresss/playground-cpp/blob/master/certs/autogen.sh to generate a temporary self-signed certificate. Running the script produces the following files:

playground-cpp/certs$ git status
On branch master
Your branch is up to date with 'origin/master'.

Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git restore <file>..." to discard changes in working directory)
    modified:   ca.crt
    modified:   ca.csr
    modified:   ca.key
    modified:   ca.srl
    modified:   client.crt
    modified:   client.csr
    modified:   client.key
    modified:   md5.txt
    modified:   server.crt
    modified:   server.csr
    modified:   server.key

server.key is the private key and server.crt is the certificate. Move these two files into mediasoup-demo:

playground-cpp/certs$ mkdir -p ~/mediasoup-demo/server/certs/ 
playground-cpp/certs$ mv server.key ~/mediasoup-demo/server/certs/privkey.pem 
playground-cpp/certs$ mv server.crt ~/mediasoup-demo/server/certs/fullchain.pem 

Without the private key and certificate, the server application fails at startup because it cannot find the certificate:

  mediasoup-demo-server:INFO running an HTTPS server... +6ms
(node:396580) UnhandledPromiseRejectionWarning: Error: ENOENT: no such file or directory, open '~/mediasoup-demo/server/certs/fullchain.pem'
    at Object.openSync (fs.js:462:3)
    at Object.readFileSync (fs.js:364:35)
    at runHttpsServer (~/mediasoup-demo/server/server.js:431:13)
    at run (~/mediasoup-demo/server/server.js:74:8)
    at processTicksAndRejections (internal/process/task_queues.js:97:5)
(node:396580) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 2)
(node:396580) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

And the browser application fails because it cannot find the private key:

[16:02:19] Finished '' after 27 ms
[16:02:19] Finished 'live' after 14 s
internal/fs/utils.js:269
    throw err;
    ^

Error: ENOENT: no such file or directory, open '~/mediasoup-demo/server/certs/privkey.pem'
    at Object.openSync (fs.js:462:3)
    at Object.readFileSync (fs.js:364:35)
    at getKey (~/mediasoup-demo/app/node_modules/browser-sync/dist/server/utils.js:38:15)
    at getHttpsServerDefaults (~/mediasoup-demo/app/node_modules/browser-sync/dist/server/utils.js:45:14)
    at Object.getHttpsOptions (~/mediasoup-demo/app/node_modules/browser-sync/dist/server/utils.js:67:41)
    at ~/mediasoup-demo/app/node_modules/browser-sync/dist/server/utils.js:81:44
    at Object.getServer (~/mediasoup-demo/app/node_modules/browser-sync/dist/server/utils.js:85:15)
    at createServer (~/mediasoup-demo/app/node_modules/browser-sync/dist/server/static-server.js:71:24)
    at createServer (~/mediasoup-demo/app/node_modules/browser-sync/dist/server/index.js:72:42)
    at module.exports.plugin (~/mediasoup-demo/app/node_modules/browser-sync/dist/server/index.js:12:20) {
  errno: -2,
  syscall: 'open',
  code: 'ENOENT',
  path: '~/mediasoup-demo/server/certs/privkey.pem'
}
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] start: `gulp live`
npm ERR! Exit status 1
npm ERR! 
npm ERR! Failed at the [email protected] start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     ~/.npm/_logs/2021-12-24T08_02_19_135Z-debug.log

Another setting that must be configured is the listening IP in the WebRTC transport options, MEDIASOUP_LISTEN_IP:

        webRtcTransportOptions :
        {
            listenIps :
            [
                {
                    ip          : process.env.MEDIASOUP_LISTEN_IP || '1.2.3.4',
                    announcedIp : process.env.MEDIASOUP_ANNOUNCED_IP
                }
            ],
            initialAvailableOutgoingBitrate : 1000000,
            minimumAvailableOutgoingBitrate : 600000,
            maxSctpMessageSize              : 262144,
            // Additional options that are not part of WebRtcTransportOptions.
            maxIncomingBitrate              : 1500000
        },

This option, too, can be configured either in config.js or through an environment variable. The listening IP must be set to a local IP address of the machine. Otherwise, when the browser application talks to the server, the server reports errors like these:

  mediasoup:Router createWebRtcTransport() +3s
  mediasoup:Channel request() [method:router.createWebRtcTransport, id:5] +3s
  mediasoup:ERROR:Channel [pid:396733 RTC::PortManager::Bind() | throwing MediaSoupError: port bind failed due to address not available [transport:udp, ip:'1.2.3.4', port:42251, attempt:1/10000] +0ms
  mediasoup:ERROR:Channel [pid:396733 Worker::OnChannelRequest() | throwing MediaSoupError: port bind failed due to address not available [transport:udp, ip:'1.2.3.4', port:42251, attempt:1/10000] [method:router.createWebRtcTransport] +4ms
  mediasoup:WARN:Channel request failed [method:router.createWebRtcTransport, id:5]:  [method:router.createWebRtcTransport] +0ms
  mediasoup-demo-server:ERROR:Room request failed:Error:  [method:router.createWebRtcTransport] at Channel.processMessage (~/opensource/mediasoup-demo/server/node_modules/mediasoup/node/lib/Channel.js:195:37) at Socket. (~/opensource/mediasoup-demo/server/node_modules/mediasoup/node/lib/Channel.js:69:34)     at Socket.emit (events.js:315:20)     at Socket.EventEmitter.emit (domain.js:483:12)     at addChunk (_stream_readable.js:295:12)     at readableAddChunk (_stream_readable.js:271:9)     at Socket.Readable.push (_stream_readable.js:212:10)     at Pipe.onStreamRead (internal/stream_base_commons.js:186:23) +0ms
  mediasoup-demo-server:Room protoo Peer "close" event [peerId:ucayshrc] +27ms
  mediasoup-demo-server:INFO:Room last Peer in the room left, closing the room [roomId:rvpgogc7] +3s
  mediasoup-demo-server:Room close() +2ms
  mediasoup:Router close() +28ms
  mediasoup:Channel request() [method:router.close, id:6] +27ms
  mediasoup:Transport routerClosed() +3s
  mediasoup:DataProducer transportClosed() +3s
  mediasoup:RtpObserver routerClosed() +3s
  mediasoup:Channel request succeeded [method:router.close, id:6] +4ms

The error says that the port bind failed because the IP address is not available.

Set up the mediasoup-demo browser application:

$ cd app
$ npm install

Running mediasoup-demo locally

Run the Node.js server application in a terminal:

mediasoup-demo$ cd server
mediasoup-demo/server$ npm start

In another terminal, build and run the browser application:

mediasoup-demo$ cd app
mediasoup-demo/app$ npm start

You can then access mediasoup from a browser, e.g.:

https://192.168.217.129:3000/?info=true&roomId=rvpgogc7
(screenshot: the mediasoup-demo web app running in the browser)

Once mediasoup-demo is up and running, the network topology looks like this:

(figure: network topology of the running mediasoup-demo)

mediasoup-demo/server is the WebRTC SFU server; mediasoup-demo/app is the server for the client-side browser application, serving the web pages, JS files and other resources.

mediasoup-broadcaster-demo

The mediasoup project also provides mediasoup-broadcaster-demo, a demo application for libmediasoupclient. Once running, it can interoperate with the example web application of the mediasoup-demo system set up above, sending some synthesized audio and video streams to it.

Let's walk through building and running mediasoup-broadcaster-demo on Ubuntu 20.04 Linux.

Download mediasoup-broadcaster-demo and configure the build:

opensource$ git clone https://github.com/versatica/mediasoup-broadcaster-demo.git
opensource$ cd mediasoup-broadcaster-demo
mediasoup-broadcaster-demo$ cmake . -Bbuild                                              \
  -DLIBWEBRTC_INCLUDE_PATH:PATH=~/data/opensource/webrtc-checkout/src \
  -DLIBWEBRTC_BINARY_PATH:PATH=~/data/opensource/webrtc-checkout/src/out/m96/obj   \
  -DOPENSSL_INCLUDE_DIR:PATH=/usr/include/       \
  -DCMAKE_USE_OPENSSL=ON    \
  -DCMAKE_BUILD_TYPE=Debug

Make sure LIBWEBRTC_INCLUDE_PATH and LIBWEBRTC_BINARY_PATH are set correctly, to the path of the downloaded webrtc source code and to the path of the compiled object files, respectively. I built against the openssl libraries installed on the system, so OPENSSL_INCLUDE_DIR, which configures the openssl header path, points at the system's /usr/include/. In addition, -DCMAKE_BUILD_TYPE=Debug is added to make interactive debugging easier later on.

With the configuration above done, build the source with:

mediasoup-broadcaster-demo$ make -C build/

Because some code in mediasoup-broadcaster-demo does not match the version of webrtc I have locally, the following compile errors appeared:

[  4%] Building CXX object libwebrtc/CMakeFiles/webrtc_broadcaster.dir/test/testsupport/ivf_video_frame_generator.cc.o
~/MyProjects/mediasoup-broadcaster-demo/deps/libwebrtc/test/testsupport/ivf_video_frame_generator.cc: In constructor ‘webrtc::test::IvfVideoFrameGenerator::IvfVideoFrameGenerator(const string&)’:
~/MyProjects/mediasoup-broadcaster-demo/deps/libwebrtc/test/testsupport/ivf_video_frame_generator.cc:48:18: error: ‘class webrtc::VideoCodec’ has no member named ‘buffer_pool_size’
   48 |   codec_settings.buffer_pool_size = std::numeric_limits<int>::max();
      |                  ^~~~~~~~~~~~~~~~
In file included from ~/opensource/webrtc-checkout/src/api/sequence_checker.h:13,
                 from ~/MyProjects/mediasoup-broadcaster-demo/deps/libwebrtc/test/testsupport/ivf_video_frame_generator.h:18,
                 from ~/MyProjects/mediasoup-broadcaster-demo/deps/libwebrtc/test/testsupport/ivf_video_frame_generator.cc:11:
~/MyProjects/mediasoup-broadcaster-demo/deps/libwebrtc/test/testsupport/ivf_video_frame_generator.cc:52:23: error: ‘class webrtc::VideoDecoder’ has no member named ‘InitDecode’; did you mean ‘Decode’?
   52 |       video_decoder_->InitDecode(&codec_settings, /*number_of_cores=*/1),
      |                       ^~~~~~~~~~
~/opensource/webrtc-checkout/src/rtc_base/checks.h:393:22: note: in definition of macro ‘RTC_CHECK_OP’
  393 |   ::rtc::Safe##name((val1), (val2))                        \
      |                      ^~~~
~/MyProjects/mediasoup-broadcaster-demo/deps/libwebrtc/test/testsupport/ivf_video_frame_generator.cc:51:3: note: in expansion of macro ‘RTC_CHECK_EQ’
   51 |   RTC_CHECK_EQ(
      |   ^~~~~~~~~~~~
~/MyProjects/mediasoup-broadcaster-demo/deps/libwebrtc/test/testsupport/ivf_video_frame_generator.cc:52:23: error: ‘class webrtc::VideoDecoder’ has no member named ‘InitDecode’; did you mean ‘Decode’?
   52 |       video_decoder_->InitDecode(&codec_settings, /*number_of_cores=*/1),
      |                       ^~~~~~~~~~
~/opensource/webrtc-checkout/src/rtc_base/checks.h:397:60: note: in definition of macro ‘RTC_CHECK_OP’
  397 |             ::rtc::webrtc_checks_impl::LogStreamer<>() << (val1) << (val2)
      |                                                            ^~~~
~/MyProjects/mediasoup-broadcaster-demo/deps/libwebrtc/test/testsupport/ivf_video_frame_generator.cc:51:3: note: in expansion of macro ‘RTC_CHECK_EQ’
   51 |   RTC_CHECK_EQ(
      |   ^~~~~~~~~~~~
make[2]: *** [libwebrtc/CMakeFiles/webrtc_broadcaster.dir/build.make:167:libwebrtc/CMakeFiles/webrtc_broadcaster.dir/test/testsupport/ivf_video_frame_generator.cc.o] Error 1
make[2]: Leaving directory '~/MyProjects/mediasoup-broadcaster-demo/build'

The error above comes from the file mediasoup-broadcaster-demo/deps/libwebrtc/test/testsupport/ivf_video_frame_generator.cc; here I crudely comment out the offending code:

mediasoup-broadcaster-demo$ git diff 
diff --git a/deps/libwebrtc/test/testsupport/ivf_video_frame_generator.cc b/deps/libwebrtc/test/testsupport/ivf_video_frame_generator.cc
index e5c4c5f..788467a 100644
--- a/deps/libwebrtc/test/testsupport/ivf_video_frame_generator.cc
+++ b/deps/libwebrtc/test/testsupport/ivf_video_frame_generator.cc
@@ -45,12 +45,12 @@ IvfVideoFrameGenerator::IvfVideoFrameGenerator(const std::string& file_name)
   // Set buffer pool size to max value to ensure that if users of generator,
   // ex. test frameworks, will retain frames for quite a long time, decoder
   // won't crash with buffers pool overflow error.
-  codec_settings.buffer_pool_size = std::numeric_limits<int>::max();
-  RTC_CHECK_EQ(video_decoder_->RegisterDecodeCompleteCallback(&callback_),
-               WEBRTC_VIDEO_CODEC_OK);
-  RTC_CHECK_EQ(
-      video_decoder_->InitDecode(&codec_settings, /*number_of_cores=*/1),
-      WEBRTC_VIDEO_CODEC_OK);
+//  codec_settings.buffer_pool_size = std::numeric_limits<int>::max();
+//  RTC_CHECK_EQ(video_decoder_->RegisterDecodeCompleteCallback(&callback_),
+//               WEBRTC_VIDEO_CODEC_OK);
+//  RTC_CHECK_EQ(
+//      video_decoder_->InitDecode(&codec_settings, /*number_of_cores=*/1),
+//      WEBRTC_VIDEO_CODEC_OK);
 }
 IvfVideoFrameGenerator::~IvfVideoFrameGenerator() {
   MutexLock lock(&lock_);

After making the code change above and recompiling, the following link errors appeared:

[ 86%] Linking CXX executable broadcaster
/usr/bin/ld: ~/opensource/webrtc-checkout/src/out/m96/obj/libwebrtc.a(audio_device_alsa_linux.o): in function `webrtc::AudioDeviceLinuxALSA::Init()':
~/opensource/webrtc-checkout/src/out/m96/../../modules/audio_device/linux/audio_device_alsa_linux.cc:158: undefined reference to `XOpenDisplay'
/usr/bin/ld: ~/opensource/webrtc-checkout/src/out/m96/obj/libwebrtc.a(audio_device_alsa_linux.o): in function `webrtc::AudioDeviceLinuxALSA::Terminate()':
~/opensource/webrtc-checkout/src/out/m96/../../modules/audio_device/linux/audio_device_alsa_linux.cc:189: undefined reference to `XCloseDisplay'
/usr/bin/ld: ~/opensource/webrtc-checkout/src/out/m96/obj/libwebrtc.a(audio_device_alsa_linux.o): in function `webrtc::AudioDeviceLinuxALSA::KeyPressed() const':
~/opensource/webrtc-checkout/src/out/m96/../../modules/audio_device/linux/audio_device_alsa_linux.cc:1624: undefined reference to `XQueryKeymap'
 . . . . . .
/usr/bin/ld: ~/opensource/webrtc-checkout/src/out/m96/../../base/message_loop/message_pump_glib.cc:181: undefined reference to `g_main_context_default'
/usr/bin/ld: ~/opensource/webrtc-checkout/src/out/m96/../../base/message_loop/message_pump_glib.cc:183: undefined reference to `g_main_context_new'
/usr/bin/ld: ~/opensource/webrtc-checkout/src/out/m96/../../base/message_loop/message_pump_glib.cc:184: undefined reference to `g_main_context_push_thread_default'
/usr/bin/ld: ~/opensource/webrtc-checkout/src/out/m96/../../base/message_loop/message_pump_glib.cc:199: undefined reference to `g_source_new'
/usr/bin/ld: ~/opensource/webrtc-checkout/src/out/m96/../../base/message_loop/message_pump_glib.cc:201: undefined reference to `g_source_add_poll'
/usr/bin/ld: ~/opensource/webrtc-checkout/src/out/m96/../../base/message_loop/message_pump_glib.cc:202: undefined reference to `g_source_set_priority'
/usr/bin/ld: ~/opensource/webrtc-checkout/src/out/m96/../../base/message_loop/message_pump_glib.cc:204: undefined reference to `g_source_set_can_recurse'
/usr/bin/ld: ~/opensource/webrtc-checkout/src/out/m96/../../base/message_loop/message_pump_glib.cc:205: undefined reference to `g_source_attach'

These link errors occur because the Linux build of the webrtc code depends on the X11 and glib-2.0 libraries, but the CMakeLists.txt of mediasoup-broadcaster-demo does not declare dependencies on them. Simply add the two libraries:

mediasoup-broadcaster-demo$ git diff 
diff --git a/CMakeLists.txt b/CMakeLists.txt
index b8c40a4..d06c499 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -90,5 +90,7 @@ target_link_libraries(${PROJECT_NAME} PUBLIC
        cpr
        mediasoupclient
        webrtc_broadcaster
+       X11
+       glib-2.0
 )

Rebuilding afterwards then completes successfully:

mediasoup-broadcaster-demo$ make -C build/

Running mediasoup-broadcaster-demo

The mediasoup-broadcaster-demo build produces the executable build/broadcaster. Some of its runtime settings are passed through environment variables: SERVER_URL must point at the server address, i.e. the address of the server service of mediasoup-demo above, and ROOM_ID gives the room ID. In addition, you first need to open the example web application of mediasoup-demo in a browser and join the same room, e.g. by opening https://192.168.217.129:3000/?roomId=broadcaster; otherwise build/broadcaster will fail and exit immediately because it cannot find the room.

Run build/broadcaster:

mediasoup-broadcaster-demo$ export SERVER_URL=https://192.168.217.129:4443
mediasoup-broadcaster-demo$ export ROOM_ID=broadcaster
mediasoup-broadcaster-demo$ build/broadcaster
[DEBUG] mediasoupclient::Initialize() | mediasoupclient v3.3.0
(field_trial.cc:140): Setting field trial string:WebRTC-SupportVP9SVC/EnabledByFlag_3SL3TL/
[INFO] welcome to mediasoup broadcaster app!

[INFO] verifying that room 'broadcaster' exists...
[ERROR] unable to retrieve room info [status code:0, body:""]

I deployed mediasoup-demo in a local environment, with no domain name and no proper PKI certificate, only a locally generated self-signed one. So build/broadcaster exited right after starting because SSL certificate verification failed. Turn off SSL verification for build/broadcaster in the code:

diff --git a/src/main.cpp b/src/main.cpp
index 21f8bcc..f84cb6f 100644
--- a/src/main.cpp
+++ b/src/main.cpp
@@ -57,7 +57,7 @@ int main(int /*argc*/, char* /*argv*/[])
        if (envUseSimulcast && std::string(envUseSimulcast) == "false")
                useSimulcast = false;
 
-       bool verifySsl = true;
+       bool verifySsl = false;
        if (envVerifySsl && std::string(envVerifySsl) == "false")
                verifySsl = false;

Running it again, the browser window of the mediasoup-demo example web application now shows the video sent by build/broadcaster:

(screenshot: the example web app receiving the broadcaster's stream)

The colorful squares in the middle of the window are the frames sent by build/broadcaster.

The audio and video sources of mediasoup-broadcaster-demo

In WebRTC's conceptual model, a Track plugs an audio or video source into the whole capture-process-encode-send pipeline. In the Broadcaster::CreateSendTransport(bool enableAudio, bool useSimulcast) function of mediasoup-broadcaster-demo/src/Broadcaster.cpp you can see the following code:

  ///////////////////////// Create Audio Producer //////////////////////////

  if (enableAudio && this->device.CanProduce("audio")) {
    auto audioTrack = createAudioTrack(std::to_string(rtc::CreateRandomId()));

    /* clang-format off */
        json codecOptions = {
            { "opusStereo", true },
            { "opusDtx",        true }
        };
    /* clang-format on */

    this->sendTransport->Produce(this, audioTrack, nullptr, &codecOptions,
                                 nullptr);
  } else {
    std::cerr << "[WARN] cannot produce audio" << std::endl;
  }

  ///////////////////////// Create Video Producer //////////////////////////

  if (this->device.CanProduce("video")) {
    auto videoTrack =
        createSquaresVideoTrack(std::to_string(rtc::CreateRandomId()));

    if (useSimulcast) {
      std::vector<webrtc::RtpEncodingParameters> encodings;
      encodings.emplace_back(webrtc::RtpEncodingParameters());
      encodings.emplace_back(webrtc::RtpEncodingParameters());
      encodings.emplace_back(webrtc::RtpEncodingParameters());

      this->sendTransport->Produce(this, videoTrack, &encodings, nullptr,
                                   nullptr);
    } else {
      this->sendTransport->Produce(this, videoTrack, nullptr, nullptr, nullptr);
    }
  } else {
    std::cerr << "[WARN] cannot produce video" << std::endl;

    return;
  }

Here createAudioTrack() and createSquaresVideoTrack() are called to create the audio track and the video track, respectively. In mediasoup-broadcaster-demo/src/MediaStreamTrackFactory.cpp the two functions are implemented as follows:

// Audio track creation.
rtc::scoped_refptr<webrtc::AudioTrackInterface> createAudioTrack(
    const std::string& label) {
  if (!factory)
    createFactory();

  cricket::AudioOptions options;
  options.highpass_filter = false;

  rtc::scoped_refptr<webrtc::AudioSourceInterface> source =
      factory->CreateAudioSource(options);

  return factory->CreateAudioTrack(label, source);
}

// Video track creation.
rtc::scoped_refptr<webrtc::VideoTrackInterface> createVideoTrack(
    const std::string& /*label*/) {
  if (!factory)
    createFactory();

  auto* videoTrackSource =
      new rtc::RefCountedObject<webrtc::FakePeriodicVideoTrackSource>(
          false /* remote */);

  return factory->CreateVideoTrack(rtc::CreateRandomUuid(), videoTrackSource);
}

rtc::scoped_refptr<webrtc::VideoTrackInterface> createSquaresVideoTrack(
    const std::string& /*label*/) {
  if (!factory)
    createFactory();

  std::cout << "[INFO] getting frame generator" << std::endl;
  auto* videoTrackSource =
      new rtc::RefCountedObject<webrtc::FrameGeneratorCapturerVideoTrackSource>(
          webrtc::FrameGeneratorCapturerVideoTrackSource::Config(),
          webrtc::Clock::GetRealTimeClock(), false);
  videoTrackSource->Start();

  std::cout << "[INFO] creating video track" << std::endl;
  return factory->CreateVideoTrack(rtc::CreateRandomUuid(), videoTrackSource);
}

For createAudioTrack(), a webrtc::PeerConnectionFactoryInterface is used to create an audio source representing the microphone. createSquaresVideoTrack(), on the other hand, creates a video source of type webrtc::FrameGeneratorCapturerVideoTrackSource; from the code of this WebRTC component we can see that it does not touch the camera but instead constructs colorful squares in memory. That is why the receiver saw those colorful squares when we ran mediasoup-broadcaster-demo. mediasoup-broadcaster-demo also provides another interface for creating a video track, createVideoTrack(); the video source it creates is a webrtc::FakePeriodicVideoTrackSource, which likewise does not operate any capture device but constructs frames in memory (a sketch of switching to it is shown below).
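
As a quick experiment, the broadcaster can be pointed at that second source instead. Below is a minimal sketch (my own modification, not code shipped with the demo) of how the video-producer branch of Broadcaster::CreateSendTransport() might call createVideoTrack(); the rest of the function stays as in the original:

  ///////////////////////// Create Video Producer //////////////////////////

  if (this->device.CanProduce("video")) {
    // Sketch: use the FakePeriodicVideoTrackSource-backed track instead of the
    // square-generator one. createVideoTrack() is declared alongside
    // createSquaresVideoTrack() in MediaStreamTrackFactory.
    auto videoTrack = createVideoTrack(std::to_string(rtc::CreateRandomId()));

    // Keep the simple non-simulcast path.
    this->sendTransport->Produce(this, videoTrack, nullptr, nullptr, nullptr);
  } else {
    std::cerr << "[WARN] cannot produce video" << std::endl;
  }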

Both webrtc::FrameGeneratorCapturerVideoTrackSource and webrtc::FakePeriodicVideoTrackSource are part of the test infrastructure that ships with webrtc; they generate video frames for testing.

Let's look at webrtc::FrameGeneratorCapturerVideoTrackSource first. The file webrtc/api/test/frame_generator_interface.h defines FrameGeneratorInterface, an interface for generating video frame data:

namespace webrtc {
namespace test {

class FrameGeneratorInterface {
 public:
  struct VideoFrameData {
    VideoFrameData(rtc::scoped_refptr<VideoFrameBuffer> buffer,
                   absl::optional<VideoFrame::UpdateRect> update_rect)
        : buffer(std::move(buffer)), update_rect(update_rect) {}

    rtc::scoped_refptr<VideoFrameBuffer> buffer;
    absl::optional<VideoFrame::UpdateRect> update_rect;
  };

  enum class OutputType { kI420, kI420A, kI010, kNV12 };
  static const char* OutputTypeToString(OutputType type);

  virtual ~FrameGeneratorInterface() = default;

  // Returns VideoFrameBuffer and area where most of update was done to set them
  // on the VideoFrame object.
  virtual VideoFrameData NextFrame() = 0;

  // Change the capture resolution.
  virtual void ChangeResolution(size_t width, size_t height) = 0;
};

}  // namespace test
}  // namespace webrtc
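
To make the shape of this interface concrete, here is a small sketch (assumed code, not part of WebRTC) of an implementation that simply returns solid black I420 frames; real generators such as SquareGenerator additionally draw content into the buffer before returning it:

#include "api/test/frame_generator_interface.h"
#include "api/video/i420_buffer.h"

namespace demo {

class BlackFrameGenerator : public webrtc::test::FrameGeneratorInterface {
 public:
  BlackFrameGenerator(int width, int height) : width_(width), height_(height) {}

  // Return the next frame: a freshly allocated I420 buffer filled with black.
  VideoFrameData NextFrame() override {
    rtc::scoped_refptr<webrtc::I420Buffer> buffer =
        webrtc::I420Buffer::Create(width_, height_);
    webrtc::I420Buffer::SetBlack(buffer.get());
    return VideoFrameData(buffer, absl::nullopt);
  }

  // The capturer may ask for a different capture resolution at any time.
  void ChangeResolution(size_t width, size_t height) override {
    width_ = static_cast<int>(width);
    height_ = static_cast<int>(height);
  }

 private:
  int width_;
  int height_;
};

}  // namespace demo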

The header webrtc/api/test/create_frame_generator.h declares a number of functions that create FrameGeneratorInterface objects:

// Creates a frame generator that produces frames with small squares that
// move randomly towards the lower right corner.
// |type| has the default value FrameGeneratorInterface::OutputType::I420.
// |num_squares| has the default value 10.
std::unique_ptr<FrameGeneratorInterface> CreateSquareFrameGenerator(
    int width,
    int height,
    absl::optional<FrameGeneratorInterface::OutputType> type,
    absl::optional<int> num_squares);

// Creates a frame generator that repeatedly plays a set of yuv files.
// The frame_repeat_count determines how many times each frame is shown,
// with 1 = show each frame once, etc.
std::unique_ptr<FrameGeneratorInterface> CreateFromYuvFileFrameGenerator(
    std::vector<std::string> filenames,
    size_t width,
    size_t height,
    int frame_repeat_count);

// Creates a frame generator that repeatedly plays an ivf file.
std::unique_ptr<FrameGeneratorInterface> CreateFromIvfFileFrameGenerator(
    std::string filename);

// Creates a frame generator which takes a set of yuv files (wrapping a
// frame generator created by CreateFromYuvFile() above), but outputs frames
// that have been cropped to specified resolution: source_width/source_height
// is the size of the source images, target_width/target_height is the size of
// the cropped output. For each source image read, the cropped viewport will
// be scrolled top to bottom/left to right for scroll_tim_ms milliseconds.
// After that the image will stay in place for pause_time_ms milliseconds,
// and then this will be repeated with the next file from the input set.
std::unique_ptr<FrameGeneratorInterface>
CreateScrollingInputFromYuvFilesFrameGenerator(
    Clock* clock,
    std::vector<std::string> filenames,
    size_t source_width,
    size_t source_height,
    size_t target_width,
    size_t target_height,
    int64_t scroll_time_ms,
    int64_t pause_time_ms);

// Creates a frame generator that produces randomly generated slides. It fills
// the frames with randomly sized and colored squares.
// |frame_repeat_count| determines how many times each slide is shown.
std::unique_ptr<FrameGeneratorInterface>
CreateSlideFrameGenerator(int width, int height, int frame_repeat_count);

The webrtc::FrameGeneratorCapturerVideoTrackSource class used by mediasoup-broadcaster-demo relies on the CreateSquareFrameGenerator() function declared above. The file webrtc/api/test/create_frame_generator.cc contains the definitions of these functions:

#include "test/frame_generator.h"
#include "test/testsupport/ivf_video_frame_generator.h"

namespace webrtc {
namespace test {

std::unique_ptr<FrameGeneratorInterface> CreateSquareFrameGenerator(
    int width,
    int height,
    absl::optional<FrameGeneratorInterface::OutputType> type,
    absl::optional<int> num_squares) {
  return std::make_unique<SquareGenerator>(
      width, height, type.value_or(FrameGeneratorInterface::OutputType::kI420),
      num_squares.value_or(10));
}

std::unique_ptr<FrameGeneratorInterface> CreateFromYuvFileFrameGenerator(
    std::vector<std::string> filenames,
    size_t width,
    size_t height,
    int frame_repeat_count) {
  RTC_DCHECK(!filenames.empty());
  std::vector<FILE*> files;
  for (const std::string& filename : filenames) {
    FILE* file = fopen(filename.c_str(), "rb");
    RTC_DCHECK(file != nullptr) << "Failed to open: '" << filename << "'\n";
    files.push_back(file);
  }

  return std::make_unique<YuvFileGenerator>(files, width, height,
                                            frame_repeat_count);
}

std::unique_ptr<FrameGeneratorInterface> CreateFromIvfFileFrameGenerator(
    std::string filename) {
  return std::make_unique<IvfVideoFrameGenerator>(std::move(filename));
}

std::unique_ptr<FrameGeneratorInterface>
CreateScrollingInputFromYuvFilesFrameGenerator(
    Clock* clock,
    std::vector<std::string> filenames,
    size_t source_width,
    size_t source_height,
    size_t target_width,
    size_t target_height,
    int64_t scroll_time_ms,
    int64_t pause_time_ms) {
  RTC_DCHECK(!filenames.empty());
  std::vector<FILE*> files;
  for (const std::string& filename : filenames) {
    FILE* file = fopen(filename.c_str(), "rb");
    RTC_DCHECK(file != nullptr);
    files.push_back(file);
  }

  return std::make_unique<ScrollingImageFrameGenerator>(
      clock, files, source_width, source_height, target_width, target_height,
      scroll_time_ms, pause_time_ms);
}

std::unique_ptr<FrameGeneratorInterface>
CreateSlideFrameGenerator(int width, int height, int frame_repeat_count) {
  return std::make_unique<SlideGenerator>(width, height, frame_repeat_count);
}

}  // namespace test
}  // namespace webrtc

The concrete classes implementing FrameGeneratorInterface, whose objects the functions in webrtc/api/test/create_frame_generator.cc create, are declared in webrtc/test/frame_generator.h:

// SquareGenerator is a FrameGenerator that draws a given amount of randomly
// sized and colored squares. Between each new generated frame, the squares
// are moved slightly towards the lower right corner.
class SquareGenerator : public FrameGeneratorInterface {
 public:
  SquareGenerator(int width, int height, OutputType type, int num_squares);

  void ChangeResolution(size_t width, size_t height) override;
  VideoFrameData NextFrame() override;

 private:
  rtc::scoped_refptr<I420Buffer> CreateI420Buffer(int width, int height);

  class Square {
   public:
    Square(int width, int height, int seed);

    void Draw(const rtc::scoped_refptr<VideoFrameBuffer>& frame_buffer);

   private:
    Random random_generator_;
    int x_;
    int y_;
    const int length_;
    const uint8_t yuv_y_;
    const uint8_t yuv_u_;
    const uint8_t yuv_v_;
    const uint8_t yuv_a_;
  };

  Mutex mutex_;
  const OutputType type_;
  int width_ RTC_GUARDED_BY(&mutex_);
  int height_ RTC_GUARDED_BY(&mutex_);
  std::vector<std::unique_ptr<Square>> squares_ RTC_GUARDED_BY(&mutex_);
};

class YuvFileGenerator : public FrameGeneratorInterface {
 public:
  YuvFileGenerator(std::vector<FILE*> files,
                   size_t width,
                   size_t height,
                   int frame_repeat_count);

  ~YuvFileGenerator();

  VideoFrameData NextFrame() override;
  void ChangeResolution(size_t width, size_t height) override {
    RTC_NOTREACHED();
  }

 private:
  // Returns true if the new frame was loaded.
  // False only in case of a single file with a single frame in it.
  bool ReadNextFrame();

  size_t file_index_;
  size_t frame_index_;
  const std::vector<FILE*> files_;
  const size_t width_;
  const size_t height_;
  const size_t frame_size_;
  const std::unique_ptr<uint8_t[]> frame_buffer_;
  const int frame_display_count_;
  int current_display_count_;
  rtc::scoped_refptr<I420Buffer> last_read_buffer_;
};

// SlideGenerator works similarly to YuvFileGenerator but it fills the frames
// with randomly sized and colored squares instead of reading their content
// from files.
class SlideGenerator : public FrameGeneratorInterface {
 public:
  SlideGenerator(int width, int height, int frame_repeat_count);

  VideoFrameData NextFrame() override;
  void ChangeResolution(size_t width, size_t height) override {
    RTC_NOTREACHED();
  }

 private:
  // Generates some randomly sized and colored squares scattered
  // over the frame.
  void GenerateNewFrame();

  const int width_;
  const int height_;
  const int frame_display_count_;
  int current_display_count_;
  Random random_generator_;
  rtc::scoped_refptr<I420Buffer> buffer_;
};

class ScrollingImageFrameGenerator : public FrameGeneratorInterface {
 public:
  ScrollingImageFrameGenerator(Clock* clock,
                               const std::vector<FILE*>& files,
                               size_t source_width,
                               size_t source_height,
                               size_t target_width,
                               size_t target_height,
                               int64_t scroll_time_ms,
                               int64_t pause_time_ms);
  ~ScrollingImageFrameGenerator() override = default;

  VideoFrameData NextFrame() override;
  void ChangeResolution(size_t width, size_t height) override {
    RTC_NOTREACHED();
  }

 private:
  void UpdateSourceFrame(size_t frame_num);
  void CropSourceToScrolledImage(double scroll_factor);

  Clock* const clock_;
  const int64_t start_time_;
  const int64_t scroll_time_;
  const int64_t pause_time_;
  const size_t num_frames_;
  const int target_width_;
  const int target_height_;

  size_t current_frame_num_;
  bool prev_frame_not_scrolled_;
  VideoFrameData current_source_frame_;
  VideoFrameData current_frame_;
  YuvFileGenerator file_generator_;
};

}  // namespace test
}  // namespace webrtc

These classes are defined in webrtc/test/frame_generator.cc.
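
As a rough illustration of how these generators are driven, the following sketch (assumed test code, not from the demo) creates a square-frame generator directly and pulls a few frames from it:

#include <memory>

#include "api/test/create_frame_generator.h"

void PullSomeSquareFrames() {
  // 640x480, default output type (kI420) and default number of squares (10).
  std::unique_ptr<webrtc::test::FrameGeneratorInterface> generator =
      webrtc::test::CreateSquareFrameGenerator(640, 480, absl::nullopt,
                                               absl::nullopt);

  for (int i = 0; i < 10; ++i) {
    // Each call returns a VideoFrameBuffer plus the region that was updated.
    webrtc::test::FrameGeneratorInterface::VideoFrameData frame =
        generator->NextFrame();
    // frame.buffer now holds one generated I420 frame; a capturer such as
    // FrameGeneratorCapturer wraps it in a VideoFrame and pushes it to sinks.
  }
}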

The FrameGeneratorCapturerVideoTrackSource used by mediasoup-broadcaster-demo is defined in webrtc/pc/test/frame_generator_capturer_video_track_source.h:

namespace webrtc {

// Implements a VideoTrackSourceInterface to be used for creating VideoTracks.
// The video source is generated using a FrameGeneratorCapturer, specifically
// a SquareGenerator that generates frames with randomly sized and colored
// squares.
class FrameGeneratorCapturerVideoTrackSource : public VideoTrackSource {
 public:
  static const int kDefaultFramesPerSecond = 30;
  static const int kDefaultWidth = 640;
  static const int kDefaultHeight = 480;
  static const int kNumSquaresGenerated = 50;

  struct Config {
    int frames_per_second = kDefaultFramesPerSecond;
    int width = kDefaultWidth;
    int height = kDefaultHeight;
    int num_squares_generated = 50;
  };

  FrameGeneratorCapturerVideoTrackSource(Config config,
                                         Clock* clock,
                                         bool is_screencast)
      : VideoTrackSource(false /* remote */),
        task_queue_factory_(CreateDefaultTaskQueueFactory()),
        is_screencast_(is_screencast) {
    video_capturer_ = std::make_unique<test::FrameGeneratorCapturer>(
        clock,
        test::CreateSquareFrameGenerator(config.width, config.height,
                                         absl::nullopt,
                                         config.num_squares_generated),
        config.frames_per_second, *task_queue_factory_);
    video_capturer_->Init();
  }

  FrameGeneratorCapturerVideoTrackSource(
      std::unique_ptr<test::FrameGeneratorCapturer> video_capturer,
      bool is_screencast)
      : VideoTrackSource(false /* remote */),
        video_capturer_(std::move(video_capturer)),
        is_screencast_(is_screencast) {}

  ~FrameGeneratorCapturerVideoTrackSource() = default;

  void Start() { SetState(kLive); }

  void Stop() { SetState(kMuted); }

  bool is_screencast() const override { return is_screencast_; }

 protected:
  rtc::VideoSourceInterface<VideoFrame>* source() override {
    return video_capturer_.get();
  }

 private:
  const std::unique_ptr<TaskQueueFactory> task_queue_factory_;
  std::unique_ptr<test::FrameGeneratorCapturer> video_capturer_;
  const bool is_screencast_;
};

}  // namespace webrtc

FrameGeneratorCapturerVideoTrackSource wraps a FrameGeneratorCapturer, which in turn wraps a FrameGeneratorInterface. FrameGeneratorCapturer is declared in webrtc/test/frame_generator_capturer.h and defined in webrtc/test/frame_generator_capturer.cc. The relationship between these components is shown below:

(figure: FrameGeneratorCapturerVideoTrackSource → FrameGeneratorCapturer → FrameGeneratorInterface)
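
createSquaresVideoTrack() passes a default-constructed Config; as a sketch (my own example, assuming the same global factory used in MediaStreamTrackFactory.cpp), the generated stream, e.g. resolution, frame rate and number of squares, could be customized before creating the track:

// Sketch: a 1280x720, 15 fps square-generator source instead of the defaults.
webrtc::FrameGeneratorCapturerVideoTrackSource::Config config;
config.width = 1280;
config.height = 720;
config.frames_per_second = 15;
config.num_squares_generated = 20;

auto* videoTrackSource =
    new rtc::RefCountedObject<webrtc::FrameGeneratorCapturerVideoTrackSource>(
        config, webrtc::Clock::GetRealTimeClock(), false /* is_screencast */);
videoTrackSource->Start();

// As in createSquaresVideoTrack(), the source is then handed to the
// PeerConnectionFactory to create the actual track.
auto videoTrack =
    factory->CreateVideoTrack(rtc::CreateRandomUuid(), videoTrackSource);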

Next, let's look at the other piece of test infrastructure used by mediasoup-broadcaster-demo, FakePeriodicVideoTrackSource, which is defined in webrtc/pc/test/fake_periodic_video_track_source.h:

namespace webrtc {

// A VideoTrackSource generating frames with configured size and frame interval.
class FakePeriodicVideoTrackSource : public VideoTrackSource {
 public:
  explicit FakePeriodicVideoTrackSource(bool remote)
      : FakePeriodicVideoTrackSource(FakePeriodicVideoSource::Config(),
                                     remote) {}

  FakePeriodicVideoTrackSource(FakePeriodicVideoSource::Config config,
                               bool remote)
      : VideoTrackSource(remote), source_(config) {}

  ~FakePeriodicVideoTrackSource() = default;

  const FakePeriodicVideoSource& fake_periodic_source() const {
    return source_;
  }

 protected:
  rtc::VideoSourceInterface<VideoFrame>* source() override { return &source_; }

 private:
  FakePeriodicVideoSource source_;
};

}  // namespace webrtc

FakePeriodicVideoTrackSource encapsulates a FakePeriodicVideoSource, which in turn is defined in webrtc/pc/test/fake_periodic_video_source.h:

namespace webrtc {

class FakePeriodicVideoSource final
    : public rtc::VideoSourceInterface<VideoFrame> {
 public:
  static constexpr int kDefaultFrameIntervalMs = 33;
  static constexpr int kDefaultWidth = 640;
  static constexpr int kDefaultHeight = 480;

  struct Config {
    int width = kDefaultWidth;
    int height = kDefaultHeight;
    int frame_interval_ms = kDefaultFrameIntervalMs;
    VideoRotation rotation = kVideoRotation_0;
    int64_t timestamp_offset_ms = 0;
  };

  FakePeriodicVideoSource() : FakePeriodicVideoSource(Config()) {}
  explicit FakePeriodicVideoSource(Config config)
      : frame_source_(
            config.width,
            config.height,
            config.frame_interval_ms * rtc::kNumMicrosecsPerMillisec,
            config.timestamp_offset_ms * rtc::kNumMicrosecsPerMillisec),
        task_queue_(std::make_unique<TaskQueueForTest>(
            "FakePeriodicVideoTrackSource")) {
    thread_checker_.Detach();
    frame_source_.SetRotation(config.rotation);

    TimeDelta frame_interval = TimeDelta::Millis(config.frame_interval_ms);
    RepeatingTaskHandle::Start(task_queue_->Get(), [this, frame_interval] {
      if (broadcaster_.wants().rotation_applied) {
        broadcaster_.OnFrame(frame_source_.GetFrameRotationApplied());
      } else {
        broadcaster_.OnFrame(frame_source_.GetFrame());
      }
      return frame_interval;
    });
  }

  rtc::VideoSinkWants wants() const {
    MutexLock lock(&mutex_);
    return wants_;
  }

  void RemoveSink(rtc::VideoSinkInterface<VideoFrame>* sink) override {
    RTC_DCHECK(thread_checker_.IsCurrent());
    broadcaster_.RemoveSink(sink);
  }

  void AddOrUpdateSink(rtc::VideoSinkInterface<VideoFrame>* sink,
                       const rtc::VideoSinkWants& wants) override {
    RTC_DCHECK(thread_checker_.IsCurrent());
    {
      MutexLock lock(&mutex_);
      wants_ = wants;
    }
    broadcaster_.AddOrUpdateSink(sink, wants);
  }

  void Stop() {
    RTC_DCHECK(task_queue_);
    task_queue_.reset();
  }

 private:
  rtc::ThreadChecker thread_checker_;

  rtc::VideoBroadcaster broadcaster_;
  cricket::FakeFrameSource frame_source_;
  mutable Mutex mutex_;
  rtc::VideoSinkWants wants_ RTC_GUARDED_BY(&mutex_);

  std::unique_ptr<TaskQueueForTest> task_queue_;
};

FakePeriodicVideoSource is a wrapper around FakeFrameSource, which is defined in webrtc/media/base/fake_frame_source.h:

namespace cricket {

class FakeFrameSource {
 public:
  FakeFrameSource(int width,
                  int height,
                  int interval_us,
                  int64_t timestamp_offset_us);
  FakeFrameSource(int width, int height, int interval_us);

  webrtc::VideoRotation GetRotation() const;
  void SetRotation(webrtc::VideoRotation rotation);

  webrtc::VideoFrame GetFrame();
  webrtc::VideoFrame GetFrameRotationApplied();

  // Override configuration.
  webrtc::VideoFrame GetFrame(int width,
                              int height,
                              webrtc::VideoRotation rotation,
                              int interval_us);

 private:
  const int width_;
  const int height_;
  const int interval_us_;

  webrtc::VideoRotation rotation_ = webrtc::kVideoRotation_0;
  int64_t next_timestamp_us_;
};

}  // namespace cricket

and implemented in webrtc/media/base/fake_frame_source.cc. FakeFrameSource creates video frames in memory.
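
To see what FakeFrameSource does in isolation, here is a small sketch (assumed test code): frames are simply produced on demand from memory, each with a monotonically advancing timestamp:

#include "media/base/fake_frame_source.h"

void PullFakeFrames() {
  // 640x480 frames at roughly 30 fps (the interval is given in microseconds).
  cricket::FakeFrameSource source(640, 480, /*interval_us=*/33000);

  for (int i = 0; i < 3; ++i) {
    webrtc::VideoFrame frame = source.GetFrame();
    // frame wraps an in-memory buffer; FakePeriodicVideoSource pushes such
    // frames to its sinks from a repeating task on its task queue.
  }
}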

The relationship between these components is roughly as follows:

(figure: FakePeriodicVideoTrackSource → FakePeriodicVideoSource → FakeFrameSource)

For more about mediasoup-broadcaster-demo, see the mediasoup-broadcaster-demo repository. I have also forked the repo and make changes there to address the problems I run into.

References

Npm can't find module "semver" error in Ubuntu 19.04
mediasoup-demo 实践
https://github.com/versatica/mediasoup-demo/blob/v3/README.md
https://mediasoup.discourse.group/t/mediasouperror-port-bind-failed-due-to-address-not-available-udp-1-2-3-4-attempt-1/32/6
https://github.com/mkhahani/mediasoup-sample-app/issues/1
https://stackoverflow.com/questions/7724569/debug-vs-release-in-cmake
