Note: this article is fairly long and will be updated with new material from time to time.
As can be seen from the PeerConnectionFactory class,
webrtc has at least three very important threads (there are other threads as well):
rtc::Thread* network_thread_;
rtc::Thread* worker_thread_;
rtc::Thread* signaling_thread_;
// All methods with _n suffix must be called on network thread,
// methods with _w suffix on worker thread
// and methods with _s suffix on signaling thread.
// Network and worker threads may be the same thread.
I: Creating webrtc threads:
1:
In webrtc, many functions check which thread they are executing on, e.g.:
RTC_DCHECK(!configuration_thread_checker_.CalledOnValidThread());
The underlying check is simple: on Windows it essentially compares GetCurrentThreadId().
Put simply, if you create some objects on thread T1 and then create the rest on thread T2, such a check may fire, or the call may silently have no effect.
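For illustration, here is a minimal sketch of how such a check is usually wired up with rtc::ThreadChecker (header paths follow the webrtc58 layout quoted in this article; Foo and DoWork are made-up names):

#include "webrtc/base/checks.h"          // RTC_DCHECK
#include "webrtc/base/thread_checker.h"  // rtc::ThreadChecker

class Foo {
 public:
  void DoWork() {
    // Fires (in debug builds) if DoWork() is called on a different thread
    // than the one that constructed this Foo object.
    RTC_DCHECK(thread_checker_.CalledOnValidThread());
    // ... actual work ...
  }

 private:
  rtc::ThreadChecker thread_checker_;  // binds to the constructing thread
};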
For example, if the PeerConnection is created on the main thread but the SDP is then created from a worker thread you spawned yourself, OnSuccess will never be called.
A typical problem looks like this:
you build a UDP server with its own worker thread that keeps calling recv, and when data arrives you may want to create a MediaStream (or similar) right there;
but creating it directly on that thread fails, or has no effect.
So how should this be handled?
See FakeNetworkInterface for reference; the usual answer is to marshal the call back onto the proper thread, as sketched below.
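A minimal sketch of that marshalling, assuming signaling_thread is a pointer to the rtc::Thread the PeerConnection lives on (OnUdpPacket and CreateStreamFromPacket are made-up names):

#include "webrtc/base/thread.h"             // rtc::Thread
#include "webrtc/base/location.h"           // RTC_FROM_HERE
#include "webrtc/base/copyonwritebuffer.h"  // rtc::CopyOnWriteBuffer (path per this layout)

void CreateStreamFromPacket(const rtc::CopyOnWriteBuffer& packet);  // made-up helper that uses the PeerConnection API

// Called on the recv/worker thread; |signaling_thread| owns the PeerConnection.
void OnUdpPacket(rtc::Thread* signaling_thread, const rtc::CopyOnWriteBuffer& packet) {
  // Do not touch the PeerConnection directly here; hop to its thread first.
  signaling_thread->Invoke<void>(RTC_FROM_HERE, [&packet]() {
    // Now running on the signaling thread, so creating streams etc. is safe.
    CreateStreamFromPacket(packet);
  });
}

Invoke blocks the receive thread until the work is done; if that is undesirable, the Post/PostTask interfaces described later deliver the same thread hop asynchronously.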
rtc::Thread in detail
webrtc58\src\webrtc\base\thread.h
class LOCKABLE Thread : public MessageQueue
class MessageQueue {
void MessageQueue::Post(const Location& posted_from,
MessageHandler* phandler,
uint32_t id,
MessageData* pdata,
bool time_sensitive) {
if (IsQuitting())
return;
// Keep thread safe
// Add the message to the end of the queue
// Signal for the multiplexer to return
{
CritScope cs(&crit_);
Message msg;
msg.posted_from = posted_from;
msg.phandler = phandler;
msg.message_id = id;
msg.pdata = pdata;
if (time_sensitive) {
msg.ts_sensitive = TimeMillis() + kMaxMsgLatency;
}
msgq_.push_back(msg);
}
WakeUpSocketServer();
}
void MessageQueue::Dispatch(Message *pmsg) {
TRACE_EVENT2("webrtc", "MessageQueue::Dispatch", "src_file_and_line",
pmsg->posted_from.file_and_line(), "src_func",
pmsg->posted_from.function_name());
int64_t start_time = TimeMillis();
pmsg->phandler->OnMessage(pmsg);
int64_t end_time = TimeMillis();
int64_t diff = TimeDiff(end_time, start_time);
if (diff >= kSlowDispatchLoggingThreshold) {
LOG(LS_INFO) << "Message took " << diff << "ms to dispatch. Posted from: "
<< pmsg->posted_from.ToString();
}
}
MessageList msgq_ GUARDED_BY(crit_);
...
}
From the MessageQueue above you can see that the Message type and the message list msgq_ are already implemented;
messages are dispatched via OnMessage to whatever class implements OnMessage.
class LOCKABLE Thread : public MessageQueue {
// Starts the execution of the thread.
bool Start(Runnable* runnable = nullptr); // you can supply your own thread routine
// Convenience method to invoke a functor on another thread. Caller must
// provide the |ReturnT| template argument, which cannot (easily) be deduced.
// Uses Send() internally, which blocks the current thread until execution
// is complete.
// Ex: bool result = thread.Invoke<bool>(RTC_FROM_HERE,
//     &MyFunctionReturningBool);
// NOTE: This function can only be called when synchronous calls are allowed.
// See ScopedDisallowBlockingCalls for details.
// The advantage: it blocks until the functor has finished running on the target thread.
template <class ReturnT, class FunctorT>
ReturnT Invoke(const Location& posted_from, const FunctorT& functor) {
FunctorMessageHandler<ReturnT, FunctorT> handler(functor);
InvokeInternal(posted_from, &handler);
return handler.MoveResult();
}
bool Thread::ProcessMessages(int cmsLoop) {
int64_t msEnd = (kForever == cmsLoop) ? 0 : TimeAfter(cmsLoop);
int cmsNext = cmsLoop;
while (true) {
#if defined(WEBRTC_MAC)
// see: http://developer.apple.com/library/mac/#documentation/Cocoa/Reference/Foundation/Classes/NSAutoreleasePool_Class/Reference/Reference.html
// Each thread is supposed to have an autorelease pool. Also for event loops
// like this, autorelease pool needs to be created and drained/released
// for each cycle.
ScopedAutoreleasePool pool;
#endif
Message msg;
if (!Get(&msg, cmsNext))
return !IsQuitting();
Dispatch(&msg); // this is where the message actually gets dispatched
if (cmsLoop != kForever) {
cmsNext = static_cast<int>(TimeUntil(msEnd));
if (cmsNext < 0)
return true;
}
}
}
...
}
2:
An example of using Message in webrtc:
class Test_RtcThread : public rtc::MessageHandler
{
public:
Test_RtcThread()
{
thread_ = rtc::Thread::Current(); // no need to call thread_->Start();
// Alternatively:
// thread_ = rtc::Thread::Create().release();
// thread_->Start(); // Note: Start() runs ProcessMessages(kForever) internally;
//                   // if a Runnable is passed to Start(), that message loop is not run.
}
~Test_RtcThread()
{
}
public:
virtual void OnMessage(rtc::Message* msg) override
{
rtc::TypedMessageData<rtc::CopyOnWriteBuffer>* msg_data =
static_cast<rtc::TypedMessageData<rtc::CopyOnWriteBuffer>*>(msg->pdata);
switch (msg->message_id)
{
case 1:
::MessageBeep(1);
break;
default:
break;
}
SAFE_DELETE(msg_data);
}
public:
void PostMessage_(int id)
{
rtc::CopyOnWriteBuffer packet(100);
thread_->Send(RTC_FROM_HERE, this, id, rtc::WrapMessageData(packet)); // Send() blocks; thread_->Post() would queue it asynchronously
}
private:
rtc::Thread* thread_;
};
Then just trigger the relevant handling inside OnMessage.
For more detailed examples, see thread_unittest.cc.
3: If all you need is to run a function on another thread, there is an even simpler way: Invoke
// Convenience method to invoke a functor on another thread. Caller must
// provide the |ReturnT| template argument, which cannot (easily) be deduced.
// Uses Send() internally, which blocks the current thread until execution
// is complete.
// Ex: bool result = thread.Invoke<bool>(RTC_FROM_HERE,
//     &MyFunctionReturningBool);
// NOTE: This function can only be called when synchronous calls are allowed.
// See ScopedDisallowBlockingCalls for details.
template <class ReturnT, class FunctorT>
ReturnT Invoke(const Location& posted_from, const FunctorT& functor) {
FunctorMessageHandler<ReturnT, FunctorT> handler(functor);
InvokeInternal(posted_from, &handler);
return handler.MoveResult();
}
In practice it is actually quite simple:
1: Creation:
It is recommended to create an rtc::Thread with one of these three functions:
static std::unique_ptr<Thread> CreateWithSocketServer();
static std::unique_ptr<Thread> Create();
static Thread* Current();
2:
The message loop:
There is no need to write your own thread loop: rtc::Thread already implements the message list and the loop internally; you either add messages to the list through its interfaces, or have a call executed directly without queueing, and they run automatically.
For convenience, rtc::Thread provides two handy functions.
After creating an rtc::Thread you can use:
(1)
Calling Send() directly is not recommended (webrtc intends it for internal use); in fact Invoke() is implemented on top of Send().
template <class ReturnT, class FunctorT>
ReturnT Invoke(const Location& posted_from, FunctorT&& functor)
(2)
template <class FunctorT>
void PostTask(const Location& posted_from, FunctorT&& functor)
Both functions take their arguments the same way; one executes synchronously, the other appends to the message list.
Usage:
1: Run a lambda directly:
worker_thread_->Invoke<RtpParameters>(RTC_FROM_HERE, [&] {
return media_channel_->GetRtpReceiveParameters(*ssrc_);
});
Or:
auto functor = [this, strategy_raw]() {
call_->SetBitrateAllocationStrategy( absl::WrapUnique<rtc::BitrateAllocationStrategy>(strategy_raw));
};
worker_thread->Invoke<void>(RTC_FROM_HERE, functor);
Or:
// TODO(eladalon): In C++14, this can be done with a lambda.
// Implemented with a struct that overloads operator(); I find this style clearer.
struct Functor {
bool operator()() {
return pc->StartRtcEventLog_w(std::move(output), output_period_ms);
}
PeerConnection* const pc;
std::unique_ptr<RtcEventLogOutput> output;
const int64_t output_period_ms;
};
return worker_thread()->Invoke<bool>( RTC_FROM_HERE,
Functor{this, std::move(output), output_period_ms} // aggregate-initializes the struct
);
2: Invoke a member function of a class:
worker_thread()->Invoke<void>( RTC_FROM_HERE, rtc::Bind(&PeerConnection::SetAudioRecording, this, recording));
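Putting the pieces together, here is a minimal sketch that creates a thread and runs work on it both synchronously (Invoke) and asynchronously (PostTask), based on the interfaces quoted above; the computation itself is made up, and the PostTask overload taking a functor exists only in newer trees, as quoted:

#include <memory>
#include "webrtc/base/thread.h"    // rtc::Thread (path per the webrtc58 layout)
#include "webrtc/base/location.h"  // RTC_FROM_HERE

int Demo() {
  std::unique_ptr<rtc::Thread> worker = rtc::Thread::Create();
  worker->SetName("demo_worker", nullptr);
  worker->Start();  // starts the internal message loop

  // Synchronous: blocks until the lambda has run on |worker| and returns its result.
  int sum = worker->Invoke<int>(RTC_FROM_HERE, [] { return 1 + 2; });

  // Asynchronous: only queues the functor onto the message list.
  worker->PostTask(RTC_FROM_HERE, [] { /* fire-and-forget work */ });

  worker->Stop();
  return sum;
}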
II: The webrtc worker thread (ProcessThread)
Its usage is fairly simple:
ProcessThread;
webrtc58\src\webrtc\modules\utility\include\process_thread.h
The concrete implementation class is ProcessThreadImpl.
The advantage of this thread class is that it maintains a task queue:
void ProcessThreadImpl::PostTask(std::unique_ptr<rtc::QueuedTask> task) {
// Allowed to be called on any thread.
{
rtc::CritScope lock(&lock_);
queue_.push(task.release());
}
wake_up_->Set();
}
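A sketch of driving ProcessThread via PostTask (LogTask and its body are made up; header paths follow the webrtc58 layout, and exact signatures may differ between versions):

#include <memory>
#include "webrtc/modules/utility/include/process_thread.h"  // webrtc::ProcessThread
#include "webrtc/base/task_queue.h"                          // rtc::QueuedTask (location varies by version)

// Hypothetical task: returning true from Run() lets the process thread delete it.
class LogTask : public rtc::QueuedTask {
 public:
  bool Run() override {
    // ... work that must happen on the process thread ...
    return true;
  }
};

void Demo() {
  std::unique_ptr<webrtc::ProcessThread> process_thread =
      webrtc::ProcessThread::Create("demo_process_thread");
  process_thread->Start();
  process_thread->PostTask(std::unique_ptr<rtc::QueuedTask>(new LogTask()));
  // Modules can also be attached with RegisterModule() so that their Process()
  // method is called periodically.
  process_thread->Stop();
}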
III: Platform threads in webrtc
rtc::PlatformThread;
in webrtc it can be used in place of std::thread,
but using platform-level threads directly is generally not recommended.
There is one more special case here: rtc::TaskQueue { ...
class WorkerThread : public PlatformThread
... }
The important part is the TaskQueue itself, that is, its queue of tasks:
constructing a TaskQueue creates a thread that keeps running the Run() of each QueuedTask in the queue;
callers put QueuedTasks into the TaskQueue via PostTask (see the sketch below).
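A short usage sketch of rtc::TaskQueue (newer trees accept a lambda directly; older ones may only take a std::unique_ptr<rtc::QueuedTask>, so treat the lambda overload as an assumption about your version):

#include "webrtc/base/task_queue.h"  // rtc::TaskQueue (path per the webrtc58 layout)

void Demo() {
  rtc::TaskQueue queue("demo_task_queue");  // constructing it spawns the worker thread
  // Each posted task runs on that thread, in FIFO order.
  queue.PostTask([] { /* do work on the task queue's thread */ });
  // Tasks still pending when |queue| goes out of scope are deleted without running.
}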
If all of the above is hard to follow, you can also implement the message loop entirely yourself:
a thread + a while loop + a list;
class Message_thread_Demo
{
public:
    std::vector<message*> mListMessage;
    void run(void* pParam)
    {
        while (true)
        {
            for (int n = 0; n < (int)mListMessage.size(); n++)
            {
                message* pMessage = mListMessage.at(n);
                process(pMessage);
            }
        }
    }
};
postMessage then simply appends the message to the list;
to implement sendMessage, the basic approach is to put the message at the front of the list and then wait on a flag until it has been handled.
Second article: "Invoke, Send and Post between webrtc threads" (《webrtc线程间的invoke,send,post》)
The previous part already covered webrtc's threads and its message loop:
class RTC_LOCKABLE Thread : public MessageQueue
MessageQueue mainly implements: Post + the message list + the loop + Dispatch(Message *pmsg){ pmsg->phandler->OnMessage(pmsg); };
Thread mainly implements: the platform-specific thread, plus Send and Invoke; Invoke is really just a call to Send.
Send: if the caller is already on the target thread, it calls OnMessage directly:
if (IsCurrent()) { phandler->OnMessage(&msg); return; }
For the same thread it is that simple; but what about the cross-thread case?
Post would work, of course, but Post only puts the message into the queue; it cannot block until the function has executed.
So how is Invoke implemented?
In webrtc:
_n functions run on network_thread();
_w functions run on worker_thread();
_s functions run on signaling_thread();
In other words, Invoke lets you run different functions on different threads (obviously); when the same function runs on different threads, GetThreadID returns the ID of whichever thread is currently executing it. This point matters and is explained below.
First, an example of Invoke used inside the PeerConnection class itself:
void PeerConnection::StopRtcEventLog() {
worker_thread()->Invoke<void>(
RTC_FROM_HERE, rtc::Bind(&PeerConnection::StopRtcEventLog_w, this));
}
Now the internal call chain of Invoke: Invoke wraps the functor in a FunctorMessageHandler and calls InvokeInternal, which calls Send;
so in the end it is still Send that runs. As shown above, if the caller is already on the target thread, Send calls OnMessage directly.
Here is how the case where the caller is NOT on the target thread is handled:
void Thread::Send(const Location& posted_from,
MessageHandler* phandler,
uint32_t id,
MessageData* pdata) {
if (IsQuitting())
return;
// Sent messages are sent to the MessageHandler directly, in the context
// of "thread", like Win32 SendMessage. If in the right context,
// call the handler directly.
Message msg;
msg.posted_from = posted_from;
msg.phandler = phandler;
msg.message_id = id;
msg.pdata = pdata;
if (IsCurrent()) { // already on the target thread: run the handler directly
phandler->OnMessage(&msg);
return;
}
AssertBlockingIsAllowedOnCurrentThread();
AutoThread thread;
Thread *current_thread = Thread::Current();
RTC_DCHECK(current_thread != nullptr); // AutoThread ensures this
bool ready = false;
{
CritScope cs(&crit_);
_SendMessage smsg;
smsg.thread = current_thread;
smsg.msg = msg;
smsg.ready = &ready;
sendlist_.push_back(smsg);
}
// Wait for a reply
WakeUpSocketServer();
bool waited = false;
crit_.Enter();
while (!ready) {
crit_.Leave();
// We need to limit "ReceiveSends" to |this| thread to avoid an arbitrary
// thread invoking calls on the current thread.
current_thread->ReceiveSendsFromThread(this); // while waiting, service Sends coming from the target thread
current_thread->socketserver()->Wait(kForever, false);
waited = true;
crit_.Enter();
}
crit_.Leave();
// Our Wait loop above may have consumed some WakeUp events for this
// MessageQueue, that weren't relevant to this Send. Losing these WakeUps can
// cause problems for some SocketServers.
//
// Concrete example:
// Win32SocketServer on thread A calls Send on thread B. While processing the
// message, thread B Posts a message to A. We consume the wakeup for that
// Post while waiting for the Send to complete, which means that when we exit
// this loop, we need to issue another WakeUp, or else the Posted message
// won't be processed in a timely manner.
if (waited) {
current_thread->socketserver()->WakeUp();
}
}
After a Send (and therefore an Invoke) issued from another thread, the key call in the wait loop above is current_thread->ReceiveSendsFromThread(this);
note that 1) it runs on the calling thread (current_thread), and 2) its argument is the target thread, so while it waits the caller only services Send requests coming from that particular thread, which prevents arbitrary threads from invoking calls on it.
Now look at that function:
// Receive a sent message. Cleanup scenarios:
// - thread sending exits: We don't allow this, since thread can exit
// only via Join, so Send must complete.
// - thread receiving exits: Wakeup/set ready in Thread::Clear()
// - object target cleared: Wakeup/set ready in Thread::Clear()
void Thread::ReceiveSendsFromThread(const Thread* source) {
_SendMessage smsg;
crit_.Enter();
while (PopSendMessageFromThread(source, &smsg)) {
crit_.Leave();
smsg.msg.phandler->OnMessage(&smsg.msg); // the key call: the handler runs on this thread
crit_.Enter();
*smsg.ready = true;
smsg.thread->socketserver()->WakeUp();
}
crit_.Leave();
}
As you can see, the thread that pops the _SendMessage from its own sendlist_ runs phandler->OnMessage in its own context, and that "message" is exactly the function to be executed; it then sets the ready flag and wakes the sender's socket server so the blocked Send/Invoke can return.
About PROXY in webrtc:
#define PROXY_METHOD0(r, method) \
r method() override { \
MethodCall0<C, r> call(c_, &C::method); \
return call.Marshal(RTC_FROM_HERE, signaling_thread_); \
}
template <typename C, typename R>
class MethodCall0 : public rtc::Message,
public rtc::MessageHandler {
public:
typedef R (C::*Method)();
MethodCall0(C* c, Method m) : c_(c), m_(m) {}
R Marshal(const rtc::Location& posted_from, rtc::Thread* t) {
internal::SynchronousMethodCall(this).Invoke(posted_from, t);
return r_.moved_result();
}
// ... OnMessage() then invokes the stored method on the proxied object; the data members are omitted here.
};
Next, the implementation of SynchronousMethodCall;
see also the article 《WebRTC 的 PROXY - 如何解决应用中的线程乱入》 (on how WebRTC's PROXY keeps calls from coming in on the wrong application thread).
A quick walk through proxy.cc:
#include "api/proxy.h"
namespace webrtc {
namespace internal {
SynchronousMethodCall::SynchronousMethodCall(rtc::MessageHandler* proxy)
: e_(), proxy_(proxy) {}
SynchronousMethodCall::~SynchronousMethodCall() = default;
void SynchronousMethodCall::Invoke(const rtc::Location& posted_from,
rtc::Thread* t) {
if (t->IsCurrent()) {
proxy_->OnMessage(nullptr);
} else {
e_.reset(new rtc::Event(false, false));
t->Post(posted_from, this, 0);
e_->Wait(rtc::Event::kForever);
}
}
void SynchronousMethodCall::OnMessage(rtc::Message*) {
proxy_->OnMessage(nullptr);
e_->Set();
}
} // namespace internal
} // namespace webrtc
Notes:
1: e_->Wait(rtc::Event::kForever); on Windows this boils down to:
bool Event::Wait(int milliseconds) {
DWORD ms = (milliseconds == kForever) ? INFINITE : milliseconds;
return (WaitForSingleObject(event_handle_, ms) == WAIT_OBJECT_0);
}
2: e_->Set() simply signals the event that WaitForSingleObject is waiting on.
Very simple; the idea in one sentence:
put the member-function pointer (any callable, really) into the target thread's message list, then wait until it has finished executing.
That is equivalent to executing the given function on the designated thread while blocking the caller.
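To make that idea concrete, here is a minimal, hand-rolled version of the same trick using only rtc::Thread, rtc::MessageHandler and rtc::Event, mirroring the SynchronousMethodCall quoted above (BlockingCall is a made-up name; the two-bool rtc::Event constructor follows the form used in the quoted proxy.cc):

#include <functional>
#include "webrtc/base/thread.h"    // rtc::Thread, rtc::MessageHandler (paths per this article)
#include "webrtc/base/event.h"     // rtc::Event
#include "webrtc/base/location.h"  // RTC_FROM_HERE

// Runs |fn| on |target| and blocks the calling thread until it has finished.
class BlockingCall : public rtc::MessageHandler {
 public:
  BlockingCall(rtc::Thread* target, std::function<void()> fn)
      : event_(false, false), fn_(std::move(fn)) {
    if (target->IsCurrent()) {
      fn_();  // already on the target thread: just run it
    } else {
      target->Post(RTC_FROM_HERE, this, 0);  // enqueue ourselves on the target thread
      event_.Wait(rtc::Event::kForever);     // block until OnMessage has run
    }
  }

 private:
  void OnMessage(rtc::Message*) override {
    fn_();         // executes on the target thread
    event_.Set();  // wake up the blocked caller
  }

  rtc::Event event_;
  std::function<void()> fn_;
};

Usage: BlockingCall(worker_thread, [] { /* work */ }); which is essentially what Thread::Send and the PROXY machinery do, with extra care taken there for re-entrancy and cleanup.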