OWT WebRTC: no audio

Project scenario:

A WebRTC application based on OWT: a Windows exe and the Chrome browser establish a connection to build a remote-control style tool, similar to Sunlogin (向日葵).

Problem description:

After the Windows application and Chrome establish a connection, no audio can be heard. Whether audio is pushed (publish) or pulled (subscribe) is driven by parameters passed down from the business layer.

Cause analysis:

In the OWT/WebRTC threading model there is an initialization thread and a separate data-fetch (capture) thread. The capture thread hit an error while fetching data and exited immediately, so not a single audio packet was ever sent.
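The root cause boils down to a race between the initialization path and the capture loop. The following is a minimal, self-contained sketch (plain std::thread and std::condition_variable, not the actual WebRTC code) of the failure pattern: when the data callback fires before initialization has finished and the loop treats that as a fatal error, the loop exits at once and no audio is ever delivered.

```cpp
#include <atomic>
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::atomic<bool> initialized{false};   // set by the init thread
std::mutex mu;
std::condition_variable samples_ready;  // stands in for audio_samples_event_

// Stands in for OnDataCallback(): fails if called before init finished.
bool OnData() {
  if (!initialized) {
    std::cout << "data callback before init -> false\n";
    return false;
  }
  std::cout << "audio frame delivered\n";
  return true;
}

// Stands in for CoreAudioBase::ThreadRun(): exits on the first error.
void CaptureLoop() {
  bool error = false;
  while (!error) {
    std::unique_lock<std::mutex> lock(mu);
    samples_ready.wait_for(lock, std::chrono::milliseconds(10));
    error = !OnData();  // a single early failure kills the loop
  }
  std::cout << "capture loop exited, no more audio will be sent\n";
}

int main() {
  std::thread capture(CaptureLoop);  // data-fetch thread starts first
  std::this_thread::sleep_for(std::chrono::milliseconds(50));
  initialized = true;                // init thread finishes too late
  samples_ready.notify_one();
  capture.join();                    // the loop is already gone by now
}
```

With the fix described below, the "not initialized yet" case is downgraded to a non-fatal condition so the loop stays alive until real data arrives.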

 //======= The log shows audio capture opened successfully =========
(core_audio_input_win.cc:276): --- Input audio stream is alive ---  
(audio_device_buffer.cc:239): Size of recording buffer: 960
(render_delay_buffer.cc:362): Applying total delay of 5 blocks.
(matched_filter.cc:450): Filter 0: start: 0 ms, end: 128 ms.
(matched_filter.cc:450): Filter 1: start: 96 ms, end: 224 ms.
(matched_filter.cc:450): Filter 2: start: 192 ms, end: 320 ms.
(matched_filter.cc:450): Filter 3: start: 288 ms, end: 416 ms.
(matched_filter.cc:450): Filter 4: start: 384 ms, end: 512 ms.
(render_delay_buffer.cc:330): Receiving a first externally reported audio buffer delay of 16 ms.
// ========== First audio packet sent =============
(rtp_sender_audio.cc:309): First audio RTP packet sent to pacer   
// ========== First video packet sent =============
(rtp_sender_video.cc:667): Sent first RTP packet of the first video frame (pre-pacer)   
(rtp_sender_video.cc:671): Sent last RTP packet of the first video frame (pre-pacer)
(video_send_stream_impl.cc:467): SignalEncoderActive, Encoder is active.   

Abnormal log (the "first audio RTP packet" line never appears):

//======== The Windows audio API failed to open ===========
(core_audio_base_win.cc:955): [Input] WASAPI streaming failed.
(channel.cc:821): Changing voice state, recv=0 send=1
(thread.cc:668): Message took 307ms to dispatch. Posted from: cricket::BaseChannel::UpdateMediaSendRecvState@../../third_party/webrtc/pc/channel.cc:800
(webrtc_video_engine.cc:1193): SetSend: true
(video_send_stream.cc:148): UpdateActiveSimulcastLayers: {1}
(bitrate_allocator.cc:523): UpdateAllocationLimits : total_requested_min_bitrate: 62 kbps, total_requested_padding_bitrate: 0 bps, total_requested_max_bitrate: 2532 kbps
(pacing_controller.cc:213): bwe:pacer_updated pacing_kbps=750 padding_budget_kbps=0
(video_stream_encoder.cc:1594): OnBitrateUpdated, bitrate 268000 stable bitrate = 268000 link allocation bitrate = 268000 packet loss 0 rtt 0
(video_stream_encoder.cc:1619): Video suspend state changed to: not suspended
(channel.cc:970): Changing video state, send=1
(video_stream_encoder.cc:1130): Encoder settings changed from EncoderInfo { ScalingSettings { min_pixels_per_frame = 57600 }, requested_resolution_alignment = 1, supports_native_handle = 0, implementation_name = 'unknown', has_trusted_rate_controller = 0, is_hardware_accelerated = 1, has_internal_source = 0, fps_allocation = [[ 1] ], resolution_bitrate_limits = [] , supports_simulcast = 0} to EncoderInfo { ScalingSettings { Thresholds { low = 29, high = 95}, min_pixels_per_frame = 57600 }, requested_resolution_alignment = 1, supports_native_handle = 0, implementation_name = 'libvpx', has_trusted_rate_controller = 0, is_hardware_accelerated = 0, has_internal_source = 0, fps_allocation = [[ 1] ], resolution_bitrate_limits = [] , supports_simulcast = 1}
// ========= Only the first video packet was sent ============
(rtp_sender_video.cc:667): Sent first RTP packet of the first video frame (pre-pacer)   
(rtp_sender_video.cc:671): Sent last RTP packet of the first video frame (pre-pacer)
(video_send_stream_impl.cc:467): SignalEncoderActive, Encoder is active.

The audio capture thread logic (core_audio_base_win.cc):

void Run(void* obj) {  // Thread entry point.
  RTC_DCHECK(obj);
  reinterpret_cast<CoreAudioBase*>(obj)->ThreadRun();
}
void CoreAudioBase::ThreadRun() {   // Thread body.
//...
  HANDLE wait_array[] = {stop_event_.Get(), restart_event_.Get(),
                         audio_samples_event_.Get()};
  // Keep streaming audio until the stop event or the stream-switch event
  // is signaled. An error event can also break the main thread loop.
  while (streaming && !error) {   // Loop until stopped; any error exits the thread immediately.
    // Wait for a close-down event, stream-switch event or a new render event.
    DWORD wait_result = WaitForMultipleObjects(arraysize(wait_array),
                                               wait_array, false, INFINITE);
    switch (wait_result) {
      case WAIT_OBJECT_0 + 0:
        // |stop_event_| has been set.
        streaming = false;
        break;
      case WAIT_OBJECT_0 + 1:
        // |restart_event_| has been set.
        error = !HandleRestartEvent();
        break;
      case WAIT_OBJECT_0 + 2:
      {
        // |audio_samples_event_| has been set.
        error = !on_data_callback_(device_frequency);
        if (!initialized_ || !is_active_) {  // The fix: this check was not in the original code.
          RTC_LOG(INFO) << "audio base not init, initialized:" << initialized_
                        << " is_active_:" << is_active_;
          error = false;  // Not initialized yet is not fatal; keep the capture loop alive.
        }
        break;
      }
      default:
        error = true;
        break;
    }
  }
  if (streaming && error) {  // Exit path: the loop broke because of an error.
    RTC_LOG(LS_ERROR) << "[" << DirectionToString(direction())
                      << "] WASAPI streaming failed. streaming:" << streaming
                      << " error:" << error;
    // Stop audio streaming since something has gone wrong in our main thread
    // loop. Note that, we are still in a "started" state, hence a Stop() call
    // is required to join the thread properly.
    result = audio_client_->Stop();
    if (FAILED(result.Error())) {
      RTC_LOG(LS_ERROR) << "IAudioClient::Stop failed: "
                        << core_audio_utility::ErrorToString(result);
    }

    // TODO(henrika): notify clients that something has gone wrong and that
    // this stream should be destroyed instead of reused in the future.
  }
  RTC_DLOG(INFO) << "[" << DirectionToString(direction())
                 << "] ...ThreadRun stops";
}
// The data callback, invoked on the capture thread.
bool CoreAudioInput::OnDataCallback(uint64_t device_frequency) {
  RTC_DCHECK_RUN_ON(&thread_checker_audio_);

  if (!initialized_ || !is_active_) {  // Not initialized yet: return false. These member flags are read across threads.
    // This is concurrent examination of state across multiple threads so will
    // be somewhat error prone, but we should still be defensive and not use
    // audio_capture_client_ if we know it's not there.
    RTC_LOG(INFO) << "data call back, initialized:" << initialized_ << " is_active_:" << is_active_;
    return false;
  }	
  // ... 
}
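As the comment in OnDataCallback() notes, `initialized_` and `is_active_` are read on the capture thread while another thread writes them, so the check is inherently racy. A common way to make such a cross-thread flag check well defined is to keep the flags in `std::atomic<bool>`; the struct below only illustrates that general pattern (a hypothetical `CaptureState`, not the actual WebRTC change):

```cpp
#include <atomic>

// Hypothetical holder for the two state flags read by the capture thread
// and written by the initialization/start path.
struct CaptureState {
  std::atomic<bool> initialized{false};
  std::atomic<bool> is_active{false};

  // Called on the init thread once Init()/Start() have completed.
  void MarkStarted() {
    initialized.store(true, std::memory_order_release);
    is_active.store(true, std::memory_order_release);
  }

  // Called on the capture thread before touching the capture client.
  bool ReadyForData() const {
    return initialized.load(std::memory_order_acquire) &&
           is_active.load(std::memory_order_acquire);
  }
};
```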

The thread start-up logic:

CoreAudioBase::CoreAudioBase(Direction direction,
                             bool automatic_restart,
                             OnDataCallback data_callback,
                             OnErrorCallback error_callback)
    : format_(),
      direction_(direction),
      automatic_restart_(automatic_restart),
      on_data_callback_(data_callback),
      on_error_callback_(error_callback),
      device_index_(kUndefined),
      is_restarting_(false) {
  RTC_DLOG(INFO) << __FUNCTION__ << "[" << DirectionToString(direction) << "]";
  RTC_DLOG(INFO) << "Automatic restart: " << automatic_restart;
  RTC_DLOG(INFO) << "Windows version: " << rtc::rtc_win::GetVersion();

  // Create the event which the audio engine will signal each time a buffer
  // becomes ready to be processed by the client.
  // Note the three event handles created from here on.
  audio_samples_event_.Set(CreateEvent(nullptr, false, false, nullptr));
  RTC_DCHECK(audio_samples_event_.IsValid());

  // Event to be set in Stop() when rendering/capturing shall stop.
  stop_event_.Set(CreateEvent(nullptr, false, false, nullptr));
  RTC_DCHECK(stop_event_.IsValid());

  // Event to be set when it has been detected that an active device has been
  // invalidated or the stream format has changed.
  restart_event_.Set(CreateEvent(nullptr, false, false, nullptr));
  RTC_DCHECK(restart_event_.IsValid());

  enumerator_ = core_audio_utility::CreateDeviceEnumerator();
  enumerator_->RegisterEndpointNotificationCallback(this);
  RTC_LOG(INFO) << __FUNCTION__
                    << ":Registered endpoint notification callback.";
}



Solution:

1. Do not treat "not initialized yet" as an error; suppress it so the capture loop is not torn down (see the patch in ThreadRun() above).
2. If all of the above is in order, the WebRTC log shows the audio channel is up, and Wireshark shows audio RTP packets being sent, yet you still hear nothing, the microphone itself may not be turned on. When the call is established there is usually a microphone indicator; check whether it is actually enabled. A standalone WASAPI sanity check is sketched below.
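For item 2, the sketch below uses plain WASAPI (independent of OWT/WebRTC; the file name and messages are illustrative) to check whether a default capture device exists and whether a shared-mode capture client can be initialized on it. If this already fails, the problem lies with the microphone, its driver, or Windows privacy settings rather than the WebRTC pipeline.

```cpp
// mic_check.cc: standalone check that the default capture device (microphone)
// can be opened in shared mode. Build with MSVC and link ole32.lib.
#include <windows.h>
#include <audioclient.h>
#include <mmdeviceapi.h>
#include <wrl/client.h>

#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
  if (FAILED(CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED)))
    return 1;

  // Enumerate audio endpoints and grab the default capture (input) device.
  ComPtr<IMMDeviceEnumerator> enumerator;
  if (FAILED(CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                              IID_PPV_ARGS(&enumerator)))) {
    std::printf("failed to create MMDeviceEnumerator\n");
    return 1;
  }
  ComPtr<IMMDevice> device;
  if (FAILED(enumerator->GetDefaultAudioEndpoint(eCapture, eConsole, &device))) {
    std::printf("no default capture device (microphone) found\n");
    return 1;
  }

  // Activate an IAudioClient and try a shared-mode Initialize: the same basic
  // steps WASAPI capture has to go through before any audio can be delivered.
  ComPtr<IAudioClient> audio_client;
  if (FAILED(device->Activate(
          __uuidof(IAudioClient), CLSCTX_ALL, nullptr,
          reinterpret_cast<void**>(audio_client.GetAddressOf())))) {
    std::printf("failed to activate IAudioClient on the capture device\n");
    return 1;
  }
  WAVEFORMATEX* format = nullptr;
  if (FAILED(audio_client->GetMixFormat(&format))) {
    std::printf("IAudioClient::GetMixFormat failed\n");
    return 1;
  }
  // 10000000 = 1 second buffer expressed in 100-ns units.
  HRESULT hr = audio_client->Initialize(AUDCLNT_SHAREMODE_SHARED, 0, 10000000,
                                        0, format, nullptr);
  std::printf("IAudioClient::Initialize %s (hr=0x%08lx)\n",
              SUCCEEDED(hr) ? "succeeded" : "failed",
              static_cast<unsigned long>(hr));
  CoTaskMemFree(format);
  CoUninitialize();
  return SUCCEEDED(hr) ? 0 : 1;
}
```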
