1 Introduction

The video_loopback demo contains the upper layers of WebRTC's network stack minus SDP negotiation, P2P (ICE) connectivity, and SRTP: it is an example of audio/video interop built directly on the Call API. For teams with strong engineering capability, this layer is a good place to start building a custom architecture, with their own scheduling and high-concurrency design on top.

2 Source Code Analysis

2.1 Demo entry point

Startup entry: src/video/video_loopback_main.cc
Parameter configuration: src/video/video_loopback.cc
This file configures the demo's parameters, and also serves as a reference for how to configure them ourselves.
A few sample configuration flags:

ABSL_FLAG(int, width, 1280, "Video width.");
ABSL_FLAG(int, height, 720, "Video height.");
ABSL_FLAG(int, fps, 30, "Frames per second.");
ABSL_FLAG(int, capture_device_index, 0, "Capture device to select");
ABSL_FLAG(int, min_bitrate, 1000, "Call and stream min bitrate in kbps.");
ABSL_FLAG(int, start_bitrate, 1000, "Call start bitrate in kbps.");
ABSL_FLAG(int, target_bitrate, 1200, "Stream target bitrate in kbps.");
ABSL_FLAG(int, max_bitrate, 1500, "Call and stream max bitrate in kbps.");

We can trace where these parameters are applied through the call chain:

1. main
2. webrtc::RunLoopbackTest
3. Loopback
4. VideoQualityTest::RunWithRenderers

2.2 The Call usage flow

2.2.1 Entry function analysis

void VideoQualityTest::RunWithRenderers(const Params& params) {
  RTC_LOG(INFO) << __FUNCTION__;
  num_video_streams_ = params.call.dual_video ? 2 : 1;
  std::unique_ptr<test::LayerFilteringTransport> send_transport;
  std::unique_ptr<test::DirectTransport> recv_transport;
  std::unique_ptr<test::VideoRenderer> local_preview;
  std::vector<std::unique_ptr<test::VideoRenderer>> loopback_renderers;
  loopback_renderers.emplace_back(
      test::VideoRenderer::Create("Loopback Video1", 640, 480));
  local_preview.reset(test::VideoRenderer::Create("Local Preview", 640, 480));

  if (!params.logging.rtc_event_log_name.empty()) {
    send_event_log_ = rtc_event_log_factory_.CreateRtcEventLog(
        RtcEventLog::EncodingType::Legacy);
    recv_event_log_ = rtc_event_log_factory_.CreateRtcEventLog(
        RtcEventLog::EncodingType::Legacy);
    std::unique_ptr<RtcEventLogOutputFile> send_output(
        std::make_unique<RtcEventLogOutputFile>(
            params.logging.rtc_event_log_name + "_send",
            RtcEventLog::kUnlimitedOutput));
    std::unique_ptr<RtcEventLogOutputFile> recv_output(
        std::make_unique<RtcEventLogOutputFile>(
            params.logging.rtc_event_log_name + "_recv",
            RtcEventLog::kUnlimitedOutput));
    bool event_log_started =
        send_event_log_->StartLogging(std::move(send_output),
                                      /*output_period_ms=*/5000) &&
        recv_event_log_->StartLogging(std::move(recv_output),
                                      /*output_period_ms=*/5000);
    RTC_DCHECK(event_log_started);
  } else {
    send_event_log_ = std::make_unique<RtcEventLogNull>();
    recv_event_log_ = std::make_unique<RtcEventLogNull>();
  }

  SendTask(RTC_FROM_HERE, task_queue(), [&]() {
    params_ = params;
    CheckParamsAndInjectionComponents();
    // TODO(ivica): Remove bitrate_config and use the default Call::Config(),
    // to match the full stack tests.
    Call::Config send_call_config(send_event_log_.get());
    send_call_config.bitrate_config = params_.call.call_bitrate_config;
    Call::Config recv_call_config(recv_event_log_.get());
    if (params_.audio.enabled)
      InitializeAudioDevice(&send_call_config, &recv_call_config,
                            params_.audio.use_real_adm);
    CreateCalls(send_call_config, recv_call_config);

    // TODO(minyue): consider if this is a good transport even for audio only
    // calls.
    send_transport = CreateSendTransport();
    recv_transport = CreateReceiveTransport();
    // TODO(ivica): Use two calls to be able to merge with RunWithAnalyzer or
    // at least share as much code as possible. That way this test would also
    // match the full stack tests better.
    send_transport->SetReceiver(receiver_call_->Receiver());
    recv_transport->SetReceiver(sender_call_->Receiver());

    if (params_.video[0].enabled) {
      // Create video renderers.
      SetupVideo(send_transport.get(), recv_transport.get());
      size_t num_streams_processed = 0;
      for (size_t video_idx = 0; video_idx < num_video_streams_; ++video_idx) {
        const size_t selected_stream_id = params_.ss[video_idx].selected_stream;
        const size_t num_streams = params_.ss[video_idx].streams.size();
        if (selected_stream_id == num_streams) {
          for (size_t stream_id = 0; stream_id < num_streams; ++stream_id) {
            rtc::StringBuilder oss;
            oss << "Loopback Video #" << video_idx << " - Stream #"
                << static_cast<int>(stream_id);
            /* loopback_renderers.emplace_back(test::VideoRenderer::Create(
                oss.str().c_str(),
                params_.ss[video_idx].streams[stream_id].width,
                params_.ss[video_idx].streams[stream_id].height)); */
            video_receive_configs_[stream_id + num_streams_processed].renderer =
                loopback_renderers.back().get();
            if (params_.audio.enabled && params_.audio.sync_video)
              video_receive_configs_[stream_id + num_streams_processed]
                  .sync_group = kSyncGroup;
          }
        } else {
          rtc::StringBuilder oss;
          oss << "Loopback Video #" << video_idx;
          /* loopback_renderers.emplace_back(test::VideoRenderer::Create(
              oss.str().c_str(),
              params_.ss[video_idx].streams[selected_stream_id].width,
              params_.ss[video_idx].streams[selected_stream_id].height)); */
          video_receive_configs_[selected_stream_id + num_streams_processed]
              .renderer = loopback_renderers.back().get();
          if (params_.audio.enabled && params_.audio.sync_video)
            video_receive_configs_[num_streams_processed + selected_stream_id]
                .sync_group = kSyncGroup;
        }
        num_streams_processed += num_streams;
      }
      CreateFlexfecStreams();
      CreateVideoStreams();
      CreateCapturers();
      if (params_.video[0].enabled) {
        // Create local preview.
        /* local_preview.reset(test::VideoRenderer::Create(
            "Local Preview", params_.video[0].width,
            params_.video[0].height)); */
        video_sources_[0]->AddOrUpdateSink(local_preview.get(),
                                           rtc::VideoSinkWants());
      }
      ConnectVideoSourcesToStreams();
    }
    if (params_.audio.enabled) {
      SetupAudio(send_transport.get());
    }
    Start();
  });

  MSG msg;
  while (GetMessage(&msg, NULL, 0, 0) > 0) {
    TranslateMessage(&msg);
    DispatchMessage(&msg);
  }
  PressEnterToContinue(task_queue());

  SendTask(RTC_FROM_HERE, task_queue(), [&]() {
    Stop();
    DestroyStreams();
    send_transport.reset();
    recv_transport.reset();
    local_preview.reset();
    loopback_renderers.clear();
    DestroyCalls();
  });
}

The function includes several important steps:

  • Create the renderers: local_preview, loopback_renderers
  • Initialize audio (mixer, audio processing, etc.): InitializeAudioDevice
  • Create the Calls: CreateCalls (a single Call would suffice; the demo uses two)
  • Create the network transports: CreateSendTransport and CreateReceiveTransport, whose main job is carrying RTP packets over UDP
  • Set each transport's RTP receiver module: send_transport->SetReceiver, recv_transport->SetReceiver
  • Configure video encoding and transport parameters: SetupVideo
  • Create the video streams: CreateVideoStreams
  • Create the video capturers: CreateCapturers
  • Attach the local preview renderer to the video source: video_sources_[0]->AddOrUpdateSink
  • Connect the video sources to the streams: ConnectVideoSourcesToStreams
  • Set up audio: SetupAudio
  • Start everything: Start()

Next, let's walk through each of these calls in detail.

2.2.2 Creating the video renderers (VideoRenderer)

std::unique_ptr<test::VideoRenderer> local_preview;
std::vector<std::unique_ptr<test::VideoRenderer>> loopback_renderers;
loopback_renderers.emplace_back(
    test::VideoRenderer::Create("Loopback Video1", 640, 480));
local_preview.reset(test::VideoRenderer::Create("Local Preview", 640, 480));

Underneath is a D3D-based YUV renderer with a fixed set of rendering calls; we won't go through it in detail.
Source: src/test/win/d3d_renderer.cc

2.2.3 Call configuration

Call::Config holds the configuration a Call needs at creation time. Two important fields must be set:

  • bitrate_config: bitrate settings
  • audio_state: audio-related state (audio device, mixer, audio processing)

Call::Config send_call_config(send_event_log_.get());
send_call_config.bitrate_config = params_.call.call_bitrate_config;
Call::Config recv_call_config(recv_event_log_.get());

send_call_config.bitrate_config sets the call's minimum, maximum, and initial bitrate:

// TODO(srte): BitrateConstraints and BitrateSettings should be merged.
// Both represent the same kind data, but are using different default
// initializer and representation of unset values.
struct BitrateConstraints {
  int min_bitrate_bps = 0;
  int start_bitrate_bps = kDefaultStartBitrateBps;
  int max_bitrate_bps = -1;
};
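The loopback flags shown earlier are given in kbps, while BitrateConstraints stores bps. A minimal self-contained sketch of that mapping (the struct below is a stand-in mirroring the field names above, not the real WebRTC type, and the 300000 default is an assumed placeholder for kDefaultStartBitrateBps):

```cpp
#include <cassert>

// Stand-in for webrtc::BitrateConstraints; field names copied from above.
struct BitrateConstraintsSketch {
  int min_bitrate_bps = 0;
  int start_bitrate_bps = 300000;  // placeholder for kDefaultStartBitrateBps
  int max_bitrate_bps = -1;        // -1 means "no explicit cap"
};

// The loopback demo's flags (e.g. --min_bitrate=1000) are in kbps.
BitrateConstraintsSketch FromKbpsFlags(int min_kbps, int start_kbps,
                                       int max_kbps) {
  BitrateConstraintsSketch c;
  c.min_bitrate_bps = min_kbps * 1000;
  c.start_bitrate_bps = start_kbps * 1000;
  c.max_bitrate_bps = max_kbps * 1000;
  return c;
}
```

With the demo defaults (min 1000, start 1000, max 1500 kbps) this yields 1,000,000 / 1,000,000 / 1,500,000 bps.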

2.2.4 Audio initialization

Audio setup must happen before the Calls are created, because multiple Calls can share one audio pipeline: the ADM (audio device module) and the APM (audio processing module).

void VideoQualityTest::InitializeAudioDevice(Call::Config* send_call_config,
                                             Call::Config* recv_call_config,
                                             bool use_real_adm) {
  rtc::scoped_refptr<AudioDeviceModule> audio_device;
  if (use_real_adm) {
    // Run test with real ADM (using default audio devices) if user has
    // explicitly set the --audio and --use_real_adm command-line flags.
    audio_device = CreateAudioDevice();
  } else {
    // By default, create a test ADM which fakes audio.
    audio_device = TestAudioDeviceModule::Create(
        task_queue_factory_.get(),
        TestAudioDeviceModule::CreatePulsedNoiseCapturer(32000, 48000),
        TestAudioDeviceModule::CreateDiscardRenderer(48000), 1.f);
  }
  RTC_CHECK(audio_device);

  AudioState::Config audio_state_config;
  audio_state_config.audio_mixer = AudioMixerImpl::Create();
  audio_state_config.audio_processing = AudioProcessingBuilder().Create();
  audio_state_config.audio_device_module = audio_device;
  send_call_config->audio_state = AudioState::Create(audio_state_config);
  recv_call_config->audio_state = AudioState::Create(audio_state_config);
  if (use_real_adm) {
    // The real ADM requires extra initialization: setting default devices,
    // setting up number of channels etc. Helper class also calls
    // AudioDeviceModule::Init().
    webrtc::adm_helpers::Init(audio_device.get());
  } else {
    audio_device->Init();
  }
  // Always initialize the ADM before injecting a valid audio transport.
  RTC_CHECK(audio_device->RegisterAudioCallback(
                send_call_config->audio_state->audio_transport()) == 0);
}

1. Create the audio capture/playout device via CreateAudioDevice; source: src/modules/audio_device/win/audio_device_core_win.cc.
AudioDeviceModule exposes methods for enumerating and selecting audio devices:

// Device enumeration
virtual int16_t PlayoutDevices() = 0;
virtual int16_t RecordingDevices() = 0;
virtual int32_t PlayoutDeviceName(uint16_t index,
                                  char name[kAdmMaxDeviceNameSize],
                                  char guid[kAdmMaxGuidSize]) = 0;
virtual int32_t RecordingDeviceName(uint16_t index,
                                    char name[kAdmMaxDeviceNameSize],
                                    char guid[kAdmMaxGuidSize]) = 0;
// Device selection
virtual int32_t SetPlayoutDevice(uint16_t index) = 0;
virtual int32_t SetPlayoutDevice(WindowsDeviceType device) = 0;
virtual int32_t SetRecordingDevice(uint16_t index) = 0;
virtual int32_t SetRecordingDevice(WindowsDeviceType device) = 0;

These can be used to enumerate and select devices.
2. Configure the audio state: AudioState::Config
2.1 Mixer: audio_state_config.audio_mixer = AudioMixerImpl::Create();
2.2 Audio processing (webrtc::AudioProcessing): audio_state_config.audio_processing = AudioProcessingBuilder().Create();
2.3 Audio device: audio_state_config.audio_device_module = audio_device;
2.4 Create the audio state: send_call_config->audio_state = AudioState::Create(audio_state_config);
This step creates the AudioTransportImpl inside AudioState; AudioTransport is the bridge between audio capture/playout and audio processing.

AudioState::AudioState(const AudioState::Config& config)
    : config_(config),
      audio_transport_(config_.audio_mixer,
                       config_.audio_processing.get(),
                       config_.async_audio_processing_factory.get()) {
  process_thread_checker_.Detach();
  RTC_DCHECK(config_.audio_mixer);
  RTC_DCHECK(config_.audio_device_module);
}

class AudioTransport {
 public:
  // The ADM pushes PCM captured from the microphone into the APM through
  // this callback.
  virtual int32_t RecordedDataIsAvailable(const void* audioSamples,
                                          const size_t nSamples,
                                          const size_t nBytesPerSample,
                                          const size_t nChannels,
                                          const uint32_t samplesPerSec,
                                          const uint32_t totalDelayMS,
                                          const int32_t clockDrift,
                                          const uint32_t currentMicLevel,
                                          const bool keyPressed,
                                          uint32_t& newMicLevel) = 0;  // NOLINT

  // During playout, the ADM pulls PCM data through this callback.
  // Implementation has to setup safe values for all specified out parameters.
  virtual int32_t NeedMorePlayData(const size_t nSamples,
                                   const size_t nBytesPerSample,
                                   const size_t nChannels,
                                   const uint32_t samplesPerSec,
                                   void* audioSamples,
                                   size_t& nSamplesOut,  // NOLINT
                                   int64_t* elapsed_time_ms,
                                   int64_t* ntp_time_ms) = 0;  // NOLINT

  // Method to pull mixed render audio data from all active VoE channels.
  // The data will not be passed as reference for audio processing internally.
  virtual void PullRenderData(int bits_per_sample,
                              int sample_rate,
                              size_t number_of_channels,
                              size_t number_of_frames,
                              void* audio_data,
                              int64_t* elapsed_time_ms,
                              int64_t* ntp_time_ms) = 0;

 protected:
  virtual ~AudioTransport() {}
};

2.5 Initialize the capture/playout device: audio_device->Init();
2.6 Register the data callback that links the device to the audio pipeline: audio_device->RegisterAudioCallback(send_call_config->audio_state->audio_transport())
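The hookup in 2.6 is a plain callback registration: the device module only knows an abstract transport interface, and AudioState supplies the implementation that forwards captured PCM onward. A self-contained mini-model of the pattern (all types below are illustrative stand-ins, not WebRTC classes):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Stand-in for the AudioTransport role: receives captured PCM from the device.
class AudioCallback {
 public:
  virtual ~AudioCallback() = default;
  // Corresponds to AudioTransport::RecordedDataIsAvailable().
  virtual void OnRecordedData(const int16_t* samples, size_t count) = 0;
};

// Stand-in for the ADM: it only holds a pointer to the registered callback.
class MiniAudioDevice {
 public:
  int RegisterAudioCallback(AudioCallback* cb) {
    cb_ = cb;
    return 0;  // 0 == success, matching the real ADM convention
  }
  // A real ADM would be driven by the OS audio thread.
  void SimulateCapture(const std::vector<int16_t>& pcm) {
    if (cb_) cb_->OnRecordedData(pcm.data(), pcm.size());
  }
 private:
  AudioCallback* cb_ = nullptr;
};

// Stand-in for AudioTransportImpl: counts samples instead of running the APM.
class MiniAudioTransport : public AudioCallback {
 public:
  void OnRecordedData(const int16_t* samples, size_t count) override {
    captured_samples_ += count;  // a real impl would mix/process/encode here
  }
  size_t captured_samples() const { return captured_samples_; }
 private:
  size_t captured_samples_ = 0;
};
```

This mirrors why the ADM must be initialized before the callback is injected: until RegisterAudioCallback is called, captured data has nowhere to go.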

2.2.5 Creating the Calls

CreateCalls(send_call_config, recv_call_config);
Two Calls are used here, one for receiving and one for sending:

void CallTest::CreateSenderCall(const Call::Config& config) {
  auto sender_config = config;
  sender_config.task_queue_factory = task_queue_factory_.get();
  sender_config.network_state_predictor_factory =
      network_state_predictor_factory_.get();
  sender_config.network_controller_factory = network_controller_factory_.get();
  sender_config.trials = &field_trials_;
  sender_call_.reset(Call::Create(sender_config));
}

void CallTest::CreateReceiverCall(const Call::Config& config) {
  auto receiver_config = config;
  receiver_config.task_queue_factory = task_queue_factory_.get();
  receiver_config.trials = &field_trials_;
  receiver_call_.reset(Call::Create(receiver_config));
}

2.2.6 Creating the network transports

send_transport = CreateSendTransport();
recv_transport = CreateReceiveTransport();

This transport is plain network I/O: it sends and receives RTP over UDP, providing end-to-end packet delivery.

class DirectTransport : public Transport {
 public:
  ...
  // TODO(holmer): Look into moving this to the constructor.
  virtual void SetReceiver(PacketReceiver* receiver);

  bool SendRtp(const uint8_t* data,
               size_t length,
               const PacketOptions& options) override;
  bool SendRtcp(const uint8_t* data, size_t length) override;
  ...
};
  • After media is encoded and packetized into RTP, these Send* methods are called to send it
  • When RTP/RTCP arrives, the PacketReceiver is invoked to demux the RTP
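The Transport/PacketReceiver contract can be sketched with stub types (not the real webrtc::Transport): SendRtp hands the packet to whatever receiver was wired in via SetReceiver, which is exactly how the demo loops the sender's packets into the receiver Call.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Stand-in for webrtc::PacketReceiver: stores delivered packets.
class PacketReceiverStub {
 public:
  void DeliverPacket(const uint8_t* data, size_t length) {
    packets_.emplace_back(data, data + length);
  }
  size_t packet_count() const { return packets_.size(); }
 private:
  std::vector<std::vector<uint8_t>> packets_;
};

// Stand-in for test::DirectTransport: an in-process "network".
class DirectTransportStub {
 public:
  void SetReceiver(PacketReceiverStub* receiver) { receiver_ = receiver; }
  bool SendRtp(const uint8_t* data, size_t length) {
    if (!receiver_) return false;
    receiver_->DeliverPacket(data, length);  // deliver directly, no socket
    return true;
  }
 private:
  PacketReceiverStub* receiver_ = nullptr;
};
```

In the demo, send_transport->SetReceiver(receiver_call_->Receiver()) and recv_transport->SetReceiver(sender_call_->Receiver()) cross-wire two such transports.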

2.2.7 Configuring video encoding

The code is in VideoQualityTest::SetupVideo.
The key parts, with commentary:

// Network-related: payload type, NACK, etc.
std::vector<VideoSendStream::Config> video_send_configs_;
// Encoder-related.
std::vector<VideoEncoderConfig> video_encoder_configs_;
std::vector<VideoReceiveStream::Config> video_receive_configs_;

void VideoQualityTest::SetupVideo(Transport* send_transport,
                                  Transport* recv_transport) {
  ......
  video_receive_configs_.clear();
  video_send_configs_.clear();
  video_encoder_configs_.clear();
  // One video source can be sent as multiple streams (simulcast). Unlike SVC
  // this is not layered coding: the source is encoded at several resolutions.
  // Normally only one stream is used.
  for (size_t video_idx = 0; video_idx < num_video_streams_; ++video_idx) {
    // Set the network transport for the video stream.
    video_send_configs_.push_back(VideoSendStream::Config(send_transport));
    video_encoder_configs_.push_back(VideoEncoderConfig());
    num_video_substreams = params_.ss[video_idx].streams.size();
    RTC_CHECK_GT(num_video_substreams, 0);
    for (size_t i = 0; i < num_video_substreams; ++i) {
      // RTP SSRCs identify the RTP streams; see the RTP spec for details.
      video_send_configs_[video_idx].rtp.ssrcs.push_back(
          kVideoSendSsrcs[total_streams_used + i]);
    }
    ......
    // The RTP payload type tells the receiver how to reassemble frames and
    // which decoder to select.
    if (params_.video[video_idx].codec == "H264") {
      payload_type = kPayloadTypeH264;
    }
    ......
    // Encoder factory.
    video_send_configs_[video_idx].encoder_settings.encoder_factory =
        (video_idx == 0) ? &video_encoder_factory_with_analyzer_
                         : &video_encoder_factory_;
    // Bitrate allocator factory (rate control).
    video_send_configs_[video_idx].encoder_settings.bitrate_allocator_factory =
        video_bitrate_allocator_factory_.get();
    ......
    // Codec name.
    video_send_configs_[video_idx].rtp.payload_name =
        params_.video[video_idx].codec;
    // RTP payload type.
    video_send_configs_[video_idx].rtp.payload_type = payload_type;
    // NACK history length; a non-zero value enables NACK.
    video_send_configs_[video_idx].rtp.nack.rtp_history_ms = kNackRtpHistoryMs;
    video_send_configs_[video_idx].rtp.rtx.payload_type = kSendRtxPayloadType;
    // Send-side bandwidth estimation; transport-cc requires this extension.
    // The extension IDs must match the receive side, or it may crash.
    if (params_.call.send_side_bwe) {
      video_send_configs_[video_idx].rtp.extensions.emplace_back(
          RtpExtension::kTransportSequenceNumberUri,
          kTransportSequenceNumberExtensionId);
    } else {
      // Receive-side estimation; RTT can still be computed receive-only.
      video_send_configs_[video_idx].rtp.extensions.emplace_back(
          RtpExtension::kAbsSendTimeUri, kAbsSendTimeExtensionId);
    }
    ......
    // Encoder parameters.
    video_encoder_configs_[video_idx].video_format.name =
        params_.video[video_idx].codec;
    video_encoder_configs_[video_idx].max_bitrate_bps = 0;
    for (size_t i = 0; i < params_.ss[video_idx].streams.size(); ++i) {
      video_encoder_configs_[video_idx].max_bitrate_bps +=
          params_.ss[video_idx].streams[i].max_bitrate_bps;
    }
    ......
    video_encoder_configs_[video_idx].video_stream_factory =
        rtc::make_ref_counted<cricket::EncoderStreamFactory>(
            params_.video[video_idx].codec,
            params_.ss[video_idx].streams[0].max_qp,
            params_.screenshare[video_idx].enabled, true);

2.2.8 Creating the video streams

void CallTest::CreateVideoStreams() {
  RTC_DCHECK(video_receive_streams_.empty());
  CreateVideoSendStreams();
  for (size_t i = 0; i < video_receive_configs_.size(); ++i) {
    video_receive_streams_.push_back(receiver_call_->CreateVideoReceiveStream(
        video_receive_configs_[i].Copy()));
  }
}

void CallTest::CreateVideoSendStreams() {
  ......
  for (size_t i : streams_creation_order) {
    if (fec_controller_factory_.get()) {
      video_send_streams_[i] = sender_call_->CreateVideoSendStream(
          video_send_configs_[i].Copy(), video_encoder_configs_[i].Copy(),
          fec_controller_factory_->CreateFecController());
    } else {
      video_send_streams_[i] = sender_call_->CreateVideoSendStream(
          video_send_configs_[i].Copy(), video_encoder_configs_[i].Copy());
    }
  }
  ......
}

The Call object creates both the video send streams and the video receive streams.

2.2.9 Creating the video capturers

1. CreateCapturers();
2. Connect the capturers to the streams:

void CallTest::ConnectVideoSourcesToStreams() {
  for (size_t i = 0; i < video_sources_.size(); ++i)
    video_send_streams_[i]->SetSource(video_sources_[i].get(),
                                      degradation_preference_);
}

2.2.10 Configuring and creating audio send/receive streams

SetupAudio(send_transport.get());

void VideoQualityTest::SetupAudio(Transport* transport) {
  AudioSendStream::Config audio_send_config(transport);
  audio_send_config.rtp.ssrc = kAudioSendSsrc;
  // Add extension to enable audio send side BWE, and allow audio bit rate
  // adaptation.
  audio_send_config.rtp.extensions.clear();
  audio_send_config.send_codec_spec = AudioSendStream::Config::SendCodecSpec(
      kAudioSendPayloadType,
      {"OPUS", 48000, 2,
       {{"usedtx", (params_.audio.dtx ? "1" : "0")}, {"stereo", "1"}}});
  if (params_.call.send_side_bwe) {
    audio_send_config.rtp.extensions.push_back(webrtc::RtpExtension(
        webrtc::RtpExtension::kTransportSequenceNumberUri,
        kTransportSequenceNumberExtensionId));
    audio_send_config.min_bitrate_bps = kOpusMinBitrateBps;
    audio_send_config.max_bitrate_bps = kOpusBitrateFbBps;
    audio_send_config.send_codec_spec->transport_cc_enabled = true;
    // Only allow ANA when send-side BWE is enabled.
    audio_send_config.audio_network_adaptor_config = params_.audio.ana_config;
  }
  audio_send_config.encoder_factory = audio_encoder_factory_;
  SetAudioConfig(audio_send_config);

  // A/V sync: audio and video streams that set the same sync-group string
  // are paired for synchronization.
  std::string sync_group;
  if (params_.video[0].enabled && params_.audio.sync_video)
    sync_group = kSyncGroup;

  CreateMatchingAudioConfigs(transport, sync_group);
  CreateAudioStreams();
}

AudioReceiveStream::Config CallTest::CreateMatchingAudioConfig(
    const AudioSendStream::Config& send_config,
    rtc::scoped_refptr<AudioDecoderFactory> audio_decoder_factory,
    Transport* transport,
    std::string sync_group) {
  AudioReceiveStream::Config audio_config;
  audio_config.rtp.local_ssrc = kReceiverLocalAudioSsrc;
  audio_config.rtcp_send_transport = transport;
  // SSRC of the stream to receive; only this one matching stream is accepted.
  audio_config.rtp.remote_ssrc = send_config.rtp.ssrc;
  audio_config.rtp.transport_cc =
      send_config.send_codec_spec
          ? send_config.send_codec_spec->transport_cc_enabled
          : false;
  audio_config.rtp.extensions = send_config.rtp.extensions;
  audio_config.decoder_factory = audio_decoder_factory;
  // Must match the sender: payload type mapped to the corresponding decoder.
  audio_config.decoder_map = {{kAudioSendPayloadType, {"opus", 48000, 2}}};
  // A/V sync pairing.
  audio_config.sync_group = sync_group;
  return audio_config;
}

Creating the audio send and receive streams:

void CallTest::CreateAudioStreams() {
  RTC_DCHECK(audio_send_stream_ == nullptr);
  RTC_DCHECK(audio_receive_streams_.empty());
  audio_send_stream_ = sender_call_->CreateAudioSendStream(audio_send_config_);
  for (size_t i = 0; i < audio_receive_configs_.size(); ++i) {
    audio_receive_streams_.push_back(
        receiver_call_->CreateAudioReceiveStream(audio_receive_configs_[i]));
  }
}

2.2.11 Start

void CallTest::Start() {
  StartVideoStreams();
  if (audio_send_stream_) {
    audio_send_stream_->Start();
  }
  for (AudioReceiveStream* audio_recv_stream : audio_receive_streams_)
    audio_recv_stream->Start();
}

void CallTest::StartVideoStreams() {
  for (VideoSendStream* video_send_stream : video_send_streams_)
    video_send_stream->Start();
  for (VideoReceiveStream* video_recv_stream : video_receive_streams_)
    video_recv_stream->Start();
}

This simply calls Start() on each stream.

2.3 Additional notes

Let's analyze the roles of the main member variables:

std::vector<std::unique_ptr<rtc::VideoSourceInterface<VideoFrame>>>
    thumbnail_capturers_;
Clock* const clock_;
const std::unique_ptr<TaskQueueFactory> task_queue_factory_;
RtcEventLogFactory rtc_event_log_factory_;

test::FunctionVideoDecoderFactory video_decoder_factory_;
std::unique_ptr<VideoDecoderFactory> decoder_factory_;
test::FunctionVideoEncoderFactory video_encoder_factory_;
test::FunctionVideoEncoderFactory video_encoder_factory_with_analyzer_;
std::unique_ptr<VideoBitrateAllocatorFactory>
    video_bitrate_allocator_factory_;
std::unique_ptr<VideoEncoderFactory> encoder_factory_;
std::vector<VideoSendStream::Config> thumbnail_send_configs_;
std::vector<VideoEncoderConfig> thumbnail_encoder_configs_;
std::vector<VideoSendStream*> thumbnail_send_streams_;
std::vector<VideoReceiveStream::Config> thumbnail_receive_configs_;
std::vector<VideoReceiveStream*> thumbnail_receive_streams_;

1. Video capture

std::vector<std::unique_ptr<rtc::VideoSourceInterface<VideoFrame>>> thumbnail_capturers_;

This is effectively the link between the VCM capturer and the encoder: YUV frames produced by the capturer are handed off to the downstream encoder.

template <typename VideoFrameT>
class VideoSourceInterface {
 public:
  virtual ~VideoSourceInterface() = default;

  virtual void AddOrUpdateSink(VideoSinkInterface<VideoFrameT>* sink,
                               const VideoSinkWants& wants) = 0;
  // RemoveSink must guarantee that at the time the method returns,
  // there is no current and no future calls to VideoSinkInterface::OnFrame.
  virtual void RemoveSink(VideoSinkInterface<VideoFrameT>* sink) = 0;
};

AddOrUpdateSink establishes the YUV data flow from source to sink:

template <typename VideoFrameT>
class VideoSinkInterface {
 public:
  virtual ~VideoSinkInterface() = default;

  virtual void OnFrame(const VideoFrameT& frame) = 0;
  // Should be called by the source when it discards the frame due to rate
  // limiting.
  virtual void OnDiscardedFrame() {}
};

2. Video decoding

test::FunctionVideoDecoderFactory video_decoder_factory_;
std::unique_ptr<VideoDecoderFactory> decoder_factory_;

VideoDecoderFactory is the factory for creating decoders:

class RTC_EXPORT VideoDecoderFactory {
 public:
  struct CodecSupport {
    bool is_supported = false;
    bool is_power_efficient = false;
  };

  virtual std::vector<SdpVideoFormat> GetSupportedFormats() const = 0;

  virtual CodecSupport QueryCodecSupport(const SdpVideoFormat& format,
                                         bool reference_scaling) const {
    CodecSupport codec_support;
    codec_support.is_supported =
        !reference_scaling && format.IsCodecInList(GetSupportedFormats());
    return codec_support;
  }

  virtual CodecSupport QueryCodecSupport(
      const SdpVideoFormat& format,
      absl::optional<std::string> scalability_mode) const {
    CodecSupport codec_support;
    if (!scalability_mode) {
      codec_support.is_supported = format.IsCodecInList(GetSupportedFormats());
    }
    return codec_support;
  }

  // Creates a VideoDecoder for the specified format.
  virtual std::unique_ptr<VideoDecoder> CreateVideoDecoder(
      const SdpVideoFormat& format) = 0;

  virtual ~VideoDecoderFactory() {}
};

The most important method is:

virtual std::unique_ptr<VideoDecoder> CreateVideoDecoder(
    const SdpVideoFormat& format) = 0;

Where it is used: first, during initialization:

VideoQualityTest::VideoQualityTest(
    std::unique_ptr<InjectionComponents> injection_components)
    ...... {
  if (injection_components_ == nullptr) {
    injection_components_ = std::make_unique<InjectionComponents>();
  }
  if (injection_components_->video_decoder_factory != nullptr) {
    decoder_factory_ = std::move(injection_components_->video_decoder_factory);
  } else {
    decoder_factory_ = std::make_unique<InternalDecoderFactory>();
  }
  ......

Then, when creating a decoder:

std::unique_ptr<VideoDecoder> VideoQualityTest::CreateVideoDecoder(
    const SdpVideoFormat& format) {
  std::unique_ptr<VideoDecoder> decoder;
  if (format.name == "multiplex") {
    decoder = std::make_unique<MultiplexDecoderAdapter>(
        decoder_factory_.get(), SdpVideoFormat(cricket::kVp9CodecName));
  } else if (format.name == "FakeCodec") {
    decoder = webrtc::FakeVideoDecoderFactory::CreateVideoDecoder();
  } else {
    decoder = decoder_factory_->CreateVideoDecoder(format);
  }
  ......

Looking at the InternalDecoderFactory implementation, it creates a decoder based on the codec name:

std::unique_ptr<VideoDecoder> InternalDecoderFactory::CreateVideoDecoder(
    const SdpVideoFormat& format) {
  if (!format.IsCodecInList(GetSupportedFormats())) {
    return nullptr;
  }
  if (absl::EqualsIgnoreCase(format.name, cricket::kVp8CodecName))
    return VP8Decoder::Create();
  if (absl::EqualsIgnoreCase(format.name, cricket::kVp9CodecName))
    return VP9Decoder::Create();
  if (absl::EqualsIgnoreCase(format.name, cricket::kH264CodecName))
    return H264Decoder::Create();
  if (kIsLibaomAv1DecoderSupported &&
      absl::EqualsIgnoreCase(format.name, cricket::kAv1CodecName))
    return CreateLibaomAv1Decoder();
  RTC_NOTREACHED();
  return nullptr;
}
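The dispatch above boils down to "map a codec name to a creator function". A self-contained sketch of that pattern (FakeDecoder and the registry below are illustrative, not WebRTC types):

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <memory>
#include <string>

// Illustrative decoder interface.
struct DecoderIface {
  virtual ~DecoderIface() = default;
  virtual std::string Name() const = 0;
};

struct FakeDecoder : DecoderIface {
  explicit FakeDecoder(std::string name) : name_(std::move(name)) {}
  std::string Name() const override { return name_; }
  std::string name_;
};

// A factory keyed by codec name, like InternalDecoderFactory's if-chain but
// table-driven.
class NameKeyedDecoderFactory {
 public:
  using Creator = std::function<std::unique_ptr<DecoderIface>()>;
  void Register(const std::string& codec, Creator creator) {
    creators_[codec] = std::move(creator);
  }
  // Returns nullptr for unsupported formats, as the real factory does.
  std::unique_ptr<DecoderIface> CreateVideoDecoder(const std::string& codec) {
    auto it = creators_.find(codec);
    return it == creators_.end() ? nullptr : it->second();
  }
 private:
  std::map<std::string, Creator> creators_;
};
```

A table-driven registry like this is one way such a factory can be extended at runtime, something the hard-coded if-chain cannot do.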

3. Video encoder factory

VideoEncoderFactory encoder_factory_
This works almost the same way as the decoder factory:

if (injection_components_->video_encoder_factory != nullptr) {
  encoder_factory_ = std::move(injection_components_->video_encoder_factory);
} else {
  encoder_factory_ = std::make_unique<InternalEncoderFactory>();
}

Creating an encoder:

std::unique_ptr<VideoEncoder> VideoQualityTest::CreateVideoEncoder(
    const SdpVideoFormat& format,
    VideoAnalyzer* analyzer) {
  std::unique_ptr<VideoEncoder> encoder;
  if (format.name == "VP8") {
    encoder =
        std::make_unique<EncoderSimulcastProxy>(encoder_factory_.get(), format);
  } else if (format.name == "multiplex") {
    encoder = std::make_unique<MultiplexEncoderAdapter>(
        encoder_factory_.get(), SdpVideoFormat(cricket::kVp9CodecName));
  } else if (format.name == "FakeCodec") {
    encoder = webrtc::FakeVideoEncoderFactory::CreateVideoEncoder();
  } else {
    encoder = encoder_factory_->CreateVideoEncoder(format);
  }

The internal implementation:

std::unique_ptr<VideoEncoder> InternalEncoderFactory::CreateVideoEncoder(
    const SdpVideoFormat& format) {
  if (absl::EqualsIgnoreCase(format.name, cricket::kH264CodecName))
    return H264Encoder::Create(cricket::VideoCodec(format));
  if (absl::EqualsIgnoreCase(format.name, cricket::kVp8CodecName))
    return VP8Encoder::Create();
  if (absl::EqualsIgnoreCase(format.name, cricket::kVp9CodecName))
    return VP9Encoder::Create(cricket::VideoCodec(format));
  if (kIsLibaomAv1EncoderSupported &&
      absl::EqualsIgnoreCase(format.name, cricket::kAv1CodecName))
    return CreateLibaomAv1Encoder();
  RTC_LOG(LS_ERROR) << "Trying to created encoder of unsupported format "
                    << format.name;
  return nullptr;
}

test::FunctionVideoEncoderFactory video_encoder_factory_;
This class simply forwards the call to a closure:

class FunctionVideoEncoderFactory final : public VideoEncoderFactory {
 public:
  explicit FunctionVideoEncoderFactory(
      std::function<std::unique_ptr<VideoEncoder>()> create)
      : create_([create = std::move(create)](const SdpVideoFormat&) {
          return create();
        }) {}
  explicit FunctionVideoEncoderFactory(
      std::function<std::unique_ptr<VideoEncoder>(const SdpVideoFormat&)>
          create)
      : create_(std::move(create)) {}

  // Unused by tests.
  std::vector<SdpVideoFormat> GetSupportedFormats() const override {
    RTC_NOTREACHED();
    return {};
  }

  std::unique_ptr<VideoEncoder> CreateVideoEncoder(
      const SdpVideoFormat& format) override {
    return create_(format);
  }

 private:
  const std::function<std::unique_ptr<VideoEncoder>(const SdpVideoFormat&)>
      create_;
};

// Usage in VideoQualityTest's constructor initializer list:
video_decoder_factory_([this](const SdpVideoFormat& format) {
  return this->CreateVideoDecoder(format);
}),
video_encoder_factory_([this](const SdpVideoFormat& format) {
  return this->CreateVideoEncoder(format, nullptr);
}),
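The same trick can be shown stand-alone: the "factory" just stores a std::function, so any lambda — here one capturing `this` — becomes a factory without writing a new class. All types below are illustrative stand-ins, not WebRTC classes:

```cpp
#include <cassert>
#include <functional>
#include <memory>
#include <string>

// Illustrative stand-in for a video encoder.
struct Encoder {
  std::string format;
};

// Mirrors FunctionVideoEncoderFactory: forwards creation to a closure.
class FunctionEncoderFactory {
 public:
  using Create = std::function<std::unique_ptr<Encoder>(const std::string&)>;
  explicit FunctionEncoderFactory(Create create) : create_(std::move(create)) {}
  std::unique_ptr<Encoder> CreateVideoEncoder(const std::string& format) {
    return create_(format);  // forward to the captured closure
  }
 private:
  const Create create_;
};

// Mirrors how VideoQualityTest routes factory calls to its own member
// function via a this-capturing lambda in the initializer list.
class QualityTest {
 public:
  QualityTest()
      : encoder_factory_([this](const std::string& format) {
          return this->CreateVideoEncoder(format);
        }) {}
  std::unique_ptr<Encoder> CreateVideoEncoder(const std::string& format) {
    ++encoders_created_;  // the member function sees all creations
    return std::make_unique<Encoder>(Encoder{format});
  }
  FunctionEncoderFactory encoder_factory_;
  int encoders_created_ = 0;
};
```

This indirection lets the test intercept every encoder creation (e.g. to wrap it in an analyzer) while the production code only ever sees the factory interface.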
