WebRTC simulcast: who decides which layer to choose on the receiving end?

In WebRTC, the receiving endpoint does not normally pick the layer itself. The decision is handled by the WebRTC implementation and the underlying media stack, and, when the media passes through an SFU such as mediasoup, by the media server that forwards one of the sender's simulcast layers to each receiver. Which layer is appropriate depends on factors such as network conditions, available bandwidth, and the capabilities of the receiver.

Layer selection is usually performed dynamically using feedback mechanisms and congestion control algorithms. These algorithms monitor factors such as network congestion, packet loss, available bandwidth, and receiver buffer status to determine the best layer to decode and display.
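
For example, an application can observe some of the same signals through the standard getStats() API. Here is a minimal sketch of reading receiver-side video statistics (the peerConnection argument is assumed to be an established RTCPeerConnection):

// Inspect receiver-side stats that reflect network quality and the layer actually received
async function logReceiverStats(peerConnection) {
  const report = await peerConnection.getStats();
  report.forEach(stats => {
    if (stats.type === 'inbound-rtp' && stats.kind === 'video') {
      console.log('packets lost:', stats.packetsLost);
      console.log('jitter (s):', stats.jitter);
      console.log('received resolution:', stats.frameWidth, 'x', stats.frameHeight);
      console.log('frames per second:', stats.framesPerSecond);
    }
  });
}

// Example usage: poll the stats every few seconds
// setInterval(() => logReceiverStats(peerConnection), 5000);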

The capability of the receiver also plays a role in the layer selection process. A receiver can indicate its supported decoding capabilities during the negotiation phase, such as supported video codecs and maximum resolutions. Based on this information, the sender can adjust its encoding settings and select the appropriate layer that matches the capabilities of the receiver.
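
For illustration, the browser's decoding capabilities can be queried and the offered codecs constrained before negotiation. A sketch using the standard capability APIs (the VP8 preference below is purely an example, not a recommendation from this setup):

// Query the video codecs this browser can decode
const receiverCapabilities = RTCRtpReceiver.getCapabilities('video');
console.log(receiverCapabilities.codecs.map(codec => codec.mimeType));

// Optionally constrain what a transceiver offers, e.g. preferring VP8
const pc = new RTCPeerConnection();
const recvTransceiver = pc.addTransceiver('video');
const vp8Codecs = receiverCapabilities.codecs.filter(codec => codec.mimeType === 'video/VP8');
if (vp8Codecs.length > 0) {
  recvTransceiver.setCodecPreferences(vp8Codecs);
}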

Here's a high-level overview of how the layer selection process works in this setup:

  • Signaling phase (Centrifugo): Centrifugo facilitates the exchange of signaling messages between the sender and receiver. This includes SDP negotiation and the exchange of session descriptions carrying media capabilities and preferences.
  • Media server functionality (mediasoup): As a media server, mediasoup plays an important role in determining the available video layers and their configuration. When a sender establishes a connection with mediasoup, it provides information about its media capabilities and the simulcast layers it has configured.
  • SDP negotiation (WebRTC client library): A WebRTC client library or framework integrated into the sender and receiver applications handles the SDP negotiation process. Negotiation includes exchanging session descriptions, which contain information about supported codecs, media formats, and simulcast settings.
  • Layer selection (WebRTC client library + mediasoup): The layer actually forwarded to each receiver is selected by mediasoup in cooperation with the WebRTC client library or framework in use, taking into account network conditions, available bandwidth, receiver capabilities, and the simulcast information announced to the mediasoup media server (see the sketch after this list).
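
As a concrete illustration, the sketch below shows how a sender typically announces simulcast layers through mediasoup-client and how the mediasoup server can then pin a consumer to a specific layer. Transport creation, signaling (e.g. over Centrifugo), and error handling are omitted, and the bitrates and layer numbers are illustrative assumptions rather than values from this setup:

// Client side (mediasoup-client): produce a video track with three simulcast encodings.
// sendTransport and videoTrack are assumed to have been created beforehand.
const producer = await sendTransport.produce({
  track: videoTrack,
  encodings: [
    { maxBitrate: 100000 },  // low layer
    { maxBitrate: 300000 },  // medium layer
    { maxBitrate: 900000 }   // high layer
  ],
  codecOptions: { videoGoogleStartBitrate: 1000 }
});

// Server side (mediasoup): limit what is forwarded to one consumer,
// e.g. after inspecting that consumer's bandwidth estimate.
await consumer.setPreferredLayers({ spatialLayer: 1, temporalLayer: 2 });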

Manual layer selection

The example below sketches how an application can manually prioritize one simulcast layer by activating or deactivating the sender's encodings according to measured network conditions.

// Create a WebRTC PeerConnection
const peerConnection = new RTCPeerConnection();

// Function to handle received video tracks
function handleVideoTrack(event) {
  const receivedVideoTrack = event.track;

  // Attach the received video track to a video element for rendering
  const videoElement = document.getElementById('videoElement');
  videoElement.srcObject = new MediaStream([receivedVideoTrack]);
}

// Handle the SDP negotiation and add received tracks to the PeerConnection
peerConnection.ontrack = handleVideoTrack;

// Function to dynamically adjust bitrate based on network conditions
function adaptBitrate(networkConditions) {
  // Find the transceiver that carries the outgoing video track
  const videoTransceiver = peerConnection.getTransceivers()
    .find(transceiver => transceiver.sender.track && transceiver.sender.track.kind === 'video');
  if (!videoTransceiver) {
    return;
  }

  // Get the sender's current parameters; encodings lists the configured simulcast layers
  const parameters = videoTransceiver.sender.getParameters();
  const sendEncodings = parameters.encodings;

  // Choose the appropriate video layer based on network conditions
  let selectedLayer;
  if (networkConditions.availableBandwidth < 500000) {
    selectedLayer = 'low'; // Lower bitrate for low bandwidth
  } else if (networkConditions.availableBandwidth < 1000000) {
    selectedLayer = 'medium'; // Medium bitrate for moderate bandwidth
  } else {
    selectedLayer = 'high'; // Higher bitrate for high bandwidth
  }

  // Activate only the selected layer; the rid values must match the ones
  // used when the simulcast encodings were configured on the sender
  sendEncodings.forEach(encoding => {
    encoding.active = encoding.rid === selectedLayer;
  });

  // Apply the updated parameters (setParameters expects the full object
  // returned by getParameters, not just the encodings array)
  videoTransceiver.sender.setParameters(parameters)
    .catch(error => console.error('Failed to update encodings:', error));
}

// Function to monitor network conditions and trigger adaptive bitrate control
async function monitorNetworkConditions() {
  // Replace with your own logic to monitor network conditions
  // (a getStats()-based sketch of getAvailableBandwidth follows below)
  const networkConditions = {
    availableBandwidth: await getAvailableBandwidth(),
    // Other network condition parameters...
  };

  adaptBitrate(networkConditions);
}

// Example usage: Monitor network conditions periodically
setInterval(monitorNetworkConditions, 5000);
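
The getAvailableBandwidth() call above is only a placeholder. One possible implementation (a sketch, assuming the local estimate exposed on the active candidate pair is good enough for this purpose) reads it from getStats():

// Hypothetical implementation of the placeholder above: read the local
// bandwidth estimate (availableOutgoingBitrate) from the nominated candidate pair
async function getAvailableBandwidth() {
  const report = await peerConnection.getStats();
  let availableBandwidth = Infinity;
  report.forEach(stats => {
    if (stats.type === 'candidate-pair' && stats.nominated &&
        stats.availableOutgoingBitrate !== undefined) {
      availableBandwidth = stats.availableOutgoingBitrate;
    }
  });
  return availableBandwidth;
}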

Origin blog.csdn.net/m0_60259116/article/details/131688957