Opened 8 years ago

#20 new defect

Section 12.2: Multiple Sources

Reported by: bernard_aboba@… Owned by: draft-ietf-rtcweb-rtp-usage@…
Priority: major Milestone: milestone1
Component: rtp-usage Version: 1.0
Severity: Active WG Document Keywords:


12.2. Multiple Sources

A WebRTC end-point might have multiple cameras, microphones, or audio
inputs, and thus a single end-point can source multiple RTP media
streams of the same media type concurrently. Even if an end-point
does not have multiple media sources of the same media type, it still
has to support transmission using multiple SSRCs concurrently in the
same RTP session. This is due to the requirement on a WebRTC
end-point to support multiple media types in one RTP session. For
example, one audio and one video source can result in the end-point
sending on two different SSRCs in the same RTP session. As
multi-party conferences are supported, as discussed below in
Section 12.3, a WebRTC end-point will need to be capable of
receiving, decoding, and playing out multiple RTP media streams of
the same type concurrently.
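The paragraph above implies that a receiver must separate packets of a single RTP session into per-SSRC streams before decoding. A minimal sketch of that demultiplexing step (Python; the packet layout follows the RFC 3550 fixed header, and the helper names are illustrative, not from the draft):

```python
import struct
from collections import defaultdict

def ssrc_of(packet: bytes) -> int:
    """Extract the SSRC field from an RTP fixed header.

    Per RFC 3550 the SSRC occupies bytes 8-11 of the 12-byte
    fixed header, in network byte order.
    """
    return struct.unpack("!I", packet[8:12])[0]

def demux(packets):
    """Group packets received in one RTP session into per-SSRC streams.

    Each list in the result would feed a separate decoder/playout
    chain, e.g. one for the audio SSRC and one for the video SSRC.
    """
    streams = defaultdict(list)
    for p in packets:
        streams[ssrc_of(p)].append(p)
    return streams
```

This only illustrates the fan-out by SSRC; a real end-point would additionally map each SSRC to a payload type and media type via signalling.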

tbd: Are any mechanisms needed to signal limitations in the number of
active SSRCs that an end-point can handle?

[BA] There are some additional nasty problems that come up here in the case where there are multiple sources that are each utilizing simulcast/layered coding. Among other things, there needs to be a way in RTP (not just SDP) to make it clear how the RTP streams relate. For example, imagine a mixer that is receiving simulcast/layered coding from multiple sources. What set of SSRCs issue from that mixer, and what is contained in the SDES packet that lets a receiver know that the multiple SSRCs represent layered coding from a single source? I presume the SRCNAME document was one attempt at handling this problem, but since that didn't become a WG work item, we're still left with an unfilled need. And no, this isn't handled in any of the GRUMBLE/FUMBLE/MUMBLE proposals.
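For context on the RTP-level grouping the comment asks about: today the only standardized SDES grouping is the CNAME item, which ties multiple SSRCs to one synchronization context but says nothing about simulcast/layer relationships (which is exactly the gap SRCNAME was meant to fill). A minimal sketch of how a mixer could emit such an SDES packet (Python, following the RFC 3550 SDES format; the function name and the choice to share one CNAME across SSRCs are illustrative assumptions):

```python
import struct

def sdes_packet(ssrcs, cname):
    """Build an RTCP SDES packet (RFC 3550, PT=202) with one chunk per
    SSRC, each carrying the same CNAME item.

    A receiver seeing the shared CNAME can infer the SSRCs belong to
    one source, but NOT how they relate as simulcast/layered streams;
    no registered SDES item conveys that (the unadopted SRCNAME
    proposal would have).
    """
    text = cname.encode("ascii")
    chunks = b""
    for ssrc in ssrcs:
        chunk = struct.pack("!I", ssrc)        # SSRC of this chunk
        chunk += bytes([1, len(text)]) + text  # CNAME item (type=1)
        chunk += b"\x00"                       # END item terminates the list
        while len(chunk) % 4:                  # pad chunk to 32-bit boundary
            chunk += b"\x00"
        chunks += chunk
    # Header: V=2, P=0, SC=chunk count; PT=202; length in 32-bit words - 1
    header = struct.pack("!BBH", 0x80 | len(ssrcs), 202,
                         (4 + len(chunks)) // 4 - 1)
    return header + chunks
```

The limitation the sketch makes visible is the one the comment raises: everything here groups SSRCs by source identity only, so the layer/simulcast structure still has to come from somewhere else.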

Change History (0)
