This document provides an overview of operational networking issues that pertain to quality of experience in delivery of video and other high-bitrate media over the internet.¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 13 January 2021.¶
Copyright (c) 2020 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.¶
As the internet has grown, an increasingly large share of the traffic delivered to end users has become video. Estimates put the total share of internet video traffic at 75% in 2019, expected to grow to 82% by 2022. What's more, this estimate projects the gross volume of video traffic will more than double during this time, based on a compound annual growth rate continuing at 34% (from Appendix D of [CVNI]).¶
In many contexts, video traffic can be handled transparently as generic application-level traffic. However, as the volume of video traffic continues to grow, it is becoming increasingly important to consider the effects of network design decisions on application-level performance, particularly on video delivery.¶
This document aims to provide a taxonomy of networking issues as they relate to quality of experience in internet video delivery. The focus is on capturing characteristics of video delivery that have surprised network designers or transport experts without specific video expertise, since these highlight key differences between common assumptions in existing networking documents and observations of video delivery issues in practice.¶
Making specific recommendations for mitigating these issues is out of scope, though some existing mitigations are mentioned in passing. The intent is to provide a point of reference for future solution proposals to use in describing how new technologies address or avoid these existing observed problems.¶
Note to RFC Editor: Please remove this section and its subsections before publication.¶
This section provides references to make it easier to review the development of, and discussion on, the draft so far.¶
This document is in the Github repository at:¶
https://github.com/ietf-wg-mops/draft-ietf-mops-streaming-opcons¶
Readers are welcome to open issues and send pull requests for this document.¶
Substantial discussion of this document should take place on the MOPS working group mailing list (mops@ietf.org).¶
Contributions are solicited regarding issues and considerations that have an impact on media streaming operations.¶
Please note that contributions may be merged and substantially edited, and as a reminder, please carefully consider the Note Well before contributing: https://datatracker.ietf.org/submit/note-well/¶
Contributions can be emailed to mops@ietf.org, submitted as issues to the issue tracker of the repository in Section 1.1.1, or emailed to the document authors at draft-ietf-mops-streaming-opcons@ietf.org.¶
Contributors describing an issue not yet addressed in the draft are requested to provide the following information, where applicable:¶
a short description of the nature of the issue and its impact on media quality of service, including:¶
a list of known mitigation techniques, with (for each known mitigation):¶
Video bitrate selection depends on many variables. Different providers give different guidelines, but an equation that approximately matches the bandwidth requirement estimates from several video providers is given in [MSOD]:¶
Kbps = (HEIGHT * WIDTH * FRAME_RATE) / (15 * 1024)¶
Height and width are in pixels, and frame rate is in frames per second. The actual bitrate required for a specific video will also depend on the codec used, the desired fidelity, and other characteristics of the video itself, such as the amount and frequency of high-detail motion, which may influence the compressibility of the content; but this equation provides a rough estimate.¶
Here are a few common resolutions used for video content, with their typical per-user bandwidth requirements according to this formula:¶
Name | Width x Height | Approximate Bitrate at 60 fps |
---|---|---|
DVD | 720 x 480 | 1.3 Mbps |
720p (1K) | 1280 x 720 | 3.6 Mbps |
1080p (2K) | 1920 x 1080 | 8.1 Mbps |
2160p (4K) | 3840 x 2160 | 32 Mbps |
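For illustration only, the following short Python sketch applies the approximation above to these resolutions (the function name and the 60 fps default are assumptions made for this example, not part of any specification):¶
   # Rough per-stream estimate: Kbps = (HEIGHT * WIDTH * FRAME_RATE) / (15 * 1024)
   def approx_bitrate_mbps(width, height, frame_rate=60):
       kbps = (height * width * frame_rate) / (15 * 1024)
       return kbps / 1000  # Kbps -> Mbps

   for name, w, h in [("DVD", 720, 480), ("720p", 1280, 720),
                      ("1080p", 1920, 1080), ("2160p", 3840, 2160)]:
       print(f"{name}: {approx_bitrate_mbps(w, h):.2f} Mbps")
   # DVD 1.35, 720p 3.60, 1080p 8.10, 2160p 32.40 (matching the table above)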
Even basic virtual reality (360-degree) videos that allow users to look around freely (referred to as three degrees of freedom, or 3DoF) require substantially larger bitrates when captured and encoded, as such videos require multiple fields of view of the scene. The typical multiplication factor is 8 to 10. Yet, thanks to smart delivery methods such as viewport-based or tile-based streaming, the whole scene does not need to be sent to the user; only the portion corresponding to the user's viewpoint is needed at any given time.¶
In more immersive applications, where basic user movement (3DoF+) or full user movement (6DoF) is allowed, the required bitrate grows even further. In this case, the immersive content is typically referred to as volumetric media. One way to represent the volumetric media is to use point clouds, where streaming a single object may easily require a bitrate of 30 Mbps or higher. Refer to [PCC] for more details.¶
The bitrate requirements in Section 2.1 are per end user actively consuming a media feed, so in the worst case, the bitrate demands can be multiplied by the number of simultaneous users to find the bandwidth requirements for a router on the delivery path with that number of users downstream. For example, at a node with 10,000 downstream users simultaneously consuming video streams, approximately 80 Gbps would be necessary for all of them to receive 1080p resolution at 60 fps.¶
However, when there is some overlap in the feeds being consumed by end users, it is sometimes possible to reduce the bandwidth provisioning requirements for the network by performing some kind of replication within the network. This can be achieved via object caching with delivery of replicated objects over individual connections, and/or by packet-level replication using multicast.¶
To the extent that replication of popular content can be performed, bandwidth requirements at peering or ingest points can be reduced to as low as a per-feed requirement instead of a per-user requirement.¶
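As a rough, hypothetical illustration of the potential savings (the user count, feed count, and per-stream bitrate below are assumed values for this example, not measurements):¶
   # 10,000 users, all watching one of 50 popular feeds at 8.1 Mbps
   # (1080p at 60 fps, per the approximation in Section 2.1).
   users, feeds, mbps_per_stream = 10_000, 50, 8.1

   per_user_gbps = users * mbps_per_stream / 1000  # no replication: ~81 Gbps
   per_feed_gbps = feeds * mbps_per_stream / 1000  # ideal replication: ~0.4 Gbps
   print(per_user_gbps, per_feed_gbps)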
TBD: pros, cons, tradeoffs of caching designs at different locations within the network?¶
Peak vs. average provisioning, and effects on peering point congestion under peak load?¶
Provisioning issues for caching systems?¶
Historical data shows that users consume more video, and at higher bitrates, than they did in the past on their connected devices. Improvements in codecs, which reduce encoding bitrates through better compression algorithms, have not offset the increase in demand for higher quality video (higher resolution, higher frame rate, wider color gamut, greater dynamic range, etc.). In particular, mobile data usage has shown a large jump over the years due to increased consumption of entertainment as well as conversational video.¶
TBD: insert charts showing historical relative data usage patterns with error bars by time of day in consumer networks?¶
Cross-ref vs. video quality by time of day in practice for some case study? Not sure if there's a good way to capture a generalized insight here, but it seems worth making the point that demand projections can be used to help with e.g. power consumption with routing architectures that provide for modular scalability.¶
Although TCP/IP has been used by a number of applications with symmetric bandwidth requirements (similar bandwidth requirements in each direction between endpoints), many widely used Internet applications operate in client-server roles with asymmetric bandwidth requirements. A common example is an HTTP GET operation, where a client sends a relatively small HTTP GET request for a resource to an HTTP server and often receives a significantly larger response carrying the requested resource. When HTTP is used to stream movie-length video, the ratio between response size and request size can become quite large.¶
For this reason, operators may pay more attention to downstream bandwidth utilization when planning and managing capacity. In addition, operators have been able to deploy access networks for end users using underlying technologies that are inherently asymmetric, favoring downstream bandwidth (e.g., ADSL, cellular technologies, most IEEE 802.11 variants), on the assumption that users will need less upstream bandwidth than downstream bandwidth. This strategy usually works, except when it does not, because application bandwidth usage patterns have changed.¶
One example of this type of change was when peer-to-peer file sharing applications gained popularity in the early 2000s. To take one well-documented case ([RFC5594]), the Bittorrent application created "swarms" of hosts, uploading and downloading files to each other, rather than communicating with a server. Bittorrent favored peers who uploaded as much as they downloaded, so that new Bittorrent users had an incentive to significantly increase their upstream bandwidth utilization.¶
The combination of the large volume of "torrents" and the peer-to-peer characteristic of swarm transfers meant that end user hosts were suddenly uploading higher volumes of traffic to more destinations than was the case before Bittorrent. This caused at least one large ISP to attempt to "throttle" these transfers, to mitigate the load that these hosts placed on their network. These efforts were met by increased use of encryption in Bittorrent, similar to an arms race, and set off discussions about "Net Neutrality" and calls for regulatory action.¶
Especially as end users increase use of video-based social networking applications, it will be helpful for access network providers to watch for increasing numbers of end users uploading significant amounts of content.¶
The causes of unpredictable usage described in Section 2.5 were more or less the result of human choices, but we were reminded during a post-IETF 107 meeting that humans are not always in control, and forces of nature can cause enormous fluctuations in traffic patterns.¶
In his talk, Sanjay Mishra [Mishra] reported that after the COVID-19 pandemic broke out in early 2020,¶
We note that other operators saw similar spikes during this time period. Craig Labovitz [Labovitz] reported¶
Adaptive Bitrate (ABR) is an application-level response strategy in which the receiving media player attempts to detect the available bandwidth of the network path, either by experiment or by observing the successful application-layer download speed, and then chooses a video bitrate (from a limited set of available options) that fits within that bandwidth. The player typically adjusts as the available bandwidth of the network changes or as the capabilities of the player change (for example, available memory, CPU, or display size).¶
The choice of bitrate occurs within the context of optimizing for some metric monitored by the video player, such as highest achievable video quality, or lowest rate of expected rebuffering events.¶
ABR playback is commonly implemented by video players using HLS [RFC8216] or DASH [DASH] to perform reliable segmented delivery of video data over HTTP. Different player implementations and receiving devices use different strategies, often proprietary algorithms (called rate adaptation or bitrate selection algorithms), to estimate or predict the available bandwidth and to select the bitrate. Most players use only passive observations, i.e., they do not generate probe traffic to measure the available bandwidth.¶
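As a simplified sketch of throughput-based bitrate selection in Python (the bitrate ladder, safety factor, and function name are assumptions made for this example; they do not describe any particular player's proprietary algorithm):¶
   # Available bitrate ladder, in Kbps (illustrative values only).
   LADDER_KBPS = [1350, 3600, 8100, 32400]

   def select_bitrate(observed_download_kbps, safety_factor=0.8):
       # Pick the highest rung that fits within a conservative estimate
       # of the recently observed application-layer download speed.
       budget = observed_download_kbps * safety_factor
       fitting = [rate for rate in LADDER_KBPS if rate <= budget]
       return fitting[-1] if fitting else LADDER_KBPS[0]

   print(select_bitrate(12000))  # ~12 Mbps of observed goodput -> 8100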
Bandwidth-measurement systems of this kind can run into trouble in several ways that are affected by networking design choices.¶
When the selected bitrate is below the available capacity of the network path, the response to a segment request will typically complete in less absolute time than the duration of the requested segment. The resulting idle time within the connection carrying the segments has a few surprising consequences:¶
A detailed investigation of this phenomenon is available in [NOSSDAV12].¶
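For example (the segment duration, encoded bitrate, and path capacity below are assumed values, chosen only to illustrate the idle time):¶
   # A 6-second segment encoded at 3.6 Mbps, fetched over a 20 Mbps path:
   segment_seconds, encoded_mbps, path_mbps = 6, 3.6, 20

   download_seconds = segment_seconds * encoded_mbps / path_mbps  # ~1.1 s
   idle_seconds = segment_seconds - download_seconds              # ~4.9 s
   print(download_seconds, idle_seconds)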
On a TCP connection with SACK support (a common case for segmented delivery in practice), the loss of a packet can provide a confusing bandwidth signal to the receiving application. Because of the sliding window in TCP, many packets may be accepted by the receiver without being made available to the application until the missing packet arrives. Upon arrival of the retransmitted packet, the receiver suddenly gains access to a lot of data at the same time.¶
To a receiver measuring bytes received per unit time at the application layer, and interpreting it as an estimate of the available network bandwidth, this appears as high jitter in the goodput measurement.¶
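A toy illustration of this effect (the timings and data volumes are invented, and real connections are far more complex):¶
   # The network delivers ~1 MB/s steadily, but a loss at t=2s holds all
   # later data in the TCP receive buffer until the retransmission arrives
   # at t=4s, when everything is released to the application at once.
   app_reads_mb = [1.0, 1.0, 0.0, 0.0, 3.0, 1.0]  # bytes read per 1s window

   for t, mb in enumerate(app_reads_mb):
       print(f"t={t}s: application-layer goodput {mb:.1f} MB/s")
   # The application measures 0 MB/s, then a 3 MB/s spike, even though the
   # network delivered a steady ~1 MB/s throughout.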
Active Queue Management (AQM) systems such as PIE [RFC8033] or variants of RED [RFC2309], which induce early random loss under congestion, can mitigate this effect by using ECN [RFC3168] where available. ECN provides a congestion signal that induces a similar backoff in flows using an Explicit Congestion Notification-capable transport, but because it avoids loss, it also avoids inducing head-of-line blocking effects in TCP connections.¶
In contrast to segmented delivery, several applications use UDP or unreliable SCTP to deliver RTP or raw TS-formatted video.¶
Under congestion and loss, this approach generally experiences more video artifacts but fewer delay or head-of-line blocking effects. Often a key goal is to reduce latency, in order to better support applications like videoconferencing or other live-action video with interactive components, such as some sporting events.¶
Congestion avoidance strategies for this kind of deployment vary widely in practice, ranging from streams that are entirely unresponsive, to feedback signaling that changes encoder settings (as in [RFC5762]) or uses fewer enhancement layers (as in [RFC6190]), to proprietary methods for detecting quality-of-experience issues and cutting off video.¶
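A minimal sketch of one such feedback-driven strategy, assuming a layered encoding and illustrative thresholds (the function name and numbers are assumed for this example, in the spirit of the feedback and layered-coding mechanisms of [RFC5762] and [RFC6190]):¶
   def adjust_layers(current_layers, reported_loss_rate,
                     max_layers=3, drop_threshold=0.05, add_threshold=0.01):
       # Shed an enhancement layer under heavy reported loss, and try to
       # restore quality when the path looks clean again.
       if reported_loss_rate > drop_threshold and current_layers > 1:
           return current_layers - 1
       if reported_loss_rate < add_threshold and current_layers < max_layers:
           return current_layers + 1
       return current_layers

   print(adjust_layers(3, 0.08))  # 8% reported loss -> drop to 2 layers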
This document requires no actions from IANA.¶
This document introduces no new security issues.¶
Thanks to Mark Nottingham, Glenn Deen, Dave Oran, Aaron Falk, Kyle Rose, and Leslie Daigle for their very helpful reviews and comments.¶