Internet-Draft                 APN for Media Service            October 2023
Peng & Geng                    Expires 25 April 2024
This draft explores the requirements and benefits of carrying media metadata in the network layer (i.e. IP packets) by following the Application-aware Networking (APN) framework with its extension for the application side, and defines the specific information to be carried and its format.
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119].¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 25 April 2024.¶
Copyright (c) 2023 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.¶
Media services are highly demanding but have very wide applications, especially in the new era, such as extended reality (XR) and cloud gaming. The metaverse has been referred to in various ways as a broader implication of extended reality. To provide a more immersive experience, some advanced XR may include more modalities besides the video and audio streams, such as haptic data or sensor data. The rapid development of extended reality technology and computer graphics has created the technical basis for the development of various media services.
To improve media service performance, the necessary metadata needs to be exchanged among media applications and network devices.
The Application-aware Networking (APN) framework [I-D.li-apn-framework] defines that application-aware information (i.e. the APN attribute), including the APN identification (ID) and/or APN parameters (e.g. network performance requirements), is encapsulated at network edge devices and carried in packets traversing an APN domain in order to facilitate service provisioning and to perform fine-granularity traffic steering and network resource adjustment. [I-D.li-rtgwg-apn-app-side-framework] defines the extension of the APN framework for the application side. In this extension, the APN resources of an APN domain are allocated to applications, which compose and encapsulate the APN attribute in packets.
This draft explores the requirements and benefits of carrying media metadata in the network layer (i.e. IP packets), and defines the specific information to be carried and its format.
Necessary metadata needs to be exchanged among media applications and network devices.
The corresponding mechanisms for exchanging this metadata are also needed.
This metadata needs to be designed following the principles specified in RFC 9419 [RFC9419]. The metadata being carried needs to be minimal and compact, and to have low per-packet processing overhead for encoding and retrieval.
Extended reality (XR) refers to all real-and-virtual combined environments and human-machine interactions generated by computer technology and wearables. It includes representative forms such as AR, MR and VR, and the areas interpolated among them. To provide a more immersive experience, some advanced XR may include more modalities besides the video and audio streams, such as haptic data or sensor data.
Cloud XR migrates computing resource-intensive tasks, such as video rendering, computing acceleration and other tasks with high hardware requirements, from terminals to the data center for processing. In this way, the client acts only as a video player, which improves the mobility and flexibility of XR and greatly reduces terminal costs.
Cloud gaming deploys the game application in the data center, which carries out the game command-and-control logic as well as game acceleration, video rendering and other tasks with high requirements on chips. In this way, the terminal is only a video player, and users can get a good gaming experience without the support of high-end systems and chips.
Compared with the traditional gaming model, cloud gaming has several advantages, such as no installation, no upgrades, no repairs, quick start and reduced terminal cost, so it is easier to promote.
The term metaverse refers to a persistent, shared set of interactive perceived spaces, facilitated by integrating various new technologies such as extended reality, digital twins and blockchain. Users can produce and edit content in the metaverse, which combines the virtual world with the real world in economic systems, social systems and identity systems. The metaverse has been referred to in various ways as the broader implication of extended reality, and in diverse sectors it evokes a number of possible new experiences, products and services that may emerge once metaverse-related technologies become commonly available and find application in our work, leisure and other activities.
The rapid development of extended reality technology and computer graphics has created the technical basis for the development of the metaverse. At present, the metaverse is still in its infancy and its business model is immature.
The high-level architecture of the 5G network is depicted in the following figure.
     +----+          +-----+       +-----+      +----+
     | AMF|---NG11---| SMF |--NG7--| PCF |-NG6--| AF |
     +----+          +-----+       +-----+      +----+
       |   \            |
      NG1   NG2        NG4
       |     \          |
   +----+  +-----+     +-----+         +----+
   | UE |--| RAN |-NG3-| UPF |--NG6----| DN |
   +----+  +-----+     +-----+         +----+

              Overview of 5G Network Architecture
The 5G network includes the Radio Access Network (RAN) and the Core Network (CN). The RAN provides network access for the client over a wireless interface, i.e., the 5G NR interface.
The CN includes the user plane function (UPF) and control plane functions (CPFs). The UPF provides service delivery related functions, e.g. IP packet routing and forwarding. The CPFs provide signaling control related functions, e.g. session establishment and mobility management. The CPFs include many control plane elements, e.g. the Access and Mobility Management Function (AMF), Policy Control Function (PCF), Session Management Function (SMF) and Network Exposure Function (NEF).
Media delivery may benefit from 5G architectural functions, e.g. quality of service (QoS) and edge computing.
The 5G QoS model is based on QoS Flows. The 5G QoS model supports both QoS Flows that require guaranteed flow bit rate (GBR QoS Flows) and QoS Flows that do not require guaranteed flow bit rate (Non-GBR QoS Flows). A QoS Flow ID (QFI) is used to identify a QoS Flow in the 5G System. User Plane traffic with the same QFI receives the same traffic forwarding treatment (e.g. scheduling, admission threshold). For real time media service, e.g. the cloud VR, the 5G network may provide the necessary QoS handling with appropriate bit rate and delay.
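As an informal illustration only (not taken from the 3GPP specifications), the sketch below shows the basic idea that all user plane packets marked with the same QFI receive the same forwarding treatment; the QFI values and treatment parameters in it are hypothetical.

   # Informal sketch: packets with the same QFI receive the same
   # forwarding treatment.  QFI values and parameters are hypothetical.
   from dataclasses import dataclass
   from typing import Optional

   @dataclass
   class QosTreatment:
       gbr_mbps: Optional[float]   # None means a Non-GBR QoS Flow
       delay_budget_ms: int        # per-packet delay budget

   QFI_TABLE = {
       1: QosTreatment(gbr_mbps=100.0, delay_budget_ms=20),   # e.g. cloud VR
       9: QosTreatment(gbr_mbps=None,  delay_budget_ms=300),  # best effort
   }

   def treatment_for(qfi: int) -> QosTreatment:
       # Every packet carrying this QFI gets the same treatment.
       return QFI_TABLE[qfi]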
Edge computing enables operator and 3rd party services to be hosted close to the UE's access point of attachment, so as to achieve an efficient service delivery through the reduced end-to-end latency and load on the transport network. Edge computing can be supported by one or a combination of the following enablers:
- User plane (re)selection: the 5G Core Network (re)selects the UPF to route the user traffic to the local Data Network.
- Local Routing and Traffic Steering: the 5G Core Network selects the traffic to be routed to the applications in the local Data Network.
- Session and service continuity to enable UE and application mobility.
- The application may influence UPF (re)selection and traffic routing via the PCF or NEF.
- Network capability exposure: the 5G Core Network and the application provide information to each other via the NEF or directly.
- QoS and Charging: the PCF provides rules for QoS control and charging for the traffic routed to the local Data Network.
- Support of Local Area Data Network (LADN): the 5G Core Network provides support to connect to the LADN in a certain area where the applications are deployed.
Media traffic, e.g. cloud XR and cloud gaming, is characterized by high throughput, low latency and high reliability requirements.
Considering the user experience, cloud XR usually needs high bandwidth, e.g. 100 Mbps, due to the downlink video/haptic feedback data, and a low end-to-end latency of less than 20 ms. With the introduction of the cloud server, the transmission distance and the downlink traffic load increase compared with the traditional XR mode. Therefore, cloud XR imposes strict requirements on the latency, network bandwidth and reliability of the entire communication process.
Currently, the 5G network can only support limited XR capacity due to the high requirements on data rate, reliability and latency. As evaluated in 3GPP, one cell with 100 MHz bandwidth can support only about 5 XR users. How to improve the system capacity to support more XR users is a big challenge.
To provide a good service experience for users, XR services with real-time interaction typically require very low motion-to-photon (MTP) latency. Poor MTP latency leads to spatial disorientation, motion sickness and dizziness. How to meet this very low round-trip latency requirement over variable wireless networks is a big challenge.
All media traffic, regardless of which codec is used, has some common characteristics. These characteristics can be very useful for better transmission control and efficiency. However, the 5G System currently uses common QoS mechanisms to handle media services together with other data services, without taking full advantage of this information.
In order to cope with the challenges of media delivery, one possible approach is to let the network learn more information about the media service so that it can enhance the experience of these media services.
[I-D.li-apn-framework] proposes the framework of Application-aware Networking (APN), where application-aware information (the APN attribute), including the application-aware identification (APN ID) and application-aware parameters (APN Parameters), is encapsulated at network edge devices and carried along with the tunnel encapsulation used by the packet when traversing the APN domain. An APN domain is the operator infrastructure where APN is used from edge to edge (ingress to egress) and where the packet is encapsulated using an outer header incorporating the APN information. The APN attribute facilitates service provisioning and provides fine-granularity services in the APN domain.
[I-D.li-rtgwg-apn-app-side-framework] defines the extension of the APN framework for the application side. The APN framework can be adopted to provide more application-aware information about media services to the network. The network can then make use of this application-aware information to provide enhanced network services that improve the experience of media services.
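As a rough, non-normative illustration of the relationship between the APN ID, the APN parameters and a media-specific parameter, the sketch below models an APN attribute as a simple container; the field names and types are assumptions for illustration, not the encoding defined in [I-D.li-apn-framework].

   # Informal model of an APN attribute carried with the tunnel
   # encapsulation.  Field names and types are illustrative assumptions.
   from dataclasses import dataclass, field
   from typing import Dict

   @dataclass
   class ApnAttribute:
       apn_id: int                            # application-aware identification
       parameters: Dict[str, int] = field(default_factory=dict)

   # An edge device (or the application itself, with the application-side
   # extension) could attach a media metadata parameter like this:
   attr = ApnAttribute(apn_id=0x2A)
   attr.parameters["media-metadata"] = 0x00000000  # placeholder value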
The APN attribute can carry packet dependency information for the media service. Packets within a frame depend on each other, since the application needs all of these packets to decode the frame. Hence the loss of one packet makes the other correlated packets useless even if they are successfully transmitted.
[REQ11] APN SHOULD be extended to carry the packet dependency information.
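For illustration only, the sketch below shows how a network node might use hypothetical per-packet dependency fields (frame identifier, packet index, packet count): once one packet of a frame has been lost, the remaining packets of the same frame become candidates for early dropping under congestion.

   # Hypothetical packet dependency metadata and a drop decision based on it.
   from dataclasses import dataclass

   @dataclass
   class DependencyInfo:
       frame_id: int    # identifies the frame this packet belongs to
       pkt_index: int   # position of this packet within the frame
       pkt_count: int   # total packets needed to decode the frame

   lost_frames = set()  # frames known to have lost at least one packet

   def on_packet_lost(dep: DependencyInfo) -> None:
       lost_frames.add(dep.frame_id)

   def droppable_under_congestion(dep: DependencyInfo) -> bool:
       # Remaining packets of an already-damaged frame are useless to the
       # decoder, so they may be dropped first when the link is congested.
       return dep.frame_id in lost_frames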
Media packets have different importance. Packets of the same video stream but of different frame types (I/P frames), or even at different positions in the GoP (Group of Pictures), contribute differently to the user experience, so layered QoS handling within the video stream can potentially relax the requirements and thus lead to higher efficiency. The APN attribute can be adopted to carry information about the frame types and positions in the GoP.
[REQ21] APN SHOULD be extended to carry information about frame types and positions in the GoP.
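As an illustrative sketch only (the actual encoding is left to a future version of this draft), a node could derive a relative drop precedence from hypothetical frame-type and GoP-position fields: I-frames matter more than P-frames, and frames earlier in the GoP matter more than later ones.

   # Hypothetical layered-importance mapping from frame type and GoP position.
   FRAME_TYPE_WEIGHT = {"I": 0, "P": 1, "B": 2}   # lower value = more important

   def drop_precedence(frame_type: str, gop_position: int, gop_size: int) -> int:
       # Frames referenced by many later frames (I, early P) get the lowest
       # precedence, i.e. they should be protected first.
       return FRAME_TYPE_WEIGHT[frame_type] * gop_size + gop_position

   # Example: an I-frame at the start of a 16-frame GoP vs. a late P-frame.
   assert drop_precedence("I", 0, 16) < drop_precedence("P", 12, 16)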
XR/media traffic has natural intervals between periodic video/audio frames. It would be possible to enhance power saving mechanisms (e.g. CDRX) by taking the XR/media traffic pattern into account. The APN attribute can be used to carry such information.
[REQ31] APN SHOULD be extended to carry information about the XR/media traffic pattern.
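For illustration, assuming hypothetical traffic-pattern fields carrying the frame period and a nominal burst duration, a scheduler could estimate the idle window between frame bursts as sketched below; this is only a sketch of the idea, not a C-DRX specification.

   # Hypothetical use of a periodic media traffic pattern to find idle gaps.
   def next_idle_window(last_burst_start_ms, frame_period_ms, burst_duration_ms):
       # The link is expected to be idle from the end of the current burst
       # until the start of the next one; returns (idle_start, idle_end) in ms.
       idle_start = last_burst_start_ms + burst_duration_ms
       idle_end = last_burst_start_ms + frame_period_ms
       return idle_start, idle_end

   # Example: 60 fps video (about 16.7 ms period) with 4 ms bursts.
   print(next_idle_window(last_burst_start_ms=0.0,
                          frame_period_ms=1000.0 / 60,
                          burst_duration_ms=4.0))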
The Media Metadata parameter carries the media application-aware information for the APN traffic, used to satisfy the potential requirements raised above, e.g. packet dependency, frame types, and so on. A format example of this parameter is shown in the following diagram:
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                         Media Metadata                        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
The detailed design of this metadata parameter, derived from the use cases of APN for media services, as well as its encapsulation, will be defined in a future version of this draft.
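Since the detailed layout is left for a future version, the sketch below only illustrates one hypothetical way to pack the fields discussed above (frame type, GoP position, a dependency flag and a frame period) into the 32-bit Media Metadata parameter; the bit widths and field names are assumptions for illustration, not a proposal.

   # Hypothetical 32-bit packing of the Media Metadata parameter.
   # Assumed layout: frame_type(2) | gop_position(6) | depends_flag(1) |
   #                 frame_period_100us(10) | reserved(13)
   import struct

   def pack_media_metadata(frame_type, gop_position, depends_flag,
                           frame_period_100us):
       word = ((frame_type & 0x3) << 30) | ((gop_position & 0x3F) << 24) \
            | ((depends_flag & 0x1) << 23) | ((frame_period_100us & 0x3FF) << 13)
       return struct.pack("!I", word)          # network byte order

   def unpack_media_metadata(data):
       (word,) = struct.unpack("!I", data)
       return {
           "frame_type": (word >> 30) & 0x3,
           "gop_position": (word >> 24) & 0x3F,
           "depends_flag": (word >> 23) & 0x1,
           "frame_period_100us": (word >> 13) & 0x3FF,
       }

   # Round-trip check with example values.
   assert unpack_media_metadata(pack_media_metadata(0, 5, 1, 167)) == {
       "frame_type": 0, "gop_position": 5, "depends_flag": 1,
       "frame_period_100us": 167,
   }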
TBD.