RTCWEB Working Group                                       C.H. Holmberg
Internet-Draft                                            S.H. Hakansson
Intended status: Informational                             G.E. Eriksson
Expires: April 06, 2012                                         Ericsson
                                                         October 04, 2011
Web Real-Time Communication Use-cases and Requirements
draft-ietf-rtcweb-use-cases-and-requirements-06.txt
This document describes web based real-time communication use-cases. Based on the use-cases, the document also derives requirements related to the browser, and the API used by web applications to request and control media stream services provided by the browser.
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet- Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
This Internet-Draft will expire on April 06, 2012.
Copyright (c) 2011 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
This document presents a few use-cases of web applications that are executed in a browser and use real-time communication capabilities. Based on the use-cases, the document derives requirements related to the browser and the API used by web applications in the browser.
The requirements related to the browser are named "Fn" and are described in Section 5.2.
The requirements related to the API are named "An" and are described in Section 5.3.
The document focuses on requirements related to real-time media streams. Requirements related to privacy, signalling between the browser and web server etc. are currently not considered.
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14, RFC 2119 [RFC2119].
TBD
This section describes web based real-time communication use-cases, from which requirements are derived.
The following considerations are applicable to all use cases:
Two or more users have loaded a video communication web application into their browsers, provided by the same service provider, and logged into the service it provides. The web service publishes information about user login status by pushing updates to the web application in the browsers. When one online user selects a peer online user, a 1-1 video communication session between the browsers of the two peers is initiated. The invited user might accept or reject the session.
During session establishment a self-view is displayed, and once the session has been established the video sent from the remote peer is displayed in addition to the self-view. During the session, each user can select to remove and re-insert the self-view as often as desired. Each user can also change the sizes of his/her two video displays during the session. Each user can also pause sending of media (audio, video, or both) and mute incoming media.
It is essential that the communication cannot be eavesdropped.
Any session participant can end the session at any time.
The two users may be using communication devices of different makes, with different operating systems and browsers from different vendors.
One user has an unreliable Internet connection. It sometimes loses packets, and sometimes goes down completely.
One user is located behind a Network Address Translator (NAT).
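A minimal, non-normative sketch of the browser-side API usage for this use-case follows. It assumes the W3C getUserMedia and RTCPeerConnection APIs as they later stabilized; these exact names are not mandated by this document, and the signalling between the browsers (via the web server) is omitted.

   // Assumption: runs inside an async function; error handling and signalling omitted.
   const pc = new RTCPeerConnection();

   // Capture microphone and camera and show a self-view.
   const local = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
   const selfView = document.getElementById('self-view') as HTMLVideoElement;
   selfView.srcObject = local;

   // Send all local tracks to the peer.
   local.getTracks().forEach(track => pc.addTrack(track, local));

   // Render the remote peer's media once it arrives.
   pc.ontrack = (ev) => {
     const remoteView = document.getElementById('remote-view') as HTMLVideoElement;
     remoteView.srcObject = ev.streams[0];
   };

   // "Pause sending of media": disable outgoing video without ending the session.
   pc.getSenders().forEach(s => {
     if (s.track && s.track.kind === 'video') { s.track.enabled = false; }
   });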
F1, F2, F3, F4, F5, F6, F8, F9, F10, F20, F25, F28
A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12
This use-case is almost identical to the Simple Video Communication Service use-case (Section 4.2.1). The difference is that one of the users is behind a NAT that blocks UDP traffic.
F1, F2, F3, F4, F5, F6, F8, F9, F10, F20, F25, F28, F29
A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12
This use-case is almost identical to the Simple Video Communication Service use-case (Section 4.2.1).
What is added is that the service provider is operating over large geographical areas (or even globally).
Assuming that ICE will be used, this means that the service provider would like to be able to provide several STUN and TURN servers (via the app) to the browser; selection of which one(s) to use is part of the ICE processing. Other reasons for wanting to provide several STUN and TURN servers include support for IPv4 and IPv6, load balancing and redundancy.
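As an illustration only, and assuming the RTCConfiguration shape of the later W3C API, the application could hand the browser a list of STUN and TURN servers like the following (server names and credentials are hypothetical):

   const pc = new RTCPeerConnection({
     iceServers: [
       { urls: 'stun:stun-eu.example.com' },   // hypothetical servers
       { urls: 'stun:stun-us.example.com' },
       { urls: ['turn:turn-eu.example.com', 'turn:turn-us.example.com'],
         username: 'user', credential: 'secret' }
     ]
   });
   // Candidate gathering runs against all configured servers; which candidate
   // pair ends up being used is decided by the ICE processing in the browser.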
Note that the additional requirements derived are termed FaI/AaI where aI means "assuming ICE".
F1, F2, F3, F4, F5, F6, F8, F9, F10, F20, F25, F28
FaI1
A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12
AaI1
This use-case is similar to the Simple Video Communication Service use-case (Section 4.2.1).
What is added are aspects of using the service in enterprises. ICE is assumed in the further description of this use-case.
An enterprise that uses an RTCWEB based web application for communication desires to audit all RTCWEB based application sessions used from inside the company towards any external peer. To be able to do this, they deploy a TURN server that straddles the boundary between the internal network and the external network.
The firewall will block all attempts to use STUN with an external destination unless they go to the enterprise auditing TURN server. In cases where employees use RTCWEB applications provided by an external service provider, the enterprise still wants the traffic to stay inside its internal network and, in addition, not to load the straddling TURN server. They therefore deploy a STUN server that allows the RTCWEB client to determine its server reflexive address on the internal side, enabling peers that are both on the internal side to connect without the traffic leaving the internal network. It must be possible to configure the browsers used in the enterprise with network specific STUN and TURN servers; this should be achievable by autoconfiguration methods. The RTCWEB functionality will need to utilize both the network specific STUN and TURN resources and the STUN and TURN servers provisioned by the web application.
Note that the additional requirements derived are termed FaI/AaI where aI means "assuming ICE".
F1, F2, F3, F4, F5, F6, F8, F9, F10, F20, F25, F28
FaI2
A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12
This use-case is almost identical to the Simple Video Communication Service use-case (Section 4.2.1). The difference is that the user changes network access during the session:
The communication device used by one of the users has several network adapters (Ethernet, WiFi, Cellular). The communication device is accessing the Internet using Ethernet, but the user has to start a trip during the session. The communication device automatically changes to use WiFi when the Ethernet cable is removed and then moves to cellular access to the Internet when moving out of WiFi coverage. The session continues even though the access method changes.
F1, F2, F3, F4, F5, F6, F8, F9, F10, F20, F25, F26, F28
A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12
This use-case is almost identical to the Simple Video Communication Service, access change use-case (Section 4.2.5). The use of Quality of Service (QoS) capabilities is added:
The user in the previous use case that starts a trip is behind a common residential router that supports prioritization of traffic. In addition, the user's provider of cellular access has QoS support enabled. The user is able to take advantage of the QoS support both when accessing via the residential router and when using cellular.
F1, F2, F3, F4, F5, F6, F8, F9, F10, F20, F24, F25, F26, F28
A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12
This use-case has the audio and video communication of the Simple Video Communication Service use-case (Section 4.2.1).
But in addition to this, one of the users can share what is being displayed on her/his screen with a peer. The user can choose to share with the peer the entire screen, a part of the screen (selected by the user), or what a selected application displays.
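A possible browser-side sketch, assuming the later getDisplayMedia API (which is not defined by this document); pc is an already established RTCPeerConnection as in the earlier sketch:

   declare const pc: RTCPeerConnection;  // assumed from the earlier sketch

   // The browser prompts the user to pick the entire screen, a part of it, or a
   // specific application surface; the application only receives the result.
   const screen = await navigator.mediaDevices.getDisplayMedia({ video: true });
   screen.getVideoTracks().forEach(track => pc.addTrack(track, screen));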
F1, F2, F3, F4, F5, F6, F8, F9, F10, F20, F25, F28, F30
A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A21
Two users have logged into two different web applications, provided by different service providers.
The service providers are interconnected by some means, but exchange no more information about the users than what can be carried using SIP.
NOTE: More profiling of what this means may be needed.
For each user Alice who has authorized another user Bob to receive login status information, Alice's service publishes Alice's login status information to Bob. How this authorization is defined and established is out of scope.
The same functionality as in the Simple Video Communication Service use-case (Section 4.2.1) is available.
The same issues with connectivity apply.
F1, F2, F3, F4, F5, F6, F8, F9, F10, F20, F25, F27, F28
A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A20
An ice-hockey club uses an application that enables talent scouts to, in real-time, show and discuss games and players with the club manager. The talent scouts use a mobile phone with two cameras, one front facing and one rear facing.
The club manager uses a desktop, equipped with one camera, for viewing the game and discussing with the talent scout.
Before the game starts, and during game breaks, the talent scout and the manager have a 1-1 video communication. Only the rear facing camera of the mobile phone is used. On the display of the mobile phone, the video of the club manager is shown with a picture-in-picture thumbnail of the rear facing camera (self-view). On the display of the desktop, the video of the talent scout is shown with a picture-in-picture thumbnail of the desktop camera (self-view).
When the game is on-going, the talent scout activates the use of the front facing camera, and that stream is sent to the desktop (the stream from the rear facing camera continues to be sent all the time). The video stream captured by the front facing camera (that is capturing the game) of the mobile phone is shown in a big window on the desktop screen, with picture-in-picture thumbnails of the rear facing camera and the desktop camera (self-view). On the display of the mobile phone the game is shown (front facing camera) with picture-in-picture thumbnails of the rear facing camera (self-view) and the desktop camera.
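A sketch of how the mobile application might capture both cameras, assuming the facingMode constraint of the later mediaDevices API; adding the second video track mid-session implies renegotiation with the peer, which is omitted here:

   declare const pc: RTCPeerConnection;  // assumed established as before

   // Rear facing camera plus microphone: sent for the whole session, per the use-case.
   const rear = await navigator.mediaDevices.getUserMedia({
     audio: true, video: { facingMode: 'environment' }
   });
   rear.getTracks().forEach(t => pc.addTrack(t, rear));

   // Front facing camera (capturing the game): activated when the game is on-going.
   const front = await navigator.mediaDevices.getUserMedia({
     video: { facingMode: 'user' }
   });
   front.getVideoTracks().forEach(t => pc.addTrack(t, front));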
It is essential that the communication cannot be eavesdropped.
F1, F2, F3, F4, F5, F6, F8, F9, F10, F17, F20
A1, A2, A3, A4, A5, A7, A8, A9, A10, A11, A12, A17
In this use-case, the Simple Video Communication Service use-case (Section 4.2.1) is extended by allowing multiparty sessions. No central server is involved; the browser of each participant sends and receives streams to and from all other session participants. The web application in the browser of each user is responsible for setting up streams to all receivers.
In order to enhance intelligibility, the web application pans the audio from different participants differently when rendering the audio. This is done automatically, but users can change how the different participants are placed in the (virtual) room. In addition the levels in the audio signals are adjusted before mixing.
Another feature intended to enhance the user experience is that the video window displaying the video of the currently speaking peer is highlighted.
Each video stream received is by default displayed in a thumbnail frame within the browser, but users can change the display size.
It is essential that the communication cannot be eavesdropped.
Note: What this use-case adds in terms of requirements is the capability to send streams to and receive streams from several peers concurrently, as well as the capabilities to render the video from all received streams and to spatialize, level adjust and mix the audio from all received streams locally in the browser. It also adds the capability to measure the audio level/activity.
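A non-normative sketch of the local audio processing, assuming the Web Audio API (this document only requires the capability, not any particular API): each received stream is panned into the virtual room, level adjusted, metered for active-speaker highlighting, and mixed into one output.

   const ctx = new AudioContext();
   const mixBus = ctx.createGain();          // common mix of all participants
   mixBus.connect(ctx.destination);

   function addParticipant(remoteStream: MediaStream, pan: number) {
     const src = ctx.createMediaStreamSource(remoteStream);
     const panner = new StereoPannerNode(ctx, { pan });  // placement in the (virtual) room
     const level = ctx.createGain();                     // per-participant level adjustment
     const meter = ctx.createAnalyser();                 // audio level/activity measurement
     src.connect(panner).connect(level).connect(mixBus);
     level.connect(meter);
     return { panner, level, meter };  // exposed so the user can re-place or adjust participants
   }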
F1, F2, F3, F4, F5, F6, F8, F9, F10, F11, F12, F13, F14, F15, F16, F17, F20, F25
A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, A17
This use case is based on the previous one. In this use-case, the voice part of the multiparty video communication use case is used in the context of an on-line game. The received voice audio media is rendered together with game sound objects. For example, the sound of a tank moving from left to right over the screen must be rendered and played to the user together with the voice media.
Quick updates of the game state are required.
It is essential that the communication cannot be eavesdropped.
Note: the difference regarding local audio processing compared to the "Multiparty video communication" use-case is that it must be possible to include sound objects other than the streams in the spatialization and mixing. "Other sound objects" could for example be a file with the sound of the tank; that file could be stored locally or remotely.
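Continuing the Web Audio sketch above (ctx, mixBus and pc assumed from the earlier sketches, and still an assumption rather than a mandated API), a sound object such as the tank sound could be decoded and mixed with the received voice streams; a data channel sketch for the quick game state updates is included as well. The file name and applyGameState are hypothetical.

   declare const ctx: AudioContext;
   declare const mixBus: GainNode;
   declare const pc: RTCPeerConnection;
   declare function applyGameState(state: unknown): void;  // hypothetical game logic

   // Decode the sound object (stored locally or remotely) and pan it into the scene.
   const response = await fetch('/sounds/tank.ogg');        // hypothetical file
   const tankBuffer = await ctx.decodeAudioData(await response.arrayBuffer());
   const tank = ctx.createBufferSource();
   tank.buffer = tankBuffer;
   const tankPan = new StereoPannerNode(ctx, { pan: -1 });  // tank enters from the left
   tank.connect(tankPan).connect(mixBus);                   // same mix bus as the voice streams
   tank.start();

   // Quick game state updates: unreliable, unordered, low latency.
   const state = pc.createDataChannel('game-state', { ordered: false, maxRetransmits: 0 });
   state.onmessage = (e) => applyGameState(JSON.parse(e.data));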
F1, F2, F3, F4, F5, F6, F8, F9, F11, F12, F13, F14, F15, F16, F18, F20, F23
A1, A2, A3, A4, A5, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, A17, A18
In this use-case, a music band is playing music while the members are at different physical locations. No central server is used, instead all streams are set up in a mesh fashion.
Discussion: This use-case was briefly discussed at the Quebec webrtc meeting and got support. So far the only concrete requirement derived (A19) is that the application must be able to ask the browser to treat the audio signal as audio (in contrast to speech). However, the use-case should be further analysed to determine other requirements (these could concern, e.g., mic-to-speaker delay, level control of audio signals, etc.).
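One way the application could express this, assuming the later MediaStreamTrack.contentHint attribute and constraint names (which are not part of the API this document targets), is sketched below:

   // Ask for a capture without speech-oriented processing and mark it as music.
   const mic = await navigator.mediaDevices.getUserMedia({
     audio: { echoCancellation: false, noiseSuppression: false, autoGainControl: false }
   });
   mic.getAudioTracks().forEach(track => { track.contentHint = 'music'; });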
F1, F2, F3, F4, F5, F6, F8, F9, F11, F12, F13, F14, F15, F16
A1, A2, A3, A4, A5, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, A19
A mobile telephony operator allows its customers to use a web browser to access their services. After a simple log in the user can place and receive calls in the same way as when using a normal mobile phone. When a call is received or placed, the identity is shown in the same manner as when a mobile phone is used.
It is essential that the communication cannot be eavesdropped.
Note: With "place and receive calls in the same way as when using a normal mobile phone" it is meant that you can dial a number, and that your mobile telephony operator has made available your phone contacts on line, so they are available and can be clicked to call, and be used to present the identity of an incoming call. If the callee is not in your phone contacts the number is displayed. Furthermore, your call logs are available, and updated with the calls made/received from the browser. And for people receiving calls made from the web browser the usual identity (i.e. the phone number of the mobile phone) will be presented.
F1, F2, F3, F4, F5, F6, F8, F9, F10, F20, F21
A1, A2, A3, A4, A7, A8, A9, A10, A11, A12
Alice uses her web browser with a service something like Skype to be able to phone PSTN numbers. Alice calls 1-800-gofedex. Alice should be able to hear the initial prompts from the FedEx IVR and, when the IVR says "press 1", there should be a way for Alice to navigate the IVR.
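Navigating the IVR maps naturally to sending DTMF on the outgoing audio. A sketch, assuming the later RTCDTMFSender API (pc as in the earlier sketches):

   declare const pc: RTCPeerConnection;  // assumed established as before

   // When the user presses "1" in the application UI:
   const audioSender = pc.getSenders().find(s => s.track && s.track.kind === 'audio');
   audioSender?.dtmf?.insertDTMF('1');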
F1, F2, F3, F4, F5, F6, F8, F9, F10, F21, F22
A1, A2, A3, A4, A7, A8, A9, A10, A11, A12
An organization uses a video communication system that supports the establishment of multiparty video sessions using a central conference server.
The browser of each participant sends an audio stream (of a type, in terms of mono, stereo, 5.1, ..., depending on the equipment of the participant) to the central server. The central server mixes the audio streams (and can in the mixing process naturally add effects such as spatialization) and sends towards each participant a mixed audio stream, which is played to the user.
The browser of each participant sends video towards the server. For each participant, one high resolution video is displayed in a large window, while a number of low resolution videos are displayed in smaller windows. The server selects which video streams to forward as main and thumbnail videos, respectively, based on speech activity. As the video streams to display can change quite frequently (as the conversation flows), it is important that the delay from when a video stream is selected for display until the video can be displayed is short.
The organization has an internal network set up with an aggressive firewall handling access to the Internet. If users cannot physically access the internal network, they can establish a Virtual Private Network (VPN).
It is essential that the communication cannot be eavesdropped.
All participants are authenticated by the central server, and authorized to connect to the central server. The participants are identified to each other by the central server, and the participants do not have access to each others' credentials such as e-mail addresses or login IDs.
Note: This use-case adds requirements on support for fast stream switches (F7), on encryption of media, and on the ability to traverse very restrictive firewalls. There exist several solutions that enable the server to forward one high resolution and several low resolution video streams: a) each browser could send a high resolution but scalable stream, and the server could send just the base layer for the low resolution streams; b) each browser could, in a simulcast fashion, send one high resolution and one low resolution stream, and the server just selects; or c) each browser sends just a high resolution stream, and the server transcodes into low resolution streams as required.
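As an illustration of option b) only, and assuming the later addTransceiver/sendEncodings API (not mandated by this document), a browser could offer a simulcast of one high and one low resolution encoding of the camera track (cameraTrack is assumed to have been captured as in the earlier sketches):

   declare const pc: RTCPeerConnection;
   declare const cameraTrack: MediaStreamTrack;   // assumed captured via getUserMedia

   pc.addTransceiver(cameraTrack, {
     direction: 'sendonly',
     sendEncodings: [
       { rid: 'hi' },                              // forwarded to the large window
       { rid: 'lo', scaleResolutionDownBy: 4.0 }   // forwarded as a thumbnail
     ]
   });
   // The central server then picks, per receiver and based on speech activity,
   // which encoding to forward, without transcoding.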
F1, F2, F3, F4, F5, F6, F7, F8, F9, F10, F17, F19, F20
A1, A2, A3, A4, A5, A7, A8, A9, A10, A11, A12, A17
This section contains the requirements derived from the use-cases in section 4.
NOTE: It is assumed that the user applications are executed on a browser. Whether the capabilities to implement specific browser requirements are implemented by the browser application, or are provided to the browser application by the underlying operating system, is outside the scope of this document.
REQ-ID  DESCRIPTION
----------------------------------------------------------------
F1      The browser MUST be able to use microphones and cameras as input devices to generate streams.
----------------------------------------------------------------
F2      The browser MUST be able to send streams to a peer in the presence of NATs.
----------------------------------------------------------------
F3      Transmitted streams MUST be rate controlled.
----------------------------------------------------------------
F4      The browser MUST be able to receive, process and render streams from peers.
----------------------------------------------------------------
F5      The browser MUST be able to render good quality audio and video even in the presence of reasonable levels of jitter and packet losses. TBD: What is a reasonable level?
----------------------------------------------------------------
F6      The browser MUST be able to handle high loss and jitter levels in a graceful way.
----------------------------------------------------------------
F7      The browser MUST support fast stream switches.
----------------------------------------------------------------
F8      The browser MUST detect when a stream from a peer is no longer received.
----------------------------------------------------------------
F9      When there are both incoming and outgoing audio streams, echo cancellation MUST be made available to avoid disturbing echo during conversation. QUESTION: How much control should be left to the web application?
----------------------------------------------------------------
F10     The browser MUST support synchronization of audio and video. QUESTION: How much control should be left to the web application?
----------------------------------------------------------------
F11     The browser MUST be able to transmit streams to several peers concurrently.
----------------------------------------------------------------
F12     The browser MUST be able to receive streams from multiple peers concurrently.
----------------------------------------------------------------
F13     The browser MUST be able to apply spatialization effects to audio streams.
----------------------------------------------------------------
F14     The browser MUST be able to measure the level in audio streams.
----------------------------------------------------------------
F15     The browser MUST be able to change the level in audio streams.
----------------------------------------------------------------
F16     The browser MUST be able to render several concurrent video streams.
----------------------------------------------------------------
F17     The browser MUST be able to mix several audio streams.
----------------------------------------------------------------
F18     The browser MUST be able to process and mix sound objects (media that is retrieved from another source than the established media stream(s) with the peer(s)) with audio streams.
----------------------------------------------------------------
F19     Streams MUST be able to pass through restrictive firewalls.
----------------------------------------------------------------
F20     It MUST be possible to protect streams from eavesdropping.
----------------------------------------------------------------
F21     The browser MUST support an audio media format (codec) that is commonly supported by existing telephony services. QUESTION: G.711?
----------------------------------------------------------------
F22     There should be a way to navigate the IVR.
----------------------------------------------------------------
F23     The browser MUST be able to send short latency datagram traffic to a peer browser.
----------------------------------------------------------------
F24     The browser MUST be able to take advantage of capabilities to prioritize voice and video appropriately.
----------------------------------------------------------------
F25     The browser SHOULD use encoding of streams suitable for the current rendering (e.g. video display size) and SHOULD change parameters if the rendering changes during the session.
----------------------------------------------------------------
F26     It MUST be possible to move from one network interface to another one.
----------------------------------------------------------------
F27     The browser MUST be able to initiate and accept a media session where the data needed for establishment can be carried in SIP.
----------------------------------------------------------------
F28     The browser MUST support a baseline audio and video codec.
----------------------------------------------------------------
F29     The browser MUST be able to send streams to a peer in the presence of NATs that block UDP traffic.
----------------------------------------------------------------
F30     The browser MUST be able to use the screen (or a specific area of the screen) or what a certain application displays on the screen to generate streams.
----------------------------------------------------------------
FaI1    The browser MUST be able to use several STUN and TURN servers.
----------------------------------------------------------------
FaI2    The browser MUST support that the STUN and TURN servers to use are supplied by other entities than the service provider (i.e. the network provider).
----------------------------------------------------------------
REQ-ID  DESCRIPTION
----------------------------------------------------------------
A1      The Web API MUST provide means for the application to ask the browser for permission to use cameras and microphones as input devices.
----------------------------------------------------------------
A2      The Web API MUST provide means for the web application to control how streams generated by input devices are used.
----------------------------------------------------------------
A3      The Web API MUST provide means for the web application to control the local rendering of streams (locally generated streams and streams received from a peer).
----------------------------------------------------------------
A4      The Web API MUST provide means for the web application to initiate sending of stream/stream components to a peer.
----------------------------------------------------------------
A5      The Web API MUST provide means for the web application to control the media format (codec) to be used for the streams sent to a peer. NOTE: The level of control depends on whether the codec negotiation is handled by the browser or the web application.
----------------------------------------------------------------
A6      The Web API MUST provide means for the web application to modify the media format for streams sent to a peer after a media stream has been established.
----------------------------------------------------------------
A7      The Web API MUST provide means for informing the web application of whether the establishment of a stream with a peer was successful or not.
----------------------------------------------------------------
A8      The Web API MUST provide means for the web application to mute/unmute a stream or stream component(s). When a stream is sent to a peer, mute status must be preserved in the stream received by the peer.
----------------------------------------------------------------
A9      The Web API MUST provide means for the web application to cease the sending of a stream to a peer.
----------------------------------------------------------------
A10     The Web API MUST provide means for the web application to cease processing and rendering of a stream received from a peer.
----------------------------------------------------------------
A11     The Web API MUST provide means for informing the web application when a stream from a peer is no longer received.
----------------------------------------------------------------
A12     The Web API MUST provide means for informing the web application when high loss rates occur.
----------------------------------------------------------------
A13     The Web API MUST provide means for the web application to apply spatialization effects to audio streams.
----------------------------------------------------------------
A14     The Web API MUST provide means for the web application to detect the level in audio streams.
----------------------------------------------------------------
A15     The Web API MUST provide means for the web application to adjust the level in audio streams.
----------------------------------------------------------------
A16     The Web API MUST provide means for the web application to mix audio streams.
----------------------------------------------------------------
A17     For each stream generated, the Web API MUST provide an identifier that is accessible by the application. The identifier MUST be accessible also for a peer receiving that stream and MUST be unique relative to all other stream identifiers in use by either party.
----------------------------------------------------------------
A18     In addition to the streams listed elsewhere, the Web API MUST provide a mechanism for sending and receiving isolated discrete chunks of data.
----------------------------------------------------------------
A19     The Web API MUST provide means for the web application to indicate the type of audio signal (speech, audio) for audio stream(s)/stream component(s).
----------------------------------------------------------------
A20     It MUST be possible for an initiator or a responder web application to indicate the types of media it is willing to accept incoming streams for when setting up a connection (audio, video, other). The types of media it is willing to accept can be a subset of the types of media the browser is able to accept.
----------------------------------------------------------------
A21     The Web API MUST provide means for the application to ask the browser for permission to use the screen, a certain area of the screen, or what a certain application displays on the screen as input to streams.
----------------------------------------------------------------
AaI1    The Web API MUST provide means for the application to specify several STUN and/or TURN servers to use.
----------------------------------------------------------------
TBD
A malicious web application might use the browser to perform Denial Of Service (DOS) attacks on NAT infrastructure, or on peer devices. Also, a malicious web application might silently establish outgoing, and accept incoming, streams on an already established connection.
Based on the identified security risks, this section will describe security considerations for the browser and web application.
The browser is expected to provide mechanisms for getting user consent to use device resources such as camera and microphone.
The browser is expected to provide mechanisms for informing the user that device resources such as camera and microphone are in use ("hot").
The browser is expected to provide mechanisms for users to revise and even completely revoke consent to use device resources such as camera and microphone.
The browser is expected to provide mechanisms for getting user consent to use the screen (or a certain part of it) or what a certain application displays on the screen as source for streams.
The browser is expected to provide mechanisms for informing the user that the screen, part thereof or an application is serving as a stream source ("hot").
The browser is expected to provide mechanisms for users to revise and even completely revoke consent to use the screen, part thereof, or what a certain application displays as a stream source.
The browser is expected to provide mechanisms in order to assure that streams are the ones the recipient intended to receive.
The browser needs to ensure that media is not sent, and that received media is not rendered, until the associated stream establishment and handshake procedures with the remote peer have been successfully finished.
The browser needs to ensure that the stream negotiation procedures are not seen as Denial Of Service (DOS) by other entities.
The web application is expected to ensure user consent in sending and receiving media streams.
Several additional use-cases have been discussed. At this point these use-cases are not included as requirement deriving use-cases for different reasons (lack of documentation, overlap with existing use-cases, lack of consensus). For completeness these additional use-cases are listed below:
Dan Burnett has reviewed the document and proposed a lot of things that enhance it. Most of this has been incorporated in rev -05.
Stephan Wenger has provided a lot of useful input and feedback, as well as editorial comments.
Harald Alvestrand and Ted Hardie have provided comments and feedback on the draft.
Harald Alvestrand and Cullen Jennings have provided additional use-cases.
Thank you to everyone in the RTCWEB community who has provided comments, feedback and improvement proposals on the draft content.
[RFC EDITOR NOTE: Please remove this section when publishing]
Changes from draft-ietf-rtcweb-use-cases-and-requirements-05
Changes from draft-ietf-rtcweb-use-cases-and-requirements-04
Changes from draft-ietf-rtcweb-use-cases-and-requirements-03
Changes from draft-ietf-rtcweb-use-cases-and-requirements-02
Changes from draft-ietf-rtcweb-ucreqs-01
Changes from draft-ietf-rtcweb-ucreqs-00
Changes from draft-holmberg-rtcweb-ucreqs-01
Changes from draft-holmberg-rtcweb-ucreqs-00
[RFC2119]      Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.
[webrtc_reqs]  "WebRTC requirements", <http://dev.w3.org/2011/webrtc/editor/webrtc_reqs.html>.