This memo introduces "Signaling In-Network Computing operations" (SINC), a mechanism to enable signaling in-network computing operations on data packets in specific scenarios such as NetReduce, NetDistributedLock, NetSequencer, etc. In particular, this solution allows computational parameters, to be used in conjunction with the payload, to be flexibly communicated to in-network SINC-enabled devices in order to perform computing operations.¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 26 August 2023.¶
Copyright (c) 2023 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.¶
According to the original design, the Internet performs just "store and forward" of packets and leaves more complex operations to the end-points. However, new emerging applications could benefit from in-network computing to improve overall system efficiency ([GOBATTO], [ZENG]).¶
The formation of the COmputing In-Network (COIN) Research Group [COIN] in the IRTF encourages people to explore this emerging technology and its impact on the Internet architecture. The "Use Cases for In-Network Computing" document [I-D.irtf-coinrg-use-cases] introduces some use cases to demonstrate how real applications can benefit from COIN and to show the essential requirements demanded by COIN applications.¶
Recent research has shown that network devices undertaking some computing tasks can greatly improve network and application performance in some scenarios, such as in-network aggregation [NetReduce], key-value (K-V) caching [NetLock], and strong consistency [GTM]. Their implementations mainly rely on programmable network devices, using P4 [P4] or other languages. In the context of such heterogeneity of scenarios, it is desirable to have a generic and flexible framework, able to explicitly signal the computing operation to be performed by network devices, which should be applicable to many use cases, enabling easier deployment.¶
This document specifies such a Signaling In-Network Computing (SINC) framework for, as the name states, in-network computing operations. The computing functions are hosted on network devices, which, in this memo, are generically referred to as SINC switches/routers.¶
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] and [RFC8174] when, and only when, they appear in all capitals, as shown here.¶
Hereafter a few relevant use cases are described, namely NetReduce, NetDistributedLock, and NetSequencer, in order to help understand the requirements for a framework. Such a framework should be generic enough to accommodate a large variety of use cases beyond the ones described in this document.¶
Over the last decade, the rapid development of Deep Neural Networks (DNN) has greatly improved the performance of many Artificial Intelligence (AI) applications, such as computer vision and natural language processing. However, DNN training is a computation-intensive and time-consuming task, whose computational demand has been growing exponentially (the required computation doubles every 3.4 months [OPENAI]) over the past 10 years. Scale-up techniques concentrating on the computing capability of a single device cannot meet this demand. Distributed DNN training approaches with synchronous data parallelism, such as Parameter Server [PARAHUB] and All-Reduce [MGWFBP], are commonly employed in practice; on the other hand, they increasingly become network-bound workloads, since communication becomes a bottleneck at scale.¶
Compared to host-oriented solutions, in-network aggregation approaches like SwitchML [SwitchML] and SHARP [SHARP] can potentially reduce the bandwidth needed for data aggregation by nearly half, by offloading gradient aggregation from the host to network switches. The SwitchML solution uses UDP for network transport; the system relies solely on application-layer logic to trigger retransmission upon packet loss, which adds extra latency and reduces training performance. The SHARP solution, on the contrary, uses Remote Direct Memory Access (RDMA) to provide reliable transmission [ROCEv2]. As the InfiniBand (IB) technology requires specific hardware support, this solution is not very cost-effective. NetReduce [NetReduce] does not depend on dedicated hardware and provides a general in-network aggregation solution suitable for Ethernet networks.¶
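To make the aggregation operator concrete, the following minimal Python sketch models the element-wise SUM that such solutions offload to the switch. It is an illustration of the operator only, not an implementation of any of the cited systems.¶

```python
# Illustrative sketch of the in-network aggregation operator: the switch
# element-wise sums gradient fragments from N workers, so each worker
# exchanges one fragment with the network instead of N with its peers.

def aggregate_gradients(fragments):
    """Element-wise SUM over equal-length gradient fragments."""
    assert fragments and all(len(f) == len(fragments[0]) for f in fragments)
    return [sum(values) for values in zip(*fragments)]

# Three workers each contribute a fragment of their local gradient.
worker_fragments = [
    [0.1, 0.2, 0.3],
    [0.4, 0.5, 0.6],
    [0.7, 0.8, 0.9],
]
print(aggregate_gradients(worker_fragments))  # [1.2, 1.5, 1.8]
```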
In the majority of distributed systems, the lock primitive is a widely used concurrency control mechanism. For large distributed systems, there is usually a dedicated lock manager that nodes contact to gain read and/or write permissions on a resource. The lock manager is often abstracted as Compare-And-Swap (CAS) or Fetch-and-Add (FA) operations.¶
The lock manager typically runs on a server, where performance is limited by the speed of disk I/O transactions. When the load increases, for instance in the case of database transactions processed on a single node, the lock manager becomes a major performance bottleneck, consuming nearly 75% of transaction time [OLTP]. Multi-node distributed lock processing adds the communication latency between nodes on top of this, which makes performance even worse. Therefore, offloading the lock manager function from the server to the network switch might be a better choice, since the switch is capable of managing the lock function efficiently, while liberating the server for other computation tasks.¶
The test results in NetLock [NetLock] show that the lock manager running on a switch is able to answer 100 million requests per second, nearly 10 times more than what a lock server can do.¶
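As an illustration of how the lock primitive maps onto the CAS operator, the following Python sketch models a switch-resident table of exclusive locks. All names are hypothetical, and a real switch pipeline would serialize these per-key operations in hardware.¶

```python
# Hypothetical model of an exclusive lock built on the CAS operator, as a
# switch-resident lock manager might implement it (single-threaded sketch).

FREE = 0  # sentinel meaning "no host holds the lock"

class LockTable:
    def __init__(self):
        self.locks = {}  # key -> current holder id (FREE if unlocked)

    def compare_and_swap(self, key, expected, new):
        """Return the old value; swap in `new` only if old == expected."""
        old = self.locks.get(key, FREE)
        if old == expected:
            self.locks[key] = new
        return old

    def acquire(self, key, host_id):
        # The lock is acquired iff the old value was FREE.
        return self.compare_and_swap(key, FREE, host_id) == FREE

    def release(self, key, host_id):
        return self.compare_and_swap(key, host_id, FREE) == host_id

table = LockTable()
assert table.acquire("row:42", host_id=1)      # host 1 gets the lock
assert not table.acquire("row:42", host_id=2)  # host 2 is refused
assert table.release("row:42", host_id=1)      # host 1 releases
```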
Transaction managers are centralized solutions to guarantee consistency for distributed transactions, such as the GTM in Postgres-XL ([GTM], [CALVIN]). However, as centralized modules, transaction managers have become a bottleneck in large-scale high-performance distributed systems. The work by Kalia et al. [HPRDMA] introduces a server-based networked sequencer, a kind of task manager assigning monotonically increasing sequence numbers to transactions. In [HPRDMA], the authors show that the maximum throughput is 122 million requests per second (Mrps), at the cost of an increased average latency. This bounded throughput impacts the scalability of distributed systems. The authors also examine the bottlenecks of various optimization methods, including CPU, DMA bandwidth, and PCIe RTT, which are introduced by the CPU-centric architecture.¶
For a programmable switch, a sequencer is a rather simple operation, and the pipeline architecture can avoid bottlenecks. It is therefore worth implementing a switch-based sequencer, with a performance goal of hundreds of Mrps and latency on the order of microseconds.¶
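Indeed, a sequencer reduces to a single Fetch-and-Add register, as the following minimal Python sketch (illustrative only, not part of the specification) shows.¶

```python
# Hypothetical model of the switch-based sequencer: one Fetch-and-Add
# register returning monotonically increasing sequence numbers.

class Sequencer:
    def __init__(self):
        self.counter = 0

    def fetch_and_add(self, delta=1):
        """Return the current value, then add `delta` (FA semantics)."""
        old = self.counter
        self.counter += delta
        return old

seq = Sequencer()
print([seq.fetch_and_add() for _ in range(3)])  # [0, 1, 2]
```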
The COIN use case draft [I-D.irtf-coinrg-use-cases] illustrates some general requirements for scenarios like in-network control and distributed AI, to which the aforementioned use cases belong. One of the requirements is that any in-network computing system must provide means to specify the constraints for placing execution logic in certain logical execution points (and their associated physical locations). In the case of NetReduce, NetDistributedLock, and NetSequencer, the data aggregation, lock management, and sequence number generation functions can be offloaded onto the in-network device. It can be observed that those functions are based on "simple" and "generic" operators, as shown in Table 1. Programmable switches are capable of performing basic operations by executing one or more operators, without impacting the forwarding performance ([NetChain], [ERIS]).¶
Use Case | Operation | Description |
---|---|---|
NetReduce | Sum value (SUM) | The in-network device sums the data together and outputs the resulting value. |
NetLock | Compare-And-Swap or Fetch-and-Add (CAS or FA) | By comparing the request with the status of its own lock, the in-network device informs the host whether it has acquired the lock. Through CAS and FA, hosts can implement shared and exclusive locks. |
NetSequencer | Fetch-and-Add (FA) | The in-network device provides a monotonically increasing sequence number for the host. |
This section describes the various elements of the SINC framework and explains how they work together.¶
The SINC protocol and extensions are designed for deployment in limited domains, such as a data center network, rather than deployment across the open Internet. The requirements and semantics are specifically limited, as defined in the previous sections.¶
Figure 1 shows the overall SINC framework, consisting of Hosts, the SINC Ingress Proxy, SINC switches/routers (SW/R), the SINC Egress Proxy, and normal switches/routers (if any).¶
In the SINC domain, a host MUST be SINC-aware. It defines the data operation to be executed; however, it does not need to be aware of where the operation will be executed or how the traffic will be steered in the network. The host sends out packets with a SINC header containing the definition and parameters of data operations. The SINC header could be placed directly after the transport layer, before the computing data, as part of the payload. However, the SINC header can also potentially be positioned at layer 4, layer 3, or even layer 2, depending on the network context of the applications and deployment considerations. This will be discussed in further detail in [I-D.zhou-sinc-deployment-considerations].¶
The SINC proxies are responsible for encapsulating/decapsulating packets in order to steer them through the right network path and nodes. The SINC proxies may or may not be collocated with hosts. The SINC Ingress Proxy encapsulates and forwards packets containing a SINC header to the right node(s) with SINC operation capabilities. Such an operation may involve the use of protocols like Service Function Chaining (SFC [RFC7665]), LISP [RFC9300], Geneve [RFC8926], or even MPLS [RFC3031]. Based on the definition of the required data processing and the network capabilities, the SINC Ingress Proxy can determine whether the data processing defined in the SINC header should be executed in a single node or in multiple nodes. The SINC Egress Proxy is responsible for decapsulating packets before forwarding them to the destination host.¶
The SINC switch/router is the node equipped with in-network computing capabilities. Upon receiving a SINC packet, the SINC switch/router data-plane processes the SINC header, executes required operations, updates the payload with results (if necessary) and forwards the packet to the destination.¶
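The following Python sketch summarizes this data-plane behavior. The packet model, code point, and operator table used here are hypothetical, since the actual header layout is specified in Figure 2 and the operators in Table 2.¶

```python
# Schematic model of the SINC switch/router data-plane logic described
# above: parse the SINC header, execute the signaled operation, update
# the payload if needed, and forward. All names here are illustrative.

from dataclasses import dataclass

@dataclass
class SincPacket:
    opcode: int    # operation code point from the SINC header
    params: tuple  # operation parameters from the SINC header
    payload: list  # computing data

def process_sinc_packet(packet, operators):
    """Execute the signaled operation, then forward the packet."""
    op = operators.get(packet.opcode)
    if op is not None:
        result = op(packet.params, packet.payload)
        if result is not None:
            packet.payload = result  # update payload with the result
    return packet                    # forward to the destination

OPERATORS = {0x01: lambda params, data: [sum(data)]}  # e.g. SUM
pkt = SincPacket(opcode=0x01, params=(), payload=[1, 2, 3])
print(process_sinc_packet(pkt, OPERATORS).payload)    # [6]
```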
The SINC workflow is as follows: the host emits packets carrying a SINC header; the SINC Ingress Proxy encapsulates them and steers them towards one or more SINC switches/routers; each SINC switch/router on the path executes the signaled operations and updates the payload as needed; finally, the SINC Egress Proxy decapsulates the packets and forwards them to the destination host.¶
Depending on the scenario, SINC processing can be divided into two modes: individual computing mode and batch computing mode.¶
Individual operations include all operations that can be performed on data coming from a single packet (e.g., NetLock). Conversely, batch operations include all operations that require collecting data from multiple packets (e.g., NetReduce data aggregation).¶
NetLock is a typical scenario involving individual operations, where the SINC switch/router acts as a lock server, generating a lock for a packet coming from one host.¶
This kind of operation has some general aspects to be considered:¶
Batch operations require collecting data from multiple packets before actually being able to perform the required operations. For instance, in the NetReduce scenario, the gradient aggregation requires packets carrying gradient arrays from each host in order to generate the desired result array.¶
In this scenario, besides the general issues mentioned for the individual operations, a batch operation may fail because some packets do not arrive (or arrive too late). The time packets are temporarily cached on the SINC switch/router should be carefully configured. On the one hand, it has to be sufficiently long so that there is enough time to receive all required packets. On the other hand, it has to be sufficiently short so that no retransmissions are triggered at the transport or application layers on the end hosts. Similarly to the error condition for the individual operations, if the SINC switch/router does not receive all required packets in the configured time interval, it can simply forward the packets to the end hosts so that they deal with packet losses and retransmissions if necessary.¶
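A minimal Python sketch of this batch-mode caching logic is given below. The buffer structure, timeout handling, and time source are illustrative assumptions, not part of the SINC specification.¶

```python
# Sketch of the batch computing mode: fragments of a batch are cached
# until all expected members arrive or a timeout fires; on timeout the
# cached packets are simply handed back so the end hosts can recover.

import time

class BatchBuffer:
    def __init__(self, expected, timeout_s):
        self.expected = expected    # number of packets per batch
        self.timeout_s = timeout_s  # must be shorter than the hosts' RTO
        self.batches = {}           # batch_id -> (deadline, [payloads])

    def add(self, batch_id, payload):
        """Return the aggregate when the batch completes, else None."""
        deadline, payloads = self.batches.setdefault(
            batch_id, (time.monotonic() + self.timeout_s, []))
        payloads.append(payload)
        if len(payloads) == self.expected:
            del self.batches[batch_id]
            return [sum(col) for col in zip(*payloads)]  # e.g. SUM
        return None

    def expire(self):
        """Return incomplete batches whose timer fired, for forwarding."""
        now = time.monotonic()
        late = {b: p for b, (d, p) in self.batches.items() if d <= now}
        for b in late:
            del self.batches[b]
        return late

buf = BatchBuffer(expected=2, timeout_s=0.5)
assert buf.add("b1", [1, 2]) is None  # first fragment: keep waiting
print(buf.add("b1", [3, 4]))          # batch complete: [4, 6]
```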
The SINC header carries the data operation information and has a fixed length of 16 octets, as shown in Figure 2.¶
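Since the concrete field layout of Figure 2 is not reproduced here, the following Python sketch assumes a purely hypothetical arrangement of fields, only to illustrate how a fixed 16-octet header can be packed and parsed.¶

```python
# Hypothetical 16-octet SINC header layout (NOT the layout of Figure 2):
# version/flags, opcode, length, two 32-bit operands, batch id, sequence.

import struct

SINC_FMT = "!BBHIIHH"                 # 1+1+2+4+4+2+2 = 16 octets
assert struct.calcsize(SINC_FMT) == 16

def pack_sinc(ver_flags, opcode, length, operand1, operand2, batch, seq):
    """Serialize the hypothetical SINC header in network byte order."""
    return struct.pack(SINC_FMT, ver_flags, opcode, length,
                       operand1, operand2, batch, seq)

hdr = pack_sinc(0x10, 0x01, 16, 0xDEADBEEF, 0, 7, 1)
print(struct.unpack(SINC_FMT, hdr))   # parse it back into fields
```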
The SINC control plane has to configure SINC network elements to ensure the proper execution of the computing task. The SINC framework can work with either centralized or distributed control planes. However, this document does not assume any specific control plane design. The basic requirements of the control plane shall include the following:¶
In-network computing exposes computing data to network devices, which inevitably raises security and privacy considerations. The security problems faced by in-network computing include, but are not limited to:¶
This document assumes that the deployment is done in a trusted environment, for example a data center network or a private network.¶
A detailed security analysis will be provided in future revisions of this memo.¶
This document makes no requests to IANA.¶
Dirk Trossen's feedback was of great help in improving this document.¶
In-network computing can greatly help distributed applications that make intensive use of the network. Yet, not all operations can be performed in-network, since the computational resources are usually very limited. Disassembling complex tasks into basic calculation operations, such as addition, subtraction, max, etc., is the most appropriate approach for offloading these operations onto in-network devices at line rate.¶
SINC aims at providing a general way for signaling the operation to be performed on the data. As such, the definition of the operations is orthogonal to the SINC proposal itself, as long as it is possible to identify the different operations via a code point. Examples of basic operations that may be performed in-network are listed in Table 2.¶
OpName | Operation Explanation |
---|---|
MAX | Maximum value of several parameters |
MIN | Minimum value |
SUM | Sum value |
PROD | Product value |
LAND | Logical and |
BAND | Bit-wise and |
LOR | Logical or |
BOR | Bit-wise or |
LXOR | Logical xor |
BXOR | Bit-wise xor |
WRITE | Write the value according to the key |
READ | Read the value according to the key |
DELETE | Delete the value according to the key |
CAS | Compare and swap: compare the value stored at the key with the expected old value; if they match, replace it with the new value. Return the old key value. |
CAADD | Compare and add: compare the value of the key with the expected value; if they match, add the add-value to the key value. Return the old key value. |
CASUB | Compare and subtract: compare the value of the key with the expected value; if they match, subtract the sub-value from the key value. Return the old key value. |
FA | Fetch and add: fetch the value according to the key and add the add-value to the key value. Return the old key value. |
FASUB | Fetch and subtract: fetch the value according to the key and subtract the sub-value from the key value. Return the old key value. |
FAOR | Fetch and OR: fetch the value according to the key and OR the key value with the parameter. Return the old key value. |
FAADD | Fetch and ADD: fetch the value according to the key and add the parameter to the key value. Return the old key value. |
FANAND | Fetch and NAND: fetch the value according to the key and NAND the key value with the parameter. Return the old key value. |
FAXOR | Fetch and XOR: fetch the value according to the key and XOR the key value with the parameter. Return the old key value. |
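As an illustration only, the following Python sketch binds a few of the operators from Table 2 to hypothetical code points, the way a SINC device would select an operator from the code point carried in the SINC header.¶

```python
# Illustrative dispatch table binding a few Table 2 operators to
# hypothetical code points; the code point values are assumptions.

import operator
from functools import reduce

OPERATORS = {
    0x01: lambda values: max(values),                      # MAX
    0x02: lambda values: min(values),                      # MIN
    0x03: lambda values: sum(values),                      # SUM
    0x04: lambda values: reduce(operator.mul, values, 1),  # PROD
    0x05: lambda values: reduce(operator.and_, values),    # BAND
    0x06: lambda values: reduce(operator.or_, values),     # BOR
}

print(OPERATORS[0x03]([1, 2, 3]))  # SUM -> 6
```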