Internet-Draft | Network Working Group | July 2022
Kim & Youn | Expires 25 January 2023
Artificial intelligence (AI) based IoT applications demand massive computing resources, accessed through networks, for the processing of AI tasks. To support these applications, new technologies based on edge computing and fog computing are emerging. In particular, computation-intensive and latency-sensitive IoT applications such as augmented reality, virtual reality, and AI-based inference are deployed with edge computing and fog computing connected to cloud computing. Recently, cluster-based edge systems have been deployed to extend the computation capacity of an edge server. A cluster-based edge system has the advantage of enhancing resource scalability and availability in edge computing and fog computing. In this draft, we present a cluster-based edge system architecture and a multi-cluster edge network topology that consists of multi-cluster edge systems and a core cloud. We also define the network functions and network nodes required to configure and operate a multi-cluster edge network collaboratively.¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 25 January 2023.¶
Copyright (c) 2022 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.¶
Recently, diverse artificial intelligence (AI) based IoT applications utilizing computing resources in the cloud have been emerging. These applications are deployed with a computation offloading service, which offloads AI tasks from IoT devices to a cloud that has sufficient computing resources. However, this centralized processing is not suitable for latency-sensitive and computation-intensive AI applications: unpredictable delay may occur in dynamic network and computing environments due to network congestion, and the available computing resources may vary dynamically.¶
As edge computing and fog computing evolve, solutions are emerging to overcome these shortcomings of cloud computing. In particular, such solutions can quickly offload and deploy tasks of latency-sensitive and computation-intensive applications to an edge computing server, because edge computing and fog computing are geographically closer to IoT devices and service users. IoT applications can thereby obtain better quality of service (QoS), such as faster task response time. This means that edge computing has an advantage for computation-intensive and latency-sensitive intelligent IoT applications, such as augmented reality (AR), virtual reality (VR), and AI-based inference applications. [I-D.irtf-t2trg-iot-edge]¶
Nevertheless, it is difficult for edge computing alone to strictly satisfy the quality of service requested by a task, due to the hardware constraints and limited computing power of an edge computing server. Thus, one solution proposes collaborative processing that offloads part of the tasks to the remote cloud or a neighbor edge server. This solution adopts collaborative resource allocation in a distributed computing manner between the edge computing server and the cloud, and between edge computing servers. Also, to extend the computation capacity of an edge computing server, a cluster-based edge system is deployed and extended with Kubernetes. Kubernetes is an open-source platform that is well suited for configuring the infrastructure to deploy a cluster-based edge system. In this draft, we present a cluster-based edge system architecture and a multi-cluster edge network topology that consists of multi-cluster edge systems and a core cloud. We also define the network functions and network nodes required to configure and operate a multi-cluster edge network collaboratively.¶
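The offload-or-not decision behind collaborative processing can be illustrated with a minimal latency estimate: compare the time to run a task entirely on the edge server against the time to upload its input data and run it in the remote cloud. This is a hypothetical sketch, not a mechanism defined by this draft; all parameter values (CPU cycles, data size, link rate) are invented example numbers.

```python
# Hypothetical sketch of the edge-vs-cloud offloading decision.
# All function names and parameter values are illustrative only.

def edge_latency(cycles, edge_rate):
    # Processing time if the whole task runs on the edge server.
    return cycles / edge_rate

def cloud_latency(cycles, data_bits, link_rate, cloud_rate):
    # Upload time over the network plus processing time in the cloud.
    return data_bits / link_rate + cycles / cloud_rate

def choose_site(cycles, data_bits, edge_rate, link_rate, cloud_rate):
    # Pick whichever site completes the task sooner.
    local = edge_latency(cycles, edge_rate)
    remote = cloud_latency(cycles, data_bits, link_rate, cloud_rate)
    return ("edge", local) if local <= remote else ("cloud", remote)

# Example: a compute-heavy task (8e9 CPU cycles, 2e6 bits of input).
site, latency = choose_site(
    cycles=8e9, data_bits=2e6,
    edge_rate=2e9,    # edge server: 2 GHz equivalent
    link_rate=1e8,    # 100 Mbit/s uplink
    cloud_rate=1e10,  # cloud: 10 GHz equivalent
)
print(site, round(latency, 3))  # cloud wins here: 0.82 s vs 4.0 s on the edge
```

Under real network congestion the link rate varies, which is exactly why the draft argues for dynamic monitoring rather than a fixed decision.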
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.¶
This section presents the detailed cluster-based edge system architecture and the multi-cluster edge network topology. The cluster edge system architecture is shown below along with the definition of each element in the cluster edge system, followed by the multi-cluster edge network topology. The required network functions and network nodes are explained in the next section.¶
The cluster-based edge system architecture is shown in the figure below. The cluster edge system consists of an edge controller (a master node) and N edge nodes (worker nodes), which can execute offloaded computation tasks and provide application services. In the case of computation offloading, the mobile node (MN) requests task offloading from the cluster-based edge system, and the edge controller uses a scheduler to determine an appropriate edge node (worker) on which the application that can perform the offloaded task is deployed. After the task is executed on the selected edge node, the edge controller collects the results and returns them to the mobile node that requested the offloading.¶
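The offloading procedure above can be sketched in code: a controller schedules the request onto a worker that hosts the required application and returns the result to the requester. All class names, the scheduling policy (most free CPU), and the result format are assumptions for illustration; the draft does not prescribe a particular scheduler.

```python
# Illustrative sketch of the offloading flow: MN -> edge controller ->
# selected edge node -> result back to MN. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class EdgeNode:
    name: str
    apps: set        # applications deployed on this worker node
    free_cpu: float  # available CPU (cores)

    def execute(self, task):
        # Run the offloaded task and return its result (stubbed here).
        return f"result-of-{task['id']}@{self.name}"

@dataclass
class EdgeController:
    nodes: list

    def schedule(self, task):
        # Pick a worker hosting the required application with the most
        # free CPU; a real scheduler may use a different policy.
        candidates = [n for n in self.nodes if task["app"] in n.apps]
        if not candidates:
            return None
        return max(candidates, key=lambda n: n.free_cpu)

    def offload(self, task):
        node = self.schedule(task)
        if node is None:
            raise RuntimeError("no edge node can serve this task")
        # The controller collects the result and responds to the MN.
        return node.execute(task)

# A mobile node (MN) requests offloading of an AI inference task.
controller = EdgeController(nodes=[
    EdgeNode("worker-1", {"ar"}, free_cpu=2.0),
    EdgeNode("worker-2", {"ai-inference"}, free_cpu=4.0),
])
print(controller.offload({"id": "t1", "app": "ai-inference"}))
```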
The multi-cluster edge network topology is shown in the figure below. It provides an edge network that supports a distributed computing environment for collaboration among cluster-based edge systems and between the multi-cluster edge systems and the core cloud. The following network functions are required to smoothly provide distributed computing services in a multi-cluster edge network environment.¶
In the multi-cluster edge network topology, two collaborative computation models are possible: vertical collaborative computation and horizontal collaborative computation. Vertical collaborative computation is a collaboration service between a multi-cluster edge network and the core cloud, while horizontal collaborative computation is a collaboration service between cluster edge systems within a multi-cluster edge network.¶
Above all, collaborative computation requires a high-speed network connection between cluster edge systems, which can be configured with a tunneling protocol. In addition, a storage or a cache for sharing data and operating services collaboratively should be configured between cluster edge systems, so a management function for the multi-cluster edge network is required, together with a monitoring function that monitors the resource state across the network. When a computation offloading or caching service is requested, a scheduler and a resource allocation policy for allocating the resources of the multi-cluster edge network are necessary, and the computation resources, storage, and caches in the multi-cluster edge network shall be driven and managed collaboratively.¶
In the multi-cluster edge network, the management function takes the role of management to support collaborative computation. The monitoring function collects information on the current resource state of each cluster-based edge system and estimates the collected resource state. The scheduler allocates edge resources for the computation offloading or caching service according to the resource allocation policy. Thus, the resource allocation policy shall support the collaborative computation models described above.¶
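The interplay of the monitoring function, scheduler, and resource allocation policy can be sketched as follows: the scheduler consults the monitored per-cluster resource state, prefers the local cluster, tries horizontal collaboration (a neighbor cluster with capacity) next, and falls back to vertical collaboration (the core cloud). This is a minimal sketch under assumed data structures and a simple greedy policy; the draft leaves the actual policy open.

```python
# Hypothetical multi-cluster resource-allocation sketch. Cluster names,
# the resource model (free CPU only), and the greedy policy are all
# assumptions for illustration.

CORE_CLOUD = "core-cloud"

def monitor(clusters):
    # Monitoring function: collect the current free-resource state
    # of each cluster-based edge system.
    return {name: state["free_cpu"] for name, state in clusters.items()}

def allocate(task_cpu, local, clusters):
    state = monitor(clusters)
    # Prefer the local cluster if it can satisfy the request.
    if state.get(local, 0.0) >= task_cpu:
        return local
    # Horizontal collaboration: a neighbor edge cluster with capacity.
    neighbors = [c for c in state if c != local and state[c] >= task_cpu]
    if neighbors:
        return max(neighbors, key=lambda c: state[c])
    # Vertical collaboration: fall back to the core cloud.
    return CORE_CLOUD

clusters = {
    "edge-A": {"free_cpu": 1.0},
    "edge-B": {"free_cpu": 6.0},
}
print(allocate(task_cpu=4.0, local="edge-A", clusters=clusters))  # edge-B
print(allocate(task_cpu=8.0, local="edge-A", clusters=clusters))  # core-cloud
```

The fallback order encodes the two collaboration models: horizontal first because neighbor edges are closer to the mobile node, vertical as the last resort when no edge cluster has capacity.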
TBD¶
TBD¶
TBD.¶
This document contains no requests to IANA.¶