Several IPv6 transition technologies have been developed to provide customers with IPv4-as-a-Service (IPv4aaS) for ISPs with an IPv6-only access and/or core network. All these technologies have their advantages and disadvantages, and depending on existing topology, skills, strategy and other preferences, one of these technologies may be the most appropriate solution for a network operator.¶
This document examines the scalability of the five most prominent IPv4aaS technologies (464XLAT, Dual-Stack Lite, Lightweight 4over6, MAP-E, MAP-T) considering two aspects: (1) how their performance scales up with the number of CPU cores, and (2) how their performance degrades as the number of concurrent sessions is increased until the hardware limit is reached.¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 26 April 2023.¶
Copyright (c) 2022 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.¶
The IETF has standardized several IPv6 transition technologies [LEN2019] and has taken a neutral position, trusting the selection of the most appropriate ones to the market. [RFC9313] provides a comprehensive comparative analysis of the five most prominent IPv4aaS technologies to assist operators with this choice. This document adds one more detail: measurement data regarding the scalability of the examined IPv4aaS technologies.¶
This draft is a collection of various measurement results. Some measurements with the iptables stateful NAT44 implementation and the Jool stateful NAT64 implementation were performed directly for this draft. Further results published in open-access research papers are being added gradually.¶
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.¶
Netfilter [NETFLTR] is a widely used firewall, NAT and packet mangling framework for Linux. It is often called "iptables" after the name of its user-space command-line tool. In this document, iptables is used as a stateful NAT44 (also called NAPT: Network Address and Port Translation) solution. It is free and open-source software under the GPLv2 license.¶
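For illustration, a minimal stateful NAT44 setup with iptables may look as follows. This is a sketch, not the exact configuration used in the measurements; the interface names and the public address are assumptions.¶
   # Enable IPv4 packet forwarding in the kernel.
   sysctl -w net.ipv4.ip_forward=1
   # Hypothetical interfaces: eth0 faces the clients, eth1 the IPv4 Internet.
   # Rewrite the source address (and port, if needed) of outgoing packets;
   # the connection tracking table translates the replies back automatically.
   iptables -t nat -A POSTROUTING -o eth1 -j SNAT --to-source 198.51.100.1¶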
This document deals with iptables for multiple considerations:¶
[RFC8219] has defined a benchmarking methodology for IPv6 transition technologies. [I-D.ietf-bmwg-benchmarking-stateful] has amended it by addressing how to benchmark stateful NATxy gateways using pseudorandom port numbers, as recommended by [RFC4814]. It has defined measurement procedures for the maximum connection establishment rate, the connection tear down rate and the connection tracking table capacity, and it has reused the classic measurement procedures (throughput, latency, frame loss rate, etc.) from [RFC8219]. Besides the new metrics, we used throughput to characterize the performance of the examined system.¶
The scalability of iptables is examined from two aspects:¶
The test setup in Figure 1 was followed. The two devices, the Tester and the DUT (Device Under Test), were both Dell PowerEdge R430 servers with two 2.1GHz Intel Xeon E5-2683 v4 CPUs, 384GB 2400MHz DDR4 RAM and Intel 10G dual port X540 network adapters. The NICs of the servers were interconnected by direct cables, and the CPU clock frequency was fixed at 2.1 GHz on both servers. They ran the Debian 9.13 Linux operating system with the 4.9.0-16-amd64 kernel. The measurements were performed by siitperf [LEN2021] using the "stateful" branch (latest commit: Aug. 16, 2021). The DPDK version was 16.11.11-1+deb9u2. The version of iptables was 1.6.0.¶
The ratio of the number of connections in the connection tracking table to the value of the hashsize parameter of iptables significantly influences its performance. Although the default setting is hashsize=nf_conntrack_max/8, we usually set hashsize=nf_conntrack_max to increase the performance of iptables. This was crucial when a high number of connections was used, because then the execution time of the tests was dominated by the preliminary phase, in which several hundred million connections had to be established. (In some cases, we had to use different settings due to memory limitations. The tables presenting the results always contain these parameters.)¶
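For reference, these parameters can be set as follows (the values are illustrative; the actual sizes used are given with the results):¶
   # Set the maximum number of tracked connections.
   sysctl -w net.netfilter.nf_conntrack_max=8388608
   # Set the hash table size to the same value (the default is 1/8 of it).
   echo 8388608 > /sys/module/nf_conntrack/parameters/hashsize¶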
The size of the port number pool is an important parameter of the benchmarking method for stateful NATxy gateways, thus it is also given for all tests.¶
To examine how the performance of iptables scales up with the number of CPU cores, the number of active CPU cores was set to 1, 2, 4, 8, 16 using the "maxcpus=" kernel parameter.¶
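For example, the kernel can be limited to 4 cores by adding the parameter to the kernel command line (a hypothetical GRUB configuration fragment, to be followed by update-grub and a reboot):¶
   GRUB_CMDLINE_LINUX_DEFAULT="maxcpus=4"¶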
The number of connections was always 4,000,000 using 4,000 different source port numbers and 1,000 different destination port numbers. Both the connection tracking table size and the hash table size were set to 2^23.¶
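(The number of connections is the product of the sizes of the two port number ranges: 4,000 * 1,000 = 4,000,000. With a hash table size of 2^23 = 8,388,608, this results in an average load of about 0.48 connections per hash bucket.)¶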
The error of the binary search was chosen to be lower than 0.1% of the expected results. The experiments were executed 10 times.¶
Besides the connection establishment rate and the throughput of iptables, the throughput of the IPv4 packet forwarding of the Linux kernel was also measured to provide a baseline for comparison.¶
The results are presented in Figure 2. The unit for the maximum connection establishment rate is 1,000 connections per second. The unit for throughput is 1,000 packets per second (measured with bidirectional traffic, and the number of all packets per second is displayed).¶
Whereas the throughput of IPv4 packet forwarding scaled up from 0.91Mpps to 11.56Mpps, showing a relative scale-up of 0.793, the throughput of iptables scaled up from 414.9kpps to 4,557kpps, showing a relative scale-up of 0.686 (and the relative scale-up of the maximum connection establishment rate is only 0.666). On the one hand, this is the price of the stateful operation. On the other hand, this result is quite good compared to the scale-up of NSD (a high-performance authoritative DNS server) presented in Table 9 of [LEN2020], which is only 0.52 (1,454,661/177,432 = 8.2-fold performance using 16 cores), even though DNS is not a stateful technology.¶
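(Here, relative scale-up is computed as the 16-core performance divided by 16 times the single-core performance; for example, for the throughput of iptables: 4,557 / (16 * 414.9) = 0.686.)¶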
To examine how the performance of iptables degrades with the number of connections in the connection tracking table, the number of connections was increased fourfold by doubling the size of both the source port number range and the destination port number range. Both the connection tracking table size and the hash table size were also increased fourfold. However, we reached the limits of the hardware at 400,000,000 connections: we could not set the size of the hash table to 2^29, only to 2^28. The same value was used at 800,000,000 connections, too, where the number of connections was only doubled, because 1.6 billion connections would not fit into the memory.¶
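(Since the number of connections is the product of the sizes of the two port number ranges, doubling both ranges quadruples it; the resulting series was 1.56M, 6.25M, 25M, 100M and 400M connections, followed by a final doubling to 800M.)¶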
The error of the binary search was chosen to be lower than 0.1% of the expected results. The experiments were executed 10 times (except for the very long-lasting measurements with 800,000,000 connections).¶
The results are presented in Figure 4. The unit for the maximum connection establishment rate is 1,000,000 connections per second. The unit for throughput is 1,000,000 packets per second (measured with bidirectional traffic, and the number of all packets per second is displayed).¶
The performance of iptables shows degradation at 6.25M connections compared to 1.56M connections, very likely due to the exhaustion of the L3 cache of the CPU of the DUT. Then the performance of iptables is fairly constant up to 100M connections. A small performance decrease can be observed at 400M connections due to the lower hash table size. A more significant performance decrease can be observed at 800M connections. It is caused by two factors:¶
We note that the CPU has 2 NUMA nodes; cores 0, 2, ..., 14 belong to NUMA node 0, and cores 1, 3, ..., 15 belong to NUMA node 1. The maximum memory consumption with 400,000,000 connections was below 150GB, thus it could be stored in NUMA-local memory.¶
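For reference, the NUMA topology and the amount of memory per NUMA node can be inspected, for example, as follows:¶
   lscpu | grep NUMA
   numactl --hardware¶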
Therefore, we have pointed out important limitations of the stateful NAT44 technology:¶
Therefore, we can conclude that, on the one hand, well-tailored hashing may guarantee an excellent scale-up of stateful NAT44 over a wide range of connection numbers; on the other hand, stateful operation has its limits, resulting both in a performance decrease when approaching the hardware limits and in an inability to handle more sessions when the memory limit is reached.¶
[I-D.ietf-bmwg-benchmarking-stateful] has defined the connection tear down rate measurement as an aggregate measurement: N connections are loaded into the connection tracking table of the DUT, then the entire content of the connection tracking table is deleted, and its deletion time (T) is measured. Finally, the connection tear down rate is computed as N/T.¶
We have observed that the deletion of an empty connection tracking table of iptables may take a significant amount of time, depending on its size. Therefore, we made our measurements more accurate by subtracting the deletion time of the empty connection tracking table from that of the filled one, thus obtaining the time spent deleting the connections themselves.¶
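As an illustration, the corrected measurement may be performed with the conntrack(8) tool along these lines (a minimal sketch, assuming N connections have already been loaded into the table):¶
   N=4000000   # hypothetical number of preloaded connections
   # Flush the filled connection tracking table and measure the time taken.
   t0=$(date +%s.%N); conntrack -F; t1=$(date +%s.%N)
   # Flush again: the table is now empty, so this measures the fixed overhead.
   t2=$(date +%s.%N); conntrack -F; t3=$(date +%s.%N)
   # Tear down rate: N divided by the overhead-corrected deletion time.
   echo "scale=0; $N / (($t1 - $t0) - ($t3 - $t2))" | bc¶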
The same setup and parameters were used as in Section 2.4, and the experiments were executed 10 times (except for the long-lasting measurements with 800,000,000 connections).¶
The results are presented in Figure 5.¶
The connection tear down performance of iptables shows significant degradation at 6.25M connections compared to 1.56M connections, very likely due to the exhaustion of the L3 cache of the CPU of the DUT. Then it shows only a minor degradation up to 100M connections. A small performance increase can be observed at 400M connections due to the relatively lower hash table size. A more visible performance increase can be observed at 800M connections. It is likely caused by keeping the hash table size constant while doubling the number of connections: the same phenomenon that degraded the maximum connection establishment rate and the throughput now made the deletion of the connections faster, and thus increased the connection tear down rate.¶
We note that, according to the recommended settings of iptables, 8 connections are hashed to each bucket of the hash table on average, but we deliberately used a much smaller number (0.745 whenever it was possible) to increase the maximum connection establishment rate and thus to speed up experimenting. In the end, however, this choice significantly slowed down our experiments due to the resulting very low connection tear down rate.¶
[I-D.ietf-bmwg-benchmarking-stateful] has defined connection tracking table capacity measurement using the following quantities:¶
First, the order of magnitude of the size of the connection tracking table is determined by an exponential search. When it stops, the capacity C of the connection tracking table is between CS and CT=2*CS.¶
Then the size C of the connection tracking table is determined by a binary search within an error of E.¶
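The structure of the search may be sketched as follows (illustrative only; siitperf performs the search internally, and fill_succeeds is a hypothetical predicate reporting whether the given number of connections can all be established):¶
   CS=1000000          # hypothetical initial size
   E=1                 # allowed error of the binary search
   while fill_succeeds $((2 * CS)); do   # exponential phase
       CS=$((2 * CS))
   done
   CT=$((2 * CS))      # now CS <= C < CT = 2*CS
   while [ $((CT - CS)) -gt $E ]; do     # binary phase, stop within error E
       MID=$(( (CS + CT) / 2 ))
       if fill_succeeds $MID; then CS=$MID; else CT=$MID; fi
   done
   echo "The capacity C of the connection tracking table is about $CS"¶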
Measurements were performed with the following parameters: hashsize=nf_conntrack_max=2^22=4,194,304; R0=1,000,000; E=1; alpha=1.0; beta=0.2; gamma=0.4. The measurements were performed 10 times to see the stability of the results.¶
The results are presented in Figure 6. The exponential search finished at its third step (CS=4,000,000 and CT=8,000,000), and the result of the final binary search was always very close to 4,194,304.¶
Jool [JOOLMX] is an open source SIIT and stateful NAT64 implementation for Linux. Since version 4.2, it also supports MAP-T. It has been developed by NIC Mexico in cooperation with ITESM (Monterrey Institute of Technology and Higher Education). Its source code is released under the GPLv2 license.¶
The same methodology was used as in Section 2, but now the test setup in Figure 7 was followed. The same Tester and DUT devices were used as before, but the operating system of the DUT was updated to Debian 10.11 with the 4.19.0-18-amd64 kernel to meet the requirements of the jool-tools package. The version of Jool was 4.1.6, the most mature version at the time the measurements were started (release date: 2021-12-10).¶
Unlike with iptables, we did not find any way to tune the hashsize or any other parameters of Jool.¶
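For reference, a stateful NAT64 instance can be created with Jool along these lines (an illustrative sketch, not necessarily the exact configuration used; the well-known prefix is an assumption):¶
   # Load the NAT64 kernel module and create a Netfilter-attached instance
   # using the well-known prefix (illustrative values).
   modprobe jool
   jool instance add "example" --netfilter --pool6 64:ff9b::/96¶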
The number of connections was always 1,000,000 using 2,000 different source port numbers and 500 different destination port numbers.¶
The error of the binary search was chosen to be lower than 0.1% of the expected results. The experiments were executed 10 times.¶
The results are presented in Figure 8. The unit for the maximum connection establishment rate is 1,000 connections per second. The unit for throughput is 1,000 packets per second (measured with bidirectional traffic, and the number of all packets per second is displayed).¶
Both the maximum connection establishment rate and the throughput scaled up poorly with the number of active CPU cores; the performance increase above 4 CPU cores was very small.¶
To examine how the performance of Jool degrades with the number of connections, the number of connections was increased fourfold by doubling the size of both the source port number range and the destination port number range. We did not reach the limits of the hardware regarding the number of connections, because, unlike iptables, Jool also worked with 1.6 billion connections.¶
The error of the binary search was chosen to be lower than 0.1% of the expected results, and the experiments were executed 10 times (except for the very long-lasting measurements with 800 million and 1.6 billion connections, to save execution time).¶
The results are presented in Figure 9. The unit for the maximum connection establishment rate is 1,000 connections per second. The unit for throughput is 1,000 packets per second (measured with bidirectional traffic, and the number of all packets per second is displayed).¶
The performance of Jool shows degradation over the entire range of the number of connections. We have not yet analyzed the root cause of the degradation, and we are not familiar with the implementation of its connection tracking table. We also plan to examine the memory consumption of Jool, which is definitely lower than that of iptables.¶
Basically, the same measurement method was used as in Section 2.5; however, as there was no parameter of Jool to tune, only a single measurement series was performed to determine the deletion time of the empty connection tracking table. The median, minimum and maximum of the 10 measurements were 0.46s, 0.42s and 0.50s, respectively.¶
The same setup and parameters were used as in Section 2.4, and the experiments were executed 10 times (except for the long-lasting measurements with 800,000,000 connections).¶
The results are presented in Figure 10. The unit for the connection tear down rate is 1,000,000 connections per second.¶
The connection tear down performance of Jool is excellent at any number of connections. It is about an order of magnitude higher than its connection establishment rate and than the connection tear down rate of iptables. (A slight degradation can be observed at 100M connections.)¶
The measurement of connection establishment rate with validation was performed using different values for the "alpha" parameter.¶
The results are presented in Figure 11. It is clearly visible that alpha values of 0.8 and 0.6 cause a significant decrease of the validated rate; therefore, they are unsuitable. Values of 0.5 and 0.25 make no difference compared to the unvalidated connection establishment rate. (The less than 1,000 cps increase of the median is most likely a measurement error.)¶
This section summarizes the essence of our measurements for the scalability comparison of the Jool implementation of the 464XLAT and MAP-T IPv4aaS technologies presented in [LEN2022]. The measurements did not comply with the requirements of [RFC8219], but the results give an insight into the scalability of the Jool implementation of the two technologies. Because of the limitations of the measurement method, only their scalability with the number of CPU cores was examined.¶
The measurement setup for the scalability analysis of 464XLAT is shown in Figure 12.¶
The p097 - p100 devices were the same type of Dell PowerEdge R430 servers, residing at NICT StarBED, as before, and the Debian 10.11 Linux operating system with kernel version 4.19 was used. Both CLAT and PLAT were implemented by Jool [JOOLMX]. To facilitate a fair comparison with MAP-T, Jool version 4.2.0-rc2 was used, since Jool supports MAP-T from version 4.2.¶
The measurement traffic was generated by the dns64perf++ program, which sent DNS queries for "AAAA" records and counted the valid replies. The reverse traffic was generated by the "Knot DNS" authoritative DNS server. As the devices were interconnected by a switch with VLANs rather than by direct cables, we allowed 0.01% packet loss during the binary search for the highest supported rate. To measure how the performance of the 464XLAT test system scaled up with the number of CPU cores, the number of CPU cores of the CLAT and PLAT devices was set to 1, 2, 4, 8, and 16, whereas the number of CPU cores of the A and B parts of the Tester was always 32.¶
The number of connections was always 1,600. (dns64perf++ used 16 thread pairs, and the number of source port numbers per sending thread was set to 100. The destination port number was always 53, the well-known port number of DNS.) The reason behind this low number of connections was to use the same number of connections as with MAP-T, which had a limit of 2,048 source port numbers per subscriber.¶
The results are presented in Figure 13. It is clearly visible that the scalability of the system is moderate: the addition of the last 8 cores results in only a 4% performance increase.¶
The measurement setup for the scalability analysis of MAP-T is shown in Figure 14.¶
The configuration of the test system and the measurement method was the same as with 464XLAT.¶
The results are presented in Figure 15. It is clearly visible that the scalability of the system is much better in this case.¶
All further details can be found in our open access paper [LEN2022].¶
The measurements were carried out remotely using the resources of NICT StarBED, 2-12 Asahidai, Nomi-City, Ishikawa 923-1211, Japan. The author would like to thank Shuuhei Takimoto for the possibility to use StarBED, as well as Satoru Gonno and Makoto Yoshida for their help and advice in StarBED usage related issues.¶
The author would like to thank Ole Troan for his comments on the v6ops mailing list, made at the time when the scalability measurements of iptables were intended to be part of the draft later published as [RFC9313].¶
This document does not make any request to IANA.¶
Initial version: scale up of iptables.¶
Added the scale up of Jool.¶
Connection tear down rate measurements of iptables and Jool.¶
Added: introductions to iptables and Jool, connection tracking table capacity measurement of iptables and connection validation measurement of Jool.¶
Added: scalability comparison of the Jool implementation of the 464XLAT and of the MAP-T IPv4aaS technologies using DNS traffic.¶