New – ENA Express: Improved Network Latency and Per-Flow Performance on EC2

We know that you can always make great use of all available network bandwidth and performance, and we have done our best to supply it to you. Over the years, network bandwidth has grown from 250 Mbps on the original m1 instances to 200 Gbps on the newest m6in instances. In addition to raw bandwidth, we have also introduced advanced networking features including Enhanced Networking, Elastic Network Adapters (ENAs), and (for tightly coupled HPC workloads) Elastic Fabric Adapters (EFAs).

Introducing ENA Express
Today we are launching ENA Express. Building on the Scalable Reliable Datagram (SRD) protocol that already powers Elastic Fabric Adapters, ENA Express reduces P99 latency of traffic flows by up to 50% and P99.9 latency by up to 85% (in comparison to TCP), while also increasing the maximum single-flow bandwidth from 5 Gbps to 25 Gbps. Bottom line, you get a lot more per-flow bandwidth and a lot less variability.

You can enable ENA Express on new and existing ENAs and take advantage of this performance right away for TCP and UDP traffic between c6gn instances running in the same Availability Zone.
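
If you prefer the command line, ENA Express can also be enabled on an existing ENI with the AWS CLI. Here is a minimal sketch (assuming the --ena-srd-specification option of modify-network-interface-attribute; the ENI ID is a placeholder):

# The ENI ID below is a placeholder -- replace it with your own
$ aws ec2 modify-network-interface-attribute \
    --network-interface-id eni-0123456789abcdef0 \
    --ena-srd-specification 'EnaSrdEnabled=true,EnaSrdUdpSpecification={EnaSrdUdpEnabled=true}'

This enables SRD for both TCP and UDP traffic on the interface; omitting the EnaSrdUdpSpecification portion should leave UDP traffic on the regular ENA path.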

Using ENA Express
I used a pair of c6gn instances to set up and test ENA Express. After I launched the instances, I used the AWS Management Console to enable ENA Express for both of them. I found each ENI, selected it, and chose Manage ENA Express from the Actions menu:

I enabled ENA Express and ENA Express UDP and clicked Save:

Then I set the Maximum Transmission Unit (MTU) to 8900, the largest MTU that ENA Express supports, on both instances:

$ sudo /sbin/ifconfig eth0 mtu 8900
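
The same change can be made with the more modern ip command if you prefer (a quick sketch; eth0 matches the interface name on my instances, and the setting does not persist across reboots unless you add it to the instance's network configuration):

# Assumes the interface is named eth0; the setting is not persistent
$ sudo ip link set dev eth0 mtu 8900
$ ip link show eth0 | grep mtu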

I installed iperf3 on both instances and started the first one in server mode:

$ iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------

Then I ran the second one in client mode and observed the results:

$ iperf3 -c 10.0.178.46
Connecting to host 10.0.178.46, port 5201
[ 4] local 10.0.187.74 port 35622 connected to 10.0.178.46 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 2.80 GBytes 24.1 Gbits/sec 0 1.43 MBytes
[ 4] 1.00-2.00 sec 2.81 GBytes 24.1 Gbits/sec 0 1.43 MBytes
[ 4] 2.00-3.00 sec 2.80 GBytes 24.1 Gbits/sec 0 1.43 MBytes
[ 4] 3.00-4.00 sec 2.81 GBytes 24.1 Gbits/sec 0 1.43 MBytes
[ 4] 4.00-5.00 sec 2.81 GBytes 24.1 Gbits/sec 0 1.43 MBytes
[ 4] 5.00-6.00 sec 2.80 GBytes 24.1 Gbits/sec 0 1.43 MBytes
[ 4] 6.00-7.00 sec 2.80 GBytes 24.1 Gbits/sec 0 1.43 MBytes
[ 4] 7.00-8.00 sec 2.81 GBytes 24.1 Gbits/sec 0 1.43 MBytes
[ 4] 8.00-9.00 sec 2.81 GBytes 24.1 Gbits/sec 0 1.43 MBytes
[ 4] 9.00-10.00 sec 2.81 GBytes 24.1 Gbits/sec 0 1.43 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 28.0 GBytes 24.1 Gbits/sec 0 sender
[ 4] 0.00-10.00 sec 28.0 GBytes 24.1 Gbits/sec receiver
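
Because I enabled ENA Express for UDP as well, a similar client run can exercise the UDP path. This is just a sketch; the target bitrate and datagram size are illustrative values, and the server address is the same one used above:

# -u selects UDP, -b sets the target bitrate, -l sets the datagram size (illustrative values)
$ iperf3 -c 10.0.178.46 -u -b 20G -l 8800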

The ENA driver reports metrics that I can review to confirm the use of SRD:

$ ethtool -S eth0 | grep ena_srd
ena_srd_mode: 3
ena_srd_tx_pkts: 25858313
ena_srd_eligible_tx_pkts: 25858323
ena_srd_rx_pkts: 2831267
ena_srd_resource_utilization: 0

The metrics work as follows:

ena_srd_mode describes which ENA Express features are enabled; the value of 3 shown above indicates that SRD is enabled for both TCP and UDP traffic.
ena_srd_tx_pkts denotes the number of packets that have been transmitted via SRD.
ena_srd_eligible_tx_pkts denotes the number of packets that were eligible for transmission via SRD. A packet is eligible for SRD if ENA Express is enabled on both ends of the connection, both instances reside in the same Availability Zone, and the packet uses either TCP or UDP.
ena_srd_rx_pkts denotes the number of packets that have been received via SRD.
ena_srd_resource_utilization denotes the percent of allocated Nitro network card resources that are in use, and is proportional to the number of open SRD connections. If this value is consistently approaching 100%, scaling out to more instances or scaling up to a larger instance size may be warranted.
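
To get a quick sense of how much eligible traffic is actually flowing over SRD, a one-liner along these lines can compute the ratio from the driver statistics (a sketch that assumes the metric names shown above):

# Compare SRD-transmitted packets against SRD-eligible packets
$ ethtool -S eth0 | awk '/ena_srd_tx_pkts:/ {tx=$2} /ena_srd_eligible_tx_pkts:/ {elig=$2} END {if (elig > 0) printf "%.2f%% of eligible packets were sent via SRD\n", 100*tx/elig}'

With the counters shown above, that works out to essentially 100% of eligible packets.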

Things to Know
Here are a few things to know about ENA Express and SRD:

Access – I used the AWS Management Console to enable and test ENA Express; CLI, API, CloudFormation, and CDK support is also available.

Fallback – If a TCP or UDP packet is not eligible for transmission via SRD, it will simply be transmitted in the usual way.

UDP – SRD takes advantage of multiple network paths and “sprays” packets across them. This would normally present a challenge for applications that expect packets to arrive more or less in order, but ENA Express helps out by putting the UDP packets back into order before delivering them to you, taking the burden off of your application. If you have built your own reliability layer over UDP, or if your application does not require packets to arrive in order, you can enable ENA Express for TCP but not for UDP.

Instance Types and Sizes – We are launching with support for c6gn.16xlarge instances, with additional instance families and sizes in the works.

Resource Utilization – As I hinted at above, ENA Express uses some Nitro card resources to process packets. This processing also adds a few microseconds of latency per packet processed, and also has a moderate but measurable effect on the maximum number of packets that a particular instance can process per second. In situations where high packet rates are coupled with small packet sizes, ENA Express may not be appropriate. In all other cases you can simply enable SRD to enjoy higher per-flow bandwidth and consistent latency.
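
If you want to keep an eye on this over time, something as simple as the following can sample the counter every few seconds (a sketch that reuses the ethtool statistics shown earlier):

# Sample the SRD resource utilization metric every 5 seconds
$ watch -n 5 'ethtool -S eth0 | grep ena_srd_resource_utilization'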

Pricing – There is no additional charge for the use of ENA Express.

Regions – ENA Express is available in all commercial AWS Regions.

All About SRD
I could write an entire blog post about SRD, but my colleagues beat me to it! Here are some great resources to help you to learn more:

A Cloud-Optimized Transport for Elastic and Scalable HPC – This paper reviews the challenges that arise when running HPC traffic across a TCP-based network, points out that variability (latency outliers) can have a profound effect on scaling efficiency, and includes a succinct overview of SRD:

Scalable reliable datagram (SRD) is optimized for hyper-scale datacenters: it provides load balancing across multiple paths and fast recovery from packet drops or link failures. It utilizes standard ECMP functionality on the commodity Ethernet switches and works around its limitations: the sender controls the ECMP path selection by manipulating packet encapsulation.

There’s a lot of interesting detail in the full paper, and it is well worth reading!

In the Search for Performance, There’s More Than One Way to Build a Network – This 2021 blog post reviews our decision to build the Elastic Fabric Adapter, and includes some important data (and cool graphics) to demonstrate the impact of packet loss on overall application performance. One of the interesting things about SRD is that it keeps track of the availability and performance of multiple network paths between transmitter and receiver, and sprays packets across up to 64 paths at a time in order to take advantage of as much bandwidth as possible and to recover quickly in case of packet loss.

Jeff;
