Appendix B: Network Testing Options
A key part of certification is testing your SUT’s network cards. This document is written with the assumption of a fairly basic configuration; however, some labs may have more advanced needs. Important variables include:
Multiple simultaneous network tests – A single server takes about 60 minutes per network port to run its network tests – long enough that testing multiple SUTs simultaneously is likely to result in contention for access to the iperf3 server. This is especially true if SUTs have multiple network ports – a server with four ports will tie up an iperf3 server for four hours. An iperf3 server will refuse multiple simultaneous connections, which should at least enable one SUT’s network tests to pass; but if the iperf3 server has a sufficiently fast NIC, it will then be under-utilized.
Advanced network interfaces – A low-end computer configured as described here will likely have a 1 Gbps link to the internal LAN. If you’re testing systems with faster interfaces, you will need a separate computer to function as an iperf3 server.
If your iperf3 target system has a fast NIC and you want to test multiple slower SUTs, you can configure the fast NIC with multiple IP addresses. A NetPlan configuration (as used in Ubuntu 17.10 and later) to support multiple IP addresses can be enabled in /etc/netplan/50-cloud-init.yaml (or another file; the name varies depending on how the system was installed). For example:
network:
    version: 2
    ethernets:
        eno2:
            addresses:
                - 172.24.124.2/22
                - 172.24.124.3/22
                - 172.24.124.4/22
Note that you do not explicitly set separate names for each interface.
You must activate the changes after making them. In theory, you can do this without rebooting by typing sudo netplan apply; however, you may find it’s necessary to reboot to reliably apply an advanced configuration like this one. You can verify the network settings with ip addr show eno2 (changing the interface name as necessary):
$ ip addr show eno2
3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
link/ether 08:00:27:90:0e:07 brd ff:ff:ff:ff:ff:ff
inet 172.24.124.2/22 brd 172.24.127.255 scope global eno2
valid_lft forever preferred_lft forever
inet 172.24.124.3/22 brd 172.24.127.255 scope global secondary eno2
valid_lft forever preferred_lft forever
inet 172.24.124.4/22 brd 172.24.127.255 scope global secondary eno2
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe90:e07/64 scope link
valid_lft forever preferred_lft forever
This example shows eno2 up with all three of its IP addresses. Note that the older ifconfig tool will show only the first IP address for any device configured via NetPlan.
You would then launch iperf3 separately on each IP address:
iperf3 -sD -B 172.24.124.2
iperf3 -sD -B 172.24.124.3
iperf3 -sD -B 172.24.124.4
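Each instance binds the default iperf3 port (5201) on a different address, so all three can run at once. To confirm that every instance is listening, you can check for the bound port:
$ ss -tln | grep 5201
Each of the three configured addresses should appear as a separate listener.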
On the MAAS server, you can enter all of the iperf3 target addresses in /etc/maas-cert-server/iperf.conf:
172.24.124.2,172.24.124.3,172.24.124.4
The result should be that each of your SUTs will detect an open port on the iperf3 server and use it without conflict, up to the number of addresses you’ve configured. Past a certain point, though, you may over-stress your CPU or NIC, which will result in failed network tests. You may need to discover the limit experimentally.
Furthermore, if you want to test a SUT with a NIC that matches the speed of the iperf3 server’s NIC, you’ll have to ensure that the high-speed SUT is tested alone – additional simultaneous tests will degrade the performance of all the tests, causing them all to fail.
If the iperf3 server has multiple interfaces of differing speeds, you may find that performance will match the lowest-speed interface. This is because the Linux kernel arbitrarily decides which NIC to use for handling network traffic when multiple NICs are linked to one network segment, so the kernel may use a low-speed NIC in preference to a high-speed NIC. Two solutions to this problem exist:
You can disable the lower-speed NIC(s) (permanently or temporarily) and rely exclusively on the high-speed NIC(s), at least when performing high-speed tests.
You can configure the high-speed and low-speed NICs to use different address ranges – for instance, 172.24.124.0/22 for the low-speed NICs and 172.24.128.0/22 for the high-speed NICs. This approach will require additional MAAS configuration not described here. To minimize DHCP hassles, it’s best to keep the networks on separate physical switches or VLANs, too.
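As a minimal NetPlan sketch of the second approach (assuming a low-speed NIC named eno1 and a high-speed NIC named enp4s0; both the interface names and the addresses are placeholders for illustration), the configuration might look like this:
network:
    version: 2
    ethernets:
        eno1:
            addresses:
                - 172.24.124.10/22
        enp4s0:
            addresses:
                - 172.24.128.10/22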
If your network has a single iperf3 server with multiple physical interfaces, you can launch iperf3 separately on each NIC, as just described; however, you may run into a variant of the problem with NICs of differing speed – the Linux kernel may try to communicate over just one NIC, causing a bottleneck and degraded performance for all tests. Using multiple network segments or bonding NICs together may work around this problem, at the cost of increased configuration complexity.
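For the bonding approach, a minimal NetPlan sketch might look like the following, assuming two same-speed interfaces named eno1 and eno2 and a switch configured for LACP (all names and addresses here are illustrative):
network:
    version: 2
    ethernets:
        eno1: {}
        eno2: {}
    bonds:
        bond0:
            # 802.3ad (LACP) mode requires matching switch configuration.
            interfaces: [eno1, eno2]
            addresses:
                - 172.24.124.2/22
            parameters:
                mode: 802.3ad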
If your lab uses separate LANs for different network speeds, you can list IP addresses on separate LANs in /etc/maas-cert-server/iperf.conf on the MAAS server or in /etc/xdg/canonical-certification.conf on SUTs. The SUT will try each IP address in turn until a test passes or until all the addresses are exhausted.
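For example, an iperf.conf entry listing one iperf3 server address on each of two LANs (the addresses are illustrative) uses the same comma-separated format shown earlier:
172.24.124.2,172.24.128.2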
If you want to test multiple SUTs but your network lacks a high-speed NIC or a system with multiple NICs, you can do so by splitting your SUTs into two equal-sized groups. On Group A, launch iperf3 as a server, then run the certification suite on Group B, configuring these SUTs to point to Group A’s iperf3 servers. When that run is done, reverse their roles – run iperf3 as a server on Group B and run the certification suite on Group A. You’ll need to adjust the /etc/xdg/canonical-certification.conf file on each SUT to point it to its own matched server.
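For example, on a Group B SUT whose matched Group A machine is at 172.24.124.101 (an illustrative address), the relevant lines in /etc/xdg/canonical-certification.conf might look like this, assuming the TEST_TARGET_IPERF option used by the certification suite:
[environment]
TEST_TARGET_IPERF = 172.24.124.101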
Testing high-speed network devices (above 10 Gbps) requires changing some network configuration options. Appendix D of the Ubuntu Server Certified Hardware Self-Testing Guide covers how to configure both the SUT and the iperf3 target system for such testing. Configuration of one feature in particular, though, can be facilitated via MAAS: jumbo frames. When testing servers with high-speed network interfaces (those over about 10 Gbps), it’s often necessary to set jumbo frames (an MTU significantly higher than 1500, and typically 9000) on the iperf3 server, the SUT, and any intervening switches. By default, MAAS configures SUTs with an MTU of 1500; however, you can change this detail by editing the MAAS settings:
On any MAAS web UI screen, click Subnets.
On the resulting page, locate the fabric corresponding to your high-speed network and click the link under the VLAN column. (This is usually entitled “untagged,” unless you’ve configured VLAN tagging.)
Under VLAN Summary, you’ll probably see the MTU as set to 1500. If it’s already set to 9000, you don’t need to make any changes. If it’s 1500, though, click the Edit button near the top-right of this section.
The resulting input fields enable you to change configuration details for this network. Change the MTU field to 9000.
Click Save Summary to save the changes.
Perform a test deployment and verify that the node’s MTU is set to 9000 for the interface(s) connected to the high-speed network.
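As a quick check on the deployed node, you can inspect the interface’s link settings (eno2 here is a placeholder for whichever interface is on the high-speed network) and confirm that the output reports mtu 9000:
$ ip link show eno2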
You can make this change even to lower-speed networks or to networks with mixed speeds; however, the change applies to all the computers that MAAS controls on the associated fabric. Because jumbo frames create problems in some cases (such as PXE-booting some older UEFI-based computers, or complete failure of communication if intervening switches are not properly configured), you should be cautious about applying this change too broadly. That said, if it works for your servers, there’s little reason not to set jumbo frames universally. Note that this change will not automatically adjust your iperf3 servers’ MTUs, so you may need to set them manually, as described in the Self-Test Guide. You may also need to adjust your switches, since they too must support jumbo frames in order to deliver their speed benefit.
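On the iperf3 server itself, you can set the MTU transiently with ip (eno2 is again a placeholder interface name); to make the setting persistent across reboots, add mtu: 9000 under the interface in its NetPlan configuration:
$ sudo ip link set dev eno2 mtu 9000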
You may find the iftop utility helpful on the iperf3 server system. This tool enables you to monitor network connections, which can help you to spot performance problems early.
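For example, to watch connections on a specific interface (eno2 being a placeholder name; iftop requires root privileges):
$ sudo iftop -i eno2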