Contributed by rueda from the aggregating bonded trunks dept.
David Gwynne (dlg@) has committed to -current a dedicated Link Aggregation (IEEE 802.1AX) driver, aggr(4). The main commit message explains the raison d'être:
CVSROOT:	/cvs
Module name:	src
Changes by:	dlg@cvs.openbsd.org	2019/07/04 19:35:58

Added files:
	sys/net: if_aggr.c

Log message:
add aggr(4), a dedicated driver that implements 802.1AX link aggregation

802.1AX (formerly known as 802.3ad) describes the Link Aggregation
Control Protocol (LACP) and how to use it in a bunch of different
state machines to control when to bundle interfaces into an aggregation.

technically the trunk(4) driver already implements support for 802.1AX,
but it had a couple of problems i struggled to deal with as part of
that driver.

firstly, i couldnt easily make the output path in trunk mpsafe without
getting bogged down, and the state machine handling had a few hard to
diagnose edge cases that i couldnt figure out.

the new driver has an mpsafe output path, and implements ifq bypass
like vlan(4) does. this means output with aggr(4) is up to twice as
fast as trunk(4).

the implementation of the state machines as per the standard means the
driver behaves more correctly in edge cases like when a physical link
looks like it is up, but is logically unidirectional.

the code has been good enough for me to use in production, but it does
need more work. that can happen in tree now instead of carrying a
large diff around.

some testing by ccardenas@, hrvoje popovski, and jmatthew@
ok deraadt@ ccardenas@ jmatthew@
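For the curious, getting an aggr(4) interface up follows the familiar trunk(4) pattern. A minimal sketch (the interface names and address below are placeholders; see the aggr(4) man page for the authoritative syntax):

    # create the aggregation and add two physical ports to it
    ifconfig aggr0 create
    ifconfig aggr0 trunkport em0
    ifconfig aggr0 trunkport em1
    # configure the address on the aggr itself, not the ports
    ifconfig aggr0 inet 192.0.2.1/24 up

aggr(4) speaks LACP on the member ports, so the device at the other end of the links has to be configured for LACP as well.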
By Peter J. Philipp (pjp) nospam@centroid.eu (https://centroid.eu)
I gave this a test on my G5 Power Mac with DP 1.8 GHz processors. Unfortunately the macppc isn't powerful enough to shuffle more than 840 Mbit/s on one processor.
What I did was run 10 GbE from a Xeon to/from two aggr(4)'ed interfaces and back to the Xeon, through a LAG- and VLAN-capable switch (also 10 GbE). What I got back was 420 Mbit/s throughput with CPU 0 on the Mac at 4% idle. Double that and you get about 840 Mbit/s of I/O. I had also set jumbo frames (MTU 9000) for the iperf benchmarks.
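A test along these lines can be reproduced with iperf; the invocation below is a guess at the setup described, not Peter's exact commands. Multiple parallel streams matter here, because LACP distributes traffic per flow, so a single TCP stream will only ever use one member port of the aggregation.

    # on the Xeon end
    iperf -s
    # on the macppc, several streams so flows can hash across both ports
    iperf -c 192.0.2.10 -P 4 -t 30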
I know my quad-Ethernet PCI-X card could do more than this bus-wise, and even if it were 32-bit PCI, there would have been bandwidth to spare on the bus.
This new driver is great and seems to work out of the box. I was briefly on IRC wondering why it didn't detect the link as active, but that was solved once I administratively activated the aggregation on the switch.
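For example, on a Cisco-style switch CLI the ports typically have to be placed into an LACP channel group in active (or passive) mode before the switch will negotiate; the port names and group number below are placeholders for whatever the switch actually uses:

    interface range GigabitEthernet0/1 - 2
     channel-group 1 mode active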
Thank you for all the good work!
-peter
By Peter J. Philipp (pjp) nospam@centroid.eu (https://centroid.eu)
Oh yeah, one more thing: the macppc has a quad-port em(4) card, and I got "Device busy" messages when trying to raise the MTU; not sure if that was because of the aggr(4) interface, though. A reboot with the new MTU settings worked.
-peter
By David Gwynne (dlg) dlg@openbsd.org
aggr(4) takes over the MTU of the ports that are added to it, just like the MAC address. You don't have to configure the MTU on both the aggr and its ports; just set it on the aggr and it pushes it out to the ports.
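In other words, something like the following should be all that's needed once the ports are enslaved (a sketch, with em0/em1 assumed as the member ports):

    # set the MTU on the aggregation interface only;
    # aggr(4) pushes it out to the member ports
    ifconfig aggr0 mtu 9000

Trying to change the MTU on a member port directly while it belongs to the aggr is likely what produced the "Device busy" error above.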
By Gadai BPKB (BPKB) dankos.grt@gmail.com (https://digadaibpkb.com)
Where to get the driver?
By Peter J. Philipp (pjp) nospam@centroid.eu (https://centroid.eu)
It's in snapshots, or you have to wait for 6.6, I guess.