OpenBSD Journal

OpenBSD 4.9 Latency and Throughput testing

Contributed by pitrh on from the filling-the-tubes dept.

Andrew Fresh wrote in to tell us about his recent work in measuring key performance data for OpenBSD 4.9 using professional-grade testing equipment. Andrew writes,

I have been working on building a new router for a customer and have had access to a pair of Spirent SmartBits SMB-600 testers with gigabit interfaces. This gave me the opportunity to do some testing. The machines available are Dell PowerEdge 860s with "80557,Xeon 3000 Conroe,3060,LGA775,Burn 2" processors. I tested with the onboard bge(4) interfaces and, thanks to Dave, some em(4) NICs as well.

The testing shows that with a single-processor kernel, for any average packet size above 767 bytes you can easily saturate a gigabit network interface forwarding packets. I was able to achieve close to 300k pps in every configuration I tested. Surprisingly, with small packet sizes amd64 did slightly better than i386, achieving a maximum of 347,812 pps with an em(4) NIC. Until the interface was saturated, latency remained low, at less than 500 microseconds.
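As a rough cross-check of the 767-byte figure (my own back-of-the-envelope arithmetic, not part of the original tests): each Ethernet frame also occupies 20 bytes on the wire beyond the frame itself (8-byte preamble/SFD plus the 12-byte inter-frame gap), so the theoretical gigabit line rate in packets per second works out as follows:

```python
# Theoretical gigabit line rate in packets per second for a given
# Ethernet frame size.  Each frame occupies an extra 20 bytes on the
# wire (8-byte preamble/SFD + 12-byte inter-frame gap).
LINE_RATE_BPS = 1_000_000_000
WIRE_OVERHEAD = 20  # bytes per frame beyond the frame itself

def line_rate_pps(frame_size: int) -> float:
    return LINE_RATE_BPS / ((frame_size + WIRE_OVERHEAD) * 8)

print(round(line_rate_pps(64)))   # minimum-size frames: 1,488,095 pps
print(round(line_rate_pps(768)))  # 158,629 pps
```

At 768-byte frames the wire tops out near 159k pps, well below the ~300k pps the router could forward, which is why packets above that size saturate the interface rather than the machine.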

For more details and pictures, see the full post.


  1. By Dan Shechter (danshtr) on

    If I understand correctly, you made PF evaluate its rule set for each and every packet going through, which makes the results very impressive!

    I wonder what would have happened if you had tried UDP rather than TCP, and kept state.

    1. By Andrew Fresh (andrew) on

      The program I was using to generate the packets does not support creating actual TCP flows; for that I would need a more advanced piece of software to control the testing hardware. The simple one is complicated enough for the amount of time I have to put into testing, so keeping state is unlikely to happen. However, I did just use the default ruleset, which has very few rules, so I would not expect much difference.
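      For context, a hypothetical minimal ruleset along these lines (not the actual one used in the tests) would already track UDP flows, since pass rules in pf of this era keep state by default:

      ```
      # Hypothetical minimal pf.conf, not the ruleset used in the tests.
      # em0 is a placeholder for the external interface.
      ext_if = "em0"
      set skip on lo
      # "keep state" is the default for pass rules, so only the first
      # packet of each UDP flow is evaluated against the ruleset.
      pass in on $ext_if inet proto udp
      pass out on $ext_if
      ```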

      When I have time for more testing I may well try some UDP packets.

  2. By Kevin Bowling (kev009) on

    Is the moral of the story that the mp kernel has packet loss? I've got two Atom D525 boxes running as gateways; is that a bad idea?

    1. By Andrew Fresh (andrew) on

      Yes. In my testing, and as reported by others, I saw packet loss that I haven't had time to file a proper bug report on. It does not seem to have much effect on real-world traffic because it is a fairly low percentage.

    2. By Anonymous Coward (anon) on

      > Is the moral of the story that the mp kernel has packet loss?

      No. In a certain configuration, on certain hardware, running certain OS versions, a problem was seen. But you can't generalize from that without doing more testing, and in my experience that kind of result is unusual.

      > I've got 2 atom d525 boxes running as gateways, is that a bad idea?

      MP involves some overheads not needed with UP, so you may do a bit better with UP on primarily kernel-based workloads, but unless you're seeing a particular problem, that most likely just translates to a bit lower peak performance.
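      As an aside (this is the standard OpenBSD mechanism, not something from the tests above): both kernels ship in the base system, so trying the single-processor kernel is just a matter of pointing the bootloader at it, either interactively at the boot> prompt or persistently via boot.conf(8). The UP kernel's filename depends on how the machine was installed; /bsd.sp is assumed here.

      ```
      # /etc/boot.conf -- boot the uniprocessor kernel by default.
      # The filename is an assumption; on many MP installs the UP
      # kernel is kept as /bsd.sp when bsd.mp is installed as /bsd.
      set image /bsd.sp
      ```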

