Contributed by sean on from the securing-hostile-networks-for-fun-and-profit dept.
Surprised I didn’t see this here already, but I thought readers might be interested to know that OpenBSD and pf were part of the foundation for the network at DEFCON. While that in and of itself is not surprising, the use of a quad-core Xeon to power the OpenBSD box is of interest: as those of us who religiously read henning@'s misc@ posts know, pf is actually somewhat hindered by the beastly SMP CPU. DEFCON has been using OpenBSD for a long time.
The write-up by Dave Bullock is on Wired's Threat Level blog.
Cheers.
By henning (213.39.181.2) on
pf itself doesn't benefit from multiple CPUs, the locking actually hurts a bit. we dunno what else they ran on the firewall. the more userland stuff is involved (proxies?), the more the MP kernel helps.
i guess they just used it because they could :)
By Anonymous Coward (64.131.83.138) on
Then pf is faulty, fix it... now, do you hear me? :-)
SMP is becoming commonplace, so instead of letting cores go unused, make pf use the cores, idiot.
By Anonymous Coward (203.20.79.230) on
>
> Then pf is faulty, fix it... now, do you hear me? :-)
>
> SMP is becoming commonplace, so instead of letting cores go unused, make pf use the cores, idiot.
Wow, if Henning is someone you'd consider an idiot, then I can't wait to see your awesome SMP capable packet filter!
Could you point it out to us?
By Anonymous Coward (146.131.120.2) on
Wow and here I thought the Undeadly community was actually in Adulthood in comparison with all the little kiddies Slashdot is made up of...
By Anonymous Coward (64.129.81.128) on
> Wow and here I thought the Undeadly community was actually in Adulthood in comparison with all the little kiddies Slashdot is made up of...
hence anonymous...
By Anonymous Coward (203.122.237.14) on
maybe, but they likely have to stand up to some pretty nasty DoS attacks
By Anonymous Coward (198.175.14.193) on
> maybe, but they likely have to stand up to some pretty nasty DoS attacks
>
Bah. The low-end Orthogon point-to-point gear they are using can't handle more than 6,000 packets per second. A Soekris 5501 could stand up to that shit :)
By nuintari (64.246.119.33) on
That was a Motorola 5.7 GHz Orthogon radio, if my eyes do not deceive me, advertised as "20 mbit, 16 mbit after modulation". Annnnnd about 14 mbit after reality sets in.
And that is in both directions; set it for 75% downlink and watch it _almost_ get 10 mbit down.
Soo, maybe they should order a 486?
By c2 (208.191.177.19) on
Maybe the extra cores came in handy for generating those Cacti graphs or running RRDtool.
By Heather (209.98.241.169) on
>
> pf itself doesn't benefit from multiple CPUs, the locking actually hurts a bit. we dunno what else they ran on the firewall. the more userland stuff is involved (proxies?), the more the MP kernel helps.
>
> i guess they just used it because they could :)
Keep in mind, our internet bottleneck only affects that specific traffic; our internal network traffic is not limited by that amount. But yes, it does not require multiple CPUs; however, we need to have hardware that won't die on us from age. I try to upgrade our hardware every few years to avoid the other obvious failures like drives, fans, CPUs, etc. All of the firewalls have been borrowed boxes, and as such they are spec'd for other projects.
By Anonymous Coward (64.129.81.128) on
> >
> > pf itself doesn't benefit from multiple CPUs, the locking actually hurts a bit. we dunno what else they ran on the firewall. the more userland stuff is involved (proxies?), the more the MP kernel helps.
> >
> > i guess they just used it because they could :)
>
> Keep in mind, our internet bottleneck only affects that specific traffic; our internal network traffic is not limited by that amount. But yes, it does not require multiple CPUs; however, we need to have hardware that won't die on us from age. I try to upgrade our hardware every few years to avoid the other obvious failures like drives, fans, CPUs, etc. All of the firewalls have been borrowed boxes, and as such they are spec'd for other projects.
>
>
funny how OpenBSD gets the leftover computers; since it runs without a head, it does not need as much horsepower...
By Justin (216.17.68.210) on
Really? Maybe the hardware was donated or borrowed, or maybe they chose to standardize on a particular hardware platform. That way they can reasonably expect each machine to behave similarly if, for example, they need to do an emergency replacement, or they plan to run additional services on the machine and need the extra power.
By Heather (209.98.241.169) on
Most of the hardware is borrowed from the volunteers. This year was no exception; so far I have provided everything from a book PC to an elderly Dell 1550 to this year's newer hardware. It gets put back into production or into my lab at work. The majority of the hardware used in the core infrastructure is in the same situation.
By Anonymous Coward (137.61.234.225) on
If it's from Dell or the like, it's often cheaper to just take the included default choice in the configuration wizard. Sometimes that just happens to be a quad-core.
By pbug (62.75.160.180) on
-peter
By Marc Espie (213.41.185.88) espie@openbsd.org on
>
> -peter
For that to make any kind of sense, you first need to have the traffic way up there... Then there's memory contention, and killer network cards.
My guess is that SMP for pf does not make any sense until you're well into the gigabit range, and even then, the issues are more to get memory moving fast enough for things to work.
Remember, we're not talking about inefficient software like web servers here.
By Dean (63.224.74.16) on
> My guess is that SMP for pf does not make any sense until you're well into the gigabit range, and even then, the issues are more to get memory moving fast enough for things to work.
>
> Remember, we're not talking inefficient software like web servers there.
What about different functions on different CPUs? Use one for logging, one for pf internals, and one for state management. I admit I have no experience with the pf code, so this may be way off base.
By Björn Andersson (83.254.32.101) on
> > My guess is that SMP for pf does not make any sense until you're well into the gigabit range, and even then, the issues are more to get memory moving fast enough for things to work.
> >
> > Remember, we're not talking inefficient software like web servers there.
>
> What about different functions on different CPUs? Use one for logging, one for pf internals, and one for state management. I admit I have no experience with the pf code, so this may be way off base.
>
>
To make SMP code efficient you have to keep the synchronization between the cores/threads to a minimum. So my guess is that we should try to distribute the packets evenly over the cores; but then we have to do the state keeping, and things get messy.
But there's lots of SMP work to do in the kernel before this question needs to be answered.
By Matthew Dempsky (67.164.9.127) on
Handling multiple packets at once would also be difficult because evaluating a pf rule can establish state that would affect how later packets are handled. Maybe you could get around this by partitioning packets into multiple queues such that packets in different queues could never produce state that affects another queue's packets.
E.g., for TCP packets, hash the IP addresses and port numbers (being careful that traffic in either direction gives the same hash), and then stick the packet in a queue based on the hash. Each queue can have its own state table, and then you could have multiple processors each handling one packet queue.
Of course, first the kernel would need to support concurrent kernel threads. :)
By Anthony (2001:470:e828:100:207:e9ff:fe39:24e8) on
The problem with that is that you can attack it by crafting connections so that they all hash to the same CPU. Sometimes mediocre performance is okay if it's really tough to make it degrade past that. That's the same reason they use, e.g., balanced trees instead of hash tables.
By Matthew Dempsky (67.164.9.127) on
You can mostly circumvent that by including a random per-host key in the hash. Also, even if all packets end up on the same queue so only one CPU is doing work, it's no worse than what we have right now. :-)
By Dean (63.224.74.16) on
How did they monitor the logs? Any lessons learned?
Did the CPU show any sort of strain, or did it just hum along?
How easy were the rules to configure? Was it just deny everything outbound with limited ports permitted, or was it the more standard deny everything in and allow almost everything out? Any rate limits?
Did they have any remote notification of DoS attacks or congestion?
What did they log, and what did they do with the logs? Any analysis for the future? Did they use any ports, like Hatchet?
I think anyone would admit that during the weeks around DEFCON, they certainly have the attackers' attention.
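For context on the ruleset question, the "standard" policy described would be a default-deny inbound ruleset. A minimal pf.conf sketch might look like the following; the interface name and the single allowed port are assumptions for illustration, not DEFCON's actual rules:

```
# Hypothetical minimal pf.conf: block inbound by default,
# allow outbound, keep state. "em0" is an assumed interface name.
ext_if = "em0"

set skip on lo
block in all
pass out on $ext_if all keep state
# allow inbound ssh only, rate-limited against brute forcing
pass in on $ext_if proto tcp to ($ext_if) port 22 \
    keep state (max-src-conn-rate 5/30)
```

Rate limiting of the sort asked about is available through state options like max-src-conn-rate, which pf can combine with overload tables to ban abusive sources.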