OpenBSD Journal

Redesign of the pfsync Protocol, Part 2

Contributed by jason on from the with-one-giant-lock-tied-behind-my-back dept.

As mentioned in Part 1 of this series, David Gwynne (dlg@) recently rewrote OpenBSD's pfsync protocol as part of his studies at the University of Queensland. His research paper details the challenges and methods he used in improving pfsync for active-active firewall clusters. This series is a reprint of David's paper, with his permission.

This selection dives deeper into the gritty details of pfsync, introducing some of the technical hurdles within the kernel and the protocol itself. We hope you enjoy this as much as we do.

Active-Active Firewall Cluster Support in OpenBSD

Background (continued)

OpenBSD Kernel Semantics

Pf and pfsync both operate entirely within the OpenBSD kernel, only taking configuration or monitoring requests from userland. It is therefore necessary to understand some of the semantics of the OpenBSD kernel to effectively develop software in that context.

Firstly, despite the recent development of support for multiple processors in the OpenBSD kernel, it still largely operates as if it were running on a single processor system. On an SMP system only one processor is able to be running the kernel at any point in time, a semantic which is enforced by a Big Giant Lock. There are portions of the kernel which are not protected by the Big Giant Lock, but they are not relevant to this discussion. The network stack, and therefore pfsync, still runs under the Big Giant Lock.

There are two contexts for code execution in the kernel: interrupt context and process context. The differences between these two contexts affect how the locking of data structures and code is performed, and also which internal kernel APIs are available or appropriate for use.

The kernel has a process context when it has been asked to perform an action by a userland process via a system call. The OpenBSD kernel is non-preemptible, meaning that any code running in the kernel has to explicitly give up control before anything else in the kernel will run. If a data structure is only modified from process context, then simply having process context is sufficient to guarantee exclusive access to it. However, if the code in that critical path may sleep or yield control of the CPU, then additional locking is required. The OpenBSD kernel provides reader/writer locks for processes in the kernel to lock against each other.
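
As a minimal sketch of that last point, process context code using the reader/writer lock API documented in rwlock(9) looks roughly like the following; the lock and function names are hypothetical and for illustration only.

    #include <sys/rwlock.h>

    /* hypothetical lock protecting a structure shared between processes */
    struct rwlock foo_lock = RWLOCK_INITIALIZER("foolock");

    void
    foo_modify(void)
    {
            /* may sleep waiting for the lock, so process context only */
            rw_enter_write(&foo_lock);
            /* ... modify the shared structure ... */
            rw_exit_write(&foo_lock);
    }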

The only exception to the kernel's non-preemptibility is for interrupts. Interrupt handlers may run at any time unless their interrupt source is masked and therefore prevented from running. Interrupt handlers are established at an interrupt priority level (IPL). To prevent interrupt handlers from running, you simply mask interrupts at that level before executing your critical path. IPLs, as their name implies, are leveled. A high IPL prevents interrupt handlers established at lower priorities from running at the same time. These IPLs range from IPL_NONE (no interrupts are masked) up to IPL_HIGH (all interrupts are masked). Interrupt handlers are run at their IPL, meaning that if you're currently executing code in a hardware network interrupt handler established at IPL_NET, you are guaranteed that no other interrupt handler at IPL_NET will run at the same time.

This ability to mask interrupts is the fundamental locking primitive in the OpenBSD kernel for data structures used in interrupt handlers. When a userland process enters the kernel, the effective interrupt mask level is at IPL_NONE, meaning no interrupts are masked at all. If this code wishes to modify structures that are also handled during interrupts, it must raise the IPL to prevent those interrupt handlers from running.
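
A minimal sketch of that pattern, using the spl(9) interface: the previous mask level is saved, the IPL is raised before touching the shared data, and the old level is restored afterwards.

    int s;

    s = splsoftnet();       /* block handlers at or below IPL_SOFTNET */
    /* ... inspect or modify data shared with the network stack ... */
    splx(s);                /* restore the previous interrupt mask */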

The only other large difference between interrupt and process contexts is that you cannot yield in an interrupt handler. The code must return; it is not able to sleep, since sleeping relies on being able to switch from the current process to a new one.

The network stack in OpenBSD runs at two IPLs: IPL_NET and IPL_SOFTNET. IPL_NET is used by drivers for network hardware and protects their data structures. The majority of the network stack runs at IPL_SOFTNET, which is a lower priority than IPL_NET. Packets move between the network stack and the hardware via queues protected at IPL_NET. This allows packet processing to occur at the same time as packets are moved on and off the hardware. Packets may be received and queued for processing while processing of other packets is still in progress. It is therefore possible for many packets to be processed in a single call to the softnet handlers.
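
The following sketch, loosely modelled on the kernel's IPv4 softnet handler with error handling and details elided, shows that handoff: the input queue shared with the drivers is only touched with the IPL raised to IPL_NET, while the rest of the processing runs at IPL_SOFTNET.

    void
    ipintr(void)                            /* simplified sketch */
    {
            struct mbuf *m;
            int s;

            for (;;) {
                    /* the input queue is shared with IPL_NET drivers */
                    s = splnet();
                    IF_DEQUEUE(&ipintrq, m);
                    splx(s);

                    if (m == NULL)
                            return;

                    /* processing continues at IPL_SOFTNET */
                    ip_input(m);
            }
    }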

There are several softnet interrupt handlers implemented in the OpenBSD kernel, generally one per address family. Each of these handlers can be run from software by generating a software interrupt anywhere in the kernel. Pf is run when the softnet interrupt handlers for the IPv4 and IPv6 protocols are run. Each handler dequeues a packet from the hardware and passes it to pf for testing. If pf passes the packet, it is then allowed to go down the rest of that particular address family's stack; otherwise the packet is dropped. For a packet coming out of the stack, it is tested by pf before being put on an interface's send queue. If pf makes a change to its state table because of that packet, it in turn notifies pfsync with that state. All this occurs at IPL_SOFTNET.

Another relevant detail about the OpenBSD kernel is that there are no readily usable high resolution timers with which to schedule events in the future. The most usable timer has a resolution of 100 ticks per second. This is important to know if you're trying to mitigate some activity within the kernel.

pfsync In Depth

Pfsync can be thought of as being made up of two parts: the protocol and the implementation.

The current version of the pfsync protocol is version 4. The protocol is surprisingly simple. Each packet is an IP datagram (the protocol allows either IPv4 or IPv6) containing a small header followed by a series of messages. The IP protocol ID for pfsync packets is 240; pfsync is not encapsulated inside UDP or TCP. The header describes the type of these messages and the number of them to expect in the packet. Each packet may only contain messages of the one type specified in the header, and it may only include as many messages as can be carried in a single IP datagram without needing fragmentation. The size of the IP datagram is determined by the medium over which it is transmitted.
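
The authoritative layout lives in sys/net/if_pfsync.h; the structure below is only an illustrative sketch of the fields just described, namely a version, a single message (action) type and a count of the messages that follow.

    #include <sys/types.h>

    /* illustrative sketch only; not the literal definition from if_pfsync.h */
    struct pfsync_header_sketch {
            u_int8_t        version;        /* protocol version, currently 4 */
            u_int8_t        action;         /* PFSYNC_ACT_* type of every message */
            u_int16_t       count;          /* number of messages in this packet */
    } __packed;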

Exact knowledge of the layout of the header and the messages is generally unnecessary; however, some details are useful to know.

While a pf state may be uniquely identifiable by the characteristics of the packets it represents (ie, the address family, protocol, port numbers, etc), it can be inefficient to exchange these details between pfsync peers.

To address this inefficiency, an arbitrary state must be uniquely identifiable by all peers exchanging pfsync messages by a 96-bit key made up of a 32-bit "creator id" and a 64-bit "state id". Each peer in a pfsync cluster randomly generates a 32-bit creator id on boot which uniquely identifies that particular peer within the cluster while it is alive. Each state that the peer creates must be uniquely identifiable within the system by a state id. The combination of those two values allows pfsync to uniquely refer to any state amongst all its peers.
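
Expressed as a C structure, that key could look like the sketch below; the structure and field names are illustrative, not the layout pfsync actually puts on the wire.

    #include <sys/types.h>

    /* illustrative sketch of the 96-bit key described above */
    struct pfsync_state_key_sketch {
            u_int32_t       creatorid;      /* random per-peer id, chosen at boot */
            u_int64_t       stateid;        /* unique per state on that peer */
    };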

The message types within the pfsync protocol are listed below; a sketch collecting the action names as C constants follows the list:

PFSYNC_ACT_CLEAR

Clear pf state table

A host will send this message when an administrator on the system has requested that the state table be flushed.

On receiving this message the host must flush its state table.

PFSYNC_ACT_INS

Insert a new state into the state table

When pf has processed a packet and decided to create state for it, it will request that pfsync send a message of this type to describe that state to its peers.

A peer receiving this message creates a new state for the traffic described in the message and inserts it into its local state table.

PFSYNC_ACT_UPD

Update a state

When a packet causes a state to be modified (eg, a TCP packet moving the sequence numbers within the state along), a message of this type is generated describing the state.

A peer receiving this message attempts to locate the existing state and updates it with the new details. If no matching state is found, it may create and insert a new state with the specified details.

PFSYNC_ACT_DEL

Delete a state

When a peer has received a packet that terminates a state, or a state times out, it will send a message of this type.

A peer receiving this message will attempt to locate this state and remove it from its local state table.

PFSYNC_ACT_UPD_C

"compressed" state update

This is semantically the same as the PFSYNC_ACT_UPD message, except instead of exchanging all the state details it only contains the creator and state ids to identify which state the update is for.

PFSYNC_ACT_DEL_C

"compressed" state deletion

This is semantically the same as the PFSYNC_ACT_DEL message, except instead of exchanging all the state details it only contains the creator and state ids to identify which state to delete.

PFSYNC_ACT_INS_F

Insert fragment

Pf may maintain a cache of fragmented packets and reassemble them to allow proper stateful filtering on those fragments. This message is intended to share the contents of the fragment cache between peers.

PFSYNC_ACT_DEL_F

Delete fragment

This request is intended to remove a fragment from the shared fragment cache.

PFSYNC_ACT_UREQ

request "uncompressed" state update

If a pfsync peer receives a compressed update for a state it cannot locate by the creator and state ids, it can request an uncompressed state update so it can enter the current state details into its local table.

A peer may request a bulk update by sending a message with the creator and state ids set to 0.

A peer receiving this message will search its local state table for an entry identified by the creator and state ids and will send a message of type PFSYNC_ACT_UPD in response.

If the peer receives a request with the creator and state ids set to zero it will initiate a send of update messages for all of its states.

PFSYNC_ACT_BUS

Bulk update status

A peer will send this message after completing a bulk send of update messages.

A peer receiving this message may be confident that it has a usable copy of the pf state table and may actively participate in the cluster.

PFSYNC_ACT_TDB_UPD

TDB replay counter update

If the peer is acting as an IPsec gateway, it will send updates to the replay counters on the IPsec security associations to its peers.

A peer receiving these messages will search for an IPsec security association with the same parameters and update the replay counters with the values provided by the pfsync message.

This can be used to provide failover for IPsec gateways.
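
Collected together, the action names above look like the following sketch. The real numeric on-wire codes are defined in sys/net/if_pfsync.h; the enumeration here is illustrative and assigns no authoritative values.

    /* illustrative only; the real codes live in sys/net/if_pfsync.h */
    enum pfsync_action_sketch {
            PFSYNC_ACT_CLEAR,       /* clear pf state table */
            PFSYNC_ACT_INS,         /* insert a new state */
            PFSYNC_ACT_UPD,         /* update a state */
            PFSYNC_ACT_DEL,         /* delete a state */
            PFSYNC_ACT_UPD_C,       /* "compressed" state update */
            PFSYNC_ACT_DEL_C,       /* "compressed" state deletion */
            PFSYNC_ACT_INS_F,       /* insert fragment */
            PFSYNC_ACT_DEL_F,       /* delete fragment */
            PFSYNC_ACT_UREQ,        /* request "uncompressed" state update */
            PFSYNC_ACT_BUS,         /* bulk update status */
            PFSYNC_ACT_TDB_UPD      /* TDB replay counter update */
    };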

Pfsync doesn't recognise individual peers in the cluster; it only deals with communicating with the cluster as a whole. Pfsync by default sends packets to a multicast group and receives packets destined for that multicast group. If a peer receives a compressed update it knows nothing about, it will send an update request to the multicast group, not to the specific peer the compressed update came from.

Pfsync is currently implemented as a pseudo network interface driver. This is largely to allow for the creation and configuration of pfsync via ifconfig, which is a familiar administration tool for network services in UNIX-like systems. Despite being a network interface, it does not provide an endpoint in the kernel for traffic to be routed in or out of.

A pfsync interface requires very minimal configuration to become usable. It is enough to define which interface the pfsync packets are to be exchanged on, and then bring the pfsync interface up. If pf is active on the system then it will begin to notify pfsync of inserts, updates, deletions, and clears of states in the pf state table.
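
For example, something along these lines is typically all that is needed; the syncdev parameter is documented in pfsync(4), and em0 here simply stands in for whatever physical interface carries the pfsync traffic.

    # example only; em0 stands in for the interface carrying pfsync traffic
    ifconfig pfsync0 syncdev em0
    ifconfig pfsync0 up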

Internally pfsync builds pfsync packets on the fly as pf issues notifications to it. The first notification from pf will cause pfsync to allocate memory for a packet to be built in, and will write the message for that notification into the packet. A second notification from pf will cause pfsync to search the packet it is building for a message about that same state. If the state is found in the packet already, the message is updated with the new parameters; otherwise a new message about the state is added to the packet.
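
A rough sketch of that logic follows. Every name prefixed with sketch_ is hypothetical and only stands in for the real code in sys/net/if_pfsync.c; struct pf_state comes from net/pfvar.h.

    struct pf_state;                        /* from <net/pfvar.h> */
    struct sketch_msg;                      /* one message within the packet */

    struct sketch_softc {
            struct sketch_msg *sc_pkt;      /* packet currently being built */
    };

    struct sketch_msg *sketch_new_packet(struct sketch_softc *);
    struct sketch_msg *sketch_find_msg(struct sketch_softc *, struct pf_state *);
    void sketch_update_msg(struct sketch_msg *, struct pf_state *);
    void sketch_append_msg(struct sketch_softc *, struct pf_state *);

    void
    sketch_pack_state(struct sketch_softc *sc, struct pf_state *st)
    {
            struct sketch_msg *msg;

            /* first notification: allocate a packet to build messages into */
            if (sc->sc_pkt == NULL)
                    sc->sc_pkt = sketch_new_packet(sc);

            /* is there already a message about this state in the packet? */
            msg = sketch_find_msg(sc, st);
            if (msg != NULL)
                    sketch_update_msg(msg, st);     /* overwrite it in place */
            else
                    sketch_append_msg(sc, st);      /* add a new message */
    }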

Pfsync will eventually transmit the packet it is building when one of several events occurs.

The first is a timeout that is scheduled one second from when the packet was first allocated. When that timeout fires, the packet is sent out with whatever updates were made during that second.

The second condition is met if any particular state described in a pfsync packet is updated more than a certain number of times. The maximum number of updates a state in a packet may receive is 128 by default, but that value is configurable by the user (via the pfsync interface's maxupd setting).

Both of these conditions are there to try to mitigate against pfsync generating a single packet for each and every state update that pf notifies it of. Without this mitigation the pfsync traffic between peers in a cluster could conceivably exceed the traffic pf is firewalling.

The third condition a packet is transmitted on is caused when pfsync has to switch message types for the packet it is building. Because pfsync packets may only contain messages of one type, if pfsync is notified of a different type of action to the one it was building a packet for, it will simply transmit the current packet immediately. It will then allocate a new packet for messages of the new type and start filling it in again.

The fourth condition a packet will be sent out on is if it is in response to an action that requires peers to know about the change immediately. For example, if pf notifies pfsync of a state table clear, pfsync will build that packet and send it immediately. Also, if a peer requests an uncompressed update about a state, that message is built and sent out immediately too.

Lastly, if an additional message would cause the packet to grow larger than the MTU of the physical interface specified for the pfsync traffic, the current packet will be transmitted immediately.
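
Ignoring the one second timeout and the "immediate" sends, which are handled separately, the remaining conditions can be summarised by a sketch like the one below; the structure and function names are hypothetical.

    #include <sys/types.h>

    /* hypothetical fields mirroring the conditions described above */
    struct sketch_pktstate {
            int     sc_action;      /* PFSYNC_ACT_* type of the current packet */
            int     sc_maxupdates;  /* per-state update limit, default 128 */
            size_t  sc_len;         /* bytes used in the packet so far */
            size_t  sc_mtu;         /* MTU of the interface carrying pfsync */
    };

    /* returns non-zero if the packet being built must be sent now */
    int
    sketch_must_flush(struct sketch_pktstate *sc, int action,
        int state_updates, size_t msglen)
    {
            if (action != sc->sc_action)            /* switching message types */
                    return (1);
            if (state_updates >= sc->sc_maxupdates) /* state updated too often */
                    return (1);
            if (sc->sc_len + msglen > sc->sc_mtu)   /* would exceed the MTU */
                    return (1);
            return (0);                             /* wait for the 1s timeout */
    }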

When a pfsync packet is received by the network stack, it is sent immediately to the pfsync input routine. The input routine simply iterates over the messages that were specified by the header and takes action accordingly. Generally the implementation follows the protocol actions described above. However, some aspects of the pfsync receive code are smarter about their actions than the protocol would imply.

For example, it is possible that a host will receive an update for a state that has been modified locally as well as by another peer, which is quite likely when moving from a passive to an active role in a cluster. In that situation it will try to take the most current details from the state and the update it just received and merge them together. If it has determined that the peer's version is stale, it will immediately send an update of the state based on its merged information.

Also, pfsync attempts to keep track of how current it thinks it is with respect to its peers, and it feeds that state back into the system. When the pfsync interface is first brought up it marks itself as "not ok" and sends out a request for a bulk update by transmitting a PFSYNC_ACT_UREQ message with the creator and state ids set to 0. It is only after it has successfully received a PFSYNC_ACT_BUS message that it will move itself to "ok". If the firewall is using carp as part of an active-passive cluster, the pfsync ok state is used to demote the carp interfaces so the host will not receive traffic until it is ready to continue forwarding traffic with the existing states. If pfsync does not receive such a message, it will request a bulk update another 12 times until it times out and moves to the ok state under the assumption that it is the only peer still present.

It is worth noting that the current implementation makes no effort to mitigate against the generation of packets for actions requiring "immediate" transmission. For example, if a peer requests uncompressed updates for 8 states, the current pfsync code will generate 8 separate packets, one for each of the uncompressed updates.

Also, because pfsync only builds one type of packet at a time, it is susceptible to generating excessive traffic simply because pf notifies it of different events all the time. It is uncommon for a state insert to be followed immediately by another state insert. The same is true for state deletions. It is common for pfsync to be able to build packets for several updates and mitigate against frequent sends of those packets, but that mitigation is offset by the inserts and deletes that occur too.

It is also worth noting that this implementation makes no attempt to detect if the traffic associated with a particular state is split over two of the systems in the cluster. For traffic going through a pf firewall that requires up to date state information to proceed, the lack of this detection will prevent the traffic from moving forward.

For example, because pf tracks a TCP session's sequence numbers, and because TCP uses packets in both directions to move those sequence numbers forward, a pf firewall needs frequent updates from its peers if it can only see half the TCP packets. Without an update from a peer, pf will not move the state's sequence numbers forward. Pf will drop TCP packets that move beyond the TCP sequence number windows (plus a bit of fuzz).

Because it is almost impossible for a TCP session to receive 128 updates (ie, 128 new packets from the wire) without moving the session forward, it is in the worst case only the 1 second timeout which causes pfsync to send its update out. Therefore an active-active pf cluster may only allow new packets in a TCP session through for a small portion of every second.

In response to this the TCP endpoints will back off the speed at which they send and attempt retransmission. The TCP session will move forward, but only at a small fraction of the speed that a single pf firewall would forward at.

This over-mitigation of messages proves to be the fatal flaw that prevents pfsync and pf from being usable in active-active clusters.

Currently pfsync only generates and receives IPv4 datagrams.

[ To be continued... ]



Comments
  1. By Anonymous Coward (84.245.24.117) on

    AHHH part II *reading.... ah ohh realy? wow* WTF part 3?
    I HATE CLIFFHANGERS GODDA....
    i he writing a new PF book on undeadly?
    *waiting on part III then.....*

    Comments
    1. By jason (jason) on http://www.dixongroup.net/

      A little part of me dies inside every time I read a comment like this.

      Comments
      1. By Anonymous Coward (70.81.15.127) on

        > A little part of me dies inside every time I read a comment like this.
        >
        How did you manage to read it? =)

      2. By Martin (88.117.12.14) on http://ctrl.alt.delete.co.at

        > A little part of me dies inside every time I read a comment like this.
        >

        http://xkcd.com/481/ <- we should give him something like this... (classic but still good)

        Comments
        1. By Anonymous Coward (84.245.24.117) on

          > > A little part of me dies inside every time I read a comment like this.
          > >
          >
          > http://xkcd.com/481/ <- we should give him something like this... (classic but still good)

          nah this one make my point:
          http://xkcd.com/518/

      3. By Anonymous Coward (89.103.68.92) on

        > A little part of me dies inside every time I read a comment like this.
        >

        Why?World has constat number of idiots between people all the time in history,now and in future.Why to be scared about them?

  2. By Anonymous Coward (83.101.57.138) on

    Thanks a bunch for these! Will this article be published in full anywhere? I'd like to reference it in a paper I'm writing.

    Comments
    1. By jason (jason) on http://www.dixongroup.net/

      That's really up to David (dlg@). I don't see why he wouldn't though, it's a great paper!

      Comments
      1. By Anonymous Coward (81.83.46.237) on

        > That's really up to David (dlg@). I don't see why he wouldn't though, it's a great paper!

        I agree, it's very interesting. Thanks for sharing!

      2. By loki (218.214.194.113) on

        > That's really up to David (dlg@). I don't see why he wouldn't though, it's a great paper!

        Yes indeed! It looks like he is living up to the motto of U of Q: "Scientia ac labore" (Knowledge and hard work).

        Comments
        1. Comments
          1. By The other Loki (218.214.194.113) on

            > Don't play coy. :)
            >
            I'm not him! Check the geographical IP.
            Just happens that he's in my birth city.
            :))
