OpenBSD Journal

Dynamic Rulesets with PF

Contributed by jose from the yet-more-cool-stuff-from-daniel dept.

Daniel is at it again. Answering the question of how to dynamically create rulesets based on arbitrary criteria, Daniel discusses how he tracks abusive web clients and kills them with dynamic rulesets. This would be easy to extend to a variety of detection criteria, adding various levels of security via a PF host. Don't forget that reactive firewalls are a great way to lock yourself out of the Internet, so don't be too overzealous in your ruleset building.
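
For those who haven't read Daniel's post yet, the heart of the technique is a persistent pf table plus a block rule, roughly like this (a minimal sketch; the table name, file path and interface are placeholders):

    ext_if = "fxp0"
    table <quickblock> persist file "/etc/quickblock"
    block in quick on $ext_if from <quickblock> to any

Addresses can then be added or removed at runtime with pfctl -t quickblock -T add/delete, without reloading the ruleset.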



Comments
  1. By Shane () on

    This is great. One of the things I'm looking forward to doing in 3.3 is killing rogue robots with pf's dynamic rules. I had already started building some traps, but can't use them in 3.2 (not without having to reload the rules, as far as I know). Daniel's examples are going to help a lot. Thanks Daniel!

    Comments
    1. By Anonymous Coward () on

      Wow, this is nice! Now if only the PF how-to/faq can be updated to include the latest and greatest features...

    2. By Daniel Hartmeier () daniel@benzedrine.cx on http://www.benzedrine.cx/pf.html

      Since I've been running this setup for several weeks now, you might be interested in some results.

      I added a page to the web server which is linked to from every existing page, through a tiny href which nobody would follow manually. It is also listed in robots.txt, so a well-behaved crawler will not fetch it either (you want to make sure google is not hitting it before you activate the block ;).
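
      In concrete terms, such a trap amounts to something like this (illustrative paths, not the actual ones):

      <!-- tiny, invisible link added to every page -->
      <a href="/trap.html"></a>

      # robots.txt -- well-behaved crawlers will never request the trap
      User-agent: *
      Disallow: /trap.html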

      There are basically two kinds of clients that get caught by the trap:

      Bad crawlers that hit me once or twice a week, going berserk and recursively fetching the entire htdocs, including mirrored content that is not meant to be retrieved like that, wasting massive amounts of my limited bandwidth.

      Some Windows-based 'surf optimizers' which pre-fetch linked pages but dishonour robots.txt. They waste bandwidth as well, since the user doesn't really read the large .pdf files his client is prematurely fetching in the background, and I don't mind blocking those. You might want to block these only temporarily, or exclude specific user agents from the block, if they are potential customers.

      One particularly interesting case is someone (I presume) spamming my web server with fake Referer headers pointing to weird porn URLs. I'm not sure what the intention is there, but maybe they target sites which make referrers public (like I do with webalizer), in an attempt to get more google karma. The interesting thing is that it's an entire cluster of several hundred clients that used to hammer my web server every day. After crafting some rather sexually explicit regexps to feed the relevant entries from the web server log into the pf quickblock table, their IP addresses got collected very quickly, and now they get blocked. Looking at the pf stats, they are still trying to connect, but it's costing them more resources (sockets) than me :)

      There's plenty of potential: instead of just blocking the connections, you could redirect them to a special server which either annoys the client, similar to spamd, or serves a small static page quickly (so your real server isn't hurt).
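
      Such a redirection could look like this (sketch; the interface and addresses are placeholders, and the rdr syntax is from the 3.x era):

      ext_if = "fxp0"
      table <quickblock> persist file "/etc/quickblock"
      # hand trapped clients to a cheap static server instead of the real one
      rdr on $ext_if proto tcp from <quickblock> to any port 80 -> 127.0.0.1 port 8080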

      Daniel

      Comments
      1. By Shane () on

        Thanks for your response. Do you have any statistics on how many people click on your hidden link? I don't want to blackhole visitors who just happened to click in the wrong place. Speaking of which, I just clicked on your little trap, so you'll probably get an email from me asking about getting unblocked :)

        Have you considered writing a how-to about using pf to do fancy stuff like this? Your "Annoying spammers" article got me started with bmf, and I know I'll turn to it again when I upgrade to 3.3 and put spamd to work. Seems like this topic is worthy of something similar.

      2. By Anonymous Coward () on

        How does this work with someone trying to mirror your site with wget and the like? That seems to be a legitimate (though probably infrequent) use which might trigger this. Sure, if it's a known mirror site (e.g. as part of a service, or archive.org or whatnot), you could make exceptions - but every now and then someone with an archival sense might want to tuck things away for posterity, and ignore what robots.txt says.

        It's very neat, and obviously has some advantages - but I'm still wary of the somewhat nonstandard (yet legitimate) usage that might get denied. Or am I worrying about a non-issue?

        Comments
        1. By Anonymous Coward () on

          wget (and all decent mirroring tools) will honour robots.txt, unless you explicitly tell it not to, which is rude.
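
          For the record, overriding it is a deliberate act, e.g.:

          # default mirroring respects robots.txt
          wget -m http://example.com/
          # explicit override -- the rude case that will walk into the trap
          wget -m -e robots=off http://example.com/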

  2. By Kolchak () on

    How hard would it be to use Snort to update the dynamic table? I'd be interested in using Snort to decide whether to block an IP.

    Comments
    1. By Daniel Hartmeier () daniel@benzedrine.cx on http://www.benzedrine.cx/pf.html

      It would be rather easy, but it's very dangerous. You have to protect yourself from attacks with spoofed source addresses, or you actually invite abuse that is worse than what you are trying to protect against.

      One way of making sure you're not blocking an innocent peer whose address is being spoofed is to watch only established TCP connections. If the peer completed the TCP handshake, it must have received your replies, so it wasn't spoofed.
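
      With pf, you can see which peers have fully established connections in the state table; for example (output format may differ slightly between versions):

      # list state entries for completed TCP handshakes
      pfctl -s state | grep 'ESTABLISHED:ESTABLISHED'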

      I'm not sure if snort can statefully track connections and only issue block requests for established connections, but I think it has a module for stateful tracking now.

      Daniel

      Comments
      1. By hex () on

        I think you mean the stream4 preprocessor; it gives Snort the ability to ignore stateless attacks. It's supposed to be able to handle 32,768 simultaneous TCP connections.
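
        For reference, turning it on in snort.conf looks something like this (a sketch from memory; check your Snort version's docs for the exact option names):

        preprocessor stream4: detect_scans, disable_evasion_alerts
        preprocessor stream4_reassemble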

      2. By hex () on

        Even better: SnortSam supports pf, ipf, and much more:

        http://www.snortsam.net/

        Comments
        1. By Anonymous Coward () on

          Does anyone have any experience with using this tool? It's the first I've heard of it, but it looks promising.

  3. By Ray () ray@cyth.net on mailto:ray@cyth.net

    Yeah, the first thing I'm going to block out is all these IIS hack attempts. At the very least, it will make my server logs much easier to read.

    Heck, the next time some IIS worm hits, at least my server won't get pounded, the poor thing.

    Is it just me, or is pf becoming a higher-level scrub?

    Comments
    1. By Stefan () on

      Yeah, in combination with an IDS (snort), it seems the dream of active intrusion response may become real. Generating block rules in real time, whoa.

      But what pf is still missing is an enterprise management solution and, IMHO, a generic multi-protocol proxy with the ability to inspect the higher layers.

      Comments
      1. By Anonymous Coward () on

        So you're volunteering to write these? Great! I look forward to your code!

      2. By Anonymous Coward () on

        Why should a firewall do that? Use the right tool for the job, something like snort-inline or hogwash.

        Comments
        1. By Stefan () on

          This is IMHO a mistake in many people's thinking. A "firewall" should not be just one part of a security environment, like a packet filter, a proxy, etc., but a combination of all of them. It should be possible to control data at one central point, where the control can be enforced.

          Comments
          1. By Anonymous Coward () on

            A firewall is a combination of multiple devices/applications; pf serves as one part of it. And I hope you meant at least two independent central points...

  4. By Anonymous Coward () on

    Not too long ago I read about using the new anchor rulesets for this purpose (i.e. temporarily blocking a source address). As far as I can tell, the main difference between these techniques is that the anchor ruleset is temporary and is flushed when pf restarts, while this newer technique is more permanent because it builds a table of hosts to block and stores them in a file.

    Can someone elaborate on the differences here and fill me in on what I might be overlooking?
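
    To make it concrete, this is roughly what I'm comparing (untested sketch; the anchor loading syntax in particular varies between pf versions):

    # anchor approach: rules live in memory, gone after a flush or reboot
    echo "block in quick from 192.0.2.1" | pfctl -a baddies -f -

    # table approach: contents can be dumped to and reloaded from a file
    pfctl -t quickblock -T add 192.0.2.1
    pfctl -t quickblock -T show > /etc/quickblock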

    Comments
    1. By Anonymous Coward () on

      ok my bad, this is what happens when you only read the post and not the whole thread.

  5. By Michael Anuzis () on

    Hmmmmm. I wonder if I can catch all the old addresses that have tried to code-red me in the past...


    # cat quickblock.grep
    /_vti_bin/
    "GET /www/scripts/
    cmd.exe
    root.exe

    *few commands later*

    # pfctl -t quickblock -T replace -f ~/quickblock
    13026 addresses added.
    #


    I guess so! Thanks Daniel!

    Comments
    1. By Shane () on

      Mind posting whatever you did in the *few commands later* part?

      Comments
      1. By Anonymous Coward () on

        Just read the link from the post. All he did was copy it straight from Daniel's email.
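
        For anyone who doesn't want to dig up the email, it boils down to something like this (untested sketch; the log path is a guess):

        egrep -f quickblock.grep /var/www/logs/access_log | cut -d ' ' -f 1 | sort -u > ~/quickblock
        pfctl -t quickblock -T replace -f ~/quickblock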

        Comments
        1. By Shane () on

          I didn't notice it was the same stuff until after I posted. My bad.

  6. By Michael Anuzis () on

    All of these applications of dynamic PF tables are really cool! Now I've got the spamd/relaydb/PF-tables thing set up and this new concept of blocking certain web clients... there's only one more thing I would want to make the puzzle complete.

    It seems really cool how everything can be automated via cron to update the PF tables... but the downside to using cron, it seems, is that there's usually a significant delay before the blocks take effect.

    For example, when a code red host tries to infect you, it's still going to get in its 20, 30, 50-whatever attempts before it decides to move on, and your system only blocks it a few hours later.

    To cut to the chase, it would be really nice if there were some tool available that could make the changes take place immediately... I'm thinking of that program in /usr/ports/security/swatch, for example. Perhaps you could have it watch /www/logs/whatever/access_log, and when it sees a cmd.exe hit, have it grab the IP, add it to the firewall that second, and kill any states that IP currently has. I'm not sure this is possible, though... Would anyone with more experience on the topic be able to verify this or suggest another solution? (A rough sketch of what I mean follows.) --Michael
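
    Something along these lines might work (untested sketch; $1 assumes the client IP is the first field of the log line, as in Apache's common log format, and option names depend on your swatch version):

    # ~/.swatchrc -- react the moment a worm probe is logged
    watchfor /cmd\.exe|root\.exe/
        exec "/sbin/pfctl -t quickblock -T add $1"
        exec "/sbin/pfctl -k $1"   # kill that host's states, if your pfctl supports -k

    # run it tailing the access log, e.g.:
    # swatch -c ~/.swatchrc -t /www/logs/whatever/access_log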

    Comments
    1. By Matt () on

      Why not take that cron job and run it more often? If you're worried about a heavy load on your firewall, then add another part that checks to make sure there are at least xyz new IPs in that log. You can decide what xyz is and get an easy way to adjust how often you reload that ruleset (i.e. check the log every 3 minutes to see if the line count has gone up significantly, and if so, update your blocking table).
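
      For instance, something like this could gate the reload (sketch; the paths and threshold are made up):

      #!/bin/sh
      # from cron every few minutes: only reparse when the log has grown enough
      LOG=/www/logs/whatever/access_log
      COUNT=$(wc -l < $LOG)
      LAST=$(cat ~/lastcount 2>/dev/null || echo 0)
      if [ $((COUNT - LAST)) -ge 100 ]; then
              echo $COUNT > ~/lastcount
              # ... regenerate and reload the quickblock table here ...
      fi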

      Comments
      1. By Michael Anuzis () on

        That's not exactly the point.

        Why run a script that parses your entire Apache access log every 3 minutes when 90% of the time there will be nothing new to add? It wastes a lot of system resources, and at the same time it doesn't guarantee the firewall will be updated fast enough to block *any* of the attack attempts. How long does it take code red to bug you 50 times? Maybe a few seconds, certainly not 3 minutes.

        If some script/automated method could be developed that added newly offending hosts to the PF table when, and *only* when, there was someone new to be added, it would not only save system resources but also block the offenders faster, before they could cause as much damage.

        Sure, I could run the cron job to parse my 80 MB Apache access log every 5 seconds... but that just seems silly when there must be a better solution.

        Comments
        1. By Spider () spider@gentoo.org on mailto:spider@gentoo.org

          Perhaps something like this would work?


          cat ~/quickblock >~/quickblock.tmp
          touch ~/log-old
          cat /var/log/thttpd > ~/log-now
          comm -3 ~/log-old ~/log-now | egrep -f ~/quickblock.grep - | cut -d " " -f 1 >>~/quickblock.tmp
          sort -u ~/quickblock
          pfctl -t quickblock -T replace -f ~/quickblock
          mv ~/log-new ~/log-old




          Warning, untested and so on and so on.

          Comments
          1. By Spider () spider@gentoo.org on mailto:spider@gentoo.org

            Bah, I should learn to do this better; note the typo between "log-now" and "log-new" in there.


            (see it as a test, if you didn't notice it when you read it ;-)
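
            For completeness, a corrected pass (still untested; note that comm expects sorted input, so a line-count offset would be more robust for logs):

            cp ~/quickblock ~/quickblock.tmp
            touch ~/log-old
            cat /var/log/thttpd > ~/log-new
            comm -13 ~/log-old ~/log-new | egrep -f ~/quickblock.grep - | cut -d " " -f 1 >> ~/quickblock.tmp
            sort -u ~/quickblock.tmp > ~/quickblock
            pfctl -t quickblock -T replace -f ~/quickblock
            mv ~/log-new ~/log-old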

          2. By Michael Anuzis () on

            I fail to see how that does anything much different. It looks like it would still run from cron (if I'm not mistaken), and thus it wouldn't solve either of the two issues.

            Comments
            1. By xavier () on

              Wouldn't Snort be able to handle this? I know the stream4 preprocessor could take care of the requirement that a full TCP connection be made (so no spoofed IPs). The other issue is calling pfctl when an alert occurs, which could be done with an output plugin... but maybe there's an easier way to do this. Anyone have an idea?

            2. By Spider () spider@gentoo.org on mailto:spider@gentoo.org

              In reply to:

              > Why run a script that parses your entire apache access log every 3 minutes when 90% of the time there will be nothing new to add. Wastes a lot of system resources...

              Well, it was merely a redesign to have it handle smaller amounts of data with each pass: instead of going through the "fairly big" httpd logfile every time, you only parse the new/changed entries.

              And if you want something more "real time", you would probably be better off with a filtering proxy hooked in before your httpd.

              Generally it's hard to even attempt to foresee what the next-generation attack will look like, so this shouldn't be considered such a measure; it's more about denying badly behaved users (crawlers, spam-checkers and others).

