Contributed by jose on from the yet-more-cool-stuff-from-daniel dept.
By Shane () on
Comments
By Anonymous Coward () on
By Daniel Hartmeier () daniel@benzedrine.cx on http://www.benzedrine.cx/pf.html
I added a page to the web server which is linked to from every existing page through a tiny href which nobody would follow manually. The page is also listed in robots.txt, so a well-behaved crawler will not fetch it either (you want to make sure Google is not hitting it before you activate the block ;).
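A minimal sketch of what the pf side of such a trap might look like, assuming a table named quickblock and the usual ext_if macro (both names are assumptions), plus the matching robots.txt entry (the trap path is made up):

# /etc/pf.conf (sketch): persistent table of trapped clients, blocked early
table <quickblock> persist
block in quick on $ext_if from <quickblock> to any

# robots.txt (sketch): keep well-behaved crawlers away from the trap URL
User-agent: *
Disallow: /trap.html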
There are basically two kinds of clients that get caught by the trap:
Bad crawlers that hit me once or twice a week, going berserk and recursively fetching the entire htdocs, including mirrored content that is not meant to be retrieved like that, wasting massive amounts of my limited bandwidth.
Some Windows-based 'surf optimizers' which pre-fetch linked pages but dishonour robots.txt. They waste bandwidth as well, since the user doesn't really read the large .pdf files his client is fetching prematurely in the background, and I don't mind blocking those. You might want to block these only temporarily, or exclude specific user agents from the block, if they are potential customers.
One particularly interesting case is someone (I presume) spamming my web server with fake Referer headers pointing to weird porn URLs. I'm not sure what the intention is there, but maybe they target sites which make referrers public (like I do with webalizer), in an attempt to get more google karma. The interesting thing is that it's an entire cluster of several hundred clients that used to hammer my web server every day. After crafting some rather sexually explicit regexps and feeding the matching entries from the web server log into the pf quickblock table, their IP addresses got collected very quickly, and they get blocked. Looking at the pf stats, they are still trying to connect, but it's costing them more resources (sockets) than me :)
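As a rough, untested sketch of how such a log-to-table feed could look (the log path and the pattern file referer.grep are assumptions, the table name follows the thread):

# pull client addresses whose log entries match the patterns and load them
# into the quickblock table
egrep -f ~/referer.grep /var/www/logs/access_log | cut -d " " -f 1 | sort -u > ~/quickblock
pfctl -t quickblock -T replace -f ~/quickblock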
There's plenty of potential here: instead of just blocking the connections, you could redirect them to a special server which either annoys the client similar to spamd, or serves a small static page quickly (so your real server isn't hurt).
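For the redirect variant, a pf rdr rule along these lines could hand trapped clients to a small local server instead of dropping them (interface, port and the local listener are assumptions):

# send HTTP connections from trapped clients to a tiny local server (sketch)
rdr on $ext_if proto tcp from <quickblock> to any port 80 -> 127.0.0.1 port 8081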
Daniel
Comments
By Shane () on
Have you considered writing a how-to about using pf to do fancy stuff like this? Your "Annoying spammers" got me started with bmf and I know I'll turn to it again when I upgrade to 3.3 and put spamd to work. Seems like this topic is worthy of something similar.
By Anonymous Coward () on
It's very neat, and obviously has some advantages - but I'm still wary of the somewhat nonstandard (but still legitimate) usage which might get denied, or am I worrying about a non-issue?
Comments
By Anonymous Coward () on
By Kolchak () on
Comments
By Daniel Hartmeier () daniel@benzedrine.cx on http://www.benzedrine.cx/pf.html
One way of making sure you're not blocking an innocent peer whose address is being spoofed is only watching established TCP connections. If the peer completed the TCP handshake, it must have received your replies and wasn't spoofed.
I'm not sure if snort can statefully track connections and only issue block requests for established connections, but I think it has a module for stateful tracking now.
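As a rough illustration of the "established connections only" idea outside of snort, one could check pf's own state table before blocking an address; the address below is just an example:

ip=192.0.2.1
# only add the address if it has a fully established TCP state,
# i.e. it completed the handshake and cannot be a spoofed source
if pfctl -ss | grep "$ip" | grep -q "ESTABLISHED:ESTABLISHED"; then
        pfctl -t quickblock -T add "$ip"
fi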
Daniel
Comments
By hex () on
By hex () on
Snortsam supports pf, ipf, and much more:
http://www.snortsam.net/
Comments
By Anonymous Coward () on
By Ray () ray@cyth.net on mailto:ray@cyth.net
Heck, the next time some IIS worm hits, at least my server won't get pounded, the poor thing.
Is it just me, or is pf becoming a higher-level scrub?
Comments
By Stefan () on
that a dream of active intrusion response may become real.
Generating block rules in real time,
whoa.
But what pf still lacks is an enterprise management solution and, IMHO, a generic multi-protocol proxy with the ability to inspect the higher protocol levels.
Comments
By Anonymous Coward () on
By Anonymous Coward () on
Comments
By Stefan () on
A "firewall" should not be just one part of a security environment, like a packet filter or a proxy, but a combination of all of them.
It should be possible to control data at one central point, where the control can be enforced.
Comments
By Anonymous Coward () on
I hope you meant at least two independent central points ...
By Anonymous Coward () on
Can someone elaborate on the differences here and fill me in on what I might be overlooking?
Comments
By Anonymous Coward () on
By Michael Anuzis () on
# cat quickblock.grep
/_vti_bin/
"GET /www/scripts/
cmd.exe
root.exe
*few commands later*
# pfctl -t quickblock -T replace -f ~/quickblock
13026 addresses added.
#
I guess so! Thanks Daniel!
Comments
By Shane () on
Comments
By Anonymous Coward () on
Comments
By Shane () on
By Michael Anuzis () on
It seems really cool how everything can be automated via cron to update the PF tables... but the downside to using cron, it seems, is that there's usually a significant delay before the blocks take place.
For example, when a Code Red host tries to infect you, it's still going to get in its 20, 30, 50-whatever attempts before it decides to move on, and your system only blocks it a few hours later.
To cut to the chase, it would be really nice if there were some tool available that could make the changes take place automatically... I'm thinking of that program in /usr/ports/security/swatch, for example. Perhaps you could have it listen to /www/logs/whatever/access_log and, when it sees a cmd.exe, have it take the IP, add it to the firewall that second, and kill the currently connected state that IP had. I'm not sure this is possible though... Would anyone with more experience on the topic be able to verify or suggest another solution? --Michael
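A minimal, untested sketch of what Michael describes, assuming the log path from his comment and the quickblock table used earlier in the thread:

#!/bin/sh
# watch the access log as it grows; on a matching request, block the
# client immediately and kill any states it already has
tail -f /www/logs/whatever/access_log | while read line; do
        case "$line" in
        *cmd.exe*|*root.exe*|*_vti_bin*)
                ip=${line%% *}                   # client address is the first field
                pfctl -t quickblock -T add "$ip"
                pfctl -k "$ip"
                ;;
        esac
done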
Comments
By Matt () on
Comments
By Michael Anuzis () on
Why run a script that parses your entire apache access log every 3 minutes when 90% of the time there will be nothing new to add? It wastes a lot of system resources, and at the same time it doesn't guarantee the firewall will be updated fast enough to block *any* of the attack attempts. How long does it take Code Red to bug you 50 times? Maybe a few seconds, certainly not 3 minutes.
If some sort of script/automated method could be developed that would add newly offending hosts to the PF table when, and *only* when, there is someone new to add, it would not only save system resources but also block the offenders faster, before they could do as much damage.
Sure, I could run the cron job to parse my 80m apache access log file every 5 seconds... but that just seems silly when there must be a better solution.
Comments
By Spider () spider@gentoo.org on mailto:spider@gentoo.org
# start from the addresses already in the table file
cat ~/quickblock > ~/quickblock.tmp
touch ~/log-old
# snapshot the current web server log
cat /var/log/thttpd > ~/log-new
# take only the lines added since the last run (log-old is a prefix of log-new),
# match the attack patterns, and keep the client address (first field)
comm -13 ~/log-old ~/log-new | egrep -f ~/quickblock.grep - | cut -d " " -f 1 >> ~/quickblock.tmp
sort -u ~/quickblock.tmp > ~/quickblock
pfctl -t quickblock -T replace -f ~/quickblock
mv ~/log-new ~/log-old
Warning, untested and so on and so on.
Comments
By Spider () spider@gentoo.org on mailto:spider@gentoo.org
(see it as a test, if you didn't notice it when you read it ;-)
By Michael Anuzis () on
Comments
By xavier () on
By Spider () spider@gentoo.org on mailto:spider@gentoo.org
> Why run a script that parses your entire apache access log every 3 minutes when 90% of the time there will be nothing new to add? Wastes a lot of system resources...
Well, it was merely a redesign to have it handle smaller amounts of data with each pass: instead of going through the "fairly big" httpd logfile each time, you only parse the new/changed things.
And if you want something more "real time", you would probably be better off with a filtering proxy hooked in before your httpd.
Generally it's hard to even attempt to foresee what the next-generation attack will look like, so this shouldn't be considered such a measure; it's more about denying badly behaved users (crawlers, spam-checkers and others).