Contributed by grey on from the starband simulations dept.
Could it be feasible to set up an OpenBSD box working as a router/firewall (two network cards forwarding IPv4) that not only shapes traffic down to 128 Kbit/s, but also adds latency, say 500 milliseconds?
That way the box would let an admin see how things might behave over a limited VPN or satellite connection or whatever...
I know pf can throttle bandwidth... but how can I add latency?
It would be useful to see how applications behave between LANs that are connected remotely.
By steve latif (64.174.237.206) steve at latif dot org on
shaping module. Both are quite unstable, and the 2.6 kernel's traffic shaping has one of the most unusable user interfaces.
By Anonymous Coward (216.254.17.65) on
My job sometimes requires me to do rather tricky things with TCP, and when I need to add delay or packet loss to a network for a reproduction, dummynet is my tool of choice. It's a significant reason I keep some FreeBSD boxes lying around in addition to my OpenBSD ones.
By cruel (195.39.211.10) on
1,000,000 / 12,000 = 83.3 pps (a 1 Mbit/s link carrying 1500-byte, i.e. 12,000-bit, packets).
1 / 83.3 = 0.012 s, which is a 12 ms per-packet delay.
Bandwidth is DIRECTLY related to packet round-trip time.
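The arithmetic above can be checked with a few lines of Python (a sketch; the 12,000-bit figure assumes 1500-byte packets on a 1 Mbit/s link):

```python
def serialization_delay_ms(bandwidth_bps: float, packet_bits: float) -> float:
    """Time to clock one packet onto the wire, in milliseconds."""
    return packet_bits / bandwidth_bps * 1000.0

# 1 Mbit/s link, 1500-byte (12,000-bit) packets:
pps = 1_000_000 / 12_000                            # packets per second
delay = serialization_delay_ms(1_000_000, 12_000)   # per-packet delay

print(round(pps, 1))    # 83.3
print(round(delay, 1))  # 12.0
```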
By Otto (82.197.192.49) otto@drijf.net on
Individual packet round-trip time says nothing about bandwidth, since multiple packets can be on a link at the same time.
Reducing the bandwidth of a link with altq as you suggest will only have an effect on a saturated link, so your simulation does not produce the same behaviour as a real high-latency link.
By cruel (195.39.211.10) on
32 Kbit/s is 32 Kbit/s, whether on dialup or on some altq queue built over an Ethernet device; nothing else.
pppd has buffers. The IP stack has buffers. The interface driver has buffers. An altq queue is, after all, a buffer. And yes, multiple packets are almost always in flight along the long OSI-stack path.
> Reducing the bandwidth of a link with altq like you suggest
> will only have an effect on a saturated link, so your simulation
> does not produce the same behaviour as a real high latency link.
So? Measure the round-trip time of the real link with traceroute/ping, then use my formula in reverse to find the bandwidth setting you need to give altq for the simulation.
If I need to test 32 Kbit/s with a latency that is (in reality) higher than the theoretical one, I calculate the appropriate bandwidth and configure it in pf.conf.
> Imagine trans-atlantic links. They are big fat pipes, but
> turn-around time is still higher than on your local 10 megabit
> network.
So? Big Atlantic pipes are highly VARIABLE-delay links. How would a mythical "delay" option (which would add a CONSTANT delay) help?
All you need is an average round-trip time close to what you see in real life. Once that number is uncovered, just configure an altq queue with a bandwidth that reflects your average round-trip time.
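As a sketch of the pf.conf side of this (the interface name em0 is an assumption, and the 128 Kbit/s figure comes from the original question), an altq rule set along these lines would cap forwarded traffic:

```
# Hypothetical pf.conf fragment -- em0 is an example interface name.
altq on em0 cbq bandwidth 128Kb queue { slow }
queue slow bandwidth 128Kb cbq(default)
pass out on em0 inet keep state queue slow
```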
By cruel (195.39.211.10) on
If you want to watch interactive applications (like terminal sessions), RTT is what matters, since you need to care about every single packet: increasing a queue's qlimit will increase RTT, because packets are dequeued less frequently.
If you want to watch download time, only bandwidth matters, so you need to care about the flow as a whole: decreasing a queue's bandwidth will produce longer delays for entire flows.
But everybody in the thread agrees these tricks are synthetic. They come close to real life, but real life is always a bit different...
By cruel (195.39.211.10) on
> streaming over a satellite connection? How can altq simulate
> 10mbps with 1000ms latency?
Use a 10 Mbit/s bandwidth and play with qlimit to achieve a 1000 ms RTT.
Look, I strongly suspect FreeBSD's dummynet does something similar: it dequeues a packet once an internal per-packet timer (the user-configured RTT) expires.
I also agree that dummynet's delays are more user-friendly, but some people won't install FreeBSD just for that. Some will. I won't :)
I just want people in the thread to recall the original question:
> Could it be feasible to set up an openbsd box that is working as a
> router/firewall (2 network cards forwarding IPv4) to not only shape
> down traffic and limit to 128K but also add latency, like 500
> milliseconds?
Got it? No dummynet. No delay knob. Just a user with an "openbsd box" and the need to simulate a real-life scenario.
BTW, hfsc may be more suitable for streaming applications thanks to its fine-grained bandwidth control and its many knobs.
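A rough way to pick the qlimit mentioned above: a full queue of N packets at bandwidth B adds about N × packet_size / B of delay, so you can solve for N given a target delay. A sketch, assuming 1500-byte (12,000-bit) packets; note this delay only materializes while the queue actually stays full, which is Otto's objection above:

```python
def qlimit_for_delay(bandwidth_bps: float, target_delay_s: float,
                     packet_bits: float = 12_000) -> int:
    """Queue length (in packets) a full queue needs so that the
    last enqueued packet waits ~target_delay_s before dequeue."""
    return round(bandwidth_bps * target_delay_s / packet_bits)

# 10 Mbit/s with ~1000 ms of queueing delay (the satellite example):
print(qlimit_for_delay(10_000_000, 1.0))  # 833

# 128 Kbit/s with ~500 ms of delay (the original question):
print(qlimit_for_delay(128_000, 0.5))     # 5
```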
By cruel (195.39.211.10) on
probability _number_
A probability attribute can be attached to a rule, with a value set
between 0 and 1, bounds not included. In that case, the rule will
be honoured using the given probability value only. For example,
the following rule will drop 20% of incoming ICMP packets:
block in proto icmp probability 20%
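The probability attribute drops each matching packet independently, like a biased coin flip per packet. A quick sketch of what "probability 20%" means statistically (the rule itself lives in pf; this only simulates the coin flips):

```python
import random

random.seed(1)          # fixed seed so the run is repeatable
drop_p = 0.2            # matches "probability 20%" in the pf rule
trials = 100_000

# Each packet is dropped independently with probability drop_p.
dropped = sum(random.random() < drop_p for _ in range(trials))
print(f"dropped {dropped / trials:.1%} of {trials} packets")  # close to 20%
```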
By Jörg Sonnenberger (139.30.252.72) joerg@leaf.dragonflybsd.org on
Once done, it should be pretty easy to port to OpenBSD.