http://lartc.org/lartc.html
A very hands-on approach to iproute2, traffic shaping and a bit of netfilter.
This document is dedicated to lots of people, and is my attempt to do something back. To list but a few:
Rusty Russell
Alexey N. Kuznetsov
The good folks from Google
The staff of Casema Internet
Welcome, gentle reader.
This document hopes to enlighten you on how to do more with Linux 2.2/2.4 routing. Unbeknownst to most users, you already run tools which allow you to do spectacular things. Commands like route and ifconfig are actually very thin wrappers for the very powerful iproute2 infrastructure.
I hope that this HOWTO will become as readable as the ones by Rusty Russell of (amongst other things) netfilter fame.
You can always reach us by posting to the mailing list (see the relevant section) if you have comments or questions about, or somewhat related to, this HOWTO. We are not a free helpdesk, but we will often answer questions asked on the list.
Before losing your way in this HOWTO, if all you want to do is simple traffic shaping, skip everything and head to the Other possibilities chapter, and read about CBQ.init.
This document is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
In short, if your STM-64 backbone breaks down and distributes pornography to your most esteemed customers - it's never our fault. Sorry.
Copyright (c) 2002 by bert hubert, Gregory Maxwell, Martijn van Oosterhout, Remco van Mook, Paul B. Schroeder and others. This material may be distributed only subject to the terms and conditions set forth in the Open Publication License, v1.0 or later (the latest version is presently available at http://www.opencontent.org/openpub/).
Please freely copy and distribute (sell or give away) this document in any format. It's requested that corrections and/or comments be forwarded to the document maintainer.
It is also requested that if you publish this HOWTO in hardcopy that you send the authors some samples for "review purposes" :-)
As the title implies, this is the "Advanced" HOWTO. While by no means rocket science, some prior knowledge is assumed.
Here are some other references which might help teach you more:
Very nice introduction, explaining what a network is, and how it is connected to other networks.
Great stuff, although very verbose. It teaches you a lot of stuff that's already configured if you are able to connect to the Internet. Should be located in /usr/doc/HOWTO/NET3-4-HOWTO.txt but can also be found online.
A small list of things that are possible:
Throttle bandwidth for certain computers
Throttle bandwidth TO certain computers
Help you to fairly share your bandwidth
Protect your network from DoS attacks
Protect the Internet from your customers
Multiplex several servers as one, for load balancing or enhanced availability
Restrict access to your computers
Limit access of your users to other hosts
Do routing based on user id (yes!), MAC address, source IP address, port, type of service, time of day or content
Currently, not many people are using these advanced features. This is for several reasons. While the provided documentation is verbose, it is not very hands-on. Traffic control is almost undocumented.
There are several things which should be noted about this document. While I wrote most of it, I really don't want it to stay that way. I am a strong believer in Open Source, so I encourage you to send feedback, updates, patches etcetera. Do not hesitate to inform me of typos or plain old errors. If my English sounds somewhat wooden, please realize that I'm not a native speaker. Feel free to send suggestions.
If you feel you are better qualified to maintain a section, or think that you can author and maintain new sections, you are welcome to do so. The SGML of this HOWTO is available via GIT, I very much envision more people working on it.
In aid of this, you will find lots of FIXME notices. Patches are always welcome! Wherever you find a FIXME, you should know that you are treading in unknown territory. This is not to say that there are no errors elsewhere, but be extra careful. If you have validated something, please let us know so we can remove the FIXME notice.
About this HOWTO, I will take some liberties along the road. For example, I postulate a 10Mbit Internet connection, while I know full well that those are not very common.
The canonical location for the HOWTO is http://lartc.org/lartc.html.
We now have anonymous GIT access available to the world at large. This is good in a number of ways. You can easily upgrade to newer versions of this HOWTO and submitting patches is no work at all.
Furthermore, it allows the authors to work on the source independently, which is good too.
$ git clone git://repo.or.cz/lartc.git

or (if you're behind a firewall which only allows HTTP):

$ git clone http://repo.or.cz/r/lartc.git

Enter the checked out directory:

$ cd lartc

If you want to update your local copy, run:

$ git pull
If you made changes and want to contribute them, run git diff and mail the output to the LARTC mailing list; we can then integrate it easily. Thanks! Please make sure that you edit the .db file, by the way - the other files are generated from that one.
A Makefile is supplied which should help you create postscript, dvi, pdf, html and plain text. You may need to install docbook, docbook-utils, ghostscript and tetex to get all formats.
Be careful not to edit 2.4routing.sgml! It contains an older version of the HOWTO. The right file is lartc.db.
The authors receive an increasing amount of mail about this HOWTO. Because of the clear interest of the community, it has been decided to start a mailing list where people can talk to each other about Advanced Routing and Traffic Control. You can subscribe to the list here.
It should be pointed out that the authors are very hesitant to answer questions not asked on the list. We would like the archive of the list to become some kind of knowledge base. If you have a question, please search the archive, and then post to the mailing list.
We will be doing interesting stuff almost immediately, which also means that there will initially be parts that are explained incompletely or are not perfect. Please gloss over these parts and assume that all will become clear.
Routing and filtering are two distinct things. Filtering is documented very well by Rusty's HOWTOs, available here:
Rusty's Remarkably Unreliable Guides
We will be focusing mostly on what is possible by combining netfilter and iproute2.
Most Linux distributions, and most UNIXes, currently use the venerable arp, ifconfig and route commands. While these tools work, they show some unexpected behaviour under Linux 2.2 and up. For example, GRE tunnels are an integral part of routing these days, but require completely different tools.
With iproute2, tunnels are an integral part of the tool set.
The 2.2 and above Linux kernels include a completely redesigned network subsystem. This new networking code brings Linux performance and a feature set with little competition in the general OS arena. In fact, the new routing, filtering, and classifying code is more featureful than that provided by many dedicated routers, firewalls and traffic shaping products.
As new networking concepts have been invented, people have found ways to plaster them on top of the existing framework in existing OSes. This constant layering of cruft has led to networking code that is filled with strange behaviour, much like most human languages. In the past, Linux emulated SunOS's handling of many of these things, which was not ideal.
This new framework makes it possible to clearly express features previously beyond Linux's reach.
Linux has a sophisticated system for bandwidth provisioning called Traffic Control. This system supports various methods for classifying, prioritizing, sharing, and limiting both inbound and outbound traffic.
We'll start off with a tiny tour of iproute2 possibilities.
You should make sure that you have the userland tools installed. This package is called 'iproute' on both RedHat and Debian, and may otherwise be found at ftp://ftp.inr.ac.ru/ip-routing/iproute2-2.2.4-now-ss??????.tar.gz.
You can also try here for the latest version.
Some parts of iproute require you to have certain kernel options enabled. It should also be noted that all releases of RedHat up to and including 6.2 come without most of the traffic control features in the default kernel.
RedHat 7.2 has everything in by default.
Also make sure that you have netlink support, should you choose to roll your own kernel. Iproute2 needs it.
This may come as a surprise, but iproute2 is already configured! The current commands ifconfig and route are already using the advanced syscalls, but mostly with very default (ie. boring) settings.
The ip tool is central, and we'll ask it to display our interfaces for us.
[ahu@home ahu]$ ip link list
1: lo: mtu 3924 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: dummy: mtu 1500 qdisc noop
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
3: eth0: mtu 1400 qdisc pfifo_fast qlen 100
    link/ether 48:54:e8:2a:47:16 brd ff:ff:ff:ff:ff:ff
4: eth1: mtu 1500 qdisc pfifo_fast qlen 100
    link/ether 00:e0:4c:39:24:78 brd ff:ff:ff:ff:ff:ff
3764: ppp0: mtu 1492 qdisc pfifo_fast qlen 10
    link/ppp
Your mileage may vary, but this is what it shows on my NAT router at home. I'll only explain part of the output as not everything is directly relevant.
We first see the loopback interface. While your computer may function somewhat without one, I'd advise against it. The MTU size (Maximum Transfer Unit) is 3924 octets, and it is not supposed to queue. Which makes sense because the loopback interface is a figment of your kernel's imagination.
I'll skip the dummy interface for now, and it may not be present on your computer. Then there are my two physical network interfaces, one at the side of my cable modem, the other one serves my home ethernet segment. Furthermore, we see a ppp0 interface.
Note the absence of IP addresses. iproute disconnects the concept of 'links' and 'IP addresses'. With IP aliasing, the concept of 'the' IP address had become quite irrelevant anyhow.
It does show us the MAC addresses though, the hardware identifier of our ethernet interfaces.
[ahu@home ahu]$ ip address show
1: lo: mtu 3924 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
2: dummy: mtu 1500 qdisc noop
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
3: eth0: mtu 1400 qdisc pfifo_fast qlen 100
    link/ether 48:54:e8:2a:47:16 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/8 brd 10.255.255.255 scope global eth0
4: eth1: mtu 1500 qdisc pfifo_fast qlen 100
    link/ether 00:e0:4c:39:24:78 brd ff:ff:ff:ff:ff:ff
3764: ppp0: mtu 1492 qdisc pfifo_fast qlen 10
    link/ppp
    inet 212.64.94.251 peer 212.64.94.1/32 scope global ppp0
This contains more information. It shows all our addresses, and to which cards they belong. 'inet' stands for Internet (IPv4). There are lots of other address families, but these don't concern us right now.
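Because addresses are decoupled from links, you can attach more than one address to an interface without the old 'eth0:1' alias trick. A minimal sketch (the second address here is made up purely for illustration):

# add a second, hypothetical address to eth0
ip address add 10.0.0.5/8 dev eth0

# and remove it again when done experimenting
ip address del 10.0.0.5/8 dev eth0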
Let's examine eth0 somewhat closer. It says that it is related to the inet address '10.0.0.1/8'. What does this mean? The /8 stands for the number of bits that are in the Network Address. There are 32 bits, so we have 24 bits left to address hosts within our network. The first 8 bits of 10.0.0.1 correspond to 10.0.0.0, our Network Address, and our netmask is 255.0.0.0.
The other bits are connected to this interface, so 10.250.3.13 is directly available on eth0, as is 10.0.0.1 for example.
With ppp0, the same concept goes, though the numbers are different. Its address is 212.64.94.251, without a subnet mask. This means that we have a point-to-point connection and that every address, with the exception of 212.64.94.251, is remote. There is more information, however. It tells us that on the other side of the link there is, yet again, only one address, 212.64.94.1. The /32 tells us that there are no 'network bits'.
It is absolutely vital that you grasp these concepts. Refer to the documentation mentioned at the beginning of this HOWTO if you have trouble.
You may also note 'qdisc', which stands for Queueing Discipline. This will become vital later on.
Well, we now know how to find 10.x.y.z addresses, and we are able to reach 212.64.94.1. This is not enough however, so we need instructions on how to reach the world. The Internet is available via our ppp connection, and it appears that 212.64.94.1 is willing to spread our packets around the world, and deliver results back to us.
[ahu@home ahu]$ ip route show
212.64.94.1 dev ppp0  proto kernel  scope link  src 212.64.94.251
10.0.0.0/8 dev eth0  proto kernel  scope link  src 10.0.0.1
127.0.0.0/8 dev lo  scope link
default via 212.64.94.1 dev ppp0
This is pretty much self explanatory. The first 3 lines of output explicitly state what was already implied by ip address show, the last line tells us that the rest of the world can be found via 212.64.94.1, our default gateway. We can see that it is a gateway because of the word via, which tells us that we need to send packets to 212.64.94.1, and that it will take care of things.
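If you ever need to install such a default route by hand, the iproute2 way is a one-liner. A small sketch using the gateway from this example (be careful not to cut yourself off when doing this over a remote connection):

# install (or replace) the default route via the example gateway
ip route replace default via 212.64.94.1 dev ppp0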
For reference, this is what the old route utility shows us:
[ahu@home ahu]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
212.64.94.1     0.0.0.0         255.255.255.255 UH    0      0        0 ppp0
10.0.0.0        0.0.0.0         255.0.0.0       U     0      0        0 eth0
127.0.0.0       0.0.0.0         255.0.0.0       U     0      0        0 lo
0.0.0.0         212.64.94.1     0.0.0.0         UG    0      0        0 ppp0
ARP is the Address Resolution Protocol as described in RFC 826. ARP is used by a networked machine to resolve the hardware location/address of another machine on the same local network. Machines on the Internet are generally known by their names which resolve to IP addresses. This is how a machine on the foo.com network is able to communicate with another machine which is on the bar.net network. An IP address, though, cannot tell you the physical location of a machine. This is where ARP comes into the picture.
Let's take a very simple example. Suppose I have a network composed of several machines. Two of the machines which are currently on my network are foo with an IP address of 10.0.0.1 and bar with an IP address of 10.0.0.2. Now foo wants to ping bar to see that he is alive, but alas, foo has no idea where bar is. So when foo decides to ping bar he will need to send out an ARP request. This ARP request is akin to foo shouting out on the network "Bar (10.0.0.2)! Where are you?" As a result of this every machine on the network will hear foo shouting, but only bar (10.0.0.2) will respond. Bar will then send an ARP reply directly back to foo, which is akin to bar saying, "Foo (10.0.0.1) I am here at 00:60:94:E9:08:12." After this simple transaction that's used to locate his friend on the network, foo is able to communicate with bar until he (his arp cache) forgets where bar is (typically after 15 minutes on Unix).
Now let's see how this works. You can view your machine's current arp/neighbor cache/table like so:
[root@espa041 /home/src/iputils]# ip neigh show
9.3.76.42 dev eth0 lladdr 00:60:08:3f:e9:f9 nud reachable
9.3.76.1 dev eth0 lladdr 00:06:29:21:73:c8 nud reachable
As you can see my machine espa041 (9.3.76.41) knows where to find espa042 (9.3.76.42) and espagate (9.3.76.1). Now let's add another machine to the arp cache.
[root@espa041 /home/paulsch/.gnome-desktop]# ping -c 1 espa043
PING espa043.austin.ibm.com (9.3.76.43) from 9.3.76.41 : 56(84) bytes of data.
64 bytes from 9.3.76.43: icmp_seq=0 ttl=255 time=0.9 ms

--- espa043.austin.ibm.com ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.9/0.9/0.9 ms

[root@espa041 /home/src/iputils]# ip neigh show
9.3.76.43 dev eth0 lladdr 00:06:29:21:80:20 nud reachable
9.3.76.42 dev eth0 lladdr 00:60:08:3f:e9:f9 nud reachable
9.3.76.1 dev eth0 lladdr 00:06:29:21:73:c8 nud reachable
As a result of espa041 trying to contact espa043, espa043's hardware address/location has now been added to the arp/neighbor cache. So until the entry for espa043 times out (as a result of no communication between the two) espa041 knows where to find espa043 and has no need to send an ARP request.
Now let's delete espa043 from our arp cache:
[root@espa041 /home/src/iputils]# ip neigh delete 9.3.76.43 dev eth0
[root@espa041 /home/src/iputils]# ip neigh show
9.3.76.43 dev eth0  nud failed
9.3.76.42 dev eth0 lladdr 00:60:08:3f:e9:f9 nud reachable
9.3.76.1 dev eth0 lladdr 00:06:29:21:73:c8 nud stale
Now espa041 has again forgotten where to find espa043 and will need to send another ARP request the next time he needs to communicate with espa043. You can also see from the above output that espagate (9.3.76.1) has been changed to the "stale" state. This means that the location shown is still valid, but it will have to be confirmed at the first transaction to that machine.
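You can also populate the neighbor table by hand. A small sketch of pinning a permanent entry so it never goes stale (the MAC address is the one espa043 advertised above; treat this as an illustration, not a recommendation):

# add a permanent ARP entry for espa043
ip neigh add 9.3.76.43 lladdr 00:06:29:21:80:20 dev eth0 nud permanent

# turn it back into a normal, ageing entry
ip neigh change 9.3.76.43 lladdr 00:06:29:21:80:20 dev eth0 nud reachable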
If you have a large router, you may well cater for the needs of different people, who should be served differently. The routing policy database allows you to do this by having multiple sets of routing tables.
If you want to use this feature, make sure that your kernel is compiled with the "IP: advanced router" and "IP: policy routing" features.
When the kernel needs to make a routing decision, it finds out which table needs to be consulted. By default, there are three tables. The old 'route' tool modifies the main and local tables, as does the ip tool (by default).
The default rules:
[ahu@home ahu]$ ip rule list
0:      from all lookup local
32766:  from all lookup main
32767:  from all lookup default
This lists the priority of all rules. We see that all rules apply to all packets ('from all'). We've seen the 'main' table before, it is output by ip route ls, but the 'local' and 'default' tables are new.
If we want to do fancy things, we generate rules which point to different tables which allow us to override system wide routing rules.
For the exact semantics on what the kernel does when there are more matching rules, see Alexey's ip-cref documentation.
Let's take a real example once again, I have 2 (actually 3, about time I returned them) cable modems, connected to a Linux NAT ('masquerading') router. People living here pay me to use the Internet. Suppose one of my house mates only visits hotmail and wants to pay less. This is fine with me, but they'll end up using the low-end cable modem.
The 'fast' cable modem is known as 212.64.94.251 and is a PPP link to 212.64.94.1. The 'slow' cable modem is known by various ip addresses, 212.64.78.148 in this example, and is a link to 195.96.98.253.
The local table:
[ahu@home ahu]$ ip route list table local
broadcast 127.255.255.255 dev lo  proto kernel  scope link  src 127.0.0.1
local 10.0.0.1 dev eth0  proto kernel  scope host  src 10.0.0.1
broadcast 10.0.0.0 dev eth0  proto kernel  scope link  src 10.0.0.1
local 212.64.94.251 dev ppp0  proto kernel  scope host  src 212.64.94.251
broadcast 10.255.255.255 dev eth0  proto kernel  scope link  src 10.0.0.1
broadcast 127.0.0.0 dev lo  proto kernel  scope link  src 127.0.0.1
local 212.64.78.148 dev ppp2  proto kernel  scope host  src 212.64.78.148
local 127.0.0.1 dev lo  proto kernel  scope host  src 127.0.0.1
local 127.0.0.0/8 dev lo  proto kernel  scope host  src 127.0.0.1
Lots of obvious things, but things that need to be specified somewhere. Well, here they are. The default table is empty.
Let's view the 'main' table:
[ahu@home ahu]$ ip route list table main
195.96.98.253 dev ppp2  proto kernel  scope link  src 212.64.78.148
212.64.94.1 dev ppp0  proto kernel  scope link  src 212.64.94.251
10.0.0.0/8 dev eth0  proto kernel  scope link  src 10.0.0.1
127.0.0.0/8 dev lo  scope link
default via 212.64.94.1 dev ppp0
We now generate a new rule which we call 'John', for our hypothetical house mate. Although we can work with pure numbers, it's far easier if we add our tables to /etc/iproute2/rt_tables.
# echo 200 John >> /etc/iproute2/rt_tables
# ip rule add from 10.0.0.10 table John
# ip rule ls
0:      from all lookup local
32765:  from 10.0.0.10 lookup John
32766:  from all lookup main
32767:  from all lookup default
Now all that is left is to generate John's table, and flush the route cache:
# ip route add default via 195.96.98.253 dev ppp2 table John
# ip route flush cache
And we are done. It is left as an exercise for the reader to implement this in ip-up.
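As a sketch of what that exercise might look like: pppd passes the interface name as its first argument to the ip-up script, so an /etc/ppp/ip-up.local (the exact path is distribution-dependent, and the addresses are simply the ones from this example) could re-add John's route whenever the slow link comes up:

#!/bin/bash
# hypothetical /etc/ppp/ip-up.local - re-install John's default route on link up
if [ "$1" = "ppp2" ]; then
    ip route add default via 195.96.98.253 dev ppp2 table John
    ip route flush cache
fi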
A common configuration is the following, in which there are two providersthat connect a local network (or even a single machine) to the big Internet.
                                                                 ________
                                          +------------+        /
                                          |            |       |
                            +-------------+ Provider 1 +-------
        __                  |             |            |     /
    ___/  \_         +------+-------+     +------------+    |
  _/        \__      |     if1      |                      /
 /             \     |              |                      |
| Local network -----+ Linux router |                      |     Internet
 \_           __/    |              |                      |
   \__     __/       |     if2      |                      \
      \___/          +------+-------+     +------------+    |
                            |             |            |     \
                            +-------------+ Provider 2 +-------
                                          |            |       |
                                          +------------+        \________
There are usually two questions given this setup.
The first is how to route answers to packets coming in over a particular provider, say Provider 1, back out again over that same provider.
Let us first set some symbolic names. Let $IF1 be the name of the first interface (if1 in the picture above) and $IF2 the name of the second interface. Then let $IP1 be the IP address associated with $IF1 and $IP2 the IP address associated with $IF2. Next, let $P1 be the IP address of the gateway at Provider 1, and $P2 the IP address of the gateway at provider 2. Finally, let $P1_NET be the IP network $P1 is in, and $P2_NET the IP network $P2 is in.
One creates two additional routing tables, say T1 and T2. These are added in /etc/iproute2/rt_tables. Then you set up routing in these tables as follows:
ip route add $P1_NET dev $IF1 src $IP1 table T1
ip route add default via $P1 table T1
ip route add $P2_NET dev $IF2 src $IP2 table T2
ip route add default via $P2 table T2

Nothing spectacular, just build a route to the gateway and build a default route via that gateway, as you would do in the case of a single upstream provider, but put the routes in a separate table per provider. Note that the network route suffices, as it tells you how to find any host in that network, which includes the gateway, as specified above.
Next you set up the main routing table. It is a good idea to route things to the direct neighbour through the interface connected to that neighbour. Note the `src' arguments, they make sure the right outgoing IP address is chosen.
ip route add $P1_NET dev $IF1 src $IP1
ip route add $P2_NET dev $IF2 src $IP2

Then, your preference for the default route:
ip route add default via $P1

Next, you set up the routing rules. These actually choose which routing table to route with. You want to make sure that you route out a given interface if you already have the corresponding source address:
ip rule add from $IP1 table T1
ip rule add from $IP2 table T2

This set of commands makes sure all answers to traffic coming in on a particular interface get answered from that interface.
Reader Rod Roark notes: 'If $P0_NET is the local network and $IF0 is its interface, the following additional entries are desirable:

ip route add $P0_NET     dev $IF0 table T1
ip route add $P2_NET     dev $IF2 table T1
ip route add 127.0.0.0/8 dev lo   table T1
ip route add $P0_NET     dev $IF0 table T2
ip route add $P1_NET     dev $IF1 table T2
ip route add 127.0.0.0/8 dev lo   table T2'
Now, this is just the very basic setup. It will work for all processes running on the router itself, and for the local network, if it is masqueraded. If it is not, then you either have IP space from both providers or you are going to want to masquerade to one of the two providers. In both cases you will want to add rules selecting which provider to route out from based on the IP address of the machine in the local network.
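A minimal sketch of such rules, assuming (this subnet is an assumption, not given above) that the local network is 10.0.0.0/24 and that its lower half should leave via Provider 1 while the upper half uses Provider 2:

# route the lower half of the local network out via provider 1,
# the upper half via provider 2 (subnets are purely illustrative)
ip rule add from 10.0.0.0/25   table T1
ip rule add from 10.0.0.128/25 table T2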
The second question is how to balance traffic going out over the two providers. This is actually not hard if you already have set up split access as above.
Instead of choosing one of the two providers as your default route, you now set up the default route to be a multipath route. In the default kernel this will balance routes over the two providers. It is done as follows (once more building on the example in the section on split-access):
ip route add default scope global nexthop via $P1 dev $IF1 weight 1 \
      nexthop via $P2 dev $IF2 weight 1

This will balance the routes over both providers. The weight parameters can be tweaked to favor one provider over the other.
Note that balancing will not be perfect, as it is route based, and routes are cached. This means that routes to often-used sites will always be over the same provider.
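If you want the kernel to re-evaluate those cached decisions, you can empty the route cache; a tiny sketch (new connections then get balanced afresh, at the cost of briefly recomputing routes):

# flush cached routes so new connections are re-balanced
ip route flush cache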
Furthermore, if you really want to do this, you probably also want to look at Julian Anastasov's patches at http://www.ssi.bg/~ja/#routes , Julian's route patch page. They will make things nicer to work with.
There are 3 kinds of tunnels in Linux. There's IP in IP tunneling, GRE tunneling and tunnels that live outside the kernel (like, for example PPTP).
Tunnels can be used to do some very unusual and very cool stuff. They can also make things go horribly wrong when you don't configure them right. Don't point your default route to a tunnel device unless you know EXACTLY what you are doing :-). Furthermore, tunneling increases overhead, because it needs an extra set of IP headers. Typically this is 20 bytes per packet, so if the normal packet size (MTU) on a network is 1500 bytes, a packet that is sent through a tunnel can only be 1480 bytes big. This is not necessarily a problem, but be sure to read up on IP packet fragmentation/reassembly when you plan to connect large networks with tunnels. Oh, and of course, the fastest way to dig a tunnel is to dig at both sides.
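On the MTU point: in practice you often just lower the MTU of the tunnel device so that large packets do not get fragmented on the way. A rough sketch, assuming an IP-in-IP tunnel device called tunl0 and 20 bytes of tunnel overhead on a 1500 byte link:

# leave room for 20 bytes of extra IP headers
ip link set tunl0 mtu 1480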
This kind of tunneling has been available in Linux for a long time. It requires 2 kernel modules, ipip.o and new_tunnel.o.
Let's say you have 3 networks: Internal networks A and B, and intermediate network C (or let's say, Internet). So we have network A:
network 10.0.1.0
netmask 255.255.255.0
router  10.0.1.1
The router has address 172.16.17.18 on network C.
and network B:
network 10.0.2.0
netmask 255.255.255.0
router  10.0.2.1
The router has address 172.19.20.21 on network C.
As far as network C is concerned, we assume that it will pass any packet sent from A to B and vice versa. You might even use the Internet for this.
Here's what you do:
First, make sure the modules are installed:
insmod ipip.o
insmod new_tunnel.o
Then, on the router of network A, you do the following:
ifconfig tunl0 10.0.1.1 pointopoint 172.19.20.21
route add -net 10.0.2.0 netmask 255.255.255.0 dev tunl0
And on the router of network B:
ifconfig tunl0 10.0.2.1 pointopoint 172.16.17.18
route add -net 10.0.1.0 netmask 255.255.255.0 dev tunl0
And if you're finished with your tunnel:
ifconfig tunl0 down
Presto, you're done. You can't forward broadcast or IPv6 traffic through an IP-in-IP tunnel, though. You just connect 2 IPv4 networks that normally wouldn't be able to talk to each other, that's all. As far as compatibility goes, this code has been around a long time, so it's compatible all the way back to 1.3 kernels. Linux IP-in-IP tunneling doesn't work with other Operating Systems or routers, as far as I know. It's simple, it works. Use it if you have to, otherwise use GRE.
GRE is a tunneling protocol that was originally developed by Cisco, and it can do a few more things than IP-in-IP tunneling. For example, you can also transport multicast traffic and IPv6 through a GRE tunnel.
In Linux, you'll need the ip_gre.o module.
Let's do IPv4 tunneling first:
Let's say you have 3 networks: Internal networks A and B, and intermediate network C (or let's say, Internet).
So we have network A:
network 10.0.1.0
netmask 255.255.255.0
router  10.0.1.1

The router has address 172.16.17.18 on network C. Let's call this network neta (ok, hardly original).
and network B:
network 10.0.2.0
netmask 255.255.255.0
router  10.0.2.1

The router has address 172.19.20.21 on network C. Let's call this network netb (still not original).
As far as network C is concerned, we assume that it will pass any packet sent from A to B and vice versa. How and why, we do not care.
On the router of network A, you do the following:
ip tunnel add netb mode gre remote 172.19.20.21 local 172.16.17.18 ttl 255
ip link set netb up
ip addr add 10.0.1.1 dev netb
ip route add 10.0.2.0/24 dev netb
Let's discuss this for a bit. In line 1, we added a tunnel device, and called it netb (which is kind of obvious because that's where we want it to go). Furthermore we told it to use the GRE protocol (mode gre), that the remote address is 172.19.20.21 (the router at the other end), that our tunneling packets should originate from 172.16.17.18 (which allows your router to have several IP addresses on network C and let you decide which one to use for tunneling) and that the TTL field of the packet should be set to 255 (ttl 255).
The second line enables the device.
In the third line we gave the newly born interface netb the address 10.0.1.1. This is OK for smaller networks, but when you're starting up a mining expedition (LOTS of tunnels), you might want to consider using another IP range for tunneling interfaces (in this example, you could use 10.0.3.0).
In the fourth line we set the route for network B. Note the different notation for the netmask. If you're not familiar with this notation, here's how it works: you write out the netmask in binary form, and you count all the ones. If you don't know how to do that, just remember that 255.0.0.0 is /8, 255.255.0.0 is /16 and 255.255.255.0 is /24. Oh, and 255.255.254.0 is /23, in case you were wondering.
But enough about this, let's go on with the router of network B.
ip tunnel add neta mode gre remote 172.16.17.18 local 172.19.20.21 ttl 255
ip link set neta up
ip addr add 10.0.2.1 dev neta
ip route add 10.0.1.0/24 dev neta

And when you want to remove the tunnel on router A:
ip link set netb down
ip tunnel del netb

Of course, you can replace netb with neta for router B.
See Section 6 for a short bit about IPv6 Addresses.
On with the tunnels.
Let's assume that you have the following IPv6 network, and you want to connect it to 6bone, or a friend.
Network 3ffe:406:5:1:5:a:2:1/96

Your IPv4 address is 172.16.17.18, and the 6bone router has IPv4 address 172.22.23.24.
ip tunnel add sixbone mode sit remote 172.22.23.24 local 172.16.17.18 ttl 255
ip link set sixbone up
ip addr add 3ffe:406:5:1:5:a:2:1/96 dev sixbone
ip route add 3ffe::/15 dev sixbone
Let's discuss this. In the first line, we created a tunnel device called sixbone. We gave it mode sit (which is IPv6 in IPv4 tunneling) and told it where to go to (remote) and where to come from (local). TTL is set to maximum, 255. Next, we made the device active (up). After that, we added our own network address, and set a route for 3ffe::/15 (which is currently all of 6bone) through the tunnel.
GRE tunnels are currently the preferred type of tunneling. It's a standard that is also widely adopted outside the Linux community and therefore a Good Thing.
There are literally dozens of implementations of tunneling outside the kernel. Best known are of course PPP and PPTP, but there are lots more (some proprietary, some secure, some that don't even use IP) and that is really beyond the scope of this HOWTO.
By Marco Davids
NOTE to maintainer:
As far as I am concerned, this IPv6-IPv4 tunneling is not per definition GRE tunneling. You could tunnel IPv6 over IPv4 by means of GRE tunnel devices (GRE tunnels ANY to IPv4), but the device used here ("sit") only tunnels IPv6 over IPv4 and is therefore something different.
This is another application of the tunneling capabilities of Linux. It is popular among the IPv6 early adopters, or pioneers if you like. The 'hands-on' example described below is certainly not the only way to do IPv6 tunneling. However, it is the method that is often used to tunnel between Linux and a Cisco IPv6 capable router and experience tells us that this is just the thing many people are after. Ten to one this applies to you too ;-)
A short bit about IPv6 addresses:
IPv6 addresses are, compared to IPv4 addresses, really big: 128 bits against 32 bits. And this provides us just with the thing we need: many, many IP addresses: 340,282,366,920,938,463,463,374,607,431,768,211,456 to be precise. Apart from this, IPv6 (or IPng, for IP Next Generation) is supposed to provide for smaller routing tables on the Internet's backbone routers, simpler configuration of equipment, better security at the IP level and better support for QoS.
An example: 2002:836b:9820:0000:0000:0000:836b:9886
Writing down IPv6 addresses can be quite a burden. Therefore, to make life easier there are some rules:
Don't use leading zeroes. Same as in IPv4.
Use colons to separate every 16 bits or two bytes.
When you have lots of consecutive zeroes, you can write this down as ::. You can only do this once in an address and only for quantities of 16 bits, though.
The address 2002:836b:9820:0000:0000:0000:836b:9886 can be written down as 2002:836b:9820::836b:9886, which is somewhat friendlier.
Another example, the address 3ffe:0000:0000:0000:0000:0020:34A1:F32C can be written down as 3ffe::20:34A1:F32C, which is a lot shorter.
IPv6 is intended to be the successor of the current IPv4. Because it is relatively new technology, there is no worldwide native IPv6 network yet. To be able to move forward swiftly, the 6bone was introduced.
Native IPv6 networks are connected to each other by encapsulating the IPv6 protocol in IPv4 packets and sending them over the existing IPv4 infrastructure from one IPv6 site to another.
That is precisely where the tunnel steps in.
To be able to use IPv6, we should have a kernel that supports it. There are many good documents on how to achieve this. But it all comes down to a few steps:
Get yourself a recent Linux distribution, with suitable glibc.
Then get yourself an up-to-date kernel source.
Go to /usr/src/linux and type:
make menuconfig
Choose "Networking Options"
Select "The IPv6 protocol", "IPv6: enable EUI-64 token format", "IPv6:disable provider based addresses"
In other words, compile IPv6 as 'built-in' in your kernel.You can then save your config like usual and go ahead with compilingthe kernel.
HINT: Before doing so, consider editing the Makefile:EXTRAVERSION = -x ; --> ; EXTRAVERSION = -x-IPv6
There is a lot of good documentation about compiling and installinga kernel, however this document is about something else. If you run intoproblems at this stage, go and look for documentation about compiling aLinux kernel according to your own specifications.
The file /usr/src/linux/README might be a good start.After you accomplished all this, and rebooted with your brand new kernel,you might want to issue an '/sbin/ifconfig -a' and notice the brand new 'sit0-device'. SIT stands for Simple Internet Transition. You may giveyourself a compliment; you are now one major step closer to IP, the NextGeneration ;-)
Now on to the next step. You want to connect your host, or maybe even your entire LAN to another IPv6 capable network. This might be the "6bone" that is set up especially for this particular purpose.
Let's assume that you have the following IPv6 network: 3ffe:604:6:8::/64 and you want to connect it to 6bone, or a friend. Please note that the /64 subnet notation works just like with regular IP addresses.
Your IPv4 address is 145.100.24.181 and the 6bone router has IPv4 address 145.100.1.5.
# ip tunnel add sixbone mode sit remote 145.100.1.5 local 145.100.24.181 ttl 255
# ip link set sixbone up
# ip addr add 3FFE:604:6:7::2/126 dev sixbone
# ip route add 3ffe::0/16 dev sixbone
Let's discuss this. In the first line, we created a tunnel device called sixbone. We gave it mode sit (which is IPv6 in IPv4 tunneling) and told it where to go to (remote) and where to come from (local). TTL is set to maximum, 255.
Next, we made the device active (up). After that, we added our own network address, and set a route for 3ffe::/15 (which is currently all of 6bone) through the tunnel. If the particular machine you run this on is your IPv6 gateway, then consider adding the following lines:
# echo 1 >/proc/sys/net/ipv6/conf/all/forwarding
# /usr/local/sbin/radvd
The latter, radvd is -like zebra- a router advertisement daemon, to support IPv6's autoconfiguration features. Search for it with your favourite search engine if you like. You can check things like this:
# /sbin/ip -f inet6 addr
If you happen to have radvd running on your IPv6 gateway and boot your IPv6 capable Linux on a machine on your local LAN, you would be able to enjoy the benefits of IPv6 autoconfiguration:
# /sbin/ip -f inet6 addr
1: lo: mtu 3924 qdisc noqueue
   inet6 ::1/128 scope host

3: eth0: mtu 1500 qdisc pfifo_fast qlen 100
   inet6 3ffe:604:6:8:5054:4cff:fe01:e3d6/64 scope global dynamic
   valid_lft forever preferred_lft 604646sec
   inet6 fe80::5054:4cff:fe01:e3d6/10 scope link
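For reference, a minimal radvd configuration that advertises the example prefix might look roughly like this (interface name and prefix are taken from the example above; treat it as a sketch, not a complete radvd.conf):

interface eth0
{
        # send router advertisements on the LAN interface
        AdvSendAdvert on;
        prefix 3ffe:604:6:8::/64
        {
                AdvOnLink on;
                AdvAutonomous on;
        };
};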
You could go ahead and configure your bind for IPv6 addresses. The A type has an equivalent for IPv6: AAAA. The in-addr.arpa's equivalent is: ip6.int. There's a lot of information available on this topic.
There is an increasing number of IPv6-aware applications available, including secure shell, telnet, inetd, Mozilla the browser, Apache the webserver and a lot of others. But this is all outside the scope of this Routing document ;-)
On the Cisco side the configuration would be something like this:
!
interface Tunnel1
 description IPv6 tunnel
 no ip address
 no ip directed-broadcast
 ipv6 address 3FFE:604:6:7::1/126
 tunnel source Serial0
 tunnel destination 145.100.24.181
 tunnel mode ipv6ip
!
ipv6 route 3FFE:604:6:8::/64 Tunnel1

But if you don't have a Cisco at your disposal, try one of the many IPv6 tunnel brokers available on the Internet. They are willing to configure their Cisco with an extra tunnel for you, mostly by means of a friendly web interface. Search for "ipv6 tunnel broker" on your favourite search engine.
There are two kinds of IPSEC available for Linux these days. For 2.2 and 2.4, there is FreeS/WAN, which was the first major implementation. They have an official site and an unofficial one that is actually maintained. FreeS/WAN has traditionally not been merged with the mainline kernel for a number of reasons. Most often mentioned are 'political' issues with Americans working on crypto tainting its exportability. Furthermore, it does not integrate too well with the Linux kernel, leading it to be a bad candidate for actual merging.
Additionally, many parties have voiced worries about the quality of the code. To set up FreeS/WAN, a lot of documentation is available.
As of Linux 2.5.47, there is a native IPSEC implementation in the kernel. It was written by Alexey Kuznetsov and Dave Miller, inspired by the work of the USAGI IPv6 group. With its merge, James Morris' CryptoAPI also became part of the kernel - it does the actual crypting.
This HOWTO will only document the 2.5+ version of IPSEC. FreeS/WAN is recommended for Linux 2.4 users for now, but be aware that its configuration will differ from the native IPSEC. In related news, there are now patches to make the FreeS/WAN userspace code work with the native Linux IPSEC.
As of 2.5.49, IPSEC works without further patches.
Userspace tools appear to be available here. There are multiple programs available, the one linked here is based on Racoon. When compiling your kernel, be sure to turn on 'PF_KEY', 'AH', 'ESP' and everything in the CryptoAPI!
The author of this chapter is a complete IPSEC nitwit! If you find the inevitable mistakes, please email bert hubert.
First, we'll show how to manually set up secure communication between two hosts. A large part of this process can also be automated, but here we'll do it by hand so as to acquaint ourselves with what is going on 'under the hood'.
Feel free to skip the following section if you are only interested in automatic keying, but be aware that some understanding of manual keying is useful.
IPSEC is a complicated subject. A lot of information is available online, this HOWTO will concentrate on getting you up and running and explaining the basic principles. All examples are based on Racoon as found on the link above.
Many iptables configurations drop IPSEC packets! To pass IPSEC, use: 'iptables -A xxx -p 50 -j ACCEPT' and 'iptables -A xxx -p 51 -j ACCEPT'
IPSEC offers a secure version of the Internet Protocol. Security in this context means two different things: encryption and authentication. A naive vision of security offers only encryption, but it can easily be shown that this is insufficient - you may be communicating encyphered, but no guarantee is offered that the remote party is the one you expect it to be.
IPSEC supports 'Encapsulated Security Payload' (ESP) for encryption and 'Authentication Header' (AH) for authenticating the remote partner. You can configure both of them, or decide to use only one of the two.
Both ESP and AH rely on security associations. A security association (SA) consists of a source, a destination and an instruction. A sample authentication SA may look like this:
add 10.0.0.11 10.0.0.216 ah 15700 -A hmac-md5 "1234567890123456";

This says 'traffic going from 10.0.0.11 to 10.0.0.216 that needs an AH can be signed using HMAC-MD5 using secret 1234567890123456'. This instruction is labelled with SPI ('Security Parameter Index') id '15700', more about that later. The interesting bit about SAs is that they are symmetrical. Both sides of a conversation share exactly the same SA, it is not mirrored on the other side. Do note however that there is no 'autoreverse' rule - this SA only describes a possible authentication from 10.0.0.11 to 10.0.0.216. For two-way traffic, two SAs are needed.
A sample ESP SA:
add 10.0.0.11 10.0.0.216 esp 15701 -E 3des-cbc "123456789012123456789012";

This says 'traffic going from 10.0.0.11 to 10.0.0.216 that needs encryption can be encyphered using 3des-cbc with key 123456789012123456789012'. The SPI id is '15701'.
So far, we've seen that SAs describe possible instructions, but do not in fact describe policy as to when these need to be used. In fact, there could be an arbitrary number of nearly identical SAs with only differing SPI ids. Incidentally, SPI stands for Security Parameter Index. To do actual crypto, we need to describe a policy. This policy can include things such as 'use ipsec if available' or 'drop traffic unless we have ipsec'.
A typical simple Security Policy (SP) looks like this:
spdadd 10.0.0.216 10.0.0.11 any -P out ipsec esp/transport//require ah/transport//require;

If entered on host 10.0.0.216, this means that all traffic going out to 10.0.0.11 must be encrypted and be wrapped in an AH authenticating header. Note that this does not describe which SA is to be used, that is left as an exercise for the kernel to determine.
In other words, a Security Policy specifies WHAT we want; a SecurityAssociation describes HOW we want it.
Outgoing packets are labelled with the SA SPI ('the how') which the kernel used for encryption and authentication, so the remote can look up the corresponding verification and decryption instruction.
What follows is a very simple configuration for talking from host 10.0.0.216 to 10.0.0.11 using encryption and authentication. Note that the reverse path is plaintext in this first version and that this configuration should not be deployed.
On host 10.0.0.216:
#!/sbin/setkey -f
add 10.0.0.216 10.0.0.11 ah 24500 -A hmac-md5 "1234567890123456";
add 10.0.0.216 10.0.0.11 esp 24501 -E 3des-cbc "123456789012123456789012";

spdadd 10.0.0.216 10.0.0.11 any -P out ipsec esp/transport//require ah/transport//require;
On host 10.0.0.11, the same Security Associations, no Security Policy:
#!/sbin/setkey -f
add 10.0.0.216 10.0.0.11 ah 24500 -A hmac-md5 "1234567890123456";
add 10.0.0.216 10.0.0.11 esp 24501 -E 3des-cbc "123456789012123456789012";
With the above configuration in place (these files can be executed if 'setkey' is installed in /sbin), 'ping 10.0.0.11' from 10.0.0.216 looks like this using tcpdump:
22:37:52 10.0.0.216 > 10.0.0.11: AH(spi=0x00005fb4,seq=0xa): ESP(spi=0x00005fb5,seq=0xa) (DF)
22:37:52 10.0.0.11 > 10.0.0.216: icmp: echo reply

Note how the ping back from 10.0.0.11 is indeed plainly visible. The forward ping cannot be read by tcpdump of course, but it does show the Security Parameter Index of AH and ESP, which tells 10.0.0.11 how to verify the authenticity of our packet and how to decrypt it.
A few things must be mentioned however. The configuration above is shown in a lot of IPSEC examples and it is very dangerous. The problem is that the above contains policy on how 10.0.0.216 should treat packets going to 10.0.0.11, and that it explains how 10.0.0.11 should treat those packets, but it does NOT instruct 10.0.0.11 to discard unauthenticated or unencrypted traffic!
Anybody can now insert spoofed and completely unencrypted data and 10.0.0.11 will accept it. To remedy the above, we need an incoming Security Policy on 10.0.0.11, as follows:
#!/sbin/setkey -f
spdadd 10.0.0.216 10.0.0.11 any -P IN ipsec esp/transport//require ah/transport//require;

This instructs 10.0.0.11 that any traffic coming to it from 10.0.0.216 is required to have valid ESP and AH.
Now, to complete this configuration, we need return traffic to be encrypted and authenticated as well of course. The full configuration on 10.0.0.216:
#!/sbin/setkey -f
flush;
spdflush;

# AH
add 10.0.0.11 10.0.0.216 ah 15700 -A hmac-md5 "1234567890123456";
add 10.0.0.216 10.0.0.11 ah 24500 -A hmac-md5 "1234567890123456";

# ESP
add 10.0.0.11 10.0.0.216 esp 15701 -E 3des-cbc "123456789012123456789012";
add 10.0.0.216 10.0.0.11 esp 24501 -E 3des-cbc "123456789012123456789012";

spdadd 10.0.0.216 10.0.0.11 any -P out ipsec esp/transport//require ah/transport//require;

spdadd 10.0.0.11 10.0.0.216 any -P in ipsec esp/transport//require ah/transport//require;
And on 10.0.0.11:
#!/sbin/setkey -f
flush;
spdflush;

# AH
add 10.0.0.11 10.0.0.216 ah 15700 -A hmac-md5 "1234567890123456";
add 10.0.0.216 10.0.0.11 ah 24500 -A hmac-md5 "1234567890123456";

# ESP
add 10.0.0.11 10.0.0.216 esp 15701 -E 3des-cbc "123456789012123456789012";
add 10.0.0.216 10.0.0.11 esp 24501 -E 3des-cbc "123456789012123456789012";

spdadd 10.0.0.11 10.0.0.216 any -P out ipsec esp/transport//require ah/transport//require;

spdadd 10.0.0.216 10.0.0.11 any -P in ipsec esp/transport//require ah/transport//require;
Note that in this example we used identical keys for both directions of traffic. This is not in any way required however.
To examine the configuration we just created, execute setkey -D, which shows the Security Associations, or setkey -DP, which shows the configured policies.
In the previous section, encryption was configured using simple shared secrets. In other words, to remain secure, we need to transfer our encryption configuration over a trusted channel. If we were to configure the remote host over telnet, any third party would know our shared secret and the setup would not be secure.
Furthermore, because the secret is shared, it is not a secret. The remote can't do a lot with our secret, but we do need to make sure that we use a different secret for communicating with all our partners. This requires a large number of keys: with 10 parties there are n(n-1)/2 = 45 pairs, so at least 45 different secrets are needed.
Besides the symmetric key problem, there is also the need for key rollover. If a third party manages to sniff enough traffic, it may be in a position to reverse engineer the key. This is prevented by moving to a new key every once in a while but that is a process that needs to be automated.
Another problem is that with manual keying as described above we exactly define the algorithms and key lengths used, something that requires a lot of coordination with the remote party. It is desirable to be able to describe a broader key policy such as 'We can do 3DES and Blowfish with at least the following key lengths'.
To solve these issues, IPSEC provides Internet Key Exchange to automatically exchange randomly generated keys which are transmitted using asymmetric encryption technology, according to negotiated algorithm details.
The Linux 2.5 IPSEC implementation works with the KAME 'racoon' IKE daemon. As of 9 November, the racoon version in Alexey's iptools distribution can be compiled, although you may need to remove #include
IKE needs access to UDP port 500, be sure that iptables does not block it.
As explained before, automatic keying does a lot of the work for us. Specifically, it creates Security Associations on the fly. It does not however set policy for us, which is as it should be.
So, to benefit from IKE, set up a policy, but do not supply any SAs. If the kernel discovers that there is an IPSEC policy, but no Security Association, it will notify the IKE daemon, which then goes to work on trying to negotiate one.
Reiterating, a Security Policy specifies WHAT we want; a Security Association describes HOW we want it. Using automatic keying lets us get away with only specifying what we want.
Kame racoon comes with a grand host of options, most of which have very fine default values, so we don't need to touch them. As described above, the operator needs to define a Security Policy, but no Security Associations. We leave their negotiation to the IKE daemon.
In this example, 10.0.0.11 and 10.0.0.216 are once again going to set up secure communications, but this time with help from racoon. For simplicity this configuration will be using pre-shared keys, the dreaded 'shared secrets'. X.509 certificates are discussed in a separate section, see Section 7.2.3.
We're going to stick to almost the default configuration, identical on both hosts:
path pre_shared_key "/usr/local/etc/racoon/psk.txt";

remote anonymous
{
        exchange_mode aggressive,main;
        doi ipsec_doi;
        situation identity_only;

        my_identifier address;

        lifetime time 2 min;   # sec,min,hour
        initial_contact on;
        proposal_check obey;   # obey, strict or claim

        proposal {
                encryption_algorithm 3des;
                hash_algorithm sha1;
                authentication_method pre_shared_key;
                dh_group 2 ;
        }
}

sainfo anonymous
{
        pfs_group 1;
        lifetime time 2 min;
        encryption_algorithm 3des ;
        authentication_algorithm hmac_sha1;
        compression_algorithm deflate ;
}
Lots of settings - I think yet more can be removed to get closer to the default configuration. A few noteworthy things. We've configured two anonymous settings which hold for all remotes, making further configuration easy. There is no need for per-host stanzas here, unless we really want them.
Furthermore, we've set it up such that we identify ourselves based on our IP address ('my_identifier address'), and declare that we can do 3des, sha1, and that we will be using a pre-shared key, located in psk.txt.
In psk.txt, we now set up two entries, which do differ on both hosts. On 10.0.0.11:
10.0.0.216   password2

On 10.0.0.216:
10.0.0.11    password2

Make sure these files are owned by root, and set to mode 0600; racoon will not trust their contents otherwise. Note that these files are mirror images of each other.
Now we are ready to set up our desired policy, which is simple enough. On host 10.0.0.216:
#!/sbin/setkey -f
flush;
spdflush;

spdadd 10.0.0.216 10.0.0.11 any -P out ipsec esp/transport//require;
spdadd 10.0.0.11 10.0.0.216 any -P in ipsec esp/transport//require;

And on 10.0.0.11:
#!/sbin/setkey -f
flush;
spdflush;

spdadd 10.0.0.11 10.0.0.216 any -P out ipsec esp/transport//require;
spdadd 10.0.0.216 10.0.0.11 any -P in ipsec esp/transport//require;

Note how again these policies are mirrored.
We are now ready to launch racoon! Once launched, the moment we try to telnet from 10.0.0.11 to 10.0.0.216, or the other way around, racoon will start negotiating:
12:18:44: INFO: isakmp.c:1689:isakmp_post_acquire(): IPsec-SA request for 10.0.0.11 queued due to no phase1 found.
12:18:44: INFO: isakmp.c:794:isakmp_ph1begin_i(): initiate new phase 1 negotiation: 10.0.0.216[500]<=>10.0.0.11[500]
12:18:44: INFO: isakmp.c:799:isakmp_ph1begin_i(): begin Aggressive mode.
12:18:44: INFO: vendorid.c:128:check_vendorid(): received Vendor ID: KAME/racoon
12:18:44: NOTIFY: oakley.c:2037:oakley_skeyid(): couldn't find the proper pskey, try to get one by the peer's address.
12:18:44: INFO: isakmp.c:2417:log_ph1established(): ISAKMP-SA established 10.0.0.216[500]-10.0.0.11[500] spi:044d25dede78a4d1:ff01e5b4804f0680
12:18:45: INFO: isakmp.c:938:isakmp_ph2begin_i(): initiate new phase 2 negotiation: 10.0.0.216[0]<=>10.0.0.11[0]
12:18:45: INFO: pfkey.c:1106:pk_recvupdate(): IPsec-SA established: ESP/Transport 10.0.0.11->10.0.0.216 spi=44556347(0x2a7e03b)
12:18:45: INFO: pfkey.c:1318:pk_recvadd(): IPsec-SA established: ESP/Transport 10.0.0.216->10.0.0.11 spi=15863890(0xf21052)
If we now run setkey -D, which shows the Security Associations, they are indeed there:
10.0.0.216 10.0.0.11
        esp mode=transport spi=224162611(0x0d5c7333) reqid=0(0x00000000)
        E: 3des-cbc 5d421c1b d33b2a9f 4e9055e3 857db9fc 211d9c95 ebaead04
        A: hmac-sha1 c5537d66 f3c5d869 bd736ae2 08d22133 27f7aa99
        seq=0x00000000 replay=4 flags=0x00000000 state=mature
        created: Nov 11 12:28:45 2002   current: Nov 11 12:29:16 2002
        diff: 31(s)     hard: 600(s)    soft: 480(s)
        last: Nov 11 12:29:12 2002      hard: 0(s)      soft: 0(s)
        current: 304(bytes)     hard: 0(bytes)  soft: 0(bytes)
        allocated: 3    hard: 0 soft: 0
        sadb_seq=1 pid=17112 refcnt=0
10.0.0.11 10.0.0.216
        esp mode=transport spi=165123736(0x09d79698) reqid=0(0x00000000)
        E: 3des-cbc d7af8466 acd4f14c 872c5443 ec45a719 d4b3fde1 8d239d6a
        A: hmac-sha1 41ccc388 4568ac49 19e4e024 628e240c 141ffe2f
        seq=0x00000000 replay=4 flags=0x00000000 state=mature
        created: Nov 11 12:28:45 2002   current: Nov 11 12:29:16 2002
        diff: 31(s)     hard: 600(s)    soft: 480(s)
        last:   hard: 0(s)      soft: 0(s)
        current: 231(bytes)     hard: 0(bytes)  soft: 0(bytes)
        allocated: 2    hard: 0 soft: 0
        sadb_seq=0 pid=17112 refcnt=0

As are the Security Policies we configured ourselves:
10.0.0.11[any] 10.0.0.216[any] tcp
        in ipsec
        esp/transport//require
        created: Nov 11 12:28:28 2002  lastused: Nov 11 12:29:12 2002
        lifetime: 0(s) validtime: 0(s)
        spid=3616 seq=5 pid=17134
        refcnt=3
10.0.0.216[any] 10.0.0.11[any] tcp
        out ipsec
        esp/transport//require
        created: Nov 11 12:28:28 2002  lastused: Nov 11 12:28:44 2002
        lifetime: 0(s) validtime: 0(s)
        spid=3609 seq=4 pid=17134
        refcnt=3
If this does not work, check that all configuration files are owned by root, and can only be read by root. To start racoon in the foreground, use '-F'. To force it to read a certain configuration file, instead of the compiled-in location, use '-f'. For staggering amounts of detail, add a 'log debug;' statement to racoon.conf.
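Putting those flags together, a typical debugging session might be started like this (the configuration path is just the example location used above):

# run racoon in the foreground, reading an explicit configuration file
racoon -F -f /usr/local/etc/racoon/racoon.conf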
As mentioned before, the use of shared secrets is hard because they aren't easily shared and once shared, are no longer secret. Luckily, there is asymmetric encryption technology to help resolve this.
If each IPSEC participant makes a public and a private key, secure communications can be set up by both parties publishing their public key, and configuring policy.
Building a key is relatively easy, although it requires some work. The following is based on the 'openssl' tool.
OpenSSL has a lot of infrastructure for keys that may or may not be signed by certificate authorities. Right now, we need to circumvent all that infrastructure and practice some good old Snake Oil security, and do without a certificate authority.
First we issue a 'certificate request' for our host, called 'laptop':
$ openssl req -new -nodes -newkey rsa:1024 -sha1 -keyform PEM -keyout \
  laptop.private -outform PEM -out request.pem

This asks us some questions:
Country Name (2 letter code) [AU]:NL
State or Province Name (full name) [Some-State]:.
Locality Name (eg, city) []:Delft
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Linux Advanced Routing & Traffic Control
Organizational Unit Name (eg, section) []:laptop
Common Name (eg, YOUR name) []:bert hubert
Email Address []:[email protected]

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

It is left to your own discretion how completely you want to fill this out. You may or may not want to put your hostname in there, depending on your security needs. In this example, we have.
We'll now 'self sign' this request:
$ openssl x509 -req -in request.pem -signkey laptop.private -out \
  laptop.public
Signature ok
subject=/C=NL/L=Delft/O=Linux Advanced Routing & Traffic \
  Control/OU=laptop/CN=bert hubert/[email protected]
Getting Private key

The 'request.pem' file can now be discarded.
Repeat this procedure for all hosts you need a key for. You can distribute the '.public' file with impunity, but keep the '.private' one private!
Once we have a public and a private key for our hosts we can tell racoon to use them.
We return to our previous configuration and the two hosts, 10.0.0.11('upstairs') and 10.0.0.216 ('laptop').
To the racoon.conf file on 10.0.0.11, we add:
path certificate "/usr/local/etc/racoon/certs"; remote 10.0.0.216 { exchange_mode aggressive,main; my_identifier asn1dn; peers_identifier asn1dn; certificate_type x509 "upstairs.public" "upstairs.private"; peers_certfile "laptop.public"; proposal { encryption_algorithm 3des; hash_algorithm sha1; authentication_method rsasig; dh_group 2 ; } }This tells racoon that certificates are to be found in /usr/local/etc/racoon/certs/. Furthermore, it containsconfiguration items specific for remote 10.0.0.216.
The 'asn1dn' lines tell racoon that the identifiers for both the local and remote ends are to be extracted from the public keys. This is the 'subject=/C=NL/L=Delft/O=Linux Advanced Routing & Traffic Control/OU=laptop/CN=bert hubert/[email protected]' output from above.
The certificate_type line configures the local public and private key. The peers_certfile statement configures racoon to read the public key of the remote peer from the file laptop.public.
The proposal stanza is unchanged from what we've seen earlier, with the exception that the authentication_method is now rsasig, indicating the use of RSA public/private keys for authentication.
The addition to the configuration of 10.0.0.216 is nearly identical, except for the usual mirroring:
path certificate "/usr/local/etc/racoon/certs";

remote 10.0.0.11
{
        exchange_mode aggressive,main;
        my_identifier asn1dn;
        peers_identifier asn1dn;

        certificate_type x509 "laptop.public" "laptop.private";

        peers_certfile "upstairs.public";

        proposal {
                encryption_algorithm 3des;
                hash_algorithm sha1;
                authentication_method rsasig;
                dh_group 2 ;
        }
}
Now that we've added these statements to both hosts, we only need to move the key files in place. The 'upstairs' machine needs upstairs.private, upstairs.public, and laptop.public in /usr/local/etc/racoon/certs. Make sure that this directory is owned by root and has mode 0700 or racoon may refuse to read it!
The 'laptop' machine needs laptop.private, laptop.public, and upstairs.public in /usr/local/etc/racoon/certs. In other words, each host needs its own public and private key and, additionally, the public key of the remote.
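A minimal sketch of putting the files in place on the 'laptop' machine; fetching the remote public key with scp is just one possibility, any transport you trust will do:

# mkdir -p /usr/local/etc/racoon/certs
# chmod 0700 /usr/local/etc/racoon/certs
# cp laptop.private laptop.public /usr/local/etc/racoon/certs/
# scp [email protected]:/usr/local/etc/racoon/certs/upstairs.public \
    /usr/local/etc/racoon/certs/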
Verify that a Security Policy is in place (execute the 'spdadd' lines in Section 7.2.2). Then launch racoon and everything should work.
To setup secure communications with a remote party, we must exchangepublic keys. While the public key does not need to be kept a secret, on thecontrary, it is very important to be sure that it is in fact the unalteredkey. In other words, you need to be certain there is no 'man in the middle'.
To make this easy, OpenSSL provides the 'digest' command:
$ openssl dgst upstairs.public
MD5(upstairs.public)= 78a3bddafb4d681c1ca8ed4d23da4ff1
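MD5 is what openssl prints by default; if you prefer, the same tool can also produce a SHA1 digest of the key file:

$ openssl dgst -sha1 upstairs.public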
Now all we need to do is verify that our remote partner sees the same digest. This might be done by meeting in real life or perhaps over the phone, making sure the number of the remote party was not in fact sent over the same email containing the key!
Another way of doing this is to use a Trusted Third Party which runs a Certificate Authority. This CA would then sign your key, something we've done ourselves above.
So far, we've only seen IPSEC in so-called 'transport' mode, where both endpoints understand IPSEC directly. As this is often not the case, it may be necessary to have only routers understand IPSEC and have them do the work for the hosts behind them. This is called 'tunnel mode'.
Setting this up is a breeze. To tunnel all traffic to 130.161.0.0/16 from 10.0.0.216 via 10.0.0.11, we issue the following on 10.0.0.216:
#!/sbin/setkey -f
flush;
spdflush;

add 10.0.0.216 10.0.0.11 esp 34501 -m tunnel -E 3des-cbc "123456789012123456789012";

spdadd 10.0.0.0/24 130.161.0.0/16 any -P out ipsec esp/tunnel/10.0.0.216-10.0.0.11/require;
Note the '-m tunnel', it is vitally important! This first configures an ESP encryption SA between our tunnel endpoints, 10.0.0.216 and 10.0.0.11.
Next the actual tunnel is configured. It instructs the kernel to encrypt all traffic it has to route from 10.0.0.0/24 to 130.161.0.0/16. Furthermore, this traffic then has to be shipped to 10.0.0.11.
10.0.0.11 also needs some configuration:
#!/sbin/setkey -f
flush;
spdflush;

add 10.0.0.216 10.0.0.11 esp 34501 -m tunnel -E 3des-cbc "123456789012123456789012";

spdadd 10.0.0.0/24 130.161.0.0/16 any -P in ipsec esp/tunnel/10.0.0.216-10.0.0.11/require;
Note that this is exactly identical, except for the change from '-P out' to '-P in'. As with earlier examples, we've now only configured traffic going one way. Completing the other half of the tunnel is left as an exercise for the reader.
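As a hint for that exercise, here is a sketch of the return direction. The SPI 34502 and the key are made up, pick your own. On 10.0.0.11:

add 10.0.0.11 10.0.0.216 esp 34502 -m tunnel -E 3des-cbc "212109876543210987654321";

spdadd 130.161.0.0/16 10.0.0.0/24 any -P out ipsec esp/tunnel/10.0.0.11-10.0.0.216/require;

On 10.0.0.216 you would use the same 'add' line, plus the matching spdadd with '-P in'.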
Another name for this setup is 'proxy ESP', which is somewhat clearer.
The IPSEC tunnel needs to have IP Forwarding enabled in the kernel!
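On 2.2/2.4 kernels this is usually just a matter of (the same knob is used again in the multicast section below):

# echo 1 > /proc/sys/net/ipv4/ip_forward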
Thomas Walpuski reports that he wrote a patch to make OpenBSD isakmpd work with Linux 2.5 IPSEC. Furthermore, the main isakmpd CVS repository now contains this code! Some notes are on his page.
isakmpd is quite different from racoon mentioned above, but many people like it. It can be found here. Read more about OpenBSD CVS here. Thomas also made a tarball available for those uncomfortable with CVS or patch.
Furthermore, there are patches to make the FreeS/WAN userspace tools work with the native Linux 2.5 IPSEC; you can find them here.
FIXME: Write this
Andreas Jellinghaus
Peter Bieringer reports:
Here are some results (tunnel mode only tested, auth=SHA1):

DES:     ok
3DES:    ok
AES-128: ok
AES-192: not supported by CP VPN-1
AES-256: ok
CAST*:   not supported by used Linux kernel

Tested version: FP4 aka R54 aka w/AI
More information here.
FIXME: Editor Vacancy!
The Multicast-HOWTO is ancient (relatively speaking) and may be inaccurate or misleading in places for that reason.
Before you can do any multicast routing, you need to configure the Linuxkernel to support the type of multicast routing you want to do. This, inturn, requires you to decide what type of multicast routing you expect tobe using. There are essentially four "common" types - DVMRP (the Multicastversion of the RIP unicast protocol), MOSPF (the same, but for OSPF), PIM-SM("Protocol Independent Multicasting - Sparse Mode", which assumes that usersof any multicast group are spread out, rather than clumped) and PIM-DM (thesame, but "Dense Mode", which assumes that there will be significant clumpsof users of the same multicast group).
In the Linux kernel, you will notice that these options don't appear. This is because the protocol itself is handled by a routing application, such as Zebra, mrouted, or pimd. However, you still have to have a good idea of which you're going to use, to select the right options in the kernel.
For all multicast routing, you will definitely need to enable "multicasting" and "multicast routing". For DVMRP and MOSPF, this is sufficient. If you are going to use PIM, you must also enable PIMv1 or PIMv2, depending on whether the network you are connecting to uses version 1 or 2 of the PIM protocol.
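As a rough sketch, the corresponding 2.4 kernel configuration options look like this; the exact symbol names may differ between kernel versions, so treat them as an illustration rather than gospel:

CONFIG_IP_MULTICAST=y
CONFIG_IP_MROUTE=y
# only needed when you will be talking PIM:
CONFIG_IP_PIMSM_V1=y
CONFIG_IP_PIMSM_V2=y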
Once you have all that sorted out, and your new Linux kernel compiled, you will see that the IP protocols listed at boot time now include IGMP. This is a protocol for managing multicast groups. At the time of writing, Linux supports IGMP versions 1 and 2 only, although version 3 does exist and has been documented. This doesn't really affect us much, as IGMPv3 is still new enough that its extra capabilities aren't going to be of that much use. Because IGMP deals with groups, only the features present in the simplest version of IGMP over the entire group are going to be used. For the most part, that will be IGMPv2, although IGMPv1 is still going to be encountered.
So far, so good. We've enabled multicasting. Now, we have to tell the Linux kernel to actually do something with it, so we can start routing. This means adding the Multicast virtual network to the router table:
ip route add 224.0.0.0/4 dev eth0
(Assuming, of course, that you're multicasting over eth0! Substitute the device of your choice.)
Now, tell Linux to forward packets...
echo 1 > /proc/sys/net/ipv4/ip_forward
At this point, you may be wondering if this is ever going to do anything. So,to test our connection, we ping the default group, 224.0.0.1, to see if anyoneis alive. All machines on your LAN with multicasting enabled shouldrespond, but nothing else. You'll notice that none of the machines thatrespond have an IP address of 224.0.0.1. What a surprise! :) This is a groupaddress (a "broadcast" to subscribers), and all members of the group willrespond with their own address, not the group address.
ping -c 2 224.0.0.1
At this point, you're ready to do actual multicast routing. Well, assumingthat you have two networks to route between.
(To Be Continued!)
Now, when I discovered this, it really blew me away. Linux 2.2/2.4 comes with everything to manage bandwidth in ways comparable to high-end dedicated bandwidth management systems.
Linux even goes far beyond what Frame and ATM provide.
Just to prevent confusion, tc uses the following rules for bandwidth specification:
mbps = 1024 kbps = 1024 * 1024 bps => byte/s
mbit = 1024 kbit => kilo bit/s.
mb = 1024 kb = 1024 * 1024 b => byte
mbit = 1024 kbit => kilo bit.
Internally, the number is stored in bps and b.
But when tc prints the rate, it uses following :
1Mbit = 1024 Kbit = 1024 * 1024 bps => byte/s
With queueing we determine the way in which data is SENT.It is important to realise that we can only shape data that we transmit.
With the way the Internet works, we have no direct control of what peoplesend us. It's a bit like your (physical!) mailbox at home. There is no wayyou can influence the world to modify the amount of mail they send you,short of contacting everybody.
However, the Internet is mostly based on TCP/IP which has a few featuresthat help us. TCP/IP has no way of knowing the capacity of the networkbetween two hosts, so it just starts sending data faster and faster ('slowstart') and when packets start getting lost, because there is no room tosend them, it will slow down. In fact it is a bit smarter than this, butmore about that later.
This is the equivalent of not reading half of your mail, and hoping thatpeople will stop sending it to you. With the difference that it works forthe Internet :-)
If you have a router and wish to prevent certain hosts within your network from downloading too fast, you need to do your shaping on the *inner* interface of your router, the one that sends data to your own computers.
You also have to be sure you are controlling the bottleneck of the link. If you have a 100Mbit NIC and a router with a 256kbit link, you have to make sure you are not sending more data than your router can handle. Otherwise, it will be the router that controls the link and shapes the available bandwidth. We need to 'own the queue', so to speak, and be the slowest link in the chain. Luckily this is easily possible.
As said, with queueing disciplines we change the way data is sent. Classless queueing disciplines are those that, by and large, accept data and only reschedule, delay or drop it.
These can be used to shape traffic for an entire interface, without any subdivisions. It is vital that you understand this part of queueing before we go on to the classful qdisc-containing-qdiscs!
By far the most widely used discipline is the pfifo_fast qdisc - this is thedefault. This also explains why these advanced features are so robust. Theyare nothing more than 'just another queue'.
Each of these queues has specific strengths and weaknesses. Not all of themmay be as well tested.
This queue is, as the name says, First In, First Out, which means that nopacket receives special treatment. At least, not quite. This queue has 3 socalled 'bands'. Within each band, FIFO rules apply. However, as long asthere are packets waiting in band 0, band 1 won't be processed. Same goesfor band 1 and band 2.
The kernel honors the so called Type of Service flag of packets, and takescare to insert 'minimum delay' packets in band 0.
Do not confuse this classless simple qdisc with the classful PRIO one! Although they behave similarly, pfifo_fast is classless and you cannot add other qdiscs to it with the tc command.
You can't configure the pfifo_fast qdisc as it is the hardwired default. This is how it is configured by default:
priomap
Determines how packet priorities, as assigned by the kernel, map to bands. Mapping occurs based on the TOS octet of the packet, which looks like this:
   0     1     2     3     4     5     6     7
+-----+-----+-----+-----+-----+-----+-----+-----+
|                 |                       |     |
|   PRECEDENCE    |          TOS          | MBZ |
|                 |                       |     |
+-----+-----+-----+-----+-----+-----+-----+-----+
The four TOS bits (the 'TOS field') are defined as:
Binary  Decimal  Meaning
-----------------------------------------
1000    8        Minimize delay (md)
0100    4        Maximize throughput (mt)
0010    2        Maximize reliability (mr)
0001    1        Minimize monetary cost (mmc)
0000    0        Normal Service
As there is 1 bit to the right of these four bits, the actual value of the TOS field is double the value of the TOS bits. tcpdump -v -v shows you the value of the entire TOS field, not just the four bits. It is the value you see in the first column of this table:
TOS     Bits  Means                    Linux Priority    Band
------------------------------------------------------------
0x0     0     Normal Service           0 Best Effort     1
0x2     1     Minimize Monetary Cost   1 Filler          2
0x4     2     Maximize Reliability     0 Best Effort     1
0x6     3     mmc+mr                   0 Best Effort     1
0x8     4     Maximize Throughput      2 Bulk            2
0xa     5     mmc+mt                   2 Bulk            2
0xc     6     mr+mt                    2 Bulk            2
0xe     7     mmc+mr+mt                2 Bulk            2
0x10    8     Minimize Delay           6 Interactive     0
0x12    9     mmc+md                   6 Interactive     0
0x14    10    mr+md                    6 Interactive     0
0x16    11    mmc+mr+md                6 Interactive     0
0x18    12    mt+md                    4 Int. Bulk       1
0x1a    13    mmc+mt+md                4 Int. Bulk       1
0x1c    14    mr+mt+md                 4 Int. Bulk       1
0x1e    15    mmc+mr+mt+md             4 Int. Bulk       1
Lots of numbers. The second column contains the value of the relevant four TOS bits, followed by their translated meaning. For example, 15 stands for a packet wanting Minimal Monetary Cost, Maximum Reliability, Maximum Throughput AND Minimum Delay. I would call this a 'Dutch Packet'.
The fourth column lists the way the Linux kernel interprets the TOS bits, by showing to which Priority they are mapped.
The last column shows the result of the default priomap. On the command line, the default priomap looks like this:
1, 2, 2, 2, 1, 2, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1
This means that priority 4, for example, gets mapped to band number 1. The priomap also allows you to list higher priorities (> 7) which do not correspond to TOS mappings, but which are set by other means.
This table from RFC 1349 (read it for more details) tells you how applications might very well set their TOS bits:
TELNET                   1000           (minimize delay)
FTP
        Control          1000           (minimize delay)
        Data             0100           (maximize throughput)
TFTP                     1000           (minimize delay)
SMTP
        Command phase    1000           (minimize delay)
        DATA phase       0100           (maximize throughput)
Domain Name Service
        UDP Query        1000           (minimize delay)
        TCP Query        0000
        Zone Transfer    0100           (maximize throughput)
NNTP                     0001           (minimize monetary cost)
ICMP
        Errors           0000
        Requests         0000           (mostly)
        Responses                       (mostly)
txqueuelen
The length of this queue is gleaned from the interface configuration, which you can see and set with ifconfig and ip. To set the queue length to 10, execute: ifconfig eth0 txqueuelen 10
You can't set this parameter with tc!
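With a reasonably recent iproute2, the same can also be done with ip instead of ifconfig:

# ip link set dev eth0 txqueuelen 10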
The Token Bucket Filter (TBF) is a simple qdisc that only passes packets arriving at a rate not exceeding some administratively set rate, with the possibility to allow short bursts in excess of this rate.
TBF is very precise, network- and processor friendly. It should be your first choice if you simply want to slow an interface down!
The TBF implementation consists of a buffer (bucket), constantly filled bysome virtual pieces of information called tokens, at a specific rate (tokenrate). The most important parameter of the bucket is its size, that is thenumber of tokens it can store.
Each arriving token collects one incoming data packet from the data queueand is then deleted from the bucket. Associating this algorithmwith the two flows -- token and data, gives us three possible scenarios:
The data arrives in TBF at a rate that's equal to the rateof incoming tokens. In this case each incoming packet has its matching tokenand passes the queue without delay.
The data arrives in TBF at a rate that's smaller than thetoken rate. Only a part of the tokens are deleted at output of each data packetthat's sent out the queue, so the tokens accumulate, up to the bucket size.The unused tokens can then be used to send data at a speed that's exceeding thestandard token rate, in case short data bursts occur.
The data arrives in TBF at a rate bigger than the token rate.This means that the bucket will soon be devoid of tokens, which causes theTBF to throttle itself for a while. This is called an 'overlimit situation'.If packets keep coming in, packets will start to get dropped.
The last scenario is very important, because it allows you to administratively shape the bandwidth available to data that's passing the filter.
The accumulation of tokens allows a short burst of overlimit data to still be passed without loss, but any lasting overload will cause packets to be constantly delayed, and then dropped.
Please note that in the actual implementation, tokens correspond to bytes,not packets.
Even though you will probably not need to change them, TBF has some knobs available. First the parameters that are always available:
limit or latency
Limit is the number of bytes that can be queued waiting for tokens to become available. You can also specify this the other way around by setting the latency parameter, which specifies the maximum amount of time a packet can sit in the TBF. The latter calculation takes into account the size of the bucket, the rate and possibly the peakrate (if set).
burst/buffer/maxburst
Size of the bucket, in bytes. This is the maximum amount of bytes that tokens can be available for instantaneously. In general, larger shaping rates require a larger buffer. For 10mbit/s on Intel, you need at least a 10kbyte buffer if you want to reach your configured rate!
If your buffer is too small, packets may be dropped because more tokens arrive per timer tick than fit in your bucket.
mpu
A zero-sized packet does not use zero bandwidth. For ethernet, no packet uses less than 64 bytes. The Minimum Packet Unit determines the minimal token usage for a packet.
rate
The speed knob. See remarks above about limits!
If the bucket contains tokens and is allowed to empty, by default it does so at infinite speed. If this is unacceptable, use the following parameters:
peakrate
If tokens are available, and packets arrive, they are sent out immediately by default, at 'lightspeed' so to speak. That may not be what you want, especially if you have a large bucket.
The peakrate can be used to specify how quickly the bucket is allowed to be depleted. If doing everything by the book, this is achieved by releasing a packet, waiting just long enough, and then releasing the next. We calculate our waits so that we send just at peakrate.
However, due to the default 10ms timer resolution of Unix, with 10,000 bit average packets, we are limited to 1mbit/s of peakrate!
mtu/minburst
The 1mbit/s peakrate is not very useful if your regular rate is more than that. A higher peakrate is possible by sending out more packets per timer tick, which effectively means that we create a second bucket!
This second bucket defaults to a single packet, which is not a bucket at all.
To calculate the maximum possible peakrate, multiply the configured mtu by 100 (or more correctly, HZ, which is 100 on Intel, 1024 on Alpha).
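A sketch of what a TBF with a peakrate might look like; the numbers are purely illustrative and simply exercise the parameters described above:

# tc qdisc add dev eth0 root tbf rate 0.5mbit \
  burst 5kb latency 70ms peakrate 1mbit minburst 1540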
A simple but *very* useful configuration is this:
# tc qdisc add dev ppp0 root tbf rate 220kbit latency 50ms burst 1540
Ok, why is this useful? If you have a networking device with a large queue, like a DSL modem or a cable modem, and you talk to it over a fast device, like an ethernet interface, you will find that uploading absolutely destroys interactivity.
This is because uploading will fill the queue in the modem, which is probably *huge* because this helps actually achieving good upload throughput. But this is not what you want; you want the queue to not be too big, so interactivity remains and you can still do other stuff while sending data.
The line above slows down sending to a rate that does not lead to a queue in the modem - the queue will be in Linux, where we can control it to a limited size.
Change 220kbit to your uplink's *actual* speed, minus a few percent. If you have a really fast modem, raise 'burst' a bit.
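For example, on a 512kbit uplink one might start with something like the line below and then tune the rate from there:

# tc qdisc add dev ppp0 root tbf rate 480kbit latency 50ms burst 1540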
Stochastic Fairness Queueing (SFQ) is a simple implementation of the fair queueing algorithms family. It's less accurate than others, but it also requires fewer calculations while being almost perfectly fair.
The key word in SFQ is conversation (or flow), which mostly corresponds to a TCP session or a UDP stream. Traffic is divided into a pretty large number of FIFO queues, one for each conversation. Traffic is then sent in a round robin fashion, giving each session the chance to send data in turn.
This leads to very fair behaviour and disallows any single conversation from drowning out the rest. SFQ is called 'Stochastic' because it doesn't really allocate a queue for each session; it has an algorithm which divides traffic over a limited number of queues using a hashing algorithm.
Because of the hash, multiple sessions might end up in the same bucket, which would halve each session's chance of sending a packet, thus halving the effective speed available. To prevent this situation from becoming noticeable, SFQ changes its hashing algorithm quite often so that any two colliding sessions will only do so for a small number of seconds.
It is important to note that SFQ is only useful in case your actual outgoing interface is really full! If it isn't, then there will be no queue on your Linux machine and hence no effect. Later on we will describe how to combine SFQ with other qdiscs to get a best-of-both-worlds situation.
Specifically, setting SFQ on the ethernet interface heading to your cable modem or DSL router is pointless without further shaping!
The SFQ is pretty much self tuning:
perturb
Reconfigure hashing once every this many seconds. If unset, the hash will never be reconfigured. Not recommended. 10 seconds is probably a good value.
quantum
Amount of bytes a stream is allowed to dequeue before the next queue gets a turn. Defaults to 1 maximum sized packet (MTU-sized). Do not set below the MTU!
limit
The total number of packets that will be queued by this SFQ (after that it starts dropping them).
If you have a device which has identical link speed and actual availablerate, like a phone modem, this configuration will help promote fairness:
# tc qdisc add dev ppp0 root sfq perturb 10
# tc -s -d qdisc ls
qdisc sfq 800c: dev ppp0 quantum 1514b limit 128p flows 128/1024 perturb 10sec
 Sent 4812 bytes 62 pkts (dropped 0, overlimits 0)
The number 800c: is the automatically assigned handle number; limit means that 128 packets can wait in this queue. There are 1024 hash buckets available for accounting, of which 128 can be active at a time (no more packets fit in the queue!). Once every 10 seconds, the hashes are reconfigured.
Summarizing, these are the simple queues that actually manage traffic by reordering, slowing or dropping packets.
The following tips may help in choosing which queue to use. It mentions some qdiscs described in Chapter 14.
To purely slow down outgoing traffic, use the Token Bucket Filter. Works up to huge bandwidths, if you scale the bucket.
If your link is truly full and you want to make sure that no single session can dominate your outgoing bandwidth, use Stochastic Fairness Queueing.
If you have a big backbone and know what you are doing, consider Random Early Drop (see Advanced chapter).
To 'shape' incoming traffic which you are not forwarding, use the Ingress Policer. Incoming shaping is called 'policing', by the way, not 'shaping'.
If you *are* forwarding it, use a TBF on the interface you are forwarding the data to. Unless you want to shape traffic that may go out over several interfaces, in which case the only common factor is the incoming interface. In that case use the Ingress Policer.
If you don't want to shape, but only want to see if your interface is so loaded that it has to queue, use the pfifo queue (not pfifo_fast). It lacks internal bands but does account the size of its backlog.
Finally - you can also do "social shaping". You may not always be able to use technology to achieve what you want. Users experience technical constraints as hostile. A kind word may also help with getting your bandwidth to be divided right!
To properly understand more complicated configurations it is necessary to explain a few concepts first. Because of the complexity and the relative youth of the subject, a lot of different words are used when people in fact mean the same thing.
The following is loosely based on draft-ietf-diffserv-model-06.txt, An Informal Management Model for Diffserv Routers. It can currently be found at http://www.ietf.org/internet-drafts/draft-ietf-diffserv-model-06.txt.
Read it for the strict definitions of the terms used.
An algorithm that manages the queue of a device, either incoming (ingress)or outgoing (egress).
The root qdisc is the qdisc attached to the device.
A qdisc with no configurable internal subdivisions.
A classful qdisc contains multiple classes. Some of these classes contain a further qdisc, which may again be classful, but need not be. According to the strict definition, pfifo_fast *is* classful, because it contains three bands which are, in fact, classes. However, from the user's configuration perspective, it is classless as the classes can't be touched with the tc tool.
A classful qdisc may have many classes, each of which is internal to the qdisc. A class, in turn, may have several classes added to it. So a class can have either a qdisc or another class as its parent. A leaf class is a class with no child classes. This class has 1 qdisc attached to it. This qdisc is responsible for sending the data from that class. When you create a class, a fifo qdisc is attached to it. When you add a child class, this qdisc is removed. For a leaf class, this fifo qdisc can be replaced with another, more suitable qdisc. You can even replace this fifo qdisc with a classful qdisc so you can add extra classes.
Each classful qdisc needs to determine to which class it needs to send apacket. This is done using the classifier.
Classification can be performed using filters. A filter contains a number ofconditions which if matched, make the filter match.
A qdisc may, with the help of a classifier, decide that some packets need to go out earlier than others. This process is called Scheduling, and is performed for example by the pfifo_fast qdisc mentioned earlier. Scheduling is also called 'reordering', but this is confusing.
The process of delaying packets before they go out to make traffic conform to a configured maximum rate. Shaping is performed on egress. Colloquially, dropping packets to slow traffic down is also often called Shaping.
Delaying or dropping packets in order to make traffic stay below a configured bandwidth. In Linux, policing can only drop a packet and not delay it - there is no 'ingress queue'.
A work-conserving qdisc always delivers a packet if one is available. In other words, it never delays a packet if the network adaptor is ready to send one (in the case of an egress qdisc).
Some queues, like for example the Token Bucket Filter, may need to hold on to a packet for a certain time in order to limit the bandwidth. This means that they sometimes refuse to pass a packet, even though they have one available.
Now that we have our terminology straight, let's see where all these thingsare.
Userspace programs
                     ^
                     |
     +---------------+-----------------------------------------+
     |               Y                                          |
     |    -------> IP Stack                                     |
     |   |              |                                       |
     |   |              Y                                       |
     |   |              Y                                       |
     |   ^              |                                       |
     |   |  / ----------> Forwarding ->                         |
     |   ^ /                           |                        |
     |   |/                            Y                        |
     |   |                             |                        |
     |   ^                             Y          /-qdisc1-\    |
     |   |                            Egress     /--qdisc2--\   |
  --->->Ingress                       Classifier ---qdisc3----  | ->
     |   Qdisc                                   \__qdisc4__/   |
     |                                            \-qdiscN_/    |
     |                                                          |
     +----------------------------------------------------------+
Thanks to Jamal Hadi Salim for this ASCII representation.
The big block represents the kernel. The leftmost arrow represents trafficentering your machine from the network. It is then fed to the IngressQdisc which may apply Filters to a packet, and decide to drop it. Thisis called 'Policing'.
This happens at a very early stage, before it has seen a lot of the kernel.It is therefore a very good place to drop traffic very early, withoutconsuming a lot of CPU power.
If the packet is allowed to continue, it may be destined for a localapplication, in which case it enters the IP stack in order to be processed,and handed over to a userspace program. The packet may also be forwardedwithout entering an application, in which case it is destined for egress.Userspace programs may also deliver data, which is then examined andforwarded to the Egress Classifier.
There it is investigated and enqueued to any of a number of qdiscs. In theunconfigured default case, there is only one egress qdisc installed, thepfifo_fast, which always receives the packet. This is called 'enqueueing'.
The packet now sits in the qdisc, waiting for the kernel to ask forit for transmission over the network interface. This is called 'dequeueing'.
This picture also holds in case there is only one network adaptor - thearrows entering and leaving the kernel should not be taken too literally.Each network adaptor has both ingress and egress hooks.
Classful qdiscs are very useful if you have different kinds of traffic which should have differing treatment. One of the classful qdiscs is called 'CBQ', 'Class Based Queueing', and it is so widely mentioned that people often identify classful queueing solely with CBQ, but this is not the case.
CBQ is merely the oldest kid on the block - and also the most complex one. It may not always do what you want. This may come as something of a shock to many who fell for the 'sendmail effect', which teaches us that any complex technology which doesn't come with documentation must be the best available.
More about CBQ and its alternatives shortly.
When traffic enters a classful qdisc, it needs to be sent to any of theclasses within - it needs to be 'classified'. To determine what to do with apacket, the so called 'filters' are consulted. It is important to know thatthe filters are called from within a qdisc, and not the other way around!
The filters attached to that qdisc then return with a decision, and theqdisc uses this to enqueue the packet into one of the classes. Each subclassmay try other filters to see if further instructions apply. If not, theclass enqueues the packet to the qdisc it contains.
Besides containing other qdiscs, most classful qdiscs also perform shaping.This is useful to perform both packet scheduling (with SFQ, for example) andrate control. You need this in cases where you have a high speedinterface (for example, ethernet) to a slower device (a cable modem).
If you were only to run SFQ, nothing would happen, as packets enter &leave your router without delay: the output interface is far faster thanyour actual link speed. There is no queue to schedule then.
Each interface has one egress 'root qdisc'. By default, it is the earlier mentioned classless pfifo_fast queueing discipline. Each qdisc and class is assigned a handle, which can be used by later configuration statements to refer to that qdisc. Besides an egress qdisc, an interface may also have an ingress qdisc, which polices traffic coming in.
The handles of these qdiscs consist of two parts, a major number and a minor number:
Classes need to have the same major number as their parent. This major number must be unique within an egress or ingress setup. The minor number must be unique within a qdisc and its classes.
Recapping, a typical hierarchy might look like this:
                    1:   root qdisc
                     |
                    1:1    child class
                   /  |  \
                  /   |   \
                 /    |    \
                /     |     \
              1:10  1:11  1:12   child classes
               |      |     |
               |     11:    |    leaf class
               |            |
              10:          12:   qdisc
             /   \        /   \
          10:1  10:2   12:1  12:2   leaf classes
But don't let this tree fool you! You should *not* imagine the kernel to beat the apex of the tree and the network below, that is just not the case.Packets get enqueued and dequeued at the root qdisc, which is the only thingthe kernel talks to.
A packet might get classified in a chain like this:
1: -> 1:1 -> 1:12 -> 12: -> 12:2
The packet now resides in a queue in a qdisc attached to class 12:2. In thisexample, a filter was attached to each 'node' in the tree, each choosing abranch to take next. This can make sense. However, this is also possible:
1: -> 12:2
In this case, a filter attached to the root decided to send the packetdirectly to 12:2.
When the kernel decides that it needs to extract packets to send to theinterface, the root qdisc 1: gets a dequeue request, which is passed to1:1, which is in turn passed to 10:, 11: and 12:, each of which queries itssiblings, and tries to dequeue() from them. In this case, the kernel needs towalk the entire tree, because only 12:2 contains a packet.
In short, nested classes ONLY talk to their parent qdiscs, never to aninterface. Only the root qdisc gets dequeued by the kernel!
The upshot of this is that classes never get dequeued faster than theirparents allow. And this is exactly what we want: this way we can have SFQ inan inner class, which doesn't do any shaping, only scheduling, and have ashaping outer qdisc, which does the shaping.
The PRIO qdisc doesn't actually shape, it only subdivides traffic based onhow you configured your filters. You can consider the PRIO qdisc a kindof pfifo_fast on steroids, whereby each band is a separate class instead ofa simple FIFO.
When a packet is enqueued to the PRIO qdisc, a class is chosen based on thefilter commands you gave. By default, three classes are created. Theseclasses by default contain pure FIFO qdiscs with no internalstructure, but you can replace these by any qdisc you have available.
Whenever a packet needs to be dequeued, class :1 is tried first. Higher classes are only used if the lower bands did not yield a packet.
This qdisc is very useful in case you want to prioritize certain kinds of traffic without using only TOS flags but using all the power of the tc filters. You can also add another qdisc to the 3 predefined classes, whereas pfifo_fast is limited to simple fifo qdiscs.
Because it doesn't actually shape, the same warning as for SFQ holds: either use it only if your physical link is really full or wrap it inside a classful qdisc that does shape. The latter holds for almost all cable modems and DSL devices.
In formal words, the PRIO qdisc is a Work-Conserving scheduler.
The following parameters are recognized by tc:
bands
Number of bands to create. Each band is in fact a class. If you change this number, you must also change:
priomap
If you do not provide tc filters to classify traffic, the PRIO qdisc looks at the TC_PRIO priority to decide how to enqueue traffic.
This works just like the pfifo_fast qdisc mentioned earlier, see there for lots of detail.
Reiterating, band 0 goes to minor number 1! Band 1 to minor number 2, etc.
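As a sketch, a PRIO qdisc with more than the default three bands might be created like this; note that the priomap is then given explicitly and may only reference bands that actually exist:

# tc qdisc add dev eth0 root handle 1: prio bands 5 \
  priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1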
We will create this tree:
          1:   root qdisc
         / | \
        /  |  \
       /   |   \
     1:1  1:2  1:3    classes
      |    |    |
     10:  20:  30:    qdiscs    qdiscs
     sfq  tbf  sfq
band  0    1    2
Bulk traffic will go to 30:, interactive traffic to 20: or 10:.
Command lines:
# tc qdisc add dev eth0 root handle 1: prio
## This *instantly* creates classes 1:1, 1:2, 1:3

# tc qdisc add dev eth0 parent 1:1 handle 10: sfq
# tc qdisc add dev eth0 parent 1:2 handle 20: tbf rate 20kbit buffer 1600 limit 3000
# tc qdisc add dev eth0 parent 1:3 handle 30: sfq
Now let's see what we created:
# tc -s qdisc ls dev eth0
qdisc sfq 30: quantum 1514b
 Sent 0 bytes 0 pkts (dropped 0, overlimits 0)

qdisc tbf 20: rate 20Kbit burst 1599b lat 667.6ms
 Sent 0 bytes 0 pkts (dropped 0, overlimits 0)

qdisc sfq 10: quantum 1514b
 Sent 132 bytes 2 pkts (dropped 0, overlimits 0)

qdisc prio 1: bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
 Sent 174 bytes 3 pkts (dropped 0, overlimits 0)
As you can see, band 0 has already had some traffic, and one packet was sent while running this command!
We now do some bulk data transfer with a tool that properly sets TOS flags,and take another look:
# scp tc [email protected]:./
[email protected]'s password:
tc                 100% |*****************************|   353 KB    00:00
# tc -s qdisc ls dev eth0
qdisc sfq 30: quantum 1514b
 Sent 384228 bytes 274 pkts (dropped 0, overlimits 0)

qdisc tbf 20: rate 20Kbit burst 1599b lat 667.6ms
 Sent 2640 bytes 20 pkts (dropped 0, overlimits 0)

qdisc sfq 10: quantum 1514b
 Sent 2230 bytes 31 pkts (dropped 0, overlimits 0)

qdisc prio 1: bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
 Sent 389140 bytes 326 pkts (dropped 0, overlimits 0)
As you can see, all traffic went to handle 30:, which is the lowest priority band, just as intended. Now to verify that interactive traffic goes to higher bands, we create some interactive traffic:
# tc -s qdisc ls dev eth0
qdisc sfq 30: quantum 1514b
 Sent 384228 bytes 274 pkts (dropped 0, overlimits 0)

qdisc tbf 20: rate 20Kbit burst 1599b lat 667.6ms
 Sent 2640 bytes 20 pkts (dropped 0, overlimits 0)

qdisc sfq 10: quantum 1514b
 Sent 14926 bytes 193 pkts (dropped 0, overlimits 0)

qdisc prio 1: bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
 Sent 401836 bytes 488 pkts (dropped 0, overlimits 0)
It worked - all additional traffic has gone to 10:, which is our highest priority qdisc. No traffic was sent to the lowest priority, which previously received our entire scp.
As said before, CBQ is the most complex qdisc available, the most hyped, the least understood, and probably the trickiest one to get right. This is not because the authors are evil or incompetent, far from it, it's just that the CBQ algorithm isn't all that precise and doesn't really match the way Linux works.
Besides being classful, CBQ is also a shaper and it is in that aspect that it really doesn't work very well. It should work like this. If you try to shape a 10mbit/s connection to 1mbit/s, the link should be idle 90% of the time. If it isn't, we need to throttle so that it IS idle 90% of the time.
This is pretty hard to measure, so CBQ instead derives the idle time from the number of microseconds that elapse between requests from the hardware layer for more data. Combined, this can be used to approximate how full or empty the link is.
This is rather tortuous and doesn't always arrive at proper results. For example, what is the actual link speed of an interface that is not really able to transmit the full 100mbit/s of data, perhaps because of a badly implemented driver? A PCMCIA network card will also never achieve 100mbit/s because of the way the bus is designed - again, how do we calculate the idle time?
It gets even worse if we consider not-quite-real network devices like PPP over Ethernet or PPTP over TCP/IP. The effective bandwidth in that case is probably determined by the efficiency of pipes to userspace - which is huge.
People who have done measurements discover that CBQ is not always very accurate and sometimes completely misses the mark.
In many circumstances however it works well. With the documentation provided here, you should be able to configure it to work well in most cases.
As said before, CBQ works by making sure that the link is idle just longenough to bring down the real bandwidth to the configured rate. To do so, itcalculates the time that should pass between average packets.
During operations, the effective idletime is measured using an exponential weighted moving average (EWMA), which considers recent packets to be exponentially more important than past ones. The UNIX loadaverage is calculated in the same way.
The calculated idle time is subtracted from the EWMA measured one, theresulting number is called 'avgidle'. A perfectly loaded link has an avgidleof zero: packets arrive exactly once every calculated interval.
An overloaded link has a negative avgidle and if it gets too negative, CBQshuts down for a while and is then 'overlimit'.
Conversely, an idle link might amass a huge avgidle, which would then allowinfinite bandwidths after a few hours of silence. To prevent this, avgidle iscapped at maxidle.
If overlimit, in theory, the CBQ could throttle itself for exactly theamount of time that was calculated to pass between packets, and then passone packet, and throttle again. But see the 'minburst' parameter below.
These are parameters you can specify in order to configure shaping:
avpkt
Average size of a packet, measured in bytes. Needed for calculating maxidle, which is derived from maxburst, which is specified in packets.
bandwidth
The physical bandwidth of your device, needed for idle time calculations.
cell
The time a packet takes to be transmitted over a device may grow in steps, based on the packet size. An 800 byte and an 806 byte packet may take just as long to send, for example - this sets the granularity. Most often set to '8'. Must be an integral power of two.
maxburst
This number of packets is used to calculate maxidle so that when avgidle is at maxidle, this number of average packets can be burst before avgidle drops to 0. Set it higher to be more tolerant of bursts. You can't set maxidle directly, only via this parameter.
minburst
As mentioned before, CBQ needs to throttle in case of overlimit. The ideal solution is to do so for exactly the calculated idle time, and pass 1 packet. For Unix kernels, however, it is generally hard to schedule events shorter than 10ms, so it is better to throttle for a longer period, then pass minburst packets in one go, and then sleep minburst times longer.
The time to wait is called the offtime. Higher values of minburst lead to more accurate shaping in the long term, but to bigger bursts at millisecond timescales.
minidle
If avgidle is below 0, we are overlimits and need to wait until avgidle is big enough to send one packet. To prevent a sudden burst from shutting down the link for a prolonged period of time, avgidle is reset to minidle if it gets too low.
Minidle is specified in negative microseconds, so 10 means that avgidle is capped at -10us.
mpu
Minimum packet size - needed because even a zero size packet is padded to 64 bytes on ethernet, and so takes a certain time to transmit. CBQ needs to know this to accurately calculate the idle time.
rate
Desired rate of traffic leaving this qdisc - this is the 'speed knob'!
Internally, CBQ has a lot of fine tuning. For example, classes which areknown not to have data enqueued to them aren't queried. Overlimit classesare penalized by lowering their effective priority. All very smart &complicated.
Besides shaping, using the aforementioned idletime approximations, CBQ alsoacts like the PRIO queue in the sense that classes can have differingpriorities and that lower priority numbers will be polled before the higherpriority ones.
Each time a packet is requested by the hardware layer to be sent out to thenetwork, a weighted round robin process ('WRR') starts, beginning with thelower-numbered priority classes.
These are then grouped and queried if they have data available. If so, it isreturned. After a class has been allowed to dequeue a number of bytes, thenext class within that priority is tried.
The following parameters control the WRR process:
allot
When the outer CBQ is asked for a packet to send out on the interface, it will try all inner qdiscs (in the classes) in turn, in order of the 'priority' parameter. Each time a class gets its turn, it can only send out a limited amount of data. 'Allot' is the base unit of this amount. See the 'weight' parameter for more information.
prio
The CBQ can also act like the PRIO device. Inner classes with higher priority are tried first and, as long as they have traffic, other classes are not polled for traffic.
weight
Weight helps in the Weighted Round Robin process. Each class gets a chance to send in turn. If you have classes with significantly more bandwidth than other classes, it makes sense to allow them to send more data in one round than the others.
A CBQ adds up all weights under a class, and normalizes them, so you can use arbitrary numbers: only the ratios are important. People have been using 'rate/10' as a rule of thumb and it appears to work well. The renormalized weight is multiplied by the 'allot' parameter to determine how much data can be sent in one round.
Please note that all classes within an CBQ hierarchy need to share the samemajor number!
Besides purely limiting certain kinds of traffic, it is also possible tospecify which classes can borrow capacity from other classes or, conversely,lend out bandwidth.
A class that is configured with 'isolated' will not lend out bandwidth tosibling classes. Use this if you have competing or mutually-unfriendlyagencies on your link who do not want to give each other freebies.
The control program tc also knows about 'sharing', which is the reverse of 'isolated'.
A class can also be 'bounded', which means that it will not try to borrowbandwidth from sibling classes. tc also knows about 'borrow', which is thereverse of 'bounded'.
Within such an agency class, there might be other classes which are allowedto swap bandwidth.
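As a sketch, these keywords are simply appended to 'tc class add'; the classid and rate here are made up for illustration:

# tc class add dev eth0 parent 1:1 classid 1:5 cbq bandwidth 100Mbit \
  rate 2Mbit allot 1514 cell 8 weight 200Kbit prio 5 maxburst 20 \
  avpkt 1000 bounded isolated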
               1:           root qdisc
               |
              1:1           child class
             /   \
            /     \
          1:3    1:4        leaf classes
           |      |
          30:    40:        qdiscs
         (sfq)  (sfq)
This configuration limits webserver traffic to 5mbit and SMTP traffic to 3mbit. Together, they may not get more than 6mbit. We have a 100mbit NIC andthe classes may borrow bandwidth from each other.
# tc qdisc add dev eth0 root handle 1:0 cbq bandwidth 100Mbit         \
  avpkt 1000 cell 8
# tc class add dev eth0 parent 1:0 classid 1:1 cbq bandwidth 100Mbit  \
  rate 6Mbit weight 0.6Mbit prio 8 allot 1514 cell 8 maxburst 20      \
  avpkt 1000 bounded
This part installs the root and the customary 1:1 class. The 1:1 class is bounded, so the total bandwidth can't exceed 6mbit.
As said before, CBQ requires a *lot* of knobs. All parameters are explainedabove, however. The corresponding HTB configuration is lots simpler.
# tc class add dev eth0 parent 1:1 classid 1:3 cbq bandwidth 100Mbit  \
  rate 5Mbit weight 0.5Mbit prio 5 allot 1514 cell 8 maxburst 20      \
  avpkt 1000
# tc class add dev eth0 parent 1:1 classid 1:4 cbq bandwidth 100Mbit  \
  rate 3Mbit weight 0.3Mbit prio 5 allot 1514 cell 8 maxburst 20      \
  avpkt 1000
These are our two leaf classes. Note how we scale the weight with the configured rate. Both classes are not bounded, but they are connected to class 1:1, which is bounded. So the sum of bandwidth of the 2 classes will never be more than 6mbit. The classids need to be within the same major number as the parent qdisc, by the way!
# tc qdisc add dev eth0 parent 1:3 handle 30: sfq
# tc qdisc add dev eth0 parent 1:4 handle 40: sfq
Both classes have a FIFO qdisc by default. But we replaced these with an SFQqueue so each flow of data is treated equally.
# tc filter add dev eth0 parent 1:0 protocol ip prio 1 u32 match ip \
  sport 80 0xffff flowid 1:3
# tc filter add dev eth0 parent 1:0 protocol ip prio 1 u32 match ip \
  sport 25 0xffff flowid 1:4
These commands, attached directly to the root, send traffic to the rightqdiscs.
Note that we use 'tc class add' to CREATE classes within a qdisc, but thatwe use 'tc qdisc add' to actually add qdiscs to these classes.
You may wonder what happens to traffic that is not classified by any of thetwo rules. It appears that in this case, data will then be processed within1:0, and be unlimited.
If SMTP+web together try to exceed the set limit of 6mbit/s, bandwidth willbe divided according to the weight parameter, giving 5/8 of traffic to thewebserver and 3/8 to the mail server.
With this configuration you can also say that webserver traffic will alwaysget at minimum 5/8 * 6 mbit = 3.75 mbit.
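Whether the division really comes out as 5/8 versus 3/8 can be watched while traffic is flowing; this is plain tc, nothing specific to this setup:

# tc -s class show dev eth0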
As said before, a classful qdisc needs to call filters to determine which class a packet will be enqueued to.
Besides calling the filter, CBQ offers other options, defmap & split. This is pretty complicated to understand, and it is not vital. But as this is the only known place where defmap & split are properly explained, I'm doing my best.
As you will often want to filter on the Type of Service field only, a special syntax is provided. Whenever the CBQ needs to figure out where a packet needs to be enqueued, it checks if this node is a 'split node'. If so, one of the sub-qdiscs has indicated that it wishes to receive all packets with a certain configured priority, as might be derived from the TOS field, or socket options set by applications.
The packets' priority bits are AND-ed with the defmap field to see if a match exists. In other words, this is a short-hand way of creating a very fast filter, which only matches certain priorities. A defmap of ff (hex) will match everything, a map of 0 nothing. A sample configuration may help make things clearer:
# tc qdisc add dev eth1 root handle 1: cbq bandwidth 10Mbit allot 1514 \
  cell 8 avpkt 1000 mpu 64
# tc class add dev eth1 parent 1:0 classid 1:1 cbq bandwidth 10Mbit    \
  rate 10Mbit allot 1514 cell 8 weight 1Mbit prio 8 maxburst 20        \
  avpkt 1000
Standard CBQ preamble. I never get used to the sheer amount of numbers required!
Defmap refers to TC_PRIO bits, which are defined as follows:
TC_PRIO..          Num  Corresponds to TOS
-------------------------------------------------
BESTEFFORT         0    Maximize Reliability
FILLER             1    Minimize Cost
BULK               2    Maximize Throughput (0x8)
INTERACTIVE_BULK   4
INTERACTIVE        6    Minimize Delay (0x10)
CONTROL            7
The TC_PRIO.. number corresponds to bits, counted from the right. See thepfifo_fast section for more details how TOS bits are converted topriorities.
Now the interactive and the bulk classes:
# tc class add dev eth1 parent 1:1 classid 1:2 cbq bandwidth 10Mbit    \
  rate 1Mbit allot 1514 cell 8 weight 100Kbit prio 3 maxburst 20       \
  avpkt 1000 split 1:0 defmap c0
# tc class add dev eth1 parent 1:1 classid 1:3 cbq bandwidth 10Mbit    \
  rate 8Mbit allot 1514 cell 8 weight 800Kbit prio 7 maxburst 20       \
  avpkt 1000 split 1:0 defmap 3f
The 'split qdisc' is 1:0, which is where the choice will be made. C0 isbinary for 11000000, 3F for 00111111, so these two together will matcheverything. The first class matches bits 7 & 6, and thus corresponds to 'interactive' and 'control' traffic. The second class matches the rest.
Node 1:0 now has a table like this:
priority       send to
0              1:3
1              1:3
2              1:3
3              1:3
4              1:3
5              1:3
6              1:2
7              1:2
For additional fun, you can also pass a 'change mask', which indicatesexactly which priorities you wish to change. You only need to use this if youare running 'tc class change'. For example, to add best effort traffic to1:2, we could run this:
# tc class change dev eth1 classid 1:2 cbq defmap 01/01
The priority map at 1:0 now looks like this:
priority       send to
0              1:2
1              1:3
2              1:3
3              1:3
4              1:3
5              1:3
6              1:2
7              1:2
FIXME: did not test 'tc class change', only looked at the source.
Martin Devera (devik) rightly realised that CBQ is complex and does not seem optimized for many typical situations.
HTB works just like CBQ but does not resort to idle time calculations to shape. Instead, it is a classful Token Bucket Filter - hence the name. It has only a few parameters, which are well documented on his site.
As your HTB configuration gets more complex, your configuration scales well. With CBQ it is already complex even in simple cases! HTB3 (check its homepage for details on HTB versions) is now part of the official kernel sources (from 2.4.20-pre1 and 2.5.31 onwards). However, you may still need to get an HTB3-patched version of 'tc': the HTB kernel and userspace parts must be the same major version, or 'tc' will not work with HTB.
If you already have a modern kernel, or are in a position to patch your kernel, by all means consider HTB.
Functionally almost identical to the CBQ sample configuration above:
# tc qdisc add dev eth0 root handle 1: htb default 30

# tc class add dev eth0 parent 1: classid 1:1 htb rate 6mbit burst 15k

# tc class add dev eth0 parent 1:1 classid 1:10 htb rate 5mbit burst 15k
# tc class add dev eth0 parent 1:1 classid 1:20 htb rate 3mbit ceil 6mbit burst 15k
# tc class add dev eth0 parent 1:1 classid 1:30 htb rate 1kbit ceil 6mbit burst 15k
The author then recommends SFQ for beneath these classes:
# tc qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10
# tc qdisc add dev eth0 parent 1:20 handle 20: sfq perturb 10
# tc qdisc add dev eth0 parent 1:30 handle 30: sfq perturb 10
Add the filters which direct traffic to the right classes:
# U32="tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32"
# $U32 match ip dport 80 0xffff flowid 1:10
# $U32 match ip sport 25 0xffff flowid 1:20
And that's it - no unsightly unexplained numbers, no undocumented parameters.
HTB certainly looks wonderful - if 10: and 20: both have their guaranteedbandwidth, and more is left to divide, they borrow in a 5:3 ratio, just asyou would expect.
Unclassified traffic gets routed to 30:, which has little bandwidth of itsown but can borrow everything that is left over. Because we chose SFQinternally, we get fairness thrown in for free!
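As with CBQ, the borrowing can be observed by watching the per-class counters while traffic flows; HTB classes also report how often they lent or borrowed bandwidth:

# tc -s -d class show dev eth0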
To determine which class shall process a packet, the so-called 'classifierchain' is called each time a choice needs to be made. This chain consists ofall filters attached to the classful qdisc that needs to decide.
To reiterate the tree, which is not a tree:
                    root 1:
                      |
                    _1:1_
                   /  |  \
                  /   |   \
                 /    |    \
               10:   11:   12:
              /   \        /   \
           10:1  10:2   12:1  12:2
When enqueueing a packet, at each branch the filter chain is consulted for arelevant instruction. A typical setup might be to have a filter in 1:1 thatdirects a packet to 12: and a filter on 12: that sends the packet to 12:2.
You might also attach this latter rule to 1:1, but you can make efficiencygains by having more specific tests lower in the chain.
You can't filter a packet 'upwards', by the way. Also, with HTB, you shouldattach all filters to the root!
And again - packets are only enqueued downwards! When they are dequeued,they go up again, where the interface lives. They do NOT fall off the end ofthe tree to the network adaptor!
As explained in the Classifier chapter, you can match on literally anything,using a very complicated syntax. To start, we will show how to do theobvious things, which luckily are quite easy.
Let's say we have a PRIO qdisc called '10:' which contains three classes, andwe want to assign all traffic from and to port 22 to the highest priorityband, the filters would be:
# tc filter add dev eth0 protocol ip parent 10: prio 1 u32 match \
  ip dport 22 0xffff flowid 10:1
# tc filter add dev eth0 protocol ip parent 10: prio 1 u32 match \
  ip sport 80 0xffff flowid 10:1
# tc filter add dev eth0 protocol ip parent 10: prio 2 flowid 10:2
What does this say? It says: attach to eth0, node 10: a priority 1 u32filter that matches on IP destination port 22 *exactly* and send it to band10:1. And it then repeats the same for source port 80. The last command saysthat anything unmatched so far should go to band 10:2, the next-highestpriority.
You need to add 'eth0', or whatever your interface is called, because eachinterface has a unique namespace of handles.
To select on an IP address, use this:
# tc filter add dev eth0 parent 10:0 protocol ip prio 1 u32 \
  match ip dst 4.3.2.1/32 flowid 10:1
# tc filter add dev eth0 parent 10:0 protocol ip prio 1 u32 \
  match ip src 1.2.3.4/32 flowid 10:1
# tc filter add dev eth0 protocol ip parent 10: prio 2 \
  flowid 10:2
This assigns traffic to 4.3.2.1 and traffic from 1.2.3.4 to the highestpriority queue, and the rest to the next-highest one.
You can concatenate matches, to match on traffic from 1.2.3.4 and from port80, do this:
# tc filter add dev eth0 parent 10:0 protocol ip prio 1 u32 match ip src 4.3.2.1/32 \
  match ip sport 80 0xffff flowid 10:1
Most shaping commands presented here start with this preamble:
# tc filter add dev eth0 parent 1:0 protocol ip prio 1 u32 ..
These are the so called 'u32' matches, which can match on ANY part of a packet.
Source mask 'match ip src 1.2.3.0/24', destination mask 'match ip dst 4.3.2.0/24'. To match a single host, use /32, or omit the mask.
Source: 'match ip sport 80 0xffff', destination: 'match ip dport 80 0xffff'.
Use the numbers from /etc/protocols, for example, icmp is 1: 'match ip protocol 1 0xff'.
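A complete command using the protocol match, on the same PRIO setup as above, might look like this (it sends all ICMP to band 10:1; the prio value is arbitrary):

# tc filter add dev eth0 parent 10: protocol ip prio 3 u32 \
  match ip protocol 1 0xff flowid 10:1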
You can mark packets with either ipchains or iptables and have that mark survive routing across interfaces. This is really useful, for example, to only shape traffic on eth1 that came in on eth0. Syntax:
# tc filter add dev eth1 protocol ip parent 1:0 prio 1 handle 6 fw flowid 1:1
Note that this is not a u32 match!
You can place a mark like this:
# iptables -A PREROUTING -t mangle -i eth0 -j MARK --set-mark 6
The number 6 is arbitrary.
If you don't want to understand the full tc filter syntax, just use iptables, and only learn to select on fwmark. You can also have iptables print basic statistics that will help you debug your rules. The following command will show you all the rules that mark packets in the mangle table, and how many packets and bytes have matched:
# iptables -L -t mangle -n -v
To select interactive, minimum delay traffic:
# tc filter add dev ppp0 parent 1:0 protocol ip prio 10 u32 \
      match ip tos 0x10 0xff \
      flowid 1:4
Use 0x08 0xff for bulk traffic.
For more filtering commands, see the Advanced Filters chapter.
The Intermediate queueing device is not a qdisc but its usage is tightly bound to qdiscs. Within Linux, qdiscs are attached to network devices and everything that is queued to the device is first queued to the qdisc. From this concept, two limitations arise:

Only egress shaping is possible (an ingress qdisc exists, but its possibilities are very limited compared to classful qdiscs).

A qdisc can only see traffic of one interface; global limitations can't be placed.

IMQ is there to help solve those two limitations. In short, you can put everything you choose in a qdisc. Specially marked packets get intercepted in the netfilter NF_IP_PRE_ROUTING and NF_IP_POST_ROUTING hooks and pass through the qdisc attached to an imq device. An iptables target is used for marking the packets.

This enables you to do ingress shaping as you can just mark packets coming in from somewhere and/or treat interfaces as classes to set global limits. You can also do lots of other stuff like just putting your http traffic in a qdisc, put new connection requests in a qdisc, ...
The first thing that might come to mind is to use ingress shaping to give yourself a high guaranteed bandwidth. ;) Configuration is just like with any other interface:

tc qdisc add dev imq0 root handle 1: htb default 20

tc class add dev imq0 parent 1: classid 1:1 htb rate 2mbit burst 15k

tc class add dev imq0 parent 1:1 classid 1:10 htb rate 1mbit
tc class add dev imq0 parent 1:1 classid 1:20 htb rate 1mbit

tc qdisc add dev imq0 parent 1:10 handle 10: pfifo
tc qdisc add dev imq0 parent 1:20 handle 20: sfq

tc filter add dev imq0 parent 10:0 protocol ip prio 1 u32 match \
  ip dst 10.0.0.230/32 flowid 1:10

In this example u32 is used for classification. Other classifiers should work as expected. Next, traffic has to be selected and marked to be enqueued to imq0.

iptables -t mangle -A PREROUTING -i eth0 -j IMQ --todev 0

ip link set imq0 up
The IMQ iptables target is valid in the PREROUTING and POSTROUTING chains of the mangle table. Its syntax is

IMQ [ --todev n ]    n : number of imq device

An ip6tables target is also provided.

Please note traffic is not enqueued when the target is hit but afterwards. The exact location where traffic enters the imq device depends on the direction of the traffic (in/out). These are the predefined netfilter hooks used by iptables:
enum nf_ip_hook_priorities {
        NF_IP_PRI_FIRST = INT_MIN,
        NF_IP_PRI_CONNTRACK = -200,
        NF_IP_PRI_MANGLE = -150,
        NF_IP_PRI_NAT_DST = -100,
        NF_IP_PRI_FILTER = 0,
        NF_IP_PRI_NAT_SRC = 100,
        NF_IP_PRI_LAST = INT_MAX,
};
For ingress traffic, imq registers itself with NF_IP_PRI_MANGLE + 1 priority which means packets enter the imq device directly after the mangle PREROUTING chain has been passed.

For egress imq uses NF_IP_PRI_LAST which honours the fact that packets dropped by the filter table won't occupy bandwidth.

The patches and some more information can be found at the imq site.
There are several ways of doing this. One of the easiest and most straightforward ways is 'TEQL' - "True" (or "trivial") link equalizer. Like most things having to do with queueing, load sharing goes both ways. Both ends of a link may need to participate for full effect.
Imagine this situation:
                 +-------+   eth1   +-------+
                 |       |==========|       |
 'network 1' ----|   A   |          |   B   |---- 'network 2'
                 |       |==========|       |
                 +-------+   eth2   +-------+
A and B are routers, and for the moment we'll assume both run Linux. If traffic is going from network 1 to network 2, router A needs to distribute the packets over both links to B. Router B needs to be configured to accept this. Same goes the other way around: when packets go from network 2 to network 1, router B needs to send the packets over both eth1 and eth2.

The distributing part is done by a 'TEQL' device, like this (it couldn't be easier):
# tc qdisc add dev eth1 root teql0
# tc qdisc add dev eth2 root teql0
# ip link set dev teql0 up
Don't forget the 'ip link set up' command!
This needs to be done on both hosts. The device teql0 is basically a roundrobin distributor over eth1 and eth2, for sending packets. No data ever comes in over a teql device; it just appears on the 'raw' eth1 and eth2.

But now we just have devices, we also need proper routing. One way to do this is to assign a /31 network to both links, and a /31 to the teql0 device as well:
On router A:
# ip addr add dev eth1 10.0.0.0/31
# ip addr add dev eth2 10.0.0.2/31
# ip addr add dev teql0 10.0.0.4/31
On router B:
# ip addr add dev eth1 10.0.0.1/31
# ip addr add dev eth2 10.0.0.3/31
# ip addr add dev teql0 10.0.0.5/31
Router A should now be able to ping 10.0.0.1, 10.0.0.3 and 10.0.0.5 over the 2 real links and the 1 equalized device. Router B should be able to ping 10.0.0.0, 10.0.0.2 and 10.0.0.4 over the links.

If this works, Router A should make 10.0.0.5 its route for reaching network 2, and Router B should make 10.0.0.4 its route for reaching network 1. For the special case where network 1 is your network at home, and network 2 is the Internet, Router A should make 10.0.0.5 its default gateway.

Nothing is as easy as it seems. eth1 and eth2 on both router A and B need to have return path filtering turned off, because they will otherwise drop packets destined for IP addresses other than their own:
# echo 0 > /proc/sys/net/ipv4/conf/eth1/rp_filter
# echo 0 > /proc/sys/net/ipv4/conf/eth2/rp_filter
Then there is the nasty problem of packet reordering. Let's say 6 packets need to be sent from A to B - eth1 might get 1, 3 and 5. eth2 would then do 2, 4 and 6. In an ideal world, router B would receive this in order: 1, 2, 3, 4, 5, 6. But the possibility is very real that the kernel gets it like this: 2, 1, 4, 3, 6, 5. The problem is that this confuses TCP/IP. While not a problem for links carrying many different TCP/IP sessions, you won't be able to bundle multiple links and get to ftp a single file lots faster, except when your receiving or sending OS is Linux, which is not easily shaken by some simple reordering.
However, for lots of applications, link load balancing is a great idea.
William Stearns has used an advanced tunneling setup to achieve good use of multiple, unrelated, internet connections together. It can be found on his tunneling page.
The HOWTO may feature more about this in the future.
So far we've seen how iproute works, and netfilter was mentioned a few times. This would be a good time to browse through Rusty's Remarkably Unreliable Guides. Netfilter itself can be found here.

Netfilter allows us to filter packets, or mangle their headers. One special feature is that we can mark a packet with a number. This is done with the --set-mark facility.

As an example, this command marks all packets destined for port 25, outgoing mail:
# iptables -A PREROUTING -i eth0 -t mangle -p tcp --dport 25 \
  -j MARK --set-mark 1
Let's say that we have multiple connections, one that is fast (and expensive, per megabyte) and one that is slower, but flat fee. We would most certainly like outgoing mail to go via the cheap route.

We've already marked the packets with a '1', we now instruct the routing policy database to act on this:
# echo 201 mail.out >> /etc/iproute2/rt_tables
# ip rule add fwmark 1 table mail.out
# ip rule ls
0:      from all lookup local 
32764:  from all fwmark        1 lookup mail.out 
32766:  from all lookup main 
32767:  from all lookup default
Now we generate a route to the slow but cheap link in the mail.out table:
# /sbin/ip route add default via 195.96.98.253 dev ppp0 table mail.out
And we are done. Should we want to make exceptions, there are lots of ways to achieve this. We can modify the netfilter statement to exclude certain hosts, or we can insert a rule with a lower priority that points to the main table for our excepted hosts.
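For example - a minimal sketch, with 10.0.0.12 standing in for a hypothetical excepted mail server - a rule with a lower preference number is consulted before the fwmark rule, so mail for that destination simply follows the main table's default route:

# ip rule add to 10.0.0.12 table main prio 1000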
We can also use this feature to honour TOS bits by marking packets with a different type of service with different numbers, and creating rules to act on that. This way you can even dedicate, say, an ISDN line to interactive sessions.
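A minimal sketch of that idea (the mark value, the table name and the ISDN device ippp0 are all invented for this example) - mark 'Minimize-Delay' traffic and give it its own default route:

# iptables -A PREROUTING -t mangle -m tos --tos Minimize-Delay -j MARK --set-mark 2
# echo 202 interactive.out >> /etc/iproute2/rt_tables
# ip rule add fwmark 2 table interactive.out
# ip route add default dev ippp0 table interactive.out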
Needless to say, this also works fine on a host that's doing NAT ('masquerading').
IMPORTANT: We received a report that MASQ and SNAT at least collide with marking packets. Rusty Russell explains it in this posting. Turn off the reverse path filter to make it work properly.

Note: to mark packets, you need to have some options enabled in your kernel:
IP: advanced router (CONFIG_IP_ADVANCED_ROUTER) [Y/n/?]
IP: policy routing (CONFIG_IP_MULTIPLE_TABLES) [Y/n/?]
IP: use netfilter MARK value as routing key (CONFIG_IP_ROUTE_FWMARK) [Y/n/?]
See also Section 15.5 in the Cookbook.

As explained in the section on classful queueing disciplines, filters are needed to classify packets into any of the sub-queues. These filters are called from within the classful qdisc.
Here is an incomplete list of classifiers available:
Bases the decision on how the firewall has marked the packet. This can be the easy way out if you don't want to learn tc filter syntax. See the Queueing chapter for details.
Bases the decision on fields within the packet (i.e. source IP address, etc)
Bases the decision on which route the packet will be routed by
Routes packets based on RSVP. Only useful on networks you control - the Internet does not respect RSVP.
Used in the DSMARK qdisc, see the relevant section.
Note that in general there are many ways in which you can classify packets and that it generally comes down to preference as to which system you wish to use.

Classifiers in general accept a few arguments in common. They are listed here for convenience:

The protocol this classifier will accept. Generally you will only be accepting IP traffic. Required.

The handle this classifier is to be attached to. This handle must be an already existing class. Required.
The priority of this classifier. Lower numbers get tested first.
This handle means different things to different filters.
All the following sections will assume you are trying to shape the traffic going to HostA. They will assume that the root class has been configured on 1: and that the class you want to send the selected traffic to is 1:1.
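To tie the common arguments above together, here is a hedged sketch of two such filters attached to that root - one u32 match and one fw match, the latter showing where the 'handle' argument comes into play (4.3.2.1 is assumed to be HostA's address):

# tc filter add dev eth0 parent 1:0 protocol ip prio 5 u32 \
  match ip dst 4.3.2.1/32 flowid 1:1
# tc filter add dev eth0 parent 1:0 protocol ip prio 6 handle 1 fw flowid 1:1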
u32 classifier

The U32 filter is the most advanced filter available in the current implementation. It is entirely based on hashing tables, which makes it robust when there are many filter rules.
In its simplest form the U32 filter is a list of records, each consisting of two fields: a selector and an action. The selectors, described below, are compared with the currently processed IP packet until the first match occurs, and then the associated action is performed. The simplest type of action would be directing the packet into a defined class.

The command line of the tc filter program, used to configure the filter, consists of three parts: filter specification, a selector and an action. The filter specification can be defined as:
tc filter add dev IF [ protocol PROTO ] [ (preference|priority) PRIO ] [ parent CBQ ]
The protocol field describes the protocol that the filter will be applied to. We will only discuss the case of the ip protocol. The preference field (priority can be used alternatively) sets the priority of the currently defined filter. This is important, since you can have several filters (lists of rules) with different priorities. Each list will be passed in the order the rules were added; then the list with lower priority (higher preference number) will be processed. The parent field defines the CBQ tree top (e.g. 1:0) the filter should be attached to.
The options described above apply to all filters, not only U32.
The U32 selector contains a definition of the pattern that will be matched to the currently processed packet. Precisely, it defines which bits are to be matched in the packet header and nothing more, but this simple method is very powerful. Let's take a look at the following examples, taken directly from a pretty complex, real-world filter:
# tc filter add dev eth0 protocol ip parent 1:0 pref 10 u32 \
  match u32 00100000 00ff0000 at 0 flowid 1:10
For now, leave the first line alone - all these parameters describe the filter's hash tables. Focus on the selector line, containing the match keyword. This selector will match IP headers whose second byte is 0x10 (0010). As you can guess, the 00ff number is the match mask, telling the filter exactly which bits to match. Here it's 0xff, so the byte will match if it's exactly 0x10. The at keyword means that the match is to be started at the specified offset (in bytes) -- in this case it's the beginning of the packet. Translating all that to human language, the packet will match if its Type of Service field has the `low delay' bits set. Let's analyze another rule:
# tc filter add dev eth0 protocol ip parent 1:0 pref 10 u32 \
  match u32 00000016 0000ffff at nexthdr+0 flowid 1:10
The nexthdr option means the next header encapsulated in the IP packet, i.e. the header of the upper-layer protocol. The match will also start here, at the beginning of the next header. The match should occur in the second, 32-bit word of the header. In the TCP and UDP protocols this field contains the packet's destination port. The number is given in big-endian format, i.e. older bits first, so we simply read 0x0016 as 22 decimal, which stands for the SSH service if this were TCP. As you can guess, this match is ambiguous without context, and we will discuss this later.
Having understood all the above, we will find the following selector quite easy to read: match c0a80100 ffffff00 at 16. What we have here is a three byte match at the 17th byte, counting from the IP header start. This will match packets with a destination address anywhere in the 192.168.1/24 network. After analyzing the examples, we can summarize what we have learned.
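For completeness, that selector dropped into a full command, in the style of the previous examples, would read:

# tc filter add dev eth0 protocol ip parent 1:0 pref 10 u32 \
  match u32 c0a80100 ffffff00 at 16 flowid 1:10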
General selectors define the pattern, mask and offset at which the pattern will be matched to the packet contents. Using the general selectors you can match virtually any single bit in the IP (or upper layer) header. They are more difficult to write and read, though, than the specific selectors described below. The general selector syntax is:
match [ u32 | u16 | u8 ] PATTERN MASK at [OFFSET | nexthdr+OFFSET]
One of the keywords u32, u16 or u8 specifies the length of the pattern in bits. PATTERN and MASK should follow, of the length defined by the previous keyword. The OFFSET parameter is the offset, in bytes, to start matching at. If the nexthdr+ keyword is given, the offset is relative to the start of the upper layer header.
Some examples:
A packet will match this rule if its time to live (TTL) is 64. TTL is the field starting just after the 8th byte of the IP header.
# tc filter add dev ppp14 parent 1:0 prio 10 u32 \
  match u8 64 0xff at 8 \
  flowid 1:4
The following matches all TCP packets which have the ACK bit set:
# tc filter add dev ppp14 parent 1:0 prio 10 u32 \
  match ip protocol 6 0xff \
  match u8 0x10 0xff at nexthdr+13 \
  flowid 1:3
Use this to match ACKs on packets smaller than 64 bytes:
## match acks the hard way,
## IP protocol 6,
## IP header length 0x5(32 bit words),
## IP Total length 0x34 (ACK + 12 bytes of TCP options)
## TCP ack set (bit 5, offset 33)
# tc filter add dev ppp14 parent 1:0 protocol ip prio 10 u32 \
  match ip protocol 6 0xff \
  match u8 0x05 0x0f at 0 \
  match u16 0x0000 0xffc0 at 2 \
  match u8 0x10 0xff at 33 \
  flowid 1:3
This rule will only match TCP packets with the ACK bit set, and no further payload. Here we can see an example of using two selectors; the final result will be the logical AND of their results. If we take a look at the TCP header diagram, we can see that the ACK bit is the second oldest bit (0x10) in the 14th byte of the TCP header (at nexthdr+13). As for the second selector, if we'd like to make our life harder, we could write match u8 0x06 0xff at 9 instead of using the specific selector protocol tcp, because 6 is the protocol number of TCP, present in the 10th byte of the IP header. On the other hand, in this example we couldn't use any specific selector for the first match - simply because there's no specific selector to match TCP ACK bits.

The filter below is a modified version of the filter above. The difference is that it doesn't check the ip header length. Why? Because the filter above only works on 32 bit systems.
tc filter add dev ppp14 parent 1:0 protocol ip prio 10 u32 \
  match ip protocol 6 0xff \
  match u8 0x10 0xff at nexthdr+13 \
  match u16 0x0000 0xffc0 at 2 \
  flowid 1:3
The following table contains a list of all specific selectors the author of this section has found in the tc program source code. They simply make your life easier and increase the readability of your filter's configuration.
FIXME: table placeholder - the table is in separate file ,,selector.html''
FIXME: it's also still in Polish :-(
FIXME: must be sgml'ized
Some examples:
# tc filter add dev ppp0 parent 1:0 prio 10 u32 \
  match ip tos 0x10 0xff \
  flowid 1:4
FIXME: tcp dport match does not work as described below:
The above rule will match packets which have the TOS field set to 0x10. The TOS field starts at the second byte of the packet and is one byte big, so we could write an equivalent general selector: match u8 0x10 0xff at 1. This gives us a hint about the internals of the U32 filter -- the specific rules are always translated to general ones, and it is in this form that they are stored in the kernel memory. This leads to another conclusion -- the tcp and udp selectors are exactly the same, and this is why you can't use a single match tcp dport 53 0xffff selector to match TCP packets sent to a given port -- it would also match UDP packets sent to this port. You must remember to also specify the protocol and end up with the following rule:
# tc filter add dev ppp0 parent 1:0 prio 10 u32 \
  match tcp dport 53 0xffff \
  match ip protocol 0x6 0xff \
  flowid 1:2
route classifier

This classifier filters based on the results of the routing tables. When a packet that is traversing through the classes reaches one that is marked with the "route" filter, it splits the packets up based on information in the routing table.
# tc filter add dev eth1 parent 1:0 protocol ip prio 100 route
Here we add a route classifier onto the parent node 1:0 with priority 100. When a packet reaches this node (which, since it is the root, will happen immediately) it will consult the routing table. If the packet matches, it will be sent to the given class and have a priority of 100. Then, to finally kick it into action, you add the appropriate routing entry:

The trick here is to define a 'realm' based on either destination or source. The way to do it is like this:
# ip route add Host/Network via Gateway dev Device realm RealmNumber
For instance, we can define our destination network 192.168.10.0 with a realm number 10:
# ip route add 192.168.10.0/24 via 192.168.10.1 dev eth1 realm 10
When adding route filters, we can use realm numbers to represent the networks or hosts and specify how the routes match the filters.
# tc filter add dev eth1 parent 1:0 protocol ip prio 100 \
  route to 10 classid 1:10
The above rule matches the packets going to the network 192.168.10.0.
Route filter can also be used to match source routes. For example, there is a subnetwork attached to the Linux router on eth2.
# ip route add 192.168.2.0/24 dev eth2 realm 2
# tc filter add dev eth1 parent 1:0 protocol ip prio 100 \
  route from 2 classid 1:2
Here the filter specifies that packets from the subnetwork 192.168.2.0 (realm 2) will match class id 1:2.
To make even more complicated setups possible, you can have filters that only match up to a certain bandwidth. You can declare a filter either to entirely cease matching above a certain rate, or to stop matching only the bandwidth exceeding a certain rate.

So if you decided to police at 4mbit/s, but 5mbit/s of traffic is present, you can either stop matching the entire 5mbit/s, or stop matching only 1mbit/s, and send 4mbit/s to the configured class.
If bandwidth exceeds the configured rate, you can drop a packet, reclassify it, or see if another filter will match it.

There are basically two ways to police. If you compiled the kernel with 'Estimators', the kernel can measure for each filter how much traffic it is passing, more or less. These estimators are very easy on the CPU, as they simply count 25 times per second how much data has been passed, and calculate the bitrate from that.

The other way works again via a Token Bucket Filter, this time living within your filter. The TBF only matches traffic UP TO your configured bandwidth; if more is offered, only the excess is subject to the configured overlimit action.

This is very simple and has only one parameter: avrate. Either the flow remains below avrate, and the filter classifies the traffic to the classid configured, or your rate exceeds it, in which case the specified action is taken, which is 'reclassify' by default.
The kernel uses an Exponential Weighted Moving Average for your bandwidth which makes it less sensitive to short bursts.
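A rough sketch of such a filter (the subnet and the avrate value are invented here, and the estimator setup itself is omitted):

# tc filter add dev eth0 parent 1:0 protocol ip prio 20 u32 \
  match ip src 10.0.0.0/8 \
  police avrate 256kbit drop flowid 1:1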
Uses the following parameters:
burst/buffer/maxburst
mtu/minburst
mpu
rate
These behave mostly identically to those described in the Token Bucket Filter section. Please note however that if you set the mtu of a TBF policer too low, *no* packets will pass, whereas the egress TBF qdisc will just pass them slower.

Another difference is that a policer can only let a packet pass, or drop it. It cannot hold it in order to delay it.

If your filter decides that it is overlimit, it can take 'actions'. Currently, four actions are available:
Causes this filter not to match, but perhaps other filters will.
This is a very fierce option which simply discards traffic exceeding a certain rate. It is often used in the ingress policer and has limited uses. For example, you may have a name server that falls over if offered more than 5mbit/s of packets, in which case an ingress filter could be used to make sure no more is ever offered.

Pass on traffic ok. Might be used to disable a complicated filter, but leave it in place.

Most often comes down to reclassification to Best Effort. This is the default action.
The only real example known is mentioned in the 'Protecting your host from SYN floods' section.
Limit incoming icmp traffic to 2kbit, drop packets over the limit:

tc filter add dev $DEV parent ffff: \
  protocol ip prio 20 \
  u32 match ip protocol 1 0xff \
  police rate 2kbit buffer 10k drop \
  flowid :1

Limit packets to a certain size (i.e. all packets with a length greater than 84 bytes will get dropped):

tc filter add dev $DEV parent ffff: \
  protocol ip prio 20 \
  u32 match tos 0 0 \
  police mtu 84 drop \
  flowid :1
This method can be used to drop all packets:
tc filter add dev $DEV parent ffff: \
  protocol ip prio 20 \
  u32 match ip protocol 1 0xff \
  police mtu 1 drop \
  flowid :1

It actually drops icmp packets bigger than 1 byte. While packets with a size of 1 byte are possible in theory, you will not find these in a real network.
If you have a need for thousands of rules, for example if you have a lot of clients or computers, all with different QoS specifications, you may find that the kernel spends a lot of time matching all those rules.

By default, all filters reside in one big chain which is matched in descending order of priority. If you have 1000 rules, 1000 checks may be needed to determine what to do with a packet.

Matching would go much quicker if you had 256 chains with four rules each - if you could divide packets over those 256 chains, so that the right rule will be there.

Hashing makes this possible. Let's say you have 1024 cable modem customers in your network, with IP addresses ranging from 1.2.0.0 to 1.2.3.255, and each has to go in another bin, for example 'lite', 'regular' and 'premium'. You would then have 1024 rules like this:
# tc filter add dev eth1 parent 1:0 protocol ip prio 100 match ip src \
  1.2.0.0 classid 1:1
# tc filter add dev eth1 parent 1:0 protocol ip prio 100 match ip src \
  1.2.0.1 classid 1:1
...
# tc filter add dev eth1 parent 1:0 protocol ip prio 100 match ip src \
  1.2.3.254 classid 1:3
# tc filter add dev eth1 parent 1:0 protocol ip prio 100 match ip src \
  1.2.3.255 classid 1:2
To speed this up, we can use the last part of the IP address as a 'hash key'. We then get 256 tables, the first of which looks like this:

# tc filter add dev eth1 parent 1:0 protocol ip prio 100 match ip src \
  1.2.0.0 classid 1:1
# tc filter add dev eth1 parent 1:0 protocol ip prio 100 match ip src \
  1.2.1.0 classid 1:1
# tc filter add dev eth1 parent 1:0 protocol ip prio 100 match ip src \
  1.2.2.0 classid 1:3
# tc filter add dev eth1 parent 1:0 protocol ip prio 100 match ip src \
  1.2.3.0 classid 1:2
The next one starts like this:
# tc filter add dev eth1 parent 1:0 protocol ip prio 100 match ip src \
  1.2.0.1 classid 1:1
...
This way, only four checks are needed at most, two on average.
Configuration is pretty complicated, but very worth it by the time you have this many rules. First we make a filter root, then we create a table with 256 entries:

# tc filter add dev eth1 parent 1:0 prio 5 protocol ip u32
# tc filter add dev eth1 parent 1:0 prio 5 handle 2: protocol ip u32 divisor 256
Now we add some rules to entries in the created table:
# tc filter add dev eth1 protocol ip parent 1:0 prio 5 u32 ht 2:7b: \
  match ip src 1.2.0.123 flowid 1:1
# tc filter add dev eth1 protocol ip parent 1:0 prio 5 u32 ht 2:7b: \
  match ip src 1.2.1.123 flowid 1:2
# tc filter add dev eth1 protocol ip parent 1:0 prio 5 u32 ht 2:7b: \
  match ip src 1.2.3.123 flowid 1:3
# tc filter add dev eth1 protocol ip parent 1:0 prio 5 u32 ht 2:7b: \
  match ip src 1.2.4.123 flowid 1:2

This is entry 123, which contains matches for 1.2.0.123, 1.2.1.123, 1.2.3.123 and 1.2.4.123, and sends them to 1:1, 1:2, 1:3 and 1:2 respectively. Note that we need to specify our hash bucket in hex, 0x7b is 123.
Next create a 'hashing filter' that directs traffic to the right entry in the hashing table:
# tc filter add dev eth1 protocol ip parent 1:0 prio 5 u32 ht 800:: \
  match ip src 1.2.0.0/16 \
  hashkey mask 0x000000ff at 12 \
  link 2:

Ok, some numbers need explaining. The default hash table is called 800:: and all filtering starts there. Then we select the source address, which lives at positions 12, 13, 14 and 15 in the IP header, and indicate that we are only interested in the last part. This will be sent to hash table 2:, which we created earlier.
It is quite complicated, but it does work in practice and performance will be staggering. Note that this example could be improved to the ideal case where each chain contains 1 filter!

The Routing Policy Database (RPDB) replaced the IPv4 routing and addressing structure within the Linux Kernel, which led to all the wonderful features this HOWTO describes. Unfortunately, the IPv6 structure within Linux was implemented outside of this core structure. Although they do share some facilities, the essential RPDB structure does not participate in or with the IPv6 addressing and routing structures.
This will change for sure, we just have to wait a little longer.
FIXME: Any ideas if someone is working on this? Plans?
ip6tables is able to mark a packet and assign a number to it:
# ip6tables -A PREROUTING -i eth0 -t mangle -p tcp -j MARK --set-mark 1

But still, this will not help because the packet will not pass through the RPDB structure.

IPv6 is normally encapsulated in a SIT tunnel and transported over IPv4 networks. See the section on IPv6 Tunneling for information on how to set up such a tunnel. This allows us to filter on the IPv4 packets holding the IPv6 packets as payload.
The following filter matches all IPv6 encapsulated in IPv4 packets:
# tc filter add dev $DEV parent 10:0 protocol ip prio 10 u32 \
  match ip protocol 41 0xff flowid 42:42
Let's carry on with that. Assume your IPv6 packets get sent out over IPv4 and these packets have no options set. One could use the following filter to match ICMPv6 in IPv6 in IPv4 with no options. 0x3a (58) is the Next-Header type for ICMPv6.
# tc filter add dev $DEV parent 10:0 protocol ip prio 10 u32 \
  match ip protocol 41 0xff \
  match u8 0x05 0x0f at 0 \
  match u8 0x3a 0xff at 26 \
  flowid 42:42
Matching the destination IPv6 address is a bit more work. The following filter matches on the destination address 3ffe:202c:ffff:32:230:4fff:fe08:358d:
# tc filter add dev $DEV parent 10:0 protocol ip prio 10 u32 \
  match ip protocol 41 0xff \
  match u8 0x05 0x0f at 0 \
  match u8 0x3f 0xff at 44 \
  match u8 0xfe 0xff at 45 \
  match u8 0x20 0xff at 46 \
  match u8 0x2c 0xff at 47 \
  match u8 0xff 0xff at 48 \
  match u8 0xff 0xff at 49 \
  match u8 0x00 0xff at 50 \
  match u8 0x32 0xff at 51 \
  match u8 0x02 0xff at 52 \
  match u8 0x30 0xff at 53 \
  match u8 0x4f 0xff at 54 \
  match u8 0xff 0xff at 55 \
  match u8 0xfe 0xff at 56 \
  match u8 0x08 0xff at 57 \
  match u8 0x35 0xff at 58 \
  match u8 0x8d 0xff at 59 \
  flowid 10:13
The same technique can be used to match subnets. For example 2001::
# tc filter add dev $DEV parent 10:0 protocol ip prio 10 u32 \
  match ip protocol 41 0xff \
  match u8 0x05 0x0f at 0 \
  match u8 0x20 0xff at 28 \
  match u8 0x01 0xff at 29 \
  flowid 10:13
The kernel has lots of parameters which can be tuned for different circumstances. While, as usual, the default parameters serve 99% of installations very well, we don't call this the Advanced HOWTO for the fun of it!

The interesting bits are in /proc/sys/net, take a look there. Not everything will be documented here initially, but we're working on it.

In the meantime you may want to have a look at the Linux-Kernel sources; read the file Documentation/filesystems/proc.txt. Most of the features are explained there.
(FIXME)
By default, routers route everything, even packets which 'obviously' don't belong on your network. A common example is private IP space escaping onto the Internet. If you have an interface with a route of 195.96.96.0/24 to it, you do not expect packets from 212.64.94.1 to arrive there.

Lots of people will want to turn this feature off, so the kernel hackers have made it easy. There are files in /proc where you can tell the kernel to do this for you. The method is called "Reverse Path Filtering". Basically, if the reply to a packet wouldn't go out the interface this packet came in, then this is a bogus packet and should be ignored.

The following fragment will turn this on for all current and future interfaces.
# for i in /proc/sys/net/ipv4/conf/*/rp_filter ; do
>  echo 2 > $i
> done
Going by the example above, if a packet arrived on the Linux router on eth1 claiming to come from the Office+ISP subnet, it would be dropped. Similarly, if a packet came from the Office subnet, claiming to be from somewhere outside your firewall, it would be dropped also.

The above is full reverse path filtering. The default is to only filter based on IPs that are on directly connected networks. This is because the full filtering breaks in the case of asymmetric routing (where packets come in one way and go out another, like satellite traffic, or if you have dynamic (bgp, ospf, rip) routes in your network - the data comes down through the satellite dish and replies go back through normal land-lines).

If this exception applies to you (and you'll probably know if it does) you can simply turn off the rp_filter on the interface where the satellite data comes in. If you want to see if any packets are being dropped, the log_martians file in the same directory will tell the kernel to log them to your syslog.
# echo 1 >/proc/sys/net/ipv4/conf/<interfacename>/log_martians
FIXME: is setting the conf/{default,all}/* files enough? - martijn
Ok, there are a lot of parameters which can be modified. We try to list them all. Also documented (partly) in Documentation/ip-sysctl.txt.

Some of these settings have different defaults based on whether you answered 'Yes' to 'Configure as router and not host' while compiling your kernel.

Oskar Andreasson also has a page on all these flags and it appears to be better than ours, so also check http://ipsysctl-tutorial.frozentux.net/.

As a generic note, most rate limiting features don't work on loopback, so don't test them locally. The limits are supplied in 'jiffies', and are enforced using the earlier mentioned token bucket filter.

The kernel has an internal clock which runs at 'HZ' ticks (or 'jiffies') per second. On Intel, 'HZ' is mostly 100. So setting a *_rate file to, say 50, would allow for 2 packets per second. The token bucket filter is also configured to allow for a burst of at most 6 packets, if enough tokens have been earned.
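For example - a sketch for a 2.2/early-2.4 kernel with HZ=100, assuming the icmp_echoreply_rate file that matches the echo-reply entry described below - this would limit echo replies to roughly two per second:

# echo 50 > /proc/sys/net/ipv4/icmp_echoreply_rate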
Several entries in the following list have been copied from /usr/src/linux/Documentation/networking/ip-sysctl.txt, written by Alexey Kuznetsov.
If the kernel decides that it can't deliver a packet, it will drop it, and send the source of the packet an ICMP notice to this effect.

Don't act on echo packets at all. Please don't set this by default, but if you are used as a relay in a DoS attack, it may be useful.

If you ping the broadcast address of a network, all hosts are supposed to respond. This makes for a dandy denial-of-service tool. Set this to 1 to ignore these broadcast messages.

The rate at which echo replies are sent to any one destination.

Set this to ignore ICMP errors caused by hosts in the network reacting badly to frames sent to what they perceive to be the broadcast address.

A relatively unknown ICMP message, which is sent in response to incorrect packets with broken IP or TCP headers. With this file you can control the rate at which it is sent.

This is the famous cause of the 'Solaris middle star' in traceroutes. Limits the rate of ICMP Time Exceeded messages sent.

Maximum number of listening igmp (multicast) sockets on the host. FIXME: Is this true?
FIXME: Add a little explanation about the inet peer storage?

Maximum interval between garbage collection passes. This interval is in effect under low (or absent) memory pressure on the pool. Measured in jiffies.

Minimum interval between garbage collection passes. This interval is in effect under high memory pressure on the pool. Measured in jiffies.

Maximum time-to-live of entries. Unused entries will expire after this period of time if there is no memory pressure on the pool (i.e. when the number of entries in the pool is very small). Measured in jiffies.

Minimum time-to-live of entries. Should be enough to cover fragment time-to-live on the reassembling side. This minimum time-to-live is guaranteed if the pool size is less than inet_peer_threshold. Measured in jiffies.

The approximate size of the INET peer storage. Starting from this threshold entries will be thrown away aggressively. This threshold also determines entries' time-to-live and time intervals between garbage collection passes. More entries, less time-to-live, less GC interval.

This file contains the number one if the host received its IP configuration by RARP, BOOTP, DHCP or a similar mechanism. Otherwise it is zero.

Time To Live of packets. Set to a safe 64. Raise it if you have a huge network. Don't do so for fun - routing loops cause much more damage that way. You might even consider lowering it in some circumstances.

You need to set this if you use dial-on-demand with a dynamic interface address. Once your demand interface comes up, any local TCP sockets which haven't seen replies will be rebound to have the right address. This solves the problem that the connection that brings up your interface itself does not work, but the second try does.
If the kernel should attempt to forward packets. Off by default.
Range of local ports for outgoing connections. Actually quite small by default, 1024 to 4999.
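To widen it - a sketch using the usual ip_local_port_range file, with example values:

# echo "32768 61000" > /proc/sys/net/ipv4/ip_local_port_range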
Set this if you want to disable Path MTU discovery - a technique to determine the largest Maximum Transmission Unit possible on your path. See also the section on Path MTU discovery in the Cookbook chapter.

Maximum memory used to reassemble IP fragments. When ipfrag_high_thresh bytes of memory is allocated for this purpose, the fragment handler will toss packets until ipfrag_low_thresh is reached.

Set this if you want your applications to be able to bind to an address which doesn't belong to a device on your system. This can be useful when your machine is on a non-permanent (or even dynamic) link, so your services are able to start up and bind to a specific address when your link is down.
Minimum memory used to reassemble IP fragments.
Time in seconds to keep an IP fragment in memory.
A boolean flag controlling the behaviour under lots of incoming connections. When enabled, this causes the kernel to actively send RST packets when a service is overloaded.

Time to hold a socket in state FIN-WAIT-2, if it was closed by our side. The peer can be broken and never close its side, or even die unexpectedly. The default value is 60 seconds. The usual value used in 2.2 was 180 seconds; you may restore it, but remember that even if your machine is an underloaded web server, you risk overflowing memory with kilotons of dead sockets. FIN-WAIT-2 sockets are less dangerous than FIN-WAIT-1, because they eat at most 1.5K of memory, but they tend to live longer. Cf. tcp_max_orphans.

How often TCP sends out keepalive messages when keepalive is enabled. Default: 2 hours.

How frequently probes are retransmitted when a probe isn't acknowledged. Default: 75 seconds.

How many keepalive probes TCP will send until it decides that the connection is broken. Default value: 9. Multiplied by tcp_keepalive_intvl, this gives the time a link can be non-responsive after a keepalive has been sent.

Maximal number of TCP sockets not attached to any user file handle, held by the system. If this number is exceeded, orphaned connections are reset immediately and a warning is printed. This limit exists only to prevent simple DoS attacks; you _must_ not rely on this or lower the limit artificially, but rather increase it (probably, after increasing installed memory) if network conditions require more than the default value, and tune network services to linger and kill such states more aggressively. Let me remind you again: each orphan eats up to 64K of unswappable memory.

How many times to retry before killing a TCP connection closed by our side. The default value of 7 corresponds to 50sec-16min depending on RTO. If your machine is a loaded web server, you should think about lowering this value, as such sockets may consume significant resources. Cf. tcp_max_orphans.

Maximal number of remembered connection requests which still did not receive an acknowledgment from the connecting client. The default value is 1024 for systems with more than 128Mb of memory, and 128 for low memory machines. If the server suffers from overload, try increasing this number. Warning! If you make it greater than 1024, it would be better to change TCP_SYNQ_HSIZE in include/net/tcp.h to keep TCP_SYNQ_HSIZE*16<=tcp_max_syn_backlog and to recompile the kernel.

Maximal number of timewait sockets held by the system simultaneously. If this number is exceeded, time-wait sockets are immediately destroyed and a warning is printed. This limit exists only to prevent simple DoS attacks; you _must_ not lower the limit artificially, but rather increase it (probably, after increasing installed memory) if network conditions require more than the default value.
Bug-to-bug compatibility with some broken printers. On retransmit, try to send bigger packets to work around bugs in certain TCP stacks.

How many times to retry before deciding that something is wrong and it is necessary to report this suspicion to the network layer. The minimal RFC value is 3, which is the default and corresponds to 3sec-8min depending on RTO.

How many times to retry before killing an alive TCP connection. RFC 1122 says that the limit should be longer than 100 sec. That is too small a number. The default value of 15 corresponds to 13-30min depending on RTO.

This boolean enables a fix for 'time-wait assassination hazards in tcp', described in RFC 1337. If enabled, this causes the kernel to drop RST packets for sockets in the time-wait state. Default: 0

Use Selective ACK, which can be used to signify that specific packets are missing - therefore helping fast recovery.

Use the Host requirements interpretation of the TCP urg pointer field. Most hosts use the older BSD interpretation, so if you turn this on Linux might not communicate correctly with them. Default: FALSE

Number of SYN packets the kernel will send before giving up on the new connection.

To open the other side of the connection, the kernel sends a SYN with a piggybacked ACK on it, to acknowledge the earlier received SYN. This is part 2 of the three-way handshake. This setting determines the number of SYN+ACK packets sent before the kernel gives up on the connection.

Timestamps are used, amongst other things, to protect against wrapping sequence numbers. A 1 gigabit link might conceivably re-encounter a previous sequence number with an out-of-line value, because it was of a previous generation. The timestamp will let it recognize this 'ancient packet'.

Enable fast recycling of TIME-WAIT sockets. The default value is 1. It should not be changed without advice/request of technical experts.

TCP/IP normally allows windows up to 65535 bytes big. For really fast networks, this may not be enough. The window scaling option allows for almost gigabyte windows, which is good for high bandwidth*delay products.
DEV can either stand for a real interface, or for 'all' or 'default'. Default also changes settings for interfaces yet to be created.

If a router decides that you are using it for a wrong purpose (ie, it needs to resend your packet on the same interface), it will send us an ICMP Redirect. This is a slight security risk however, so you may want to turn it off, or use secure redirects.

Not used very much anymore. You used to be able to give a packet a list of IP addresses it should visit on its way. Linux can be made to honor this IP option.

Accept packets with source address 0.b.c.d and destinations not to this host as local ones. It is supposed that a BOOTP relay daemon will catch and forward such packets.

The default is 0, since this feature is not implemented yet (kernel version 2.2.12).
Enable or disable IP forwarding on this interface.
See the section on Reverse Path Filtering.
If we do multicast forwarding on this interface
If you set this to 1, this interface will respond to ARP requests for addresses the kernel has routes to. Can be very useful when building 'ip pseudo bridges'. Do take care that your netmasks are very correct before enabling this! Also be aware that the rp_filter, mentioned elsewhere, also operates on ARP queries!

See the section on Reverse Path Filtering.

Accept ICMP redirect messages only for gateways listed in the default gateway list. Enabled by default.
If we send the above mentioned redirects.
If it is not set, the kernel does not assume that different subnets on this device can communicate directly. Default setting is 'yes'.
FIXME: fill this in
Dev can either stand for a real interface, or for 'all' or 'default'. Default also changes settings for interfaces yet to be created.

Maximum for random delay of answers to neighbor solicitation messages in jiffies (1/100 sec). Not yet implemented (Linux does not have anycast support yet).

Determines the number of requests to send to the user level ARP daemon. Use 0 to turn it off.

A base value used for computing the random reachable time value as specified in RFC2461.

Delay for the first time probe if the neighbor is reachable. (see gc_stale_time)

Determines how often to check for stale ARP entries. After an ARP entry is stale it will be resolved again (which is useful when an IP address migrates to another machine). When ucast_solicit is greater than 0 it first tries to send an ARP packet directly to the known host. When that fails and mcast_solicit is greater than 0, an ARP request is broadcast.

An ARP/neighbor entry is only replaced with a new one if the old one is at least locktime old. This prevents ARP cache thrashing.
Maximum number of retries for multicast solicitation.
Maximum time (real time is random [0..proxytime]) before answering an ARP request for which we have a proxy ARP entry. In some cases, this is used to prevent network flooding.
Maximum queue length of the delayed proxy arp timer. (see proxy_delay).
The time, expressed in jiffies (1/100 sec), between retransmitted Neighbor Solicitation messages. Used for address resolution and to determine if a neighbor is unreachable.
Maximum number of retries for unicast solicitation.
Maximum queue length for a pending arp request - the number of packets which are accepted from other layers while the ARP address is still being resolved.

These parameters are used to limit the warning messages written to the kernel log from the routing code. The higher the error_cost factor is, the fewer messages will be written. Error_burst controls when messages will be dropped. The default settings limit warning messages to one every five seconds.
Writing to this file results in a flush of the routing cache.
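For instance (a sketch; any value written to the flush file in this directory will do):

# echo 1 > /proc/sys/net/ipv4/route/flush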
Values to control the frequency and behavior of the garbage collection algorithm for the routing cache. This can be important when doing failover. At least gc_timeout seconds will elapse before Linux will skip to another route because the previous one has died. By default set to 300; you may want to lower it if you want a speedy failover.
Also see this post by Ard van Breemen.
See /proc/sys/net/ipv4/route/gc_elasticity.
See /proc/sys/net/ipv4/route/gc_elasticity.
See /proc/sys/net/ipv4/route/gc_elasticity.
See /proc/sys/net/ipv4/route/gc_elasticity.
Maximum delay for flushing the routing cache.
Maximum size of the routing cache. Old entries will be purged once the cache has reached this size.
FIXME: fill this in
Minimum delay for flushing the routing cache.
FIXME: fill this in
FIXME: fill this in
Factors which determine if more ICMP redirects should be sent to a specific host. No redirects will be sent once the load limit or the maximum number of redirects has been reached.
See /proc/sys/net/ipv4/route/redirect_load.
Timeout for redirects. After this period redirects will be sent again, even if this has been stopped because the load or number limit has been reached.
Should you find that you have needs not addressed by the queues mentioned earlier, the kernel contains some other more specialized queues mentioned here.

These classless queues are even simpler than pfifo_fast in that they lack the internal bands - all traffic is really equal. They have one important benefit though: they have some statistics. So even if you don't need shaping or prioritizing, you can use this qdisc to determine the backlog on your interface.
pfifo has a length measured in packets, bfifo in bytes.
Specifies the length of the queue. Measured in bytes for bfifo, in packets for pfifo. Defaults to the interface txqueuelen (see the pfifo_fast chapter) packets for pfifo, or txqueuelen*mtu bytes for bfifo.
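As a small sketch (the device and limit are arbitrary): attach a 30-packet pfifo and read back its statistics:

# tc qdisc add dev eth0 root pfifo limit 30
# tc -s qdisc ls dev eth0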
This is so theoretical that not even Alexey (the main CBQ author) claims to understand it. From his source:

David D. Clark, Scott Shenker and Lixia Zhang: Supporting Real-Time Applications in an Integrated Services Packet Network: Architecture and Mechanism.

As I understand it, the main idea is to create WFQ flows for each guaranteed service and to allocate the rest of the bandwidth to dummy flow-0. Flow-0 comprises the predictive services and the best effort traffic; it is handled by a priority scheduler with the highest priority band allocated for predictive services, and the rest --- to the best effort packets.

Note that in CSZ flows are NOT limited to their bandwidth. It is supposed that the flow passed admission control at the edge of the QoS network and it doesn't need further shaping. Any attempt to improve the flow or to shape it to a token bucket at intermediate hops will introduce undesired delays and raise jitter.

At the moment CSZ is the only scheduler that provides true guaranteed service. Other schemes (including CBQ) do not provide guaranteed delay and randomize jitter.

It does not currently seem like a good candidate to use, unless you've read and understood the article mentioned.
Esteve Camps
This text is an extract from my thesis on QoS Support in Linux, September 2000.
Source documents:
Draft-almesberger-wajhak-diffserv-linux-01.txt.
Examples in iproute2 distribution.
White Paper - QoS protocols and architectures, and IP QoS Frequently Asked Questions, both by the Quality of Service Forum.
This chapter was written by Esteve Camps
First of all, it would be a great idea for you to read the RFCs written about this (RFC2474, RFC2475, RFC2597 and RFC2598) at the IETF DiffServ working group web site and Werner Almesberger's web site (he wrote the code to support Differentiated Services on Linux).

Dsmark is a queueing discipline that offers the capabilities needed in Differentiated Services (also called DiffServ or, simply, DS). DiffServ is one of two actual QoS architectures (the other one is called Integrated Services) that is based on a value carried by packets in the DS field of the IP header.

One of the first solutions in IP designed to offer some QoS level was the Type of Service field (TOS byte) in the IP header. By changing that value, we could choose a high/low level of throughput, delay or reliability. But this didn't provide sufficient flexibility for the needs of new services (such as real-time applications, interactive applications and others). After this, new architectures appeared. One of these was DiffServ, which kept the TOS bits and renamed them the DS field.

Differentiated Services is group-oriented. I mean, we don't know anything about flows (this will be the Integrated Services purpose); we know about flow aggregations and we will apply different behaviours depending on which aggregation a packet belongs to.

When a packet arrives at an edge node (entry node to a DiffServ domain) entering a DiffServ Domain, we'll have to police, shape and/or mark those packets (marking refers to assigning a value to the DS field. It's just like the cows :-) ). This will be the mark/value that the internal/core nodes on our DiffServ Domain will look at to determine which behaviour or QoS level to apply.

As you can deduce, Differentiated Services involves a domain on which all DS rules will have to be applied. In fact you can think "I will classify all the packets entering my domain. Once they enter my domain they will be subjected to the rules that my classification dictates and every traversed node will apply that QoS level".

In fact, you can apply your own policies in your local domains, but some Service Level Agreements should be considered when connecting to other DS domains.

At this point, you maybe have a lot of questions. DiffServ is more than I've explained. In fact, you can understand that I can not summarize more than 3 RFCs in just 50 lines :-).

As the DiffServ bibliography specifies, we differentiate boundary nodes and interior nodes. These are two important points in the traffic path. Both types perform a classification when the packets arrive. Its result may be used in different places along the DS process before the packet is released to the network. It's just because of this that the diffserv code supplies a structure called sk_buff, including a new field called skb->tc_index where we'll store the result of the initial classification that may be used at several points in DS treatment.

The skb->tc_index value will be initially set by the DSMARK qdisc, retrieving it from the DS field in the IP header of every received packet. Besides, the cls_tcindex classifier will read all or part of the skb->tc_index value and use it to select classes.
But, first of all, take a look at DSMARK qdisc command and its parameters:
... dsmark indices INDICES [ default_index DEFAULT_INDEX ] [ set_tc_index ]

What do these parameters mean?

indices: size of table of (mask,value) pairs. Maximum value is 2^n, where n>=0.
Default_index: the default table entry index if classifier finds no match.
Set_tc_index: instructs dsmark discipline to retrieve the DS field and store it onto skb->tc_index.
This qdisc will apply the next steps:
If we have declared the set_tc_index option in the qdisc command, the DS field is retrieved and stored in the skb->tc_index variable.

The classifier is invoked. The classifier will be executed and it will return a class ID that will be stored in the skb->tc_index variable. If no filter matches are found, we consider the default_index option to determine the classId to store. If neither set_tc_index nor default_index has been declared, results may be unpredictable.

After being sent to the internal qdiscs, where you can reuse the result of the filter, the classid returned by the internal qdisc is stored in skb->tc_index. We will use this value later to index a mask-value table. The final result to assign to the packet will be that resulting from the following operation:

New_Ds_field = ( Old_DS_field & mask ) | value

Thus, the new value results from ANDing the old DS field with the mask, and then ORing this result with the value parameter. See the next diagram to understand the whole process:
[ASCII diagram: the incoming DS field (skb->ihp->tos) is copied into skb->tc_index when set_tc_index is declared; the tcindex filter and the internal qdisc may change skb->tc_index; on dequeue, sch_dsmark uses skb->tc_index to index the (mask,value) pairs table and rewrites the DS field of the outgoing packet.]
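A quick worked instance of the formula above, using the mask and value from the remarking example below and assuming a hypothetical incoming DS field of 0x00 - only the two least significant bits of the old DS field survive the mask, and the value is ORed in:

New_Ds_field = ( 0x00 & 0x3 ) | 0xb8 = 0xb8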
How to do marking? Just change the mask and value of the class you want to remark. See next line of code:
tc class change dev eth0 classid 1:1 dsmark mask 0x3 value 0xb8

This changes the (mask,value) pair in the hash table, to remark packets belonging to class 1:1. You have to "change" these values because of the default values that the (mask,value) pair gets initially (see the table below).

Now, we'll explain how the TC_INDEX filter works and how it fits into this. Besides, the TCINDEX filter can be used in other configurations than those including DS services.
This is the basic command to declare a TC_INDEX filter:
... tcindex [ hash SIZE ] [ mask MASK ] [ shift SHIFT ] [ pass_on | fall_through ] [ classid CLASSID ] [ police POLICE_SPEC ]
Next, we show the example used to explain TC_INDEX operation mode. Pay attention to bolded words:
tc qdisc add dev eth0 handle 1:0 root dsmark indices 64 set_tc_index
tc filter add dev eth0 parent 1:0 protocol ip prio 1 tcindex mask 0xfc shift 2
tc qdisc add dev eth0 parent 1:0 handle 2:0 cbq bandwidth 10Mbit cell 8 avpkt 1000 mpu 64
# EF traffic class
tc class add dev eth0 parent 2:0 classid 2:1 cbq bandwidth 10Mbit rate 1500Kbit avpkt 1000 prio 1 bounded isolated allot 1514 weight 1 maxburst 10
# Packet fifo qdisc for EF traffic
tc qdisc add dev eth0 parent 2:1 pfifo limit 5
tc filter add dev eth0 parent 2:0 protocol ip prio 1 handle 0x2e tcindex classid 2:1 pass_on

(This code is not complete. It's just an extract from the EFCBQ example included in the iproute2 distribution.)

First of all, suppose we receive a packet marked as EF. If you read RFC2598, you'll see that the recommended DSCP value for EF traffic is 101110. This means that the DS field will be 10111000 (remember that the less significant bits in the TOS byte are not used in DS), or 0xb8 in hexadecimal codification.
[ASCII diagram: the packet enters the DSMARK 1:0 qdisc; the TC INDEX filter applies MASK=0xfc and SHIFT=2 to skb->tc_index, yielding handle 0x2E, which selects class 2:1 inside the inner CBQ 2:0 qdisc.]
The packet then arrives with the value 0xb8 set in the DS field. As we explained before, the dsmark qdisc identified by 1:0 in the example retrieves the DS field and stores it in the skb->tc_index variable. The next step in the example corresponds to the filter associated with this qdisc (second line in the example). This performs the following operations:

Value1 = skb->tc_index & MASK
Key    = Value1 >> SHIFT
In the example, MASK=0xFC and SHIFT=2.
Value1 = 10111000 & 11111100 = 10111000
Key    = 10111000 >> 2 = 00101110 -> 0x2E in hexadecimal

The returned value will correspond to a qdisc internal filter handle (in the example, identifier 2:0). If a filter with this id exists, policing and metering conditions will be verified (in case the filter includes these) and the classid will be returned (in our example, classid 2:1) and stored in the skb->tc_index variable.

But if no filter with that identifier is found, the result will depend on the fall_through flag declaration. If it is set, the key value is returned as the classid. If not, an error is returned and processing continues with the remaining filters. Be careful if you use the fall_through flag; this should only be done if a simple relation exists between the values of the skb->tc_index variable and the class id's.

The last parameters to comment on are hash and pass_on. The first one relates to the hash table size. Pass_on will be used to indicate that if no classid equal to the result of this filter is found, the next filter should be tried. The default action is fall_through (look at the next table).

Finally, let's see which possible values can be set for all these TCINDEX parameters:
TC Name                 Value           Default
-----------------------------------------------------------------
Hash                    1...0x10000     Implementation dependent
Mask                    0...0xffff      0xffff
Shift                   0...15          0
Fall through / Pass_on  Flag            Fall_through
Classid                 Major:minor     None
Police                  .....           None
This kind of filter is very powerful. It's worth exploring all its possibilities. Besides, this filter is not only used in DiffServ configurations; you can use it as any other kind of filter.

I recommend you look at all the DiffServ examples included in the iproute2 distribution. I promise I will try to complement this text as soon as I can. Besides, all I have explained is the result of a lot of tests. I would be grateful if you would tell me if I'm wrong on any point.
All qdiscs discussed so far are egress qdiscs. Each interface however canalso have an ingress qdisc which is not used to send packetsout to the network adaptor. Instead, it allows you to apply tc filters topackets coming in over the interface, regardless of whether they have a localdestination or are to be forwarded.
As the tc filters contain a full Token Bucket Filter implementation, and arealso able to match on the kernel flow estimator, there is a lot offunctionality available. This effectively allows you to police incomingtraffic, before it even enters the IP stack.
The ingress qdisc itself does not require any parameters. It differs fromother qdiscs in that it does not occupy the root of a device. Attach it likethis:
# tc qdisc add dev eth0 ingress

This allows you to have other, sending, qdiscs on your device besides the ingress qdisc.
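As a quick illustration, here is a minimal sketch that attaches the ingress qdisc and polices all incoming IP traffic; the device name, rate and burst are assumptions, purely for illustration:

# tc qdisc add dev eth0 ingress
# tc filter add dev eth0 parent ffff: protocol ip prio 1 u32 \
  match ip src 0.0.0.0/0 \
  police rate 1mbit burst 10k drop flowid :1

Traffic arriving faster than roughly 1mbit/s is simply dropped before it enters the IP stack; everything else passes untouched.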
For a contrived example of how the ingress qdisc could be used, see the Cookbook.
This section is meant as an introduction to queueing on backbone networks, which often involves >100 megabit bandwidths and requires a different approach than your ADSL modem at home.

The normal behaviour of router queues on the Internet is called tail-drop. Tail-drop works by queueing up to a certain amount, then dropping all traffic that 'spills over'. This is very unfair, and also leads to retransmit synchronization. When retransmit synchronization occurs, the sudden burst of drops from a router that has filled up will cause a delayed burst of retransmits, which will overfill the congested router again.

In order to cope with transient congestion on links, backbone routers will often implement large queues. Unfortunately, while these queues are good for throughput, they can substantially increase latency and cause TCP connections to behave very burstily during congestion.

These issues with tail-drop are becoming increasingly troublesome on the Internet because the use of network unfriendly applications is increasing. The Linux kernel offers us RED, short for Random Early Detect, also called Random Early Drop, as that is how it works.

RED isn't a cure-all for this; applications which inappropriately fail to implement exponential backoff still get an unfair share of the bandwidth. However, with RED they do not cause as much harm to the throughput and latency of other connections.

RED statistically drops packets from flows before the queue reaches its hard limit. This causes a congested backbone link to slow down more gracefully, and prevents retransmit synchronization. It also helps TCP find its 'fair' speed faster by allowing some packets to get dropped sooner, keeping queue sizes low and latency under control. The probability of a packet being dropped from a particular connection is proportional to its bandwidth usage rather than the number of packets it transmits.
RED is a good queue for backbones, where you can't afford the complexity of per-session state tracking needed by fairness queueing.
In order to use RED, you must decide on three parameters: min, max, and burst. Min sets the minimum queue size in bytes before dropping will begin, max is a soft maximum that the algorithm will attempt to stay under, and burst sets the maximum number of packets that can 'burst through'.

You should set the min by calculating the highest acceptable base queueing latency you wish, and multiplying it by your bandwidth. For instance, on my 64kbit/s ISDN link, I might want a base queueing latency of 200ms, so I set min to 1600 bytes. Setting min too small will degrade throughput and too large will degrade latency. Setting a small min is not a replacement for reducing the MTU on a slow link to improve interactive response.

You should make max at least twice min to prevent synchronization. On slow links with small min's it might be wise to make max perhaps four or more times larger than min.

Burst controls how the RED algorithm responds to bursts. Burst must be set larger than min/avpkt. Experimentally, I've found (min+min+max)/(3*avpkt) to work OK.

Additionally, you need to set limit and avpkt. Limit is a safety value; once there are limit bytes in the queue, RED 'turns into' tail-drop. I typically set limit to eight times max. Avpkt should be your average packet size. 1000 works OK on high speed Internet links with a 1500 byte MTU.
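As a sketch only, with purely illustrative numbers for a hypothetical 8mbit link and a 50ms base queueing latency target: min = 8,000,000 bits/s * 0.05s / 8 = 50000 bytes, max = 2 * min = 100000 bytes, burst = (50000 + 50000 + 100000) / (3 * 1000) ≈ 67, limit = 8 * max = 800000 bytes. That would translate into something like:

# illustrative RED configuration for a hypothetical 8mbit link
tc qdisc add dev eth0 root red limit 800000 min 50000 max 100000 \
   avpkt 1000 burst 67 probability 0.02 bandwidth 8mbit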
Read the paper on RED queueing by Sally Floyd and Van Jacobson for technical information.
Not a lot is known about GRED. It looks like RED with several internal queues, whereby the internal queue is chosen based on the Diffserv tcindex field. According to a slide found here, it contains the capabilities of Cisco's 'Distributed Weighted RED', as well as Dave Clark's RIO.
Each virtual queue can have its own Drop Parameters specified.
FIXME: get Jamal or Werner to tell us more
This is quite a major effort by Werner Almesberger to allow you to build Virtual Circuits over TCP/IP sockets. A Virtual Circuit is a concept from ATM network theory.
For more information, see the ATM on Linux homepage.
This qdisc is not included in the standard kernels but can be downloaded from here. Currently the qdisc is only tested with Linux 2.2 kernels but it will probably work with 2.4/2.5 kernels too.
The WRR qdisc distributes bandwidth between its classes using the weighted round robin scheme. That is, like the CBQ qdisc it contains classes into which arbitrary qdiscs can be plugged. All classes which have sufficient demand will get bandwidth proportional to the weights associated with the classes. The weights can be set manually using the tc program, but they can also be made to decrease automatically for classes transferring a lot of data.
The qdisc has a built-in classifier which assigns packets coming from or sent to different machines to different classes. Either the MAC or IP and either source or destination addresses can be used. The MAC address can only be used when the Linux box is acting as an ethernet bridge, however. The classes are automatically assigned to machines based on the packets seen.
The qdisc can be very useful at sites such as dorms where a lot of unrelated individuals share an Internet connection. A set of scripts setting up a relevant behavior for such a site is a central part of the WRR distribution.
This section contains 'cookbook' entries which may help you solve problems. A cookbook is no replacement for understanding however, so try and comprehend what is going on.
You can do this in several ways. Apache has some support for this with a module, but we'll show how Linux can do this for you, and do so for other services as well. These commands are stolen from a presentation by Jamal Hadi that's referenced below.

Let's say we have two customers, with http, ftp and streaming audio, and we want to sell them a limited amount of bandwidth. We do so on the server itself.

Customer A should have at most 2 megabits, customer B has paid for 5 megabits. We separate our customers by creating virtual IP addresses on our server.

# ip address add 188.177.166.1 dev eth0
# ip address add 188.177.166.2 dev eth0
It is up to you to attach the different servers to the right IP address. All popular daemons have support for this.
We first attach a CBQ qdisc to eth0:
# tc qdisc add dev eth0 root handle 1: cbq bandwidth 10Mbit cell 8 avpkt 1000 \
  mpu 64
We then create classes for our customers:
# tc class add dev eth0 parent 1:0 classid 1:1 cbq bandwidth 10Mbit rate \
  2Mbit avpkt 1000 prio 5 bounded isolated allot 1514 weight 1 maxburst 21

# tc class add dev eth0 parent 1:0 classid 1:2 cbq bandwidth 10Mbit rate \
  5Mbit avpkt 1000 prio 5 bounded isolated allot 1514 weight 1 maxburst 21
Then we add filters for our two classes:
##FIXME: Why this line, what does it do?, what is a divisor?:
##FIXME: A divisor has something to do with a hash table, and the number of
##       buckets - ahu
# tc filter add dev eth0 parent 1:0 protocol ip prio 5 handle 1: u32 divisor 1
# tc filter add dev eth0 parent 1:0 prio 5 u32 match ip src 188.177.166.1 flowid 1:1
# tc filter add dev eth0 parent 1:0 prio 5 u32 match ip src 188.177.166.2 flowid 1:2
And we're done.
FIXME: why no token bucket filter? is there a default pfifo_fast fallbacksomewhere?
From Alexey's iproute documentation, adapted to netfilter and with more plausible paths. If you use this, take care to adjust the numbers to reasonable values for your system.

If you want to protect an entire network, skip this script, which is best suited for a single host.

It appears that you need the very latest version of the iproute2 tools to get this to work with 2.4.0.
#! /bin/sh -x
#
# sample script on using the ingress capabilities
# this script shows how one can rate limit incoming SYNs
# Useful for TCP-SYN attack protection. You can use
# IPchains to have more powerful additions to the SYN (eg
# in addition the subnet)
#
#path to various utilities;
#change to reflect yours.
#
TC=/sbin/tc
IP=/sbin/ip
IPTABLES=/sbin/iptables
INDEV=eth2
#
# tag all incoming SYN packets through $INDEV as mark value 1
############################################################
$IPTABLES -A PREROUTING -i $INDEV -t mangle -p tcp --syn \
  -j MARK --set-mark 1
############################################################
#
# install the ingress qdisc on the ingress interface
############################################################
$TC qdisc add dev $INDEV handle ffff: ingress
############################################################
#
#
# SYN packets are 40 bytes (320 bits) so three SYNs equals
# 960 bits (approximately 1kbit); so we rate limit below
# the incoming SYNs to 3/sec (not very useful really; but
# serves to show the point - JHS
############################################################
$TC filter add dev $INDEV parent ffff: protocol ip prio 50 handle 1 fw \
  police rate 1kbit burst 40 mtu 9k drop flowid :1
############################################################
#
echo "---- qdisc parameters Ingress  ----------"
$TC qdisc ls dev $INDEV
echo "---- Class parameters Ingress  ----------"
$TC class ls dev $INDEV
echo "---- filter parameters Ingress ----------"
$TC filter ls dev $INDEV parent ffff:

#deleting the ingress qdisc
#$TC qdisc del $INDEV ingress
Recently, distributed denial of service attacks have become a major nuisance on the Internet. By properly filtering and rate limiting your network, you can prevent both becoming a casualty of these attacks and becoming the cause of them.

You should filter your networks so that you do not allow packets with non-local IP source addresses to leave your network. This stops people from anonymously sending junk to the Internet.
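A minimal sketch of such an egress filter, assuming eth1 faces the Internet and 10.0.0.0/8 is your own address range (both purely illustrative):

# let only traffic sourced from our own range be forwarded towards the Internet
iptables -A FORWARD -o eth1 -s 10.0.0.0/8 -j ACCEPT
iptables -A FORWARD -o eth1 -j DROP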
Rate limiting goes much as shown earlier. To refresh your memory, our ASCIIgram again:

[The Internet] ---------- [Linux router] --- [Office+ISP]
                       eth1          eth0
We first set up the prerequisite parts:
# tc qdisc add dev eth0 root handle 10: cbq bandwidth 10Mbit avpkt 1000

# tc class add dev eth0 parent 10:0 classid 10:1 cbq bandwidth 10Mbit rate \
  10Mbit allot 1514 prio 5 maxburst 20 avpkt 1000

If you have 100Mbit, or more, interfaces, adjust these numbers. Now you need to determine how much ICMP traffic you want to allow. You can perform measurements with tcpdump, by having it write to a file for a while, and seeing how much ICMP passes your network. Do not forget to raise the snapshot length!
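For instance, such a measurement could look like this (the file name is just an example; stop the capture with ctrl-c after a while and inspect the result at your leisure):

# tcpdump -i eth0 -s 1500 -w icmp.pcap icmp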
If measurement is impractical, you might want to choose 5% of your available bandwidth. Let's set up our class:
# tc class add dev eth0 parent 10:1 classid 10:100 cbq bandwidth 10Mbit rate \
  100Kbit allot 1514 weight 800Kbit prio 5 maxburst 20 avpkt 250 \
  bounded

This limits the class to 100Kbit. Now we need a filter to assign ICMP traffic to this class:
# tc filter add dev eth0 parent 10:0 protocol ip prio 100 u32 match ip protocol 1 0xFF flowid 10:100
If lots of data is coming down your link, or going up for that matter, and you are trying to do some maintenance via telnet or ssh, this may not go too well. Other packets are blocking your keystrokes. Wouldn't it be great if there were a way for your interactive packets to sneak past the bulk traffic? Linux can do this for you!

As before, we need to handle traffic going both ways. Evidently, this works best if there are Linux boxes on both ends of your link, although other UNIX's are able to do this. Consult your local Solaris/BSD guru for this.

The standard pfifo_fast scheduler has 3 different 'bands'. Traffic in band 0 is transmitted first, after which traffic in band 1 and 2 gets considered. It is vital that our interactive traffic be in band 0!
We blatantly adapt from the (soon to be obsolete) ipchains HOWTO:
There are four seldom-used bits in the IP header, called the Type of Service (TOS) bits. They affect the way packets are treated; the four bits are "Minimum Delay", "Maximum Throughput", "Maximum Reliability" and "Minimum Cost". Only one of these bits is allowed to be set. Rob van Nieuwkerk, the author of the ipchains TOS-mangling code, puts it as follows:

Especially the "Minimum Delay" is important for me. I switch it on for "interactive" packets in my upstream (Linux) router. I'm behind a 33k6 modem link. Linux prioritizes packets in 3 queues. This way I get acceptable interactive performance while doing bulk downloads at the same time.

The most common use is to set telnet & ftp control connections to "Minimum Delay" and FTP data to "Maximum Throughput". This would be done as follows, on your upstream router:

# iptables -A PREROUTING -t mangle -p tcp --sport telnet \
  -j TOS --set-tos Minimize-Delay
# iptables -A PREROUTING -t mangle -p tcp --sport ftp \
  -j TOS --set-tos Minimize-Delay
# iptables -A PREROUTING -t mangle -p tcp --sport ftp-data \
  -j TOS --set-tos Maximize-Throughput
Now, this only works for data going from your telnet foreign host to your local computer. The other way around appears to be done for you, i.e. telnet, ssh & friends all set the TOS field on outgoing packets automatically.
Should you have an application that does not do this, you can always do it with netfilter. On your local box:
# iptables -A OUTPUT -t mangle -p tcp --dport telnet \
  -j TOS --set-tos Minimize-Delay
# iptables -A OUTPUT -t mangle -p tcp --dport ftp \
  -j TOS --set-tos Minimize-Delay
# iptables -A OUTPUT -t mangle -p tcp --dport ftp-data \
  -j TOS --set-tos Maximize-Throughput
This section was sent in by reader Ram Narula from Internet for Education(Thailand).
The regular technique for accomplishing this in Linux is probably ipchains, AFTER making sure that the "outgoing" port 80 (web) traffic gets routed through the server running squid.

There are 3 common methods to make sure "outgoing" port 80 traffic gets routed to the server running squid, and a 4th one is introduced here.

You can tell your gateway router to match packets with a destination port of 80 and have them sent to the IP address of the squid server.
BUT
This would put additional load on the router, and some commercial routers might not even support this.
Layer 4 switches can handle this without any problem.
BUT
The cost for this equipment is usually very high. A typical layer 4 switch would normally cost more than a typical router plus a good Linux server.

You can force ALL traffic through the cache server.
BUT
This is quite risky because Squid uses a lot of CPU power, which might result in slower overall network performance; or the server itself might crash, and no one on the network will be able to access the Internet if that occurs.

Using NetFilter, another technique can be implemented: use NetFilter to "mark" the packets with destination port 80 and use iproute2 to route the "mark"ed packets to the Squid server.
|----------------|
| Implementation |
|----------------|

Addresses used:
10.0.0.1    naret    (NetFilter server)
10.0.0.2    silom    (Squid server)
10.0.0.3    donmuang (Router connected to the Internet)
10.0.0.4    kaosarn  (other server on network)
10.0.0.5    RAS
10.0.0.0/24 main network
10.0.0.0/19 total network

|---------------|
|Network diagram|
|---------------|

Internet
|
donmuang
|
------------hub/switch----------
|        |        |        |
naret   silom   kaosarn   RAS etc.

First, make all traffic pass through naret by making sure it is the default gateway, except for silom. Silom's default gateway has to be donmuang (10.0.0.3), or this would create a web traffic loop.

(all servers on my network had 10.0.0.1 as their default gateway, which was the former IP address of the donmuang router, so what I did was change the IP address of donmuang to 10.0.0.3 and give naret the IP address 10.0.0.1)
Silom
-----
-setup squid and ipchains
Set up the Squid server on silom and make sure it supports transparent caching/proxying; the default port is usually 3128, so all traffic for port 80 has to be redirected to port 3128 locally. This can be done using ipchains with the following:

silom# ipchains -N allow1
silom# ipchains -A allow1 -p TCP -s 10.0.0.0/19 -d 0/0 80 -j REDIRECT 3128
silom# ipchains -I input -j allow1
Or, in netfilter lingo:
silom# iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
(note: you might have other entries as well)
For more information on setting up the Squid server, please refer to the Squid FAQ page at http://squid.nlanr.net.

Make sure ip forwarding is enabled on this server and that the default gateway for this server is the donmuang router (NOT naret).
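On silom that boils down to something like the following; the interface name eth0 is an assumption:

silom# echo 1 > /proc/sys/net/ipv4/ip_forward
silom# ip route replace default via 10.0.0.3 dev eth0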
Naret
-----
-setup iptables and iproute2
-disable icmp REDIRECT messages (if needed)

"Mark" packets with destination port 80 with value 2:

naret# iptables -A PREROUTING -i eth0 -t mangle -p tcp --dport 80 \
       -j MARK --set-mark 2

Set up iproute2 so it will route packets with "mark" 2 to silom:

naret# echo 202 www.out >> /etc/iproute2/rt_tables
naret# ip rule add fwmark 2 table www.out
naret# ip route add default via 10.0.0.2 dev eth0 table www.out
naret# ip route flush cache
If donmuang and naret are on the same subnet, naret should not send out icmp REDIRECT messages. In this case they are, so icmp REDIRECTs have to be disabled:

naret# echo 0 > /proc/sys/net/ipv4/conf/all/send_redirects
naret# echo 0 > /proc/sys/net/ipv4/conf/default/send_redirects
naret# echo 0 > /proc/sys/net/ipv4/conf/eth0/send_redirects
The setup is complete; check the configuration.

On naret:

naret# iptables -t mangle -L
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
MARK       tcp  --  anywhere             anywhere           tcp dpt:www MARK set 0x2

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

naret# ip rule ls
0:      from all lookup local
32765:  from all fwmark        2 lookup www.out
32766:  from all lookup main
32767:  from all lookup default

naret# ip route list table www.out
default via 203.114.224.8 dev eth0

naret# ip route
10.0.0.1 dev eth0  scope link
10.0.0.0/24 dev eth0  proto kernel  scope link  src 10.0.0.1
127.0.0.0/8 dev lo  scope link
default via 10.0.0.3 dev eth0

(make sure silom belongs to one of the above lines, in this case it's the line with 10.0.0.0/24)

|------|
|-DONE-|
|------|
|-----------------------------------------|
|Traffic flow diagram after implementation|
|-----------------------------------------|

INTERNET
  /\
  ||
  \/
-----------------donmuang router---------------------
  /\                                    /\         ||
  ||                                    ||         ||
  ||                                    \/         ||
naret                                  silom       ||
  *destination port 80 traffic========>(cache)     ||
  /\                                    ||         ||
  ||                                    \/         \/
  \\===================================kaosarn, RAS, etc.
Note that the network is asymmetric, as there is one extra hop on the general outgoing path.

Here is a rundown of packets traversing the network from kaosarn to and from the Internet.

web/http traffic:
kaosarn http request->naret->silom->donmuang->internet
http replies from Internet->donmuang->silom->kaosarn

non-web/http traffic (e.g. telnet):
kaosarn outgoing data->naret->donmuang->internet
incoming data from Internet->donmuang->kaosarn
For sending bulk data, the Internet generally works better when using larger packets. Each packet implies a routing decision; when sending a 1 megabyte file, this can either mean around 700 packets when using packets that are as large as possible, or 4000 if using the smallest default.

However, not all parts of the Internet support full 1460 bytes of payload per packet. It is therefore necessary to try and find the largest packet that will 'fit', in order to optimize a connection.
This process is called 'Path MTU Discovery', where MTU stands for 'MaximumTransfer Unit.'
When a router encounters a packet that's too big to send in one piece, AND it has been flagged with the "Don't Fragment" bit, it returns an ICMP message stating that it was forced to drop a packet because of this. The sending host acts on this hint by sending smaller packets, and by iterating it can find the optimum packet size for a connection over a certain path.

This used to work well until the Internet was discovered by hooligans who do their best to disrupt communications. This in turn led administrators to either block or shape ICMP traffic in a misguided attempt to improve the security or robustness of their Internet service.

What has happened now is that Path MTU Discovery is working less and less well and fails for certain routes, which leads to strange TCP/IP sessions which die after a while.

Although I have no proof for this, two sites who I used to have this problem with both run Alteon Acedirectors before the affected systems - perhaps somebody more knowledgeable can provide clues as to why this happens.

When you encounter sites that suffer from this problem, you can disable Path MTU discovery by setting it manually. Koos van den Hout, slightly edited, writes:

The following problem: I set the mtu/mru of my leased line running ppp to 296 because it's only 33k6 and I cannot influence the queueing on the other side. At 296, the response to a key press is within a reasonable time frame.
And, on my side I have a masqrouter running (of course) Linux.
Recently I split 'server' and 'router' so most applications are run on a different machine than the routing happens on.

I then had trouble logging into irc. Big panic! Some digging did find out that I got connected to irc, even showed up as 'connected' on irc but I did not receive the motd from irc. I checked what could be wrong and noted that I already had some previous trouble reaching certain websites related to the MTU, since I had no trouble reaching them when the MTU was 1500, the problem just showed when the MTU was set to 296. Since irc servers block about every kind of traffic not needed for their immediate operation, they also block icmp.

I managed to convince the operators of a webserver that this was the cause of a problem, but the irc server operators were not going to fix this.

So, I had to make sure outgoing masqueraded traffic started with the lower mtu of the outside link. But I want local ethernet traffic to have the normal mtu (for things like nfs traffic).
Solution:
ip route add default via 10.0.0.1 mtu 296

(10.0.0.1 being the default gateway, the inside address of the masquerading router)

In general, it is possible to override PMTU Discovery by setting specific routes. For example, if only a certain subnet is giving problems, this should help:
ip route add 195.96.96.0/24 via 10.0.0.1 mtu 1000
As explained above, Path MTU Discovery doesn't work as well as it should anymore. If you know for a fact that a hop somewhere in your network has a limited (<1500) MTU, you cannot rely on PMTU Discovery finding this out.

Besides MTU, there is yet another way to set the maximum packet size, the so called Maximum Segment Size. This is a field in the TCP Options part of a SYN packet.

Recent Linux kernels, and a few PPPoE drivers (notably, the excellent Roaring Penguin one), feature the possibility to 'clamp the MSS'.

The good thing about this is that by setting the MSS value, you are telling the remote side unequivocally 'do not ever try to send me packets bigger than this value'. No ICMP traffic is needed to get this to work.

The bad thing is that it's an obvious hack - it breaks 'end to end' by modifying packets. Having said that, we use this trick in many places and it works like a charm.

In order for this to work you need at least iptables-1.2.1a and Linux 2.4.3 or higher. The basic command line is:
# iptables -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
This calculates the proper MSS for your link. If you are feeling brave, or think that you know best, you can also do something like this:
# iptables -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 128
This sets the MSS of passing SYN packets to 128. Use this if you have VoIP with tiny packets, and huge http packets which are causing chopping in your voice calls.

Note: This script has recently been upgraded and previously only worked for Linux clients in your network! So you might want to update if you have Windows machines or Macs in your network and noticed that they were not able to download faster while others were uploading.
I attempted to create the holy grail:
This means that downloading or uploading files should not disturb SSH or even telnet. These are the most important things, even 200ms latency is sluggish to work over.
Even though http is 'bulk' traffic, other traffic should not drown it out too much.
This is a much observed phenomenon where upstream traffic simply destroys download speed.
The next section explains in depth what causes the delays, and how we can fix them. You can safely skip it and head straight for the script if you don't care how the magic is performed.

ISPs know that they are benchmarked solely on how fast people can download. Besides available bandwidth, download speed is influenced heavily by packet loss, which seriously hampers TCP/IP performance. Large queues can help prevent packet loss, and speed up downloads. So ISPs configure large queues.

These large queues however damage interactivity. A keystroke must first travel the upstream queue, which may be seconds (!) long and go to your remote host. It is then displayed, which leads to a packet coming back, which must then traverse the downstream queue, located at your ISP, before it appears on your screen.

This HOWTO teaches you how to mangle and process the queue in many ways, but sadly, not all queues are accessible to us. The queue over at the ISP is completely off-limits, whereas the upstream queue probably lives inside your cable modem or DSL device. You may or may not be able to configure it. Most probably not.

So, what next? As we can't control either of those queues, they must be eliminated, and moved to your Linux router. Luckily this is possible.

By limiting our upload speed to slightly less than the truly available rate, no queues are built up in our modem. The queue is now moved to Linux.

This is slightly trickier as we can't really influence how fast the internet ships us data. We can however drop packets that are coming in too fast, which causes TCP/IP to slow down to just the rate we want. Because we don't want to drop traffic unnecessarily, we configure a 'burst' size we allow at higher speed.

Now, once we have done this, we have eliminated the downstream queue totally (except for short bursts), and gain the ability to manage the upstream queue with all the power Linux offers.

What remains to be done is to make sure interactive traffic jumps to the front of the upstream queue. To make sure that uploads don't hurt downloads, we also move ACK packets to the front of the queue. This is what normally causes the huge slowdown observed when generating bulk traffic both ways. The ACKnowledgements for downstream traffic must compete with upstream traffic, and get delayed in the process.

If we do all this we get the following measurements using an excellent ADSL connection from xs4all in the Netherlands:
Baseline latency:
round-trip min/avg/max = 14.4/17.1/21.7 ms

Without traffic conditioner, while downloading:
round-trip min/avg/max = 560.9/573.6/586.4 ms

Without traffic conditioner, while uploading:
round-trip min/avg/max = 2041.4/2332.1/2427.6 ms

With conditioner, during 220kbit/s upload:
round-trip min/avg/max = 15.7/51.8/79.9 ms

With conditioner, during 850kbit/s download:
round-trip min/avg/max = 20.4/46.9/74.0 ms

When uploading, downloads proceed at ~80% of the available speed. Uploads at around 90%. Latency then jumps to 850 ms, still figuring out why.
What you can expect from this script depends a lot on your actual uplink speed. When uploading at full speed, there will always be a single packet ahead of your keystroke. That is the lower limit to the latency you can achieve - divide your MTU by your upstream speed to calculate. Typical values will be somewhat higher than that. Lower your MTU for better effects!
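A quick worked example with illustrative numbers: with a 1500 byte MTU and a 220kbit/s uplink, the floor is 1500 * 8 / 220000 ≈ 0.055s, so roughly 55ms; lowering the MTU to 296 bytes brings it down to about 11ms.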
Next, two versions of this script, one with Devik's excellent HTB, the other with CBQ which is in each Linux kernel, unlike HTB. Both are tested and work well.

Works on all kernels. Within the CBQ qdisc we place two Stochastic Fairness Queues that make sure that multiple bulk streams don't drown each other out.

Downstream traffic is policed using a tc filter containing a Token Bucket Filter.

You might improve on this script by adding 'bounded' to the line that starts with 'tc class add .. classid 1:20'. If you lowered your MTU, also lower the allot & avpkt numbers!
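For reference, with 'bounded' added that class line would look something like this (just a sketch of the modification, not part of the script below):

tc class add dev $DEV parent 1:1 classid 1:20 cbq rate $[9*$UPLINK/10]kbit \
   allot 1600 prio 2 avpkt 1000 bounded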
#!/bin/bash

# The Ultimate Setup For Your Internet Connection At Home
#
#
# Set the following values to somewhat less than your actual download
# and uplink speed. In kilobits
DOWNLINK=800
UPLINK=220
DEV=ppp0

# clean existing down- and uplink qdiscs, hide errors
tc qdisc del dev $DEV root    2> /dev/null > /dev/null
tc qdisc del dev $DEV ingress 2> /dev/null > /dev/null

###### uplink

# install root CBQ

tc qdisc add dev $DEV root handle 1: cbq avpkt 1000 bandwidth 10mbit

# shape everything at $UPLINK speed - this prevents huge queues in your
# DSL modem which destroy latency:
# main class

tc class add dev $DEV parent 1: classid 1:1 cbq rate ${UPLINK}kbit \
   allot 1500 prio 5 bounded isolated

# high prio class 1:10:

tc class add dev $DEV parent 1:1 classid 1:10 cbq rate ${UPLINK}kbit \
   allot 1600 prio 1 avpkt 1000

# bulk and default class 1:20 - gets slightly less traffic,
# and a lower priority:

tc class add dev $DEV parent 1:1 classid 1:20 cbq rate $[9*$UPLINK/10]kbit \
   allot 1600 prio 2 avpkt 1000

# both get Stochastic Fairness:
tc qdisc add dev $DEV parent 1:10 handle 10: sfq perturb 10
tc qdisc add dev $DEV parent 1:20 handle 20: sfq perturb 10

# start filters
# TOS Minimum Delay (ssh, NOT scp) in 1:10:
tc filter add dev $DEV parent 1:0 protocol ip prio 10 u32 \
   match ip tos 0x10 0xff flowid 1:10

# ICMP (ip protocol 1) in the interactive class 1:10 so we
# can do measurements & impress our friends:
tc filter add dev $DEV parent 1:0 protocol ip prio 11 u32 \
   match ip protocol 1 0xff flowid 1:10

# To speed up downloads while an upload is going on, put ACK packets in
# the interactive class:

tc filter add dev $DEV parent 1: protocol ip prio 12 u32 \
   match ip protocol 6 0xff \
   match u8 0x05 0x0f at 0 \
   match u16 0x0000 0xffc0 at 2 \
   match u8 0x10 0xff at 33 \
   flowid 1:10

# rest is 'non-interactive' ie 'bulk' and ends up in 1:20

tc filter add dev $DEV parent 1: protocol ip prio 13 u32 \
   match ip dst 0.0.0.0/0 flowid 1:20

########## downlink #############
# slow downloads down to somewhat less than the real speed to prevent
# queuing at our ISP. Tune to see how high you can set it.
# ISPs tend to have *huge* queues to make sure big downloads are fast
#
# attach ingress policer:

tc qdisc add dev $DEV handle ffff: ingress

# filter *everything* to it (0.0.0.0/0), drop everything that's
# coming in too fast:

tc filter add dev $DEV parent ffff: protocol ip prio 50 u32 match ip src \
   0.0.0.0/0 police rate ${DOWNLINK}kbit burst 10k drop flowid :1

If you want this script to be run by ppp on connect, copy it to /etc/ppp/ip-up.d.
If the last two lines give an error, update your tc tool to a newer version!
The following script achieves all goals using the wonderful HTB queue, see the relevant chapter. Well worth patching your kernel for!
#!/bin/bash

# The Ultimate Setup For Your Internet Connection At Home
#
#
# Set the following values to somewhat less than your actual download
# and uplink speed. In kilobits
DOWNLINK=800
UPLINK=220
DEV=ppp0

# clean existing down- and uplink qdiscs, hide errors
tc qdisc del dev $DEV root    2> /dev/null > /dev/null
tc qdisc del dev $DEV ingress 2> /dev/null > /dev/null

###### uplink

# install root HTB, point default traffic to 1:20:

tc qdisc add dev $DEV root handle 1: htb default 20

# shape everything at $UPLINK speed - this prevents huge queues in your
# DSL modem which destroy latency:

tc class add dev $DEV parent 1: classid 1:1 htb rate ${UPLINK}kbit burst 6k

# high prio class 1:10:

tc class add dev $DEV parent 1:1 classid 1:10 htb rate ${UPLINK}kbit \
   burst 6k prio 1

# bulk & default class 1:20 - gets slightly less traffic,
# and a lower priority:

tc class add dev $DEV parent 1:1 classid 1:20 htb rate $[9*$UPLINK/10]kbit \
   burst 6k prio 2

# both get Stochastic Fairness:
tc qdisc add dev $DEV parent 1:10 handle 10: sfq perturb 10
tc qdisc add dev $DEV parent 1:20 handle 20: sfq perturb 10

# TOS Minimum Delay (ssh, NOT scp) in 1:10:
tc filter add dev $DEV parent 1:0 protocol ip prio 10 u32 \
   match ip tos 0x10 0xff flowid 1:10

# ICMP (ip protocol 1) in the interactive class 1:10 so we
# can do measurements & impress our friends:
tc filter add dev $DEV parent 1:0 protocol ip prio 10 u32 \
   match ip protocol 1 0xff flowid 1:10

# To speed up downloads while an upload is going on, put ACK packets in
# the interactive class:

tc filter add dev $DEV parent 1: protocol ip prio 10 u32 \
   match ip protocol 6 0xff \
   match u8 0x05 0x0f at 0 \
   match u16 0x0000 0xffc0 at 2 \
   match u8 0x10 0xff at 33 \
   flowid 1:10

# rest is 'non-interactive' ie 'bulk' and ends up in 1:20

########## downlink #############
# slow downloads down to somewhat less than the real speed to prevent
# queuing at our ISP. Tune to see how high you can set it.
# ISPs tend to have *huge* queues to make sure big downloads are fast
#
# attach ingress policer:

tc qdisc add dev $DEV handle ffff: ingress

# filter *everything* to it (0.0.0.0/0), drop everything that's
# coming in too fast:

tc filter add dev $DEV parent ffff: protocol ip prio 50 u32 match ip src \
   0.0.0.0/0 police rate ${DOWNLINK}kbit burst 10k drop flowid :1
If you want this script to be run by ppp on connect, copy it to /etc/ppp/ip-up.d.
If the last two lines give an error, update your tc tool to a newer version!
Although this is described in stupendous detail elsewhere and in our manpages, this question gets asked a lot and happily there is a simple answer that does not need full comprehension of traffic control.
This three line script does the trick:
tc qdisc add dev $DEV root handle 1: cbq avpkt 1000 bandwidth 10mbit

tc class add dev $DEV parent 1: classid 1:1 cbq rate 512kbit \
   allot 1500 prio 5 bounded isolated

tc filter add dev $DEV parent 1: protocol ip prio 16 u32 \
   match ip dst 195.96.96.97 flowid 1:1

The first line installs a class based queue on your interface, and tells the kernel that for calculations, it can be assumed to be a 10mbit interface. If you get this wrong, no real harm is done. But getting it right will make everything more precise.

The second line creates a 512kbit class with some reasonable defaults. For details, see the cbq manpages and Chapter 9.
The last line tells which traffic should go to the shaped class. Traffic not matched by this rule is NOT shaped. To make more complicated matches (subnets, source ports, destination ports), see Section 9.6.2.
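A small sketch of such a match, sending only traffic from our local web server (TCP source port 80) towards an example subnet into the shaped class; the subnet and port are purely illustrative:

tc filter add dev $DEV parent 1: protocol ip prio 16 u32 \
   match ip dst 195.96.96.0/24 \
   match ip sport 80 0xffff \
   flowid 1:1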
If you changed anything and want to reload the script, execute 'tc qdisc del dev $DEV root' to clean up your existing configuration.
The script can further be improved by adding a last optional line 'tc qdisc add dev $DEV parent 1:1 sfq perturb 10'. See Section 9.2.3 for details on what this does.
I'm Pedro Larroy. Here I describe a common set up where we have lots of users in a private network connected to the Internet through a Linux router with a public IP address that is doing network address translation (NAT). I use this QoS setup to give Internet access to 198 users in a university dorm, where I live and am the netadmin. The users here make heavy use of peer to peer programs, so proper traffic control is a must. I hope this serves as a practical example for all interested lartc readers.

At first I take a practical approach with step by step configuration, and at the end I explain how to make the process automatic at boot time. The network to which this example applies is a private LAN connected to the Internet through a Linux router which has one public IP address. Extending it to several public IP addresses should be very easy; a couple of iptables rules would have to be added. In order to get things working we need:
If you use 2.4.18 you will have to apply the HTB patch available here.

Also ensure the "tc" binary is HTB ready; a precompiled binary is distributed with HTB.
First we set up some qdiscs in which we will classify the traffic. We create an htb qdisc with 6 classes with ascending priority. Then we have classes that will always get their allocated rate, but can use the unused bandwidth that other classes don't need. Recall that classes with higher priority (i.e. with a lower prio number) will get the excess bandwidth allocated first. Our connection is 2Mbit down, 300kbit/s up ADSL. I use 240kbit/s as the ceil rate just because it's the highest I can set it before latency starts to grow, due to buffers filling up somewhere between us and the remote hosts. This parameter should be tuned experimentally, raising and lowering it while observing latency between some nearby hosts.

Adjust CEIL to 75% of your upstream bandwidth limit for now, and where I use eth0, you should use the interface which has a public Internet address. To begin our example, execute the following in a root shell:

CEIL=240
tc qdisc add dev eth0 root handle 1: htb default 15
tc class add dev eth0 parent 1: classid 1:1 htb rate ${CEIL}kbit ceil ${CEIL}kbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 80kbit ceil 80kbit prio 0
tc class add dev eth0 parent 1:1 classid 1:11 htb rate 80kbit ceil ${CEIL}kbit prio 1
tc class add dev eth0 parent 1:1 classid 1:12 htb rate 20kbit ceil ${CEIL}kbit prio 2
tc class add dev eth0 parent 1:1 classid 1:13 htb rate 20kbit ceil ${CEIL}kbit prio 2
tc class add dev eth0 parent 1:1 classid 1:14 htb rate 10kbit ceil ${CEIL}kbit prio 3
tc class add dev eth0 parent 1:1 classid 1:15 htb rate 30kbit ceil ${CEIL}kbit prio 3
tc qdisc add dev eth0 parent 1:12 handle 120: sfq perturb 10
tc qdisc add dev eth0 parent 1:13 handle 130: sfq perturb 10
tc qdisc add dev eth0 parent 1:14 handle 140: sfq perturb 10
tc qdisc add dev eth0 parent 1:15 handle 150: sfq perturb 10

We have just created an HTB tree that is one level deep. Something like this:
            +---------+
            | root 1: |
            +---------+
                 |
+---------------------------------------+
|               class 1:1               |
+---------------------------------------+
  |      |      |      |      |      |
+----+ +----+ +----+ +----+ +----+ +----+
|1:10| |1:11| |1:12| |1:13| |1:14| |1:15|
+----+ +----+ +----+ +----+ +----+ +----+
This is the highest priority class. The packets in this class will have the lowest delay and will get any excess bandwidth first, so it's a good idea to limit the ceil rate of this class. We will send through this class packets that benefit from low delay, such as interactive traffic: ssh, telnet, dns, quake3, irc, and packets with the SYN flag.
Here we have the first class in which we can start to put bulk traffic. In my example I have traffic from the local web server and requests for web pages: source port 80, and destination port 80 respectively.
In this class I will put traffic with Maximize-Throughput TOS bit set and the rest of the traffic that goes from local processes on the router to the Internet. So the following classes will only have traffic that is "routed through" the box.
This class is for the traffic of other NATed machines that need higher priority in their bulk traffic.
Here goes mail traffic (SMTP,pop3...) and packets with Minimize-Cost TOS bit set.
And finally here we have bulk traffic from the NATed machines behind the router. All kazaa, edonkey, and others will go here, in order to not interfere with other services.
We have created the qdisc setup but no packet classification has been made, so now all outgoing packets are going out in class 1:15 ( because we used: tc qdisc add dev eth0 root handle 1: htb default 15 ). Now we need to tell which packets go where. This is the most important part.
Now we set the filters so we can classify the packets with iptables. I really prefer to do it with iptables, because they are very flexible and you have packet count for each rule. Also with the RETURN target packets don't need to traverse all rules. We execute the following commands:
tc filter add dev eth0 parent 1:0 protocol ip prio 1 handle 1 fw classid 1:10
tc filter add dev eth0 parent 1:0 protocol ip prio 2 handle 2 fw classid 1:11
tc filter add dev eth0 parent 1:0 protocol ip prio 3 handle 3 fw classid 1:12
tc filter add dev eth0 parent 1:0 protocol ip prio 4 handle 4 fw classid 1:13
tc filter add dev eth0 parent 1:0 protocol ip prio 5 handle 5 fw classid 1:14
tc filter add dev eth0 parent 1:0 protocol ip prio 6 handle 6 fw classid 1:15

We have just told the kernel that packets that have a specific FWMARK value ( handle x fw ) go in the specified class ( classid x:x ). Next you will see how to mark packets with iptables.
First you have to understand how packets traverse the filters with iptables:

         +------------+                  +---------+             +-------------+
Packet  -| PREROUTING |--- routing ------| FORWARD |-----+-------| POSTROUTING |- Packets
input    +------------+    decision      +---------+     |       +-------------+  out
               |                                          |
          +-------+                                  +--------+
          | INPUT |---- Local process ---------------| OUTPUT |
          +-------+                                  +--------+

I assume you have all your tables created and with default policy ACCEPT ( -P ACCEPT ); if you haven't poked with iptables yet, it should be OK by default. Our private network is a class B with address 172.17.0.0/16 and our public IP is 212.170.21.172.
Next we instruct the kernel to actually do NAT, so clients in the private network can start talking to the outside.
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -s 172.17.0.0/255.255.0.0 -o eth0 -j SNAT --to-source 212.170.21.172

Now check that packets are flowing through 1:15:
tc -s class show dev eth0
You can start marking packets by adding rules to the PREROUTING chain in the mangle table:

iptables -t mangle -A PREROUTING -p icmp -j MARK --set-mark 0x1
iptables -t mangle -A PREROUTING -p icmp -j RETURN

Now you should be able to see the packet count increasing when pinging from machines within the private network to some site on the Internet. Check the packet count increasing in 1:10:

tc -s class show dev eth0

We have done a -j RETURN so packets don't traverse all rules. Icmp packets won't match other rules below RETURN. Keep that in mind. Now we can start adding more rules; let's do proper TOS handling:
iptables -t mangle -A PREROUTING -m tos --tos Minimize-Delay -j MARK --set-mark 0x1
iptables -t mangle -A PREROUTING -m tos --tos Minimize-Delay -j RETURN
iptables -t mangle -A PREROUTING -m tos --tos Minimize-Cost -j MARK --set-mark 0x5
iptables -t mangle -A PREROUTING -m tos --tos Minimize-Cost -j RETURN
iptables -t mangle -A PREROUTING -m tos --tos Maximize-Throughput -j MARK --set-mark 0x6
iptables -t mangle -A PREROUTING -m tos --tos Maximize-Throughput -j RETURN

Now prioritize ssh packets:

iptables -t mangle -A PREROUTING -p tcp -m tcp --sport 22 -j MARK --set-mark 0x1
iptables -t mangle -A PREROUTING -p tcp -m tcp --sport 22 -j RETURN

A good idea is to prioritize packets that begin tcp connections, those with the SYN flag set:

iptables -t mangle -I PREROUTING -p tcp -m tcp --tcp-flags SYN,RST,ACK SYN -j MARK --set-mark 0x1
iptables -t mangle -I PREROUTING -p tcp -m tcp --tcp-flags SYN,RST,ACK SYN -j RETURN

And so on. When we are done adding rules to PREROUTING in mangle, we terminate the PREROUTING chain with:

iptables -t mangle -A PREROUTING -j MARK --set-mark 0x6

So previously unmarked traffic goes into 1:15. In fact this last step is unnecessary, since the default class was 1:15, but I mark it anyway in order to be consistent with the whole setup, and furthermore it's useful to see the counter on that rule.
It is a good idea to do the same in the OUTPUT chain, so repeat those commands with -A OUTPUT instead of PREROUTING ( s/PREROUTING/OUTPUT/ ). Then traffic generated locally (on the Linux router) will also be classified. I finish the OUTPUT chain with -j MARK --set-mark 0x3 so local traffic has higher priority.
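A minimal sketch of what the tail of that OUTPUT chain might look like (the rules above it being the PREROUTING rules repeated with -A OUTPUT):

# ... the same marking rules as in PREROUTING, repeated with -A OUTPUT ...
iptables -t mangle -A OUTPUT -p icmp -j MARK --set-mark 0x1
iptables -t mangle -A OUTPUT -p icmp -j RETURN
# terminate with a higher-priority default mark for locally generated traffic
iptables -t mangle -A OUTPUT -j MARK --set-mark 0x3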
Now we have all our setup working. Take time to look at the graphs, watching where your bandwidth is spent and how you want it spent. I did that for many hours, and finally got the Internet connection working really well. Otherwise you will get continuous timeouts and nearly zero allotment of bandwidth to newly created tcp connections.

If you find that some classes are full most of the time it would be a good idea to attach another queueing discipline to them so bandwidth sharing is more fair:

tc qdisc add dev eth0 parent 1:13 handle 130: sfq perturb 10
tc qdisc add dev eth0 parent 1:14 handle 140: sfq perturb 10
tc qdisc add dev eth0 parent 1:15 handle 150: sfq perturb 10

It sure can be done in many ways. In my case, I have a shell script in /etc/init.d/packetfilter that accepts [start | stop | stop-tables | start-tables | reload-tables]; it configures qdiscs and loads needed kernel modules, so it behaves much like a daemon. The same script loads iptables rules from /etc/network/iptables-rules, which can be saved with iptables-save and restored with iptables-restore.
Bridges are devices which can be installed in a network without any reconfiguration. A network switch is basically a many-port bridge. A bridge is often a 2-port switch. Linux does however support multiple interfaces in a bridge, making it a true switch.

Bridges are often deployed when confronted with a broken network that needs to be fixed without any alterations. Because the bridge is a layer-2 device, one layer below IP, routers and servers are not aware of its existence. This means that you can transparently block or modify certain packets, or do shaping.

Another good thing is that a bridge can often be replaced by a cross cable or a hub, should it break down.

The bad news is that a bridge can cause great confusion unless it is very well documented. It does not appear in traceroutes, but somehow packets disappear or get changed from point A to point B ('this network is HAUNTED!'). You should also wonder if an organization that 'does not want to change anything' is doing the right thing.

The Linux 2.4/2.5 bridge is documented on this page.

As of Linux 2.4.20, bridging and iptables do not 'see' each other without help. If you bridge packets from eth0 to eth1, they do not 'pass' by iptables. This means that you cannot do filtering, or NAT or mangling or whatever. In Linux 2.5.45 and higher, this is fixed.
You may also see 'ebtables' mentioned, which is yet another project - it allows you to do wild things such as MACNAT and 'brouting'. It is truly scary.
This does work as advertised. Be sure to figure out which side each interface is on, otherwise you might be shaping outbound traffic in your internal interface, which won't work. Use tcpdump if needed.

If you just want to implement a Pseudo-bridge, skip down a few sections to 'Implementing it', but it is wise to read a bit about how it works in practice.
A Pseudo-bridge works a bit differently. By default, a bridge passes packets unaltered from one interface to the other. It only looks at the hardware address of packets to determine what goes where. This in turn means that you can bridge traffic that Linux does not understand, as long as it has a hardware address.
A 'Pseudo-bridge' works differently and looks more like a hidden router than a bridge, but like a bridge, it has little impact on network design.

An advantage of the fact that it is not a bridge lies in the fact that packets really pass through the kernel, and can be filtered, changed, redirected or rerouted.

A real bridge can also be made to perform these feats, but it needs special code, like the Ethernet Frame Diverter, or the above mentioned patch.

Another advantage of a pseudo-bridge is that it does not pass packets it does not understand - thus cleaning your network of a lot of cruft. In cases where you need this cruft (like SAP packets, or Netbeui), use a real bridge.

When a host wants to talk to another host on the same physical network segment, it sends out an Address Resolution Protocol packet, which, somewhat simplified, reads like this 'who has 10.0.0.1, tell 10.0.0.7'. In response to this, 10.0.0.1 replies with a short 'here' packet.

10.0.0.7 then sends packets to the hardware address mentioned in the 'here' packet. It caches this hardware address for a relatively long time, and after the cache expires, it re-asks the question.
When building a Pseudo-bridge, we instruct the bridge to reply to these ARP packets, which causes the hosts in the network to send their packets to the bridge. The bridge then processes these packets, and sends them to the relevant interface.
So, in short, whenever a host on one side of the bridge asks for the hardware address of a host on the other, the bridge replies with a packet that says 'hand it to me'.

This way, all data traffic gets transmitted to the right place, and always passes through the bridge.

In the bad old days, it used to be possible to instruct the Linux Kernel to perform 'proxy-ARP' for just any subnet. So, to configure a pseudo-bridge, you would have to specify both the proper routes to both sides of the bridge AND create matching proxy-ARP rules. This is bad in that it requires a lot of typing, but also because it easily allows you to make mistakes which make your bridge respond to ARP queries for networks it does not know how to route.

With Linux 2.4/2.5 (and possibly 2.2), this possibility has been withdrawn and has been replaced by a flag in the /proc directory, called 'proxy_arp'. The procedure for building a pseudo-bridge is then:

Assign an IP address to both interfaces, the 'left' and the 'right' one

Create routes so your machine knows which hosts reside on the left, and which on the right

Turn on proxy-ARP on both interfaces, echo 1 > /proc/sys/net/ipv4/conf/ethL/proxy_arp, echo 1 > /proc/sys/net/ipv4/conf/ethR/proxy_arp, where L and R stand for the numbers of your interfaces on the left and on the right side
Also, do not forget to turn on the ip_forwarding flag! When converting from a true bridge, you may find that this flag was turned off as it is not needed when bridging. A combined sketch of these steps follows below.
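This is a minimal sketch for a 10.0.0.0/24 network physically split in two halves; all addresses, masks and interface names are assumptions, purely illustrative (here the /25 masks create the per-side routes automatically):

# 'left' segment (hosts 10.0.0.1 - 10.0.0.125) behind eth0
ip addr add 10.0.0.126/25 dev eth0
# 'right' segment (hosts 10.0.0.129 - 10.0.0.253) behind eth1
ip addr add 10.0.0.254/25 dev eth1
# answer ARP queries on behalf of the other side
echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
# and turn on forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward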
Another thing you might note when converting is that you need to clear the arp cache of computers in the network - the arp cache might contain old pre-bridge hardware addresses which are no longer correct.

On a Cisco, this is done using the command 'clear arp-cache', under Linux, use 'arp -d ip.address'. You can also wait for the cache to expire manually, which can take rather long.

You can speed this up using the wonderful 'arping' tool, which on many distributions is part of the 'iputils' package. Using 'arping' you can send out unsolicited ARP messages so as to update remote arp caches.

This is a very powerful technique that is also used by 'black hats' to subvert your routing!
On Linux 2.4, you may need to execute 'echo 1 > /proc/sys/net/ipv4/ip_nonlocal_bind' before being able to send out unsolicited ARP messages!
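For example, to announce a fresh ARP mapping for an address onto a segment (the address and interface are purely illustrative):

# arping -U -I eth0 -c 3 10.0.0.254

This tells hosts on eth0's segment that 10.0.0.254 now lives at eth0's hardware address, updating their arp caches.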
You may also discover that your network was misconfigured if you are/were of the habit of specifying routes without netmasks. To explain, some versions of route may have guessed your netmask right in the past, or guessed wrong without you noticing. When doing surgical routing like described above, it is *vital* that you check your netmasks!

Once your network starts to get really big, or you start to consider 'the internet' as your network, you need tools which dynamically route your data. Sites are often connected to each other with multiple links, and more are popping up all the time.
The Internet has mostly standardized on OSPF (RFC 2328) and BGP4 (RFC 1771).Linux supports both, by way of gated and zebra.
While currently not within the scope of this document, we would like to point you to the definitive works:
Overview:
Cisco Systems, Designing large-scale IP Internetworks
For OSPF:
Moy, John T. "OSPF: The Anatomy of an Internet Routing Protocol", Addison Wesley, Reading, MA, 1998.

Halabi has also written a good guide to OSPF routing design, but this appears to have been dropped from the Cisco web site.
For BGP:
Halabi, Bassam, "Internet Routing Architectures", Cisco Press (New Riders Publishing), Indianapolis, IN, 1997.
also
Cisco Systems
Using the Border Gateway Protocol for interdomain routing
Although the examples are Cisco-specific, they are remarkably similar to the configuration language in Zebra :-)
Please let me know if any of the following information is not accurate or if you have any suggestions. Zebra is a great dynamic routing software package written by Kunihiro Ishiguro, Toshiaki Takada and Yasuhiro Ohara. With Zebra, setting up OSPF is fast and simple, but in practice there are a lot of parameters to tune if you have very specific needs. OSPF stands for Open Shortest Path First, and some of its principal features are:
Networks are grouped by areas, which are interconnected by a backbone area which will be designated as area 0. All traffic goes through area 0, and all the routers in area 0 have routing information about all the other areas.
Routes are propagated very fast, compared with RIP, for example.
Uses multicasting instead of broadcasting, so it doesn't flood other hosts with routing information that may not be of interest for them, thus reducing network overhead. Also, Internal Routers (those which only have interfaces in one area) don't have routing information about other areas. Routers with interfaces in more than one area are called Area Border Routers, and hold topological information about the areas they are connected to.
OSPF is based on Dijkstra's Shortest Path First algorithm, which is expensive compared to other routing algorithms. But it really is not that bad, since the shortest path is only calculated per area; for small to medium sized networks this won't be an issue, and you won't even notice.

OSPF takes into account the special characteristics of networks and interfaces, such as bandwidth, link failures, and monetary cost.

OSPF is an open protocol, and Zebra is GPL software, which has obvious advantages over proprietary software and protocols.
Compiled with CONFIG_NETLINK_DEV and CONFIG_IP_MULTICAST (I am not sure if anything more is also needed).
Get it with your favorite package manager or from http://www.zebra.org.
Let's take this network as an example:
----------------------------------------------------
| 192.168.0.0/24                                   |
|                                                  |
|      Area 0    100BaseTX  Switched               |
|     Backbone     Ethernet                        |
----------------------------------------------------
 |            |              |              |
 |eth1        |eth1          |eth0          |
 |100BaseTX   |100BaseTX     |100BaseTX     |100BaseTX
 |.1          |.2            |.253          |
---------  ------------  -----------  ----------------
|R Omega|  |R Atlantis|  |R Legolas|  |R Frodo       |
---------  ------------  -----------  ----------------
 |eth0        |eth0          |              |       |
 |            |              |              |       |
 |2MbDSL/ATM  |100BaseTX     |10BaseT       |10BaseT |10BaseT
------------ ------------------------------------ -------------------------------
| Internet | | 172.17.0.0/16       Area 1        | | 192.168.1.0/24 wlan  Area 2 |
------------ | Student network (dorm)            | |      barcelonawireless     |
             ------------------------------------ -------------------------------

Don't be afraid of this diagram; zebra does most of the work automatically, so it won't take any work to put all the routes up with zebra. It would be painful to maintain all those routes by hand on a day to day basis. The most important thing is that you are clear about the network topology. And take special care with Area 0, since it's the most important. First configure zebra, editing zebra.conf and adapting it to your needs:
hostname omega
password xxx
enable password xxx
!
! Interface's description.
!
!interface lo
! description test of desc.
!
interface eth1
multicast
!
! Static default route
!
ip route 0.0.0.0/0 212.170.21.129
!
log file /var/log/zebra/zebra.log

In Debian, I will also have to edit /etc/zebra/daemons so they start at boot:
zebra=yes
ospfd=yes

Now we have to edit ospfd.conf if you are still running IPv4, or ospf6d.conf if you run IPv6. My ospfd.conf looks like:
hostname omega
password xxx
enable password xxx
!
router ospf
  network 192.168.0.0/24 area 0
  network 172.17.0.0/16 area 1
!
! log stdout
log file /var/log/zebra/ospfd.log

Here we instruct ospf about our network topology.
Now we have to start Zebra; either by hand by typing "zebra -d" or with some script like "/etc/init.d/zebra start". Then, carefully watching the ospfd logs, we should see something like:
2002/12/13 22:46:24 OSPF: interface 192.168.0.1 join AllSPFRouters Multicast group.
2002/12/13 22:46:34 OSPF: SMUX_CLOSE with reason: 5
2002/12/13 22:46:44 OSPF: SMUX_CLOSE with reason: 5
2002/12/13 22:46:54 OSPF: SMUX_CLOSE with reason: 5
2002/12/13 22:47:04 OSPF: SMUX_CLOSE with reason: 5
2002/12/13 22:47:04 OSPF: DR-Election[1st]: Backup 192.168.0.1
2002/12/13 22:47:04 OSPF: DR-Election[1st]: DR     192.168.0.1
2002/12/13 22:47:04 OSPF: DR-Election[2nd]: Backup 0.0.0.0
2002/12/13 22:47:04 OSPF: DR-Election[2nd]: DR     192.168.0.1
2002/12/13 22:47:04 OSPF: interface 192.168.0.1 join AllDRouters Multicast group.
2002/12/13 22:47:06 OSPF: DR-Election[1st]: Backup 192.168.0.2
2002/12/13 22:47:06 OSPF: DR-Election[1st]: DR     192.168.0.1
2002/12/13 22:47:06 OSPF: Packet[DD]: Negotiation done (Slave).
2002/12/13 22:47:06 OSPF: nsm_change_status(): scheduling new router-LSA origination
2002/12/13 22:47:11 OSPF: ospf_intra_add_router: Start

Ignore the SMUX_CLOSE messages for now, since they are about SNMP. We can see that 192.168.0.1 is the Designated Router and 192.168.0.2 is the Backup Designated Router.
We can also interact with the zebra or the ospfd interface by executing:
$ telnet localhost zebra
$ telnet localhost ospfd

Let's see how to check whether the routes are propagating; log into zebra and type:
root@atlantis:~# telnet localhost zebra
Trying 127.0.0.1...
Connected to atlantis.
Escape character is '^]'.
Hello, this is zebra (version 0.92a).
Copyright 1996-2001 Kunihiro Ishiguro.

User Access Verification

Password:
atlantis> show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP, O - OSPF,
       B - BGP, > - selected route, * - FIB route

K>* 0.0.0.0/0 via 192.168.0.1, eth1
C>* 127.0.0.0/8 is directly connected, lo
O   172.17.0.0/16 [110/10] is directly connected, eth0, 06:21:53
C>* 172.17.0.0/16 is directly connected, eth0
O   192.168.0.0/24 [110/10] is directly connected, eth1, 06:21:53
C>* 192.168.0.0/24 is directly connected, eth1
atlantis> show ip ospf border-routers
============ OSPF router routing table =============
R    192.168.0.253        [10] area: (0.0.0.0), ABR
                            via 192.168.0.253, eth1
                          [10] area: (0.0.0.1), ABR
                            via 172.17.0.2, eth0

Or with iproute directly:
root@omega:~# ip route
212.170.21.128/26 dev eth0  proto kernel  scope link  src 212.170.21.172
192.168.0.0/24 dev eth1  proto kernel  scope link  src 192.168.0.1
172.17.0.0/16 via 192.168.0.2 dev eth1  proto zebra  metric 20
default via 212.170.21.129 dev eth0  proto zebra
root@omega:~#

We can see the zebra routes, which weren't there before. It is really nice to see routes appearing just a few seconds after you start zebra and ospfd. You can check connectivity to other hosts with ping. Zebra routes are automatic; you can just add another router to the network, configure zebra, and voila!
Hint: You can use:
tcpdump -i eth1 'ip[9] == 89'

to capture OSPF packets for analysis. The OSPF IP protocol number is 89, and the protocol field sits at offset 9 in the IP header (hence ip[9]).
OSPF has a lot of tunable parameters, especially for large networks. In future expansions of this HOWTO we will show some methodologies for fine-tuning OSPF.
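In the meantime, here is a small sketch of the kind of per-interface knobs ospfd exposes - the values are made up, and the exact set of commands depends on your ospfd version, so treat it as an illustration only:

interface eth1
 ip ospf cost 10
 ip ospf hello-interval 10
 ip ospf dead-interval 40
 ip ospf priority 1

The cost influences the SPF calculation, the hello and dead intervals must match on all routers on a segment, and the priority influences Designated Router election.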
The Border Gateway Protocol Version 4 (BGP4) is a dynamic routing protocol described in RFC 1771. It allows the distribution of reachability information, i.e. routing tables, to other BGP4-enabled nodes. It can be used either as an EGP or as an IGP; in EGP mode each node must have its own Autonomous System (AS) number. BGP4 supports Classless Inter Domain Routing (CIDR) and route aggregation (merging multiple routes into one).
The following network map is used for the examples that follow. AS 1 and AS 50 have more neighbors, but we only need to configure 1 and 50 as our neighbors. The nodes themselves communicate over tunnels in this example, but that is not required.
Note: The AS numbers used in this example are reserved, please get your own AS from RIPE if you set up official peerings.
         --------------------
         | 192.168.23.12/24 |
         |      AS: 23      |
         --------------------
              /        \
             /          \
            /            \
------------------        ------------------
| 192.168.1.1/24 |--------|  10.10.1.1/16  |
|     AS: 1      |        |     AS: 50     |
------------------        ------------------
The following configuration is written for node 192.168.23.12/24, it is easy to adapt it for the other nodes.
It starts with some general stuff like hostname, passwords and debug switches:
! hostname
hostname anakin
! login password
password xxx
! enable password (super user mode)
enable password xxx
! path to logfile
log file /var/log/zebra/bgpd.log
! debugging: be verbose (can be removed afterwards)
debug bgp events
debug bgp filters
debug bgp fsm
debug bgp keepalives
debug bgp updates
Access list, used to limit the redistribution to private networks (RFC 1918).
! RFC 1918 networks
access-list local_nets permit 192.168.0.0/16
access-list local_nets permit 172.16.0.0/12
access-list local_nets permit 10.0.0.0/8
access-list local_nets deny any
The next step is to do the per-AS configuration:
! Own AS number
router bgp 23
! IP address of the router
bgp router-id 192.168.23.12
! announce our own network to other neighbors
network 192.168.23.0/24
! advertise all connected routes (= directly attached interfaces)
redistribute connected
! advertise kernel routes (= manually inserted routes)
redistribute kernel
Every 'router bgp' block contains a list of neighbors to which the router is connected:
neighbor 192.168.1.1 remote-as 1
neighbor 192.168.1.1 distribute-list local_nets in
neighbor 10.10.1.1 remote-as 50
neighbor 10.10.1.1 distribute-list local_nets in
Note: vtysh is a multiplexer and connects all the Zebra interfaces together.
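Assuming vtysh was built and installed along with your Zebra daemons, a single session gives you access to all of them; the commands shown below were typed at such a prompt:

# vtysh
anakin#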
anakin# sh ip bgp summary
BGP router identifier 192.168.23.12, local AS number 23
2 BGP AS-PATH entries
0 BGP community entries

Neighbor        V    AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
10.10.0.1       4    50      35      40        0    0    0 00:28:40        1
192.168.1.1     4     1   27574   27644        0    0    0 03:26:04       14

Total number of neighbors 2
anakin#
anakin# sh ip bgp neighbors 10.10.0.1
BGP neighbor is 10.10.0.1, remote AS 50, local AS 23, external link
  BGP version 4, remote router ID 10.10.0.1
  BGP state = Established, up for 00:29:01
  ....
anakin#
Let's see which routes we got from our neighbors:
anakin# sh ip ro bgp
Codes: K - kernel route, C - connected, S - static, R - RIP, O - OSPF,
       B - BGP, > - selected route, * - FIB route

B>* 172.16.0.0/14 [20/0] via 192.168.1.1, tun0, 2d10h19m
B>* 172.30.0.0/16 [20/0] via 192.168.1.1, tun0, 10:09:24
B>* 192.168.5.10/32 [20/0] via 192.168.1.1, tun0, 2d10h27m
B>* 192.168.5.26/32 [20/0] via 192.168.1.1, tun0, 10:09:24
B>* 192.168.5.36/32 [20/0] via 192.168.1.1, tun0, 2d10h19m
B>* 192.168.17.0/24 [20/0] via 192.168.1.1, tun0, 3d05h07m
B>* 192.168.17.1/32 [20/0] via 192.168.1.1, tun0, 3d05h07m
B>* 192.168.32.0/24 [20/0] via 192.168.1.1, tun0, 2d10h27m
anakin#
This chapter is a list of projects having to do with advanced Linux routing & traffic shaping. Some of these links may deserve chapters of their own; some are very well documented on their own and don't need more HOWTO.
VLANs are a very cool way to segregate your networks in a more virtual than physical way. Good information on VLANs can be found here. With this implementation, you can have your Linux box talk VLANs with machines like Cisco Catalyst, 3Com: {Corebuilder, Netbuilder II, SuperStack II switch 630}, Extreme Ntwks Summit 48, Foundry: {ServerIronXL, FastIron}.
A great HOWTO about VLANs can be found here.
Update: this has been included in the kernel as of 2.4.14 (perhaps 2.4.13).
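As a quick sketch of how the in-kernel 802.1q implementation is used - the trunk interface, VLAN id and address below are just examples, and you need the vconfig utility installed:

# vconfig set_name_type DEV_PLUS_VID_NO_PAD
# vconfig add eth0 10
# ip addr add 192.168.10.1/24 dev eth0.10
# ip link set dev eth0.10 up

The first command merely chooses the naming scheme (eth0.10 instead of vlan0010); the second creates the VLAN interface, which is then addressed and brought up like any other interface.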
An alternative VLAN implementation for Linux. This project was started out of disagreement with the 'established' VLAN project's architecture and coding style, resulting in a cleaner overall design.
These people are brilliant. The Linux Virtual Server is a highly scalable and highly available server built on a cluster of real servers, with the load balancer running on the Linux operating system. The architecture of the cluster is transparent to end users. End users only see a single virtual server.
In short, whatever you need to load balance, at whatever level of traffic, LVS will have a way of doing it. Some of their techniques are positively evil! For example, they let several machines have the same IP address on a segment, but turn off ARP on them. Only the LVS machine does ARP - it then decides which of the backend hosts should handle an incoming packet, and sends it directly to the right MAC address of the backend server. Outgoing traffic will flow directly to the router, and not via the LVS machine, which does therefore not need to see your 5Gbit/s of content flowing to the world, and cannot be a bottleneck.
The LVS is implemented as a kernel patch in Linux 2.0 and 2.2, but as a Netfilter module in 2.4/2.5, so it does not need kernel patches! Their 2.4 support is still in early development, so beat on it and give feedback or send patches.
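A minimal sketch of the direct routing trick described above, using the ipvsadm tool on the director - all addresses are made up, and the real servers additionally need the virtual IP configured locally with ARP disabled on it:

# ipvsadm -A -t 10.0.0.22:80 -s rr
# ipvsadm -a -t 10.0.0.22:80 -r 10.0.0.31 -g
# ipvsadm -a -t 10.0.0.22:80 -r 10.0.0.32 -g

The first line creates the virtual service with a round robin scheduler (-s rr); the next two add real servers reached via direct routing (-g, 'gatewaying').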
Configuring CBQ can be a bit daunting, especially if all you want to do is shape some computers behind a router. CBQ.init can help you configure Linux with a simplified syntax.
For example, if you want all computers in your 192.168.1.0/24 subnet (on 10mbit eth1) to be limited to 28kbit/s download speed, put this in the CBQ.init configuration file:
DEVICE=eth1,10Mbit,1Mbit
RATE=28Kbit
WEIGHT=2Kbit
PRIO=5
RULE=192.168.1.0/24
By all means use this program if the 'how and why' don't interest you. We're using CBQ.init in production and it works very well. It can even do some more advanced things, like time dependent shaping. The documentation is embedded in the script, which explains why you can't find a README.
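Once your configuration file is in place (where it lives depends on how you installed the script - again, see the comments inside cbq.init itself), applying and removing the shaping is typically just:

# cbq.init start
# cbq.init stop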
Stephan Mueller ([email protected]) wrote two useful scripts, 'limit.conn' and 'shaper'. The first one allows you to easily throttle a single download session, like this:
# limit.conn -s SERVERIP -p SERVERPORT -l LIMIT
It works on Linux 2.2 and 2.4/2.5.
The second script is more complicated, and can be used to make lots of different queues based on iptables rules, which are used to mark packets which are then shaped.
FIXME: This link died, anybody know where it went?
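Even with the original link gone, the general technique the second script uses is easy to sketch: mark packets with iptables and let the 'fw' classifier steer marked packets into a class. The interfaces, mark value and class id below are made up, and assume you have already built a classful qdisc with a class 1:10 on eth1:

# iptables -t mangle -A PREROUTING -i eth0 -p tcp --sport 80 -j MARK --set-mark 1
# tc filter add dev eth1 parent 1:0 protocol ip prio 1 handle 1 fw classid 1:10

The first command marks traffic coming from web servers with fwmark 1; the second makes packets carrying that mark end up in class 1:10 when they leave via eth1.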
This is purely for redundancy. Two machines with their own IP address and MAC address together create a third IP address and MAC address, which is virtual. Originally intended purely for routers, which need constant MAC addresses, it also works for other servers.
The beauty of this approach is the incredibly easy configuration. No kernel compiling or patching required, all userspace.
Just run this on all machines participating in a service:
# vrrpd -i eth0 -v 50 10.0.0.22
And you are in business! 10.0.0.22 is now carried by one of your servers, probably the first one to run the vrrp daemon. Now disconnect that computer from the network and very rapidly one of the other computers will assume the 10.0.0.22 address, as well as the MAC address.
I tried this over here and had it up and running in 1 minute. For some strange reason it decided to drop my default gateway, but the -n flag prevented that.
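If you see the same behaviour, the command from above with the -n flag added does the trick:

# vrrpd -i eth0 -v 50 -n 10.0.0.22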
This is a 'live' fail over:
64 bytes from 10.0.0.22: icmp_seq=3 ttl=255 time=0.2 ms
64 bytes from 10.0.0.22: icmp_seq=4 ttl=255 time=0.2 ms
64 bytes from 10.0.0.22: icmp_seq=5 ttl=255 time=16.8 ms
64 bytes from 10.0.0.22: icmp_seq=6 ttl=255 time=1.8 ms
64 bytes from 10.0.0.22: icmp_seq=7 ttl=255 time=1.7 ms
Not *one* ping packet was lost! Just after packet 4, I disconnected my P200 from the network, and my 486 took over, which you can see from the higher latency.
tc_config is a set of scripts for Linux 2.4+ traffic control configuration on Red Hat systems and (hopefully) derivatives (Linux 2.2.x with ipchains is obsolete). It uses the cbq qdisc as the root qdisc, and sfq qdiscs at the leaves.
Includes the snmp_pass utility for getting traffic control statistics via SNMP.
FIXME: Write
Contains lots of technical information and comments from the kernel.
Slides by Jamal Hadi Salim, one of the authors of Linux traffic control
HTML version of Alexey's LaTeX documentation - explains part of iproute2 in great detail.
Sally Floyd has a good page on CBQ, including her original papers. None of it is Linux specific, but it does a fair job discussing the theory and uses of CBQ. Very technical stuff, but good reading for those so inclined.
This document by Werner Almesberger, Jamal Hadi Salim and Alexey Kuznetsov describes DiffServ facilities in the Linux kernel, amongst which are TBF, GRED, the DSMARK qdisc and the tcindex classifier.
Yet another HOWTO, this time in Polish! You can copy/paste the command lines, however - they work just the same in every language. The author is cooperating with us and may soon write sections of this HOWTO.
From the helpful folks of Cisco, who have the laudable habit of putting their documentation online. Cisco syntax is different but the concepts are the same, except that we can do more, and without routers that cost as much as cars :-)
Stef Coene is busy convincing his boss to sell Linux support, and so he is experimenting a lot, especially with managing bandwidth. His site has a lot of practical information, examples, tests and also points out some CBQ/tc bugs.
Required reading if you truly want to understand TCP/IP. Entertaining as well.
An introduction to policy routing with lots of examples.
Hardcover textbook covering topics related to Quality of Service. Good for understanding basic concepts.
It is our goal to list everybody who has contributed to this HOWTO, or helped us demystify how things work. While there are currently no plans for a Netfilter type scoreboard, we do like to recognize the people who are helping.
Junk Alins
Joe Van Andel
Michael T. Babcock
Christopher Barton
Peter Bieringer
Adam Burke
Ard van Breemen
Ron Brinker
Lukasz Bromirski
Lennert Buytenhek
Esteve Camps
Ricardo Javier Cardenes
Nelson Castillo
Stef Coene
Don Cohen
Jonathan Corbet
Gerry N5JXS Creager
Marco Davids
Jonathan Day
Martin aka devik Devera
Hannes Ebner
Derek Fawcus
David Fries
Stephan "Kobold" Gehring
Jacek Glinkowski
Andrea Glorioso
Thomas Graf
Sandy Harris
Nadeem Hasan
Erik Hensema
Vik Heyndrickx
Spauldo Da Hippie
Koos van den Hout
Stefan Huelbrock
Ayotunde Itayemi
Alexander W. Janssen
Andreas Jellinghaus
Gareth John
Dave Johnson
Martin Josefsson
Andi Kleen
Andreas J. Koenig
Pawel Krawczyk
Amit Kucheria
Pedro Larroy
Chapter 15, section 10: Example of a full nat solution with QoS
Chapter 17, section 1: Setting up OSPF with Zebra
Edmund Lau
Philippe Latu
Arthur van Leeuwen
Jose Luis Domingo Lopez
Robert Lowe
Jason Lunz
Stuart Lynne
Alexey Mahotkin
Predrag Malicevic
Patrick McHardy
Andreas Mohr
James Morris
Andrew Morton
Wim van der Most
Stephan Mueller
Togan Muftuoglu
Chris Murray
Takeo NAKANO
Patrick Nagelschmidt
Ram Narula
Jorge Novo
Patrik
Pál Osgyány
Lutz Preßler
Jason Pyeron
Rod Roark
Pavel Roskin
Rusty Russell
Mihai RUSU
Rob Pitman
Jamal Hadi Salim
René Serral
David Sauer
Sheharyar Suleman Shaikh
Stewart Shields
Nick Silberstein
Konrads Smelkov
William Stearns
Andreas Steinmetz
Matthew Strait
Jason Tackaberry
Charles Tassell
Jason Thomas
Glen Turner
Tea Sponsor: Eric Veldhuyzen
Thomas Walpuski
Song Wang
Frank v Waveren
Chris Wilson
Lazar Yanackiev