Monitoring Xen network traffic and usage with network resources

Learn best practices for monitoring Xen network traffic, find out how to monitor network usage, and see why network resource controls are vital to shared hosting operations.

Solutions Provider Takeaway: Solutions providers will find this chapter excerpt useful for its information on monitoring Xen network traffic and usage. You'll also learn the role network resources play in shared hosting operations.

About the book:
This chapter excerpt, "Hosting Untrusted Users under Xen: Lessons from the Trenches," is taken from The Book of Xen: A Practical Guide for the System Administrator. The book advises solutions providers on best practices for Xen installation, networking, memory management and virtualized storage. You'll also find information on virtual hosting, installing and managing multiple guests, easily migrating systems and troubleshooting common Xen issues.

Controlling Network Resources

Network resource controls are, frankly, essential to any kind of shared hosting operation. Among the many lessons we've learned from Xen hosting is that if you provide free bandwidth, some users will exploit it for all it's worth. This isn't a Xen-specific observation, but it's especially noticeable with the sort of cheap VPS hosting Xen lends itself to.

We prefer to use network-bridge, since that's the default. For a more thorough look at network-bridge, take a look at Chapter 5.

Monitoring Network Usage

Given that some users will consume as much bandwidth as possible, it's vital to have some way to monitor network traffic.

To monitor network usage, we use BandwidthD on a physical SPAN port. It's a simple tool that counts bytes going through a switch -- nothing Xen-specific here. We feel comfortable doing this because our provider doesn't allow anything but IP packets in or out, and our antispoof rules are good enough to protect us from users spoofing their IP on outgoing packets.
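
BandwidthD itself needs very little setup. As a rough sketch, the relevant lines of bandwidthd.conf would look something like this, where the device is the interface attached to the SPAN port and the subnet is your customers' address range (both are placeholders here):

dev "eth1"
subnet 192.168.1.0/24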

A similar approach would be to extend the "dom0 is a switch" analogy and use SNMP monitoring software. As mentioned in Chapter 5, it's important to specify a vifname for each domain if you're doing this. In any case, we'll leave the particulars of bandwidth monitoring up to you.
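
For example, a vif line along these lines in the domU's configuration file names the backend device, so that your monitoring software can find it reliably (the MAC address and bridge name here are placeholders):

vif = [ 'mac=00:16:3e:00:00:05, bridge=xenbr0, vifname=baldr' ]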

Note: ARP Cache Poisoning
If you use the default network-bridge setup, you are vulnerable to ARP cache poisoning, just as on any layer 2 switch.

The idea is that the interface counters on a layer 2 switch -- such as the virtual switch used by network-bridge -- watch traffic as it passes through a particular port. Every time the switch sees an Ethernet frame or an ARP "is-at" reply, it keeps track of which port and MAC it came from. If it gets a frame destined for a MAC address in its cache, it sends that frame down the proper port (and only the proper port). If the bridge sees a frame destined for a MAC that is not in the cache, it sends that frame to all ports.

Clever, no? In most cases this means that you almost never see Ethernet frames destined for other MAC addresses (other than broadcasts, etc.). However, this feature is designed purely as an optimization, not a security measure. As those of you with cable providers who do MAC address verification know quite well, it is fairly trivial to fake a MAC address. This means that a malicious user can fill the limited-size ARP cache with bogus MAC addresses, drive out the good data, and force all packets to go down all interfaces. At this point the switch becomes basically a hub, and the counters on all ports will show all traffic for any port.

There are two ways we've worked around the problem. One is to use Xen's network-route networking model, which doesn't use a virtual bridge. The other is to ignore the interface counters and instead use something like BandwidthD, which bases its accounting on IP packets.
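
For reference, switching to network-route is a matter of changing the script lines in /etc/xen/xend-config.sxp (and commenting out their network-bridge and vif-bridge counterparts):

(network-script network-route)
(vif-script vif-route)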

Once you can examine traffic quickly, the next step is to shape the users. The principles for network traffic shaping and policing are the same as for standalone boxes, except that you can also implement policies on the Xen host. Let's look at how to limit both incoming and outgoing traffic for a particular interface -- as if, say, you have a customer who's going over his bandwidth allotment.

Network Shaping Principles

The first thing to know about shaping is that it only works on outgoing traffic. Although it is possible to police incoming traffic, it isn't as effective. Fortunately, both directions look like outgoing traffic at some point in their passage through the dom0, as shown in Figure 7-2. (When we refer to outgoing and incoming traffic in the following description, we mean from the perspective of the domU.)

Figure 7-2: Incoming traffic comes from the Internet, goes through the virtual bridge, and gets shaped by a simple nonhierarchical filter. Outgoing traffic, on the other hand, needs to go through a system of filters that assign packets to classes in a hierarchical queuing discipline.

Shaping Incoming Traffic

We'll start with incoming traffic because it's much simpler to limit than outgoing traffic. The easiest way to shape incoming traffic is probably the token bucket filter queuing discipline, which is a simple, effective, and lightweight way to slow down an interface.

The token bucket filter, or TBF, takes its name from the metaphor of a bucket of tokens. Tokens stream into the bucket at a defined and constant rate. Each byte of data sent takes one token from the bucket and goes out immediately -- when the bucket's empty, data can only go as tokens come in. The bucket itself has a limited capacity, which guarantees that only a reasonable amount of data will be sent out at once. To use the TBF, we add a qdisc (queuing discipline) to perform the actual work of traffic limiting. To limit the virtual interface osric to 1 megabit per second, with bursts up to 2 megabits and maximum allowable latency of 50 milliseconds:

# tc qdisc add dev osric root tbf rate 1mbit latency 50ms peakrate 2mbit maxburst 40MB

This adds a qdisc to the device osric. The next arguments specify where to add it (root) and what sort of qdisc it is (tbf). Finally, we specify the rate, latency, burst rate, and amount that can go at burst rate. These parameters correspond to the token flow, amount of latency the packets are allowed to have (before the driver signals the operating system that its buffers are full), maximum rate at which the bucket can empty, and the size of the bucket.
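
To check that the qdisc is in place and see how much traffic it has passed, or to remove the limit once the customer is back under quota, use the standard tc commands:

# tc -s qdisc show dev osric

# tc qdisc del dev osric root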

Shaping Outgoing Traffic

Having shaped incoming traffic, we can focus on limiting outgoing traffic. This is a bit more complex because the outgoing traffic for all domains goes through a single interface, so a single token bucket won't work. The policing filters might work, but they handle the problem by dropping packets, which is . . . bad. Instead, we're going to apply traffic shaping to the outgoing physical Ethernet device, peth0, with a Hierarchical Token Bucket, or HTB qdisc.

The HTB discipline acts like the simple token bucket, but with a hierarchy of buckets, each with its own rate, and a system of filters to assign packets to buckets. Here's how to set it up.

First, we have to make sure that the packets on Xen's virtual bridge traverse iptables:

# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables

This is so that we can mark packets according to which domU emitted them. There are other reasons, but that's the important one in terms of our traffic-shaping setup. Next, for each domU, we add a rule to mark packets from the corresponding network interface:

# iptables -t mangle -A FORWARD -m physdev --physdev-in baldr -j MARK --set-mark 5

Here the number 5 is an arbitrary mark -- it's not important what the number is, as long as there's a useful mapping between number and domain. We're using the domain ID. We could also use tc filters directly that match on source IP address, but it feels more elegant to have everything keyed to the domain's physical network device. Note that we're using physdev-in -- traffic that goes out from the domU comes in to the dom0, as Figure 7-3 shows.

Figure 7-3: We shape traffic coming into the domU as it comes into the dom0 from the physical device, and shape traffic leaving the domU as it enters the dom0 on the virtual device.
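
As an aside, the source-IP alternative we just mentioned would be a u32 filter along these lines (a sketch, with 10.0.0.5 standing in for the domU's address and 1:2 being the class we create below):

# tc filter add dev peth0 parent 1:0 protocol ip prio 2 u32 match ip src 10.0.0.5/32 flowid 1:2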

Next we create an HTB qdisc. We won't go over the HTB options in too much detail -- see the documentation at http://luxik.cdi.cz/~devik/qos/htb/manual/userg.htm for more details:

# tc qdisc add dev peth0 root handle 1: htb default 12

Then we make some classes to put traffic into. Each class will get traffic from one domU. (As the HTB docs explain, we're also making a parent class so that they can share surplus bandwidth.)

# tc class add dev peth0 parent 1: classid 1:1 htb rate 100mbit

# tc class add dev peth0 parent 1:1 classid 1:2 htb rate 1mbit
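
Each additional domU gets its own class under the same parent, with its own rate. For example, a second 1-megabit customer might get the following (the class ID 1:3 is arbitrary, like the mark):

# tc class add dev peth0 parent 1:1 classid 1:3 htb rate 1mbit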

Now that we have a class for our domU's traffic, we need a filter that will assign packets to it.

# tc filter add dev peth0 protocol ip parent 1:0 prio 1 handle 5 fw flowid 1:2

Note that the filter matches on the mark (its "handle") that we set earlier using iptables. This assigns the packet to the 1:2 class, which we've previously limited to 1 megabit per second.

At this point traffic to and from the target domU is essentially shaped, as demonstrated by Figure 7-4. You can easily add commands like these to the end of your vif script, be it vif-bridge, vif-route, or a wrapper (one possible wrapper is sketched after Figure 7-4). We would also like to emphasize that this is only an example and that the Linux Advanced Routing and Traffic Control how-to at http://lartc.org/ is an excellent place to look for further documentation. The tc man page is also informative.

Figure 7-4: The effect of the shaping filters
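
As a sketch of the wrapper approach, something like the following could stand in for the vif script. The hard-coded vifname, mark, and rate are hypothetical placeholders that you would look up per domain, and the path to the stock script may differ on your system:

#!/bin/sh
# hypothetical wrapper: run the stock script, then add our shaping rules
/etc/xen/scripts/vif-bridge "$@"

vifname=baldr   # the vifname from the domain's config file
mark=5          # the domain ID, reused here as both mark and class ID
rate=1mbit      # this customer's bandwidth allotment

if [ "$1" = "online" ]; then
    # mark this domU's outgoing traffic as it enters the dom0
    iptables -t mangle -A FORWARD -m physdev --physdev-in "$vifname" -j MARK --set-mark "$mark"
    # give the domU its own HTB class, and steer marked packets into it
    tc class add dev peth0 parent 1:1 classid "1:$mark" htb rate "$rate"
    tc filter add dev peth0 protocol ip parent 1:0 prio 1 handle "$mark" fw flowid "1:$mark"
fi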



Printed with permission from No Starch Press, Inc. Copyright 2009. The Book of Xen: A Practical Guide for the System Administrator by Chris Takemura and Luke S. Crawford. For more information about this title and other similar books, please visit No Starch Press.

This was first published in February 2010
