"How do I test Snort?" is one of the most popular questions asked on the snort-users mailing list. While a seemingly...
simple question, the answer depends on your intent. Value-added resellers (VARs) and systems integrators (SIs) may need to provide customers with validation that the network intrusion detection system (IDS) is working as expected. This edition of Snort Report explains what it means to test Snort. I reveal some common misperceptions and offer alternatives to satisfy the majority of readers.
Snort test options
"Testing Snort" requires recognizing the sort of data you expect from running a test. The following are all legitimate reasons why you might test Snort.
1. "I want to know if Snort is working." This is the most common reason users post test questions to Snort mailing lists, and an important one for VARs and SIs who should always validate that Snort is working properly for customers. If you're unfamiliar with Snort or your customer installed the open source IDS using a binary package or following a guide, you'll want to know if the procedure resulted in Snort being capable of detecting suspicious or malicious activity.
2. "I want to know if Snort will drop packets." This is the second most common reason for testing Snort. VARs and SIs should understand the conditions that might cause Snort to not keep track of all the network activity it's inspecting. Discovering indications that Snort is dropping an unacceptable number of packets should trigger an evaluation of Snort's configuration and the hardware specifications of the platform on which it runs. Also, you should know the conditions where Snort performance will begin to degrade in order to properly size equipment and processes.
3. "I want to know how a rule I wrote affects Snort's performance." This is a rare reason for testing Snort. VARs and SIs who write custom rules for clients need to know how their new rules will affect overall Snort performance. Depending on how they're written, the rules could have no, some or devastating impact on Snort's ability to detect activity.
4. "I want to know how to evade Snort." This is another rare question, and security researchers are most likely to ask it. However, VARs and SIs should be sure they understand how intruders will try to negate the value provided by an IDS or IPS running Snort. Such a test demonstrates how Snort performs when a malicious user deliberately tries to evade detection -- a topic worthy of its own article.
This edition of Snort Report discusses the first three reasons for testing Snort. I'll cover how to evade Snort in a future Snort Report.
Stateless rule parsing tools
When the topic of testing Snort is raised on a mailing list, someone usually recommends one or more of the following tools: Sneeze, Stick or Mucus.
These tools are all stateless. They parse Snort rule sets and generate packets, which, to some degree, emulate traffic seen in those rules. In other words, a rule that inspects a UDP packet to port 161 containing the pattern "public" prompts a stateless tool to create a UDP packet to port 161 containing the pattern "public".
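To make that emulation concrete, here is a minimal sketch of the sort of packet such a tool emits for that rule. The helper function and port numbers are illustrative, not taken from any of the tools discussed, and the UDP checksum is left at zero, which is legal over IPv4.

```python
import struct

def build_udp_datagram(src_port, dst_port, payload):
    """Build a minimal UDP header plus payload; checksum left at 0 (optional over IPv4)."""
    length = 8 + len(payload)  # UDP header is 8 bytes
    return struct.pack("!HHHH", src_port, dst_port, length, 0) + payload

# Emulate a rule matching UDP port 161 with content "public" (an SNMP community string)
datagram = build_udp_datagram(40000, 161, b"public")
assert datagram[2:4] == struct.pack("!H", 161)  # destination port field
assert datagram[8:] == b"public"                # payload follows the 8-byte header
```

A stateless tool fires single datagrams like this one and hopes the rule triggers; nothing about any surrounding session is emulated.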
On the surface this may seem like a sound tactic. For stateless protocols like ICMP and UDP, this approach may work. However, tools that parse Snort rules to generate packets for each rule suffer from two big problems. First, for stateful protocols like TCP, this approach is almost worthless. Stateless tools don't establish a full TCP connection before conducting their tests; the tool examines the Snort rule set, creates a TCP segment and fires it off. Snort's stateful inspection capabilities, first introduced in 2001, have rendered TCP-based stateless tests largely irrelevant.
The second problem with stateless tools is their inability to understand newer Snort rules. Sneeze was written in 2001 for Snort 1.8, and Stick was also written in 2001. Source code for Mucus dates from 2004, but it was tested against Snort 1.8.3. The Bleeding Threats project hosts an updated version of Mucus maintained by James Gregory at Sensory Networks as part of his CoreMark Tools. This newer version of Mucus dates from 2005 and supports rules from Snort 2.3.
The primary way to "test" Snort using a stateless tool is to disable the Stream4 preprocessor, which requires editing the snort.conf file. This artificially disables a key component of Snort that's designed to handle these very sorts of stateless attacks.
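For reference, disabling Stream4 amounts to commenting out its preprocessor lines in snort.conf, roughly like this (the exact arguments vary by Snort version, so treat this as a sketch):

```
# Disable stateful inspection -- only for stateless testing, never in production
# preprocessor stream4: detect_scans, disable_evasion_alerts
# preprocessor stream4_reassemble
```

Remember to restore the preprocessor lines when testing is complete.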
Stateless packet generation tools
A related stateless approach for triggering Snort alerts is to generate traffic that should trigger Snort rules, but doesn't rely on parsing Snort rule sets. IDSWakeup is a stateless packet generation tool. The following shows how IDSWakeup performs against Snort 2.6.x. I used the Debian package net/idswakeup on Ubuntu Linux against a FreeBSD sensor running Snort 2.6.x and Sguil 0.6.1.
IDSWakeup generates single packets that reflect traffic that might trigger Snort or other intrusion detection systems. Like other stateless tools, IDSWakeup forges packets without establishing full sessions as needed by TCP. In the following example we tell IDSWakeup to send traffic from 192.168.2.8 to 10.1.13.4, one packet per attack, with a time to live of 10. Note that we can specify any source IP because no full session is expected for TCP tests.
IDSWakeup generated 181 packets, of which 134 were TCP, 22 were UDP, 24 were ICMP and one was malformed IP (i.e., "IPv5").
Here's what some of that traffic looks like when viewed with Tshark.
Notice that some of the TCP traffic includes the warning TCP CHECKSUM INCORRECT. Unless Snort is told to ignore incorrect TCP checksums via the -k switch, it will not alert on these sorts of packets.

Checksum mode (all,noip,notcp,noudp,noicmp,none)

The reasoning is that the target host should discard traffic with a bad checksum, so Snort assumes such traffic has no effect on the target.
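The validation in question is the standard Internet checksum from RFC 1071. As a rough sketch (the 16-byte "segment" below is toy data, not a real TCP header), the following shows that corrupting a single byte breaks verification, which is why a receiving host would discard the packet:

```python
import struct

def internet_checksum(data):
    """RFC 1071 ones'-complement checksum over 16-bit words."""
    if len(data) % 2:
        data += b"\x00"                      # pad to an even length
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                       # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return (~total) & 0xFFFF

segment = b"\x00\x50\x00\x19" + b"\x00" * 12   # toy "segment", checksum field zeroed
good = internet_checksum(segment)

# Summing the data together with its correct checksum yields 0: verification passes
assert internet_checksum(segment + struct.pack("!H", good)) == 0
# Flip one byte and verification fails, so a receiver would discard the packet
corrupted = b"\xff" + segment[1:]
assert internet_checksum(corrupted + struct.pack("!H", good)) != 0
```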
Snort's interpretation of IDSWakeup
When logging alerts in FAST mode, Snort records details like the following while inspecting traffic generated by IDSWakeup.
This assortment of alerts contains a variety of traffic types. In the full output, the majority of alerts address ICMP and UDP traffic. This makes sense, because these stateless protocols don't depend on setting up a full session and are therefore not affected by Snort's Stream4 preprocessor.
Watching Snort drop traffic
Snort offers a feature that reports on its packet drops. When Snort shuts down, it creates output like the following:
Snort dropped zero traffic, and it created 26 alerts. Given the number of "tests" IDSWakeup ran, you can guess that the vast majority of the traffic wasn't suitable for testing Snort.
Another way to check for Snort dropping traffic (at least on FreeBSD) is to use Bpfstat. Bpfstat can profile packet dropping for any process that relies on Berkeley Packet Filter for sniffing traffic. For example, we know that Snort is running as process 39183 watching interface em0. We tell Bpfstat to report statistics every 10 seconds as it watches that process and interface.
When Bpfstat starts, we see Snort has dropped 130 packets.
This matches output seen when we stop this instance of Snort:
Snort received 1628 packets
    Analyzed: 1495(91.830%)
    Dropped: 130(7.985%)
    Outstanding: 3(0.184%)
These drops happened before we ran another IDSWakeup test. During the test, the drop column never increased beyond 130. This indicates that Snort didn't drop any traffic while we were running our IDSWakeup test.
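The percentages in Snort's shutdown summary are simply ratios against the received-packet counter. A quick sanity check of the figures from this test run (the helper function is mine, not part of Snort) confirms they add up:

```python
received, analyzed, dropped, outstanding = 1628, 1495, 130, 3

def pct(part, whole):
    """Percentage rounded to three decimals, matching Snort's summary format."""
    return round(100.0 * part / whole, 3)

# The three counters account for every received packet
assert analyzed + dropped + outstanding == received
assert pct(analyzed, received) == 91.83
assert pct(dropped, received) == 7.985
assert pct(outstanding, received) == 0.184
```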
Snort rule performance
Sourcefire devotes millions of dollars' worth of high-end testing equipment to ensuring that new Vulnerability Research Team (VRT) rules work efficiently within Snort. I personally saw this equipment when I visited Sourcefire in 2005.
One option for checking the performance hit caused by rules is offered by the Turbo Snort Rules project hosted by Vigilant Minds.
Visitors to the site can submit a rule to see how it compares with rules in the 2.3.x and 2.4.x rule sets. For example, this test evaluates the performance of the following rule:
alert tcp any any -> any 25 (content:"|00|"; sid:12345678; rev:1; classtype:misc-attack;)
This rule looks for binary content 0x00 in any TCP segment to port 25. Turbo Snort Rules reports this rule is slightly slower than the average rule in the 2.3.3 and 2.4.0 Snort rule sets.
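Why would a one-byte content be slow? A rule's content acts as a pre-filter, and a short pattern matches at many offsets, forcing the engine to evaluate the rest of the rule's options repeatedly. This toy scan is my own illustration, not Snort's algorithm (Snort uses a multi-pattern matcher such as Aho-Corasick), but it shows the effect:

```python
def match_positions(data, pattern):
    """Count offsets where the pattern occurs, via a naive scan."""
    return sum(1 for i in range(len(data) - len(pattern) + 1)
               if data[i:i + len(pattern)] == pattern)

payload = b"\x00" * 100 + b"HELO mail.example.com\r\n"

# A one-byte pattern like |00| yields many candidate offsets...
assert match_positions(payload, b"\x00") == 100
# ...while a longer, more specific pattern yields very few
assert match_positions(payload, b"HELO") == 1
```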
Turbo Snort Rules is a great idea, but the site does not appear to have been updated since 2005. It's functional, but more modern rule sets (2.6.x, 2.7.x) haven't been benchmarked.
As JP Vossen pointed out in his 2005 article, Using IDS rules to test Snort, the easiest way to ensure Snort is actually seeing traffic is to create a simple rule and see if Snort generates an alert. If you run a tool like IDSWakeup, it will indeed generate some alerts, and a simple Nmap scan will most likely generate some as well. Setting up a target system and running an actual malicious attack, such as exploitation via Metasploit, tests Snort against a server-side attack. More elaborate client-side attacks can also be devised to test Snort's ability to detect those patterns.
The bottom line is to figure out the goal of your test, and then devise the simplest way to accomplish that goal. It's always best to begin by running Snort with a very basic rule, explained in the first Snort Report (Intrusion Detection Mode). If you can't get Snort to fire on the most basic activity, then a serious problem exists.
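Such a basic rule might look like the following; the msg and sid here are arbitrary (by convention, local rules use sids of 1,000,000 or higher). Any ping crossing the sensor should then generate an alert:

```
alert icmp any any -> any any (msg:"ICMP test rule"; sid:1000001; rev:1;)
```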
About the author
Richard Bejtlich is founder of TaoSecurity, author of several books on network security monitoring, including Extrusion Detection: Security Monitoring for Internal Intrusions, and operator of the TaoSecurity blog (taosecurity.blogspot.com).