Welcome to the third installment of Traffic Talk, our regular series for networking solution providers and consultants who troubleshoot business networks.
Recently I read an interview with network security pioneer Marcus Ranum, who said:
There are two huge problems: software development and network awareness [...] All through the 1990s until today, organizations were building massive networks and many of them have no idea whatsoever what's actually out there, which systems are crucial, which systems hold sensitive data, etc.
The 1990s were this period of irrational exuberance from a security standpoint -- I think we are going to be paying the price for that, for a long time indeed. Not knowing what's on your network is going to continue to be the biggest problem for most security practitioners [...]
The real best practices have been the same since the 1970s: know where your data is, who has access to what, read your logs, guard your perimeter, minimize complexity, reduce access to "need only" and segment your networks.
The motto of TaoSecurity is "Know your network before an intruder does." But knowing your network is a difficult proposition. Most network owners wish they could buy a magic box to identify and protect all their information assets. This approach has never worked and will never work because the modern enterprise is too complicated for any machine to make these decisions.
Since products can't do the job, many organizations assign the task to individuals and expect them to maintain inventories of networks, hosts and information. But manually maintained inventories are prone to error and omission.
The only realistic answer is a hybrid approach, in which an automated system performs most of the network security monitoring and a person with business knowledge validates the results. For certain tasks, this should be straightforward. For example, it shouldn't be difficult to identify all the systems connected to the network. Consider the following ways to identify live hosts:
- Conduct network-based scans that send ICMP, UDP or TCP traffic to various subnets. The main advantage of this approach is simple operation, but the disadvantages can be numerous: many systems now run host-based firewalls that may block scans; the network owner must know which subnets to probe; hosts might be down at the time of the scan; scans might even crash fragile targets. And even if a system responds to a scan, recognizing what it is might be difficult. Despite these challenges, broad network scanning should still be an important component of any network discovery program.
- Conduct passive assessments that listen for any traffic traversing a monitored transit point. The advantage of this network security monitoring approach is that it doesn't affect the observed hosts. The passive approach can also build a profile of observed traffic, and its continuous nature means any traffic whatsoever from a target can populate an asset database. Disadvantages include the need to monitor many transit points and the possibility that network address translation obscures true source addresses.
- Mine application logs for evidence of host activity. Any network has data about who uses various resources. If those logs aren't kept, activate them! Think of the variety of infrastructure elements even the most basic host is likely to use. Most systems use Dynamic Host Configuration Protocol (DHCP) to obtain an IP address and Domain Name System (DNS) to resolve host names. DHCP servers and DNS servers can keep logs of what boxes contact them. Switches maintain Content Addressable Memory (CAM) tables showing what Media Access Control (MAC) addresses reside on which switch ports. Routers and firewalls can log activity using NetFlow or Access Control Lists, respectively. Web proxies can record hosts that browse the Web. Servers can log access to file and print services. All of these logs can be mined to profile the network.
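The first approach, active scanning, can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not a replacement for a purpose-built scanner such as Nmap: the subnet and port are placeholders, and a TCP connect probe is only one of the probe types mentioned above.

```python
import ipaddress
import socket

def sweep_targets(cidr):
    """Enumerate the host addresses in a subnet for an active sweep."""
    return [str(ip) for ip in ipaddress.ip_network(cidr).hosts()]

def tcp_probe(host, port=80, timeout=0.5):
    """Return True if the host completes a TCP connection on the port.

    A host-based firewall may silently drop the probe, so a False
    result does not prove the host is down -- one of the weaknesses
    of scanning noted above.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe a small (placeholder) test subnet on port 80.
for host in sweep_targets("192.0.2.0/30"):
    print(host, "up" if tcp_probe(host) else "no response")
```

Note how even this toy sweep must pick which subnets to probe, which ports to try and how long to wait, which is exactly why scan results alone give an incomplete inventory.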
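The second approach, passive assessment, can be illustrated by building an asset inventory from flow records. The flow tuples below are invented sample data standing in for the output of a real sensor at a transit point; the point is that any observed traffic is enough to add a host to the inventory, in either a client or a server role.

```python
from collections import defaultdict

def build_inventory(flows):
    """Build a passive asset inventory from (src_ip, dst_ip, dst_port)
    flow records observed at a monitored transit point.

    Any traffic whatsoever involving a host adds it to the inventory,
    tagged with the role it played in each conversation.
    """
    inventory = defaultdict(set)
    for src, dst, port in flows:
        inventory[src].add(("client", dst, port))
        inventory[dst].add(("server", src, port))
    return inventory

# Fabricated sample flows for illustration.
flows = [
    ("10.0.0.5", "10.0.0.10", 445),   # file server access
    ("10.0.0.6", "10.0.0.10", 445),
    ("10.0.0.5", "10.0.0.53", 53),    # DNS lookup
]
inv = build_inventory(flows)
print(sorted(inv))  # every host observed in any role
```

A real deployment would feed this from NetFlow or packet capture rather than a hardcoded list, and would have to account for NAT hiding true source addresses, as noted above.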
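The third approach, log mining, amounts to parsing infrastructure logs for evidence of host activity. The sketch below extracts IP-to-MAC pairs from DHCP acknowledgement lines in the general style of ISC dhcpd; the sample lines are fabricated and real log formats vary, so treat the regular expression as an assumption to adapt to your own servers.

```python
import re

# Assumed log format, loosely modeled on ISC dhcpd syslog output.
# Real DHCP servers differ; adjust the pattern for your environment.
DHCP_ACK = re.compile(r"DHCPACK on (\d+\.\d+\.\d+\.\d+) to ([0-9a-f:]{17})")

def hosts_from_dhcp(log_lines):
    """Return {ip: mac} for every DHCP acknowledgement seen in the logs."""
    hosts = {}
    for line in log_lines:
        m = DHCP_ACK.search(line)
        if m:
            hosts[m.group(1)] = m.group(2)
    return hosts

# Fabricated sample log lines for illustration.
sample = [
    "Jun 1 10:00:01 dhcpd: DHCPACK on 10.0.0.5 to aa:bb:cc:dd:ee:01 via eth0",
    "Jun 1 10:00:07 dhcpd: DHCPACK on 10.0.0.6 to aa:bb:cc:dd:ee:02 via eth0",
    "Jun 1 10:00:09 dhcpd: DHCPDISCOVER from aa:bb:cc:dd:ee:03 via eth0",
]
print(hosts_from_dhcp(sample))
```

The same pattern-matching idea applies to DNS query logs, CAM tables, NetFlow exports, proxy logs and file server access logs: each source contributes another slice of the asset picture.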
You might think, "Great, I have this data. But I still don't know what matters!" The traditional solution might involve assigning a person to analyze the data and identify the files or resources that matter most. But this subjective approach to network security monitoring is tedious and error-prone.
Perhaps it makes more sense to form certain hypotheses that could describe what matters in an enterprise. Consider the following, where I use the term "file" to describe a piece of data:
1. More important files are accessed by more people than files of lesser importance.
2. More important files are accessed by "more important people"; files of lesser importance are ignored by "more important people."
3. More important files are accessed at a higher frequency than files of lesser importance.
Try expanding this analysis beyond files to servers:
1. More important servers are accessed by more people than servers of lesser importance.
2. More important servers are accessed by "more important people"; servers of lesser importance are ignored by "more important people."
3. More important servers are accessed at a higher frequency than servers of lesser importance.
If you disagree with this assessment, form your own hypotheses and share them with me. You could develop these hypotheses, then test them against the data collected earlier. Testing requires some work to identify "important people" and the like, but that process isn't as difficult as you might assume. This network security monitoring approach is backed by real data and validated by a person. You can decide what matters in your enterprise by seeing how it is used, instead of relying on a person's opinion of what matters.
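One way to test hypotheses like these is to score each asset from an access log by how many distinct people touch it, how often it is touched, and whether designated key people touch it. The sketch below is a toy illustration: the weights, the key_people set and the sample log are all my assumptions, not a recommended formula, and you would substitute your own hypotheses and your own data.

```python
from collections import Counter

def rank_assets(access_log):
    """Rank assets by the three hypotheses: distinct users, access
    frequency, and accesses by "more important people".

    access_log is a list of (user, asset) records mined from server,
    proxy or file-share logs.
    """
    key_people = {"cfo", "dba"}  # assumed list of "more important people"
    distinct = {}
    freq = Counter()
    vip_hits = Counter()
    for user, asset in access_log:
        distinct.setdefault(asset, set()).add(user)
        freq[asset] += 1
        if user in key_people:
            vip_hits[asset] += 1
    # Arbitrary illustrative weighting of the three signals.
    scores = {a: len(distinct[a]) + freq[a] + 2 * vip_hits[a] for a in distinct}
    return sorted(scores, key=scores.get, reverse=True)

# Fabricated access records for illustration.
log = [
    ("alice", "payroll.xls"), ("bob", "payroll.xls"), ("cfo", "payroll.xls"),
    ("alice", "lunch-menu.txt"),
]
print(rank_assets(log))  # most important asset first
```

The ranking that falls out is driven by observed use, with a person left to validate the key_people list and sanity-check the results, which is the hybrid approach in miniature.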
Please let me know what you think of this approach, and how you might implement it in your enterprise.
About the author
Richard Bejtlich is the founder of TaoSecurity, author of several books on network security monitoring, including Extrusion Detection: Security Monitoring for Internal Intrusions, and operator of the TaoSecurity blog.
This was first published in December 2008