We tested five VA scanners to see how well they illuminate holes in your network.
BY Joel Snyder
Information Security, March 2003
Quick: What's on your network? What's it running? Is it patched? Up to date? Properly configured? Are you vulnerable? A vulnerability analyzer (VA) is designed to help you answer these questions.
Many security managers first see the results of a vulnerability analyzer when a consultant drops an annual audit report on their desk. But as January's SQL Slammer worm reminded us, exploits aren't timed to coincide with audits. Like Code Red in 2001, Slammer exploited a well-documented vulnerability; and as with Code Red, a patch was available well before the worm struck. The point is, admins need an up-to-date picture of what's running on their network, where the holes are, and what's patched and what's not. We tested five tools to see which had the best detection engines and reporting tools, and which did the best job managing the data from their findings.
To determine how well these tools support real-world tasks, we staged our testing in phases to get a full picture of our lab network and its vulnerabilities.
Our test network was large enough to give the analyzers a run for their money. We scanned 14,025 IP addresses, with more than 1,130 active systems. Then, we randomly picked 25 of those systems and investigated them in depth (see Vulnerability Hunters).
Because scanning is network and processor intensive, all VA products offer different types of scans, usually called "policies." We set the vulnerability analyzers at their highest, or most intensive, setting. We took the advice in each product's documentation to pick the heaviest "safe" scan--safe in the sense that the scan was not supposed to crash any system, although this definitely wasn't the case (see "Caveat Emptor").
We installed all the analyzers on dual-processor 1.7 GHz systems with 512 MB of memory.
Based on our test results, each product received letter grades in four evaluation categories, and a final grade based on overall performance (see Comparison Chart).
As the number of security threats to networks and servers grows, security managers have turned to vulnerability analysis tools to identify a wide variety of potential problems on their networks. While host-oriented patch tools such as UpdateEXPERT from St. Bernard Software and HFNetChkPro from Shavlik Technologies focus on the myriad patches needed to keep Windows servers up to date, network vulnerability analyzers look for more than just missing patches.
These tools can search for misconfigured application servers, such as Web servers; and network components, such as switches and routers, that are vulnerable to known problems. They look for out-of-date applications, especially those with known problems. And they often search for applications that are enabled by default--but perhaps shouldn't be, such as RPC services on Unix or the UDP ECHO program on Windows NT/2000. Vulnerability analyzers are also security oriented, so they often look for "information leakage" from systems through DNS and other avenues, including SNMP and the Windows registry.
Most vulnerability analyzers take a three-phase approach to testing. There are many variations within these three phases. Some products try to brute-force guess passwords on accounts. Others assume a "friendly" environment and connect to servers with administrative access to look for problems at the system level. Some are more devious, and will try to evade a network IDS.
Mapping the Network
The first task of a vulnerability analyzer is to discover what's on the network, and what it's running. In addition to discovering systems and services, we tested each product's ability to spot services running on nonstandard ports, and what OSes were running on each system.
Discovering systems: In our behind-the-firewall tests, all systems would respond to a simple ping, so finding the systems was easy. However, ping isn't really adequate when testing through a firewall or validating a firewall configuration. All of the vulnerability analyzers had additional techniques for discovering resources, although not all were very well documented.
In testing outside the firewall, Nessus offers the best control, with six port-scanning techniques, from simple ping to SNMP discovery to actually trying a TCP connection. eEye's Retina allows you to scan a host even if it doesn't respond to ping packets.
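The difference between ping-based and connection-based discovery is easy to sketch. The following Python fragment is a minimal illustration, not how any of the reviewed products is implemented; the probe ports and timeout are assumptions. It marks a host as alive if any probe port accepts a TCP connection, which works even when a firewall filters ICMP ping:

```python
import socket

# Illustrative sketch only: treat a host as "alive" if any of a few assumed
# probe ports accepts a TCP connection, so discovery still works when ICMP
# ping is filtered by a firewall.
PROBE_PORTS = (22, 25, 80, 443)  # assumed common services

def tcp_probe(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def host_is_alive(host, ports=PROBE_PORTS):
    """A single open port is enough to prove the host exists."""
    return any(tcp_probe(host, p) for p in ports)
```

A real analyzer layers several such techniques--ICMP, TCP SYN, UDP and SNMP probes--and lets the operator choose among them, as Nessus does.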
Discovering services: Pinging systems is one thing, but discovering what TCP/IP network services are on each system, such as Web or mail servers, is another matter entirely.
ISS's Internet Scanner identified the most services correctly, throwing in a few false positives, and only missing a few. Nessus came in with a slightly lower score, but with no false positives.
Retina and SAINT did well in most cases, but both had major functional flaws. On two of our selected systems, Retina simply went berserk, reporting hundreds of nonexistent TCP and UDP services, which casts doubt on the accuracy of everything it did.
SAINT generally did an excellent job, even finding some services that the others missed (for example, an SMTP server hidden on port 2525). However, its internal database was confused by a DNS trap we laid for it, and one system couldn't be scanned or reported on by IP number or DNS name. It also performed poorly on a Windows system with many simultaneous Web servers, missing not only the nonstandard ones, but one running on port 80 as well.
Symantec's NetRecon is useless as a mapping tool. While it might be possible to dig through the alerts and reports and discover what's running, NetRecon's designers definitely didn't envision this as a use for their product.
Locating services on nonstandard ports: Identifying TCP/IP network services as "open" isn't sufficient for most network managers. They want to know what services are actually running on what ports. For example, it's easy to assume that port 80 is running HTTP, but what if it's really running an SMTP server? Or, what if someone has started an HTTP server on port 25? These kinds of configuration exceptions point to holes in a security infrastructure.
This capability is essential for a VA tool. It's the kind of mapping data needed to keep the IDS accurate and comprehensive.
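The underlying technique is banner inspection: trust what the service says, not the port it sits on. This Python sketch classifies a service by its opening bytes; the signature table and the probe strategy for silent servers are simplifying assumptions, not a complete fingerprint database:

```python
import socket

# Hedged sketch of service identification by banner rather than port number.
# The signature list is a tiny illustrative sample.
SIGNATURES = [
    ("220 ", "smtp"),   # SMTP servers greet with "220 <host> ..."
    ("HTTP/", "http"),  # HTTP responses begin "HTTP/1.x ..."
    ("SSH-", "ssh"),    # SSH servers announce "SSH-2.0-..."
]

def identify(banner):
    """Classify a service by the opening bytes of its banner."""
    for prefix, proto in SIGNATURES:
        if banner.startswith(prefix):
            return proto
    return "unknown"

def grab_banner(host, port, timeout=2.0):
    """Read the greeting; if the server stays silent (e.g. HTTP), probe it."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        try:
            data = s.recv(256)  # SMTP, FTP and SSH talk first
        except socket.timeout:
            s.sendall(b"HEAD / HTTP/1.0\r\n\r\n")
            data = s.recv(256)
        return data.decode("ascii", errors="replace")
```

An SMTP server on port 80 would greet with "220 ..." and be tagged as mail, no matter what the port number suggests.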
Nessus and Retina were tops at finding TCP/IP network services running on nonstandard ports. Still, they had some problems. Retina identified all of the services we had stashed on nonstandard ports, but didn't follow through. For example, an HTTP server on port 81 was correctly tagged and analyzed, setting off alarms. However, an SMTP server on port 80 was identified, but Retina didn't call it out as a relay--even though it flagged the same SMTP server running on port 25.
We also ran into GUI defects and bugs in Retina. For example, although it knows that the server running on a particular port isn't what you'd expect, the GUI gives you no clue. You have to click on something clearly labeled as a Web server to drill down and discover that Retina knows it's really a mail server.
Nessus did a good job of identifying services on nonstandard ports, but as with many open-source products, it was inconsistent on quality control. Since each vulnerability test is actually a short script written by a contributing volunteer, each script has to be port-independent. In one case, we got both HTTP and SMTP alerts on the SMTP server running on the HTTP port because of a poorly written script.
NetRecon and SAINT failed to flag any TCP/IP network services on nonstandard ports. Internet Scanner can search for vulnerabilities on HTTP servers on particular ports, but only if you already know they're there.
OS identification: This is a fairly perilous feature, since it's easy to get wrong--and most of the products did, most of the time. Internet Scanner and Retina were best, getting the OS right about half of the time.
Many of these products take OS identification into account when scanning, both to expedite the scan process and to reduce false alarms. This led NetRecon to miss a critical vulnerability in our network--incorrectly identifying a Cisco switch as an OpenVMS server. It also failed to alert us to a major Cisco-specific bug. Conversely, getting the operating system right is no guarantee of reduced false alarms: Retina showed us over a dozen Unix-specific alerts when it was analyzing an OpenVMS server.
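One of the simpler signals an analyzer can draw on--shown here purely as an illustrative heuristic, not as what any of these products does--is the initial TTL of a reply packet, which differs by OS family. The TTL-to-family table below is a textbook assumption:

```python
# Illustrative heuristic only: default initial TTLs differ by OS family, so
# the TTL observed in a reply hints at the sender's stack. Real analyzers
# combine many such signals, which is why they still guess wrong.
DEFAULT_TTLS = {64: "Linux/Unix", 128: "Windows", 255: "network device/Unix"}

def guess_os(observed_ttl):
    """Round the observed TTL up to the nearest common initial TTL."""
    for initial in sorted(DEFAULT_TTLS):
        if observed_ttl <= initial:
            return DEFAULT_TTLS[initial]
    return "unknown"
```

A reply arriving with TTL 57 probably started at 64 and crossed seven hops--but a few hops more or fewer can flip the guess, which is one reason a Cisco switch can end up labeled as an OpenVMS server.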
Before you start scanning your network, be forewarned: These vulnerability analyzers need to be used with extreme care. They are dangerous, and they will crash your systems. Although most of the configuration tools have options to disable "dangerous" or "denial-of-service" scanning, that isn't always sufficient to keep target systems up.
Nessus gets our vote for "Most Unsafe Program to Have on Your Network." We not only crashed servers and clients consistently with Nessus; we even confused our GPS-based NTP server enough that it had to be re-FLASHed with new firmware. But Nessus wasn't alone in taking systems down: Every one of the VAs crashed at least one system or application during the month of testing. Sometimes it was the firewalls that suffered: our normally rock-solid NetScreen firewall locked up once during testing.
This is a critical point. You can't just take VAs and let them loose on your network to keep scanning it over and over. Even if you don't get consistent crashes, you'll discover a race condition or load-related problem eventually, which will cause the analyzer to crash something. We also have complaints about the huge number of alarms the scanning process creates. We had to completely shut off our IDS while testing, because the large number of false alarms rapidly overwhelmed it.
Nessus performed best on a full-scale scan of our 14,025-address network, covering it in less than 19 hours. But don't read too much into that number--most of the competition was dismal, either failing to complete a full scan or scanning too slowly to be practical. Nessus almost failed, too. Several times during our large scans, Nessus locked up and never completed the scans.
We had a similar problem with Internet Scanner. It would get to what appeared to be the end of the scan (although we could never really tell) and then hang forever. So, we couldn't calculate an actual finishing time for Internet Scanner, but it seemed to be on par with Nessus.
Retina performed well on the tests that worked, but we couldn't complete the scans with all features turned on. The Retina developers worked out the bugs we discovered, after which we completed a scan in 17 hours. To do so, however, we had to disable SNMP and SQL testing. When eEye sent fixes, we re-enabled SNMP and SQL tests, but Retina crashed again.
SAINT had a different problem. Because of the technique SAINT uses to detect hosts, we had to keep splitting our large scan into smaller parts. Still, in some of our single-host tests, SAINT ran much more quickly than, for example, Internet Scanner, but at a price--the scans were significantly less thorough.
NetRecon should get a prize for never once failing. Every time we ran our large scan, it ground and hacked and coughed through it without error. It's the most stable of the products tested, but while "slow and steady wins the race" may be good enough for tortoises, it doesn't cut it for security applications. Our big scan took NetRecon 71 hours to complete, more than three times longer than Nessus, Retina or Internet Scanner.
NetRecon was slow at just about everything. We couldn't work with the whole data set generated by NetRecon when we tried to analyze our subset of 25 special hosts. Instead, we had to rescan just those hosts. The alternative would be to have 15 to 20 seconds of delay every time we clicked the mouse.
ISS's Internet Scanner did a good job reporting vulnerabilities--particularly on Windows systems--such as the Windows XP buffer overflow shown here.
The core of VA products is their engine. Each of the engines had problems with both false positives (a reported vulnerability that wasn't actually there) and false negatives (failing to report a known problem). We were happier to see false positives--which could easily be ignored in future runs--than false negatives.
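The bookkeeping behind these grades is simple set arithmetic: compare what a scanner reported against a known ground truth. A toy Python example, with invented finding names:

```python
# Toy illustration of false-positive / false-negative scoring: compare a
# scanner's reported findings against a known ground truth. Sample data is
# invented for illustration.
def score(reported, actual):
    """Return (false_positives, false_negatives) as sets."""
    return reported - actual, actual - reported

actual = {"open relay", "weak SNMP community", "unpatched IIS"}
reported = {"open relay", "unpatched IIS", "anonymous FTP"}

false_pos, false_neg = score(reported, actual)
```

The asymmetry in the text falls out naturally: a false positive sits visibly in the report where it can be investigated and dismissed, while a false negative never appears at all.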
Internet Scanner, with especially good Windows coverage (see screen, above), and Nessus (see screen, below) performed best overall. Internet Scanner gave us the fewest false negatives, but a number of false positives. Nessus was a close second, giving us better information on *nix systems than on the Windows side, and more false positives across all architectures. Retina, SAINT and NetRecon missed a lot of problems on all the servers and systems we tested, but had a fairly low false positive rate.
The "weighting" of vulnerabilities is a real concern. Most of the engines use a three-level score--low, medium and high. (NetRecon was an exception, with a 100-point scale.) Most network managers will work on vulnerabilities identified as the highest risk, and the engines encourage that.
But these ratings were sometimes inaccurate. For example, Internet Scanner said that we were running a VNC server on one Windows system (true) with no password (false). VNC is a popular remote-control application, similar to pcAnywhere or Timbuktu. If this had been an actual vulnerability, it would have been an ultra-critical problem. But Internet Scanner merely rated it in its lowest-risk "information" category.
Let's look at some examples to illustrate the ups and downs of ferreting out vulnerabilities with these tools:
This screen shows a scan of seven hosts in progress. The entire scan, or a scan of a given host, can be stopped with a mouse click.
With new vulnerabilities being published with alarming frequency, keeping these tools current is essential. The best fully automated updates are in Retina, Internet Scanner and NetRecon. SAINT offers a Perl plug-in to help simplify the process as well. Nessus' vulnerability database is updated regularly, but you have to fetch updates from the Web site yourself.
While these tools may come with a fairly comprehensive set of tests, admins often need to create custom tests quickly and easily to examine specific conditions on their network--preferably without having to be a master programmer. For example, on our network, a common management application needed patching. We tried to write custom tests for each tool that would let us detect the unpatched application by the version number in its welcome banner.
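A banner-based version check of this kind is conceptually simple. Here's a hedged Python sketch of the sort of test we were trying to build in each product; the banner format and the first patched version number are assumptions for illustration:

```python
import re

# Hypothetical custom check: flag a service whose welcome banner reports a
# version older than the first patched release. The banner layout and the
# patched-version threshold are assumptions.
PATCHED = (2, 1, 5)  # assumed first fixed release

def parse_version(banner):
    """Pull a dotted version number out of a welcome banner."""
    m = re.search(r"\b(\d+)\.(\d+)\.(\d+)\b", banner)
    return tuple(int(g) for g in m.groups()) if m else None

def vulnerable(banner, patched=PATCHED):
    """True when the banner advertises a version below the patched one."""
    ver = parse_version(banner)
    return ver is not None and ver < patched
```

The whole test is a banner grab plus a version comparison--which is why a wizard, as in Retina, can generate it in seconds, while writing it in C, as Internet Scanner requires, is needless work.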
Retina makes it very easy to add some types of tests, such as checking for a particular banner on a service, a registry key or some script on a Web server. You use a wizard-like GUI to define what to look for, and you're done in just a few seconds. More complex tests have to be written in a programming language.
Nessus facilitates test creation with the Nessus Attack Scripting Language (NASL), a language designed for vulnerability testing. Because Nessus is open source, you get more than 1,000 existing scripts to use as templates.
Internet Scanner doesn't make the job easy: additional vulnerability tests are written in either C or Perl, and you get just a single example to work from.
SAINT is better. Although you still have to write code in a traditional programming language, SAINT offers you many more examples.
NetRecon has no provision for creating custom tests.
SAINT's use of hyperlinks facilitates navigation, as with this dynamic data analysis of discovered vulnerabilities. It's difficult, however, to export the hyperlinked information.
Data management is vital, especially if you plan to scan your network more than once. It's important to be able to see results in many different formats, pivoting across different views, and comparing data over time. We were disappointed in the functionality we found.
None of the five test products stood out for their data management capabilities.
Internet Scanner, marginally the best of the tools in this category, has an easy-to-navigate and robust data management tool. Vulnerability scans are organized by "session," which combines host lists and scanning policies. A session has a set of systems to be scanned and a policy for which vulnerabilities to look for. Multiple scans across the same session are saved separately in an ODBC database. Unfortunately, there are few cross-session manipulation tools, so you're stuck looking at each session one at a time. We were disappointed, too, that Internet Scanner really didn't have a good way to manipulate data over time--for example, to show trend analysis of multiple scans of the same systems.
Viewing session data is easy. Internet Scanner lets you pivot across four views: by host, by vulnerability, by service (such as SMTP or HTTP) or by account. Its vulnerability database is outstanding. Each discovered vulnerability is accompanied by comprehensive documentation, including remedies, false positives and negatives, and multiple URLs for research.
NetRecon's data manipulation tools gave us the sense of having great control over our data. That's important, because NetRecon generates a lot of data. For the 25 systems we looked at in detail, NetRecon generated 4,984 records, but more than 4,000 of these were entries like "this system responded to ping" or "I found this name in the DNS." You can drown in irrelevancies. NetRecon has great slice-and-dice capabilities, but the problem is that you're slicing and dicing an enormous amount of data, mixing important and unimportant, critical and irrelevant.
In addition to host- and vulnerability-oriented sorting, NetRecon sorts by "objectives." For example, pick "find SMTP vulnerabilities," and it will show you all of the vulnerabilities related to SMTP.
The list of vulnerabilities can be sorted by any one of 24 different columns, ascending or descending, with a single click--assuming you're looking at a small number of hosts. When we tried to do this on our entire network, NetRecon's analysis tools were too slow to be usable.
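The triage problem NetRecon leaves to the user--separating informational records from actionable findings--can be sketched in a few lines of Python. The field names and sample records below are invented:

```python
# Sketch of triaging a noisy result set: drop purely informational records
# and sort the rest by severity so critical findings surface first. Field
# names and sample records are assumptions for illustration.
records = [
    {"host": "10.0.0.5", "finding": "responded to ping", "risk": 0},
    {"host": "10.0.0.5", "finding": "SMTP open relay", "risk": 90},
    {"host": "10.0.0.9", "finding": "name found in DNS", "risk": 0},
    {"host": "10.0.0.9", "finding": "unpatched web server", "risk": 70},
]

def actionable(records, floor=10):
    """Keep records at or above a risk floor, highest risk first."""
    kept = [r for r in records if r["risk"] >= floor]
    return sorted(kept, key=lambda r: r["risk"], reverse=True)
```

Applied to NetRecon's 4,984 records for our 25 hosts, a filter like this would discard the 4,000-plus "responded to ping" entries before any slicing and dicing begins.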
SAINT does a poor job of integrating scan results with scan configurations and a repeatable way to run the same scan again. It's the best tool for exploring the characteristics of your network, and for running ad hoc vulnerability tests as you explore the web of trust. But in a more production-oriented environment with regular, repeated tests, the tools just aren't there.
The reporting capabilities in Symantec's NetRecon are its strong suit, with numerous options and great flexibility. Users can report by host, vulnerability or risk level.
SAINT's Web-based GUI was both its strength and weakness. The developers took the idea of hyperlinking to heart, with the result that you can drill down, through, across and over your data very quickly (see screen). Start, perhaps, by looking at a list of vulnerabilities, then click on one to see which hosts are affected, then click on an affected host to see what other problems it might have, or services, and then click...it's a maze of possibilities which lets you jump around very quickly in your data set. Unfortunately, even with all this functionality, it's hard to export the hyperlinked data stream.
Nessus itself doesn't have a data management tool. We decided to try NessusWX, an open-source Windows-based client that has more sophisticated data management functions than the Nessus native tool. A Unix tool is also available.
Like Internet Scanner, NessusWX organizes vulnerability scans by session. At any time, you can bring up an old session and have Nessus rescan using the same set of saved parameters and vulnerability tests. It's also easy to compare the results of two scans.
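Comparing two sessions amounts to keying each finding by host and vulnerability and taking set differences. A minimal Python sketch, with invented sample data, of what such a comparison produces:

```python
# Minimal sketch of comparing two scan sessions: findings keyed by
# (host, vulnerability) make new and fixed issues fall out as set
# differences. The sample data is invented for illustration.
def diff_scans(old, new):
    """Return (appeared, resolved) findings between two scans."""
    old_set, new_set = set(old), set(new)
    return sorted(new_set - old_set), sorted(old_set - new_set)

january = {("10.0.0.5", "open relay"), ("10.0.0.9", "weak SNMP community")}
february = {("10.0.0.5", "open relay"), ("10.0.0.7", "anonymous FTP")}

appeared, resolved = diff_scans(january, february)
```

This is exactly the trend question--what's new, what's fixed--that most of these products' data management tools struggled to answer.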
Unfortunately, NessusWX isn't a very sophisticated information browsing tool. You can only look at your results by host, sorted alphabetically (not numerically). Plodding through vulnerabilities one at a time was pretty tedious. On the plus side, you can mark a vulnerability as a false positive, which means that it won't show up in reports. The vulnerability description text associated with each alarm is thin compared to Internet Scanner and Retina.
We got off to a bad start with Retina. eEye has a suite of Web-based management tools that wrap around the scanner. Unfortunately, they didn't work very well for us, and we spent so much time working out bugs in the scanner itself that we didn't get back to these. (However, in an off-site demo after our testing was completed, eEye was able to demonstrate its enterprise-level management system, which would have improved our perception of the capabilities of the product.)
We fell back to just looking at Retina itself--with a sigh of relief! The Windows-based GUI on Retina doesn't have a lot of data management features, but it's one of the nicest we've seen at giving you a system-by-system overview of the problems found in vulnerability analysis. Unfortunately, that system-by-system view is also the only way Retina lets you look at your results.
The ability to generate reports in different formats and with different data sets is important in summarizing and presenting information.
Internet Scanner and NetRecon earned the highest marks as reporting tools. The others were just fair.
Internet Scanner's reporting is sophisticated and flexible, with varying levels of detail. You can sort by host name or address, OS, service or vulnerability severity. You can pick which scan to report on, and which vulnerabilities, hosts and services to include. Reports are available in HTML, PDF and RTF formats.
NetRecon's powerful built-in reporting tool, based on Crystal Reports, lets you generate one of three types of reports: an executive summary, detail sorted by host or detail sorted by vulnerability. Reports also can be filtered and trimmed, so you can select particular hosts, vulnerabilities or risk levels (see screen, above). You can print the report directly, or save it in Excel, Word or HTML.
SAINT's hyperlink format is especially hard to represent in a printed report. Reports are only useful when pushed out in HTML format--the ASCII report we created was formatted so poorly that it was useless. Although you do have a number of different types of reports, you can't modify which data go into them. Its reporting component, SAINTwriter, generates a lot more graphs than the standard reports, but these are limited. For example, one required us to distinguish between 20 different colors across a 15-screen report, remembering the difference between light-light-green and medium-light-green.
We found that it was easier to run reports with NessusWX than to stay with the Nessus GUI. The report writer is slim, but has the features we needed for our testing. Reports can be sorted by host name or vulnerability, filtered by the severity of the vulnerability. The report writer will generate text, HTML or PDF files.
Retina reports are also limited. Retina offers three types of reports, and while there are many options to customize the style, there are no options for customizing content (see screen). The Retina report writer only generates HTML.
Making a Choice
None of the products we looked at excelled in all areas. Almost any might be good for a once-a-year scan of your network. But as day-to-day tools in the real world of corporate security, each had significant weaknesses.
That being said, ISS's Internet Scanner, a commercial product, and the open-source Nessus/NessusWX team were the best of the five tools we tested. Both did well at finding, reporting and managing vulnerability information on our network.
eEye's Retina gives the user great flexibility in customizing the style of its three types of reports, but there's no way to customize report content, and output is limited to HTML. Cascading windows here show an executive report of audits over the last 30 days, with other reports behind.
Internet Scanner stood out as being good in almost all areas, with an especially strong ability to track down problems in Windows environments.
While Nessus did well tracking problems all over the network, the lack of an automated update service and weak data management tools keep it from being a certain win. However, if you haven't done much with vulnerability analysis before, Nessus is an outstanding start--and you may find that you never go commercial.
eEye's Retina is one of the most promising products we looked at, but isn't mature. Our experience wasn't outstanding because of multiple bugs, and the data management and reporting weren't very impressive. But Retina looks like the one to watch, and does have features (such as the ability to detect problems on nonstandard ports) that Internet Scanner does not.
We're less enthusiastic about SAINT and NetRecon. Neither stood out in any of the areas we cared about most: data management, documentation, reporting, or accuracy and completeness.