When will we ever learn?

By Joel Snyder
Network World, 09/29/03


Two years ago this month I wrote a column called "Learning lessons from Code Red". Code Red had hit hard, taking over servers all over the Internet. It's still there - we get dozens of Code Red attempts every day from a worm that's two years old.

Two worms that hit this summer, W32/Blaster (also known as Lovsan, and followed by the related W32/Welchia, also called W32/Nachi) and SoBig (also known as SoBig.F), spread for exactly the same reasons. Microsoft published bulletins, but people ignored them. Patches were issued, but no one applied them. The worms came in through firewalls that shouldn't have let them in. Infected systems continued spreading the worms because we didn't have adequate tools to contain them. Two years after Code Red, there are still fundamental problems in the way we manage and secure systems that make us vulnerable to this kind of attack.

The first problem concerns ISPs. Worms spread like this partially because of the widespread availability of broadband Internet, specifically unfirewalled broadband Internet. People want to learn at home, so they bring up a Windows server. Why bother with a firewall? It's just a test box, right? ISPs traditionally have sold unfiltered bits to their customers.

At the enterprise level, we could count on firewalls. At the residential level, how much damage could a 28.8K bit/sec modem do? During the transition to broadband, ISPs have not changed their model. They insist on selling high-speed connections at rock-bottom prices, which is great for consumers - until the ISPs' failure to provide adequate security for their customers brings the whole Internet to its knees. ISPs need to re-evaluate their policies on open access to customers, especially residential broadband customers who cannot be expected to firewall their own systems properly.

The second problem involves tools. Although network managers generally keep their houses in order, it's not because they know what's going on; it's because the system is so over-engineered that they don't have to know. Recent research shows an enormous amount of Internet traffic is plain garbage: packets that should never have gotten where they are, or even been allowed to leave their original network.

The bottom line is that we generally don't have a good way to say who is doing what on our networks. There are lots of tools out there, from URL watchers to intrusion-detection systems to IP-layer flow tools. Even most Cisco routers have flow-analysis tools built in. But few of us have installed them, and fewer still know how to use them.

Just answering the question "Who on my network is infected?" is not easy, even though infected systems stand out like a sore thumb in traffic data. Network managers need to take a closer inventory of their networks and add tools that will help them monitor what's going on with all those bits. Without data, we're flying blind.
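The kind of flow analysis the column has in mind can be sketched in a few lines. Everything below is illustrative - the record format and threshold are assumptions, not any vendor's actual export format - but it shows the idea: a worm-infected host fans out to many distinct destinations on one port, and flow summaries make that pattern easy to spot.

```python
from collections import defaultdict

# Hypothetical flow records: (src_ip, dst_ip, dst_port) tuples, the way a
# flow collector (fed by something like Cisco's built-in flow export)
# might summarize traffic.
flows = [
    ("10.0.0.5", "192.0.2.1", 135),
    ("10.0.0.5", "192.0.2.2", 135),
    ("10.0.0.5", "192.0.2.3", 135),
    ("10.0.0.9", "198.51.100.7", 80),
]

def suspected_scanners(flows, fanout_threshold=3):
    """Flag sources contacting many distinct hosts on a single port --
    the fan-out pattern a scanning worm produces (e.g. Blaster on TCP/135).
    The threshold is arbitrary here; real deployments tune it."""
    fanout = defaultdict(set)
    for src, dst, port in flows:
        fanout[(src, port)].add(dst)
    return sorted(src for (src, port), dsts in fanout.items()
                  if len(dsts) >= fanout_threshold)

print(suspected_scanners(flows))  # ['10.0.0.5']
```

A real monitoring setup would read live flow exports rather than a hard-coded list, but the core question - which source is talking to far more hosts than a normal client would - is exactly this small.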