The truth behind the Outlook/IE bug

I first became aware of the Outlook/IE bug when I received an e-mail warning from SANS (System Administration, Networking, and Security) with advice to take care of the problem immediately by applying a patch available from the Microsoft Web site. Even if I had already gone home for the day, I was told, I should drive back into the office to make sure this was done. This urgency was unusual.

At first the reports were a bit confusing. The problem affected Outlook and Outlook Express, but was related to a library (INETCOMM.DLL) that was shipped with Outlook Express and Internet Explorer.

MSNBC.com called the bug a "dangerous new attack method" and warned that it could potentially affect nearly 100 million users. Even the subtitle of the report - "Now, just receiving an e-mail can be dangerous" - caught readers' attention. It had the same dramatic tone as that often-quoted movie line: "This time, it's personal". These descriptions created the impression that the terrain of systems security had just changed dramatically.

For the most part, it had not. Though many people, such as myself, were forced to acknowledge that e-mail vulnerabilities could appear on our Windows clients no matter how effectively we warned our users against clicking on attachments, the bug was really just another instance of one of the most common attacks on the Internet. It wasn't a virus; it was a buffer overflow.

Sysadmins and security specialists alike had grudgingly come to expect more of what we'd already seen - stylised viruses like those that had been plaguing us earlier. Though the Love Bug virus (one noteworthy example) had caused problems for many organisations and spread quickly around the world, the fact that computer users had to double-click on a Visual Basic attachment to set it in motion left some of us feeling that we were still in control.

My current user base consists primarily of Java programmers - the kind of people who aren't easily distracted by "ILOVEYOU" messages from strangers. Luckily, our natural inclinations (to be cautious of attachments) and our location (on the West Coast of the US, which meant that local radio stations were reporting the bug before many of us even reached our offices) safeguarded us from these sorts of attacks.

The Outlook/IE bug was different to the Love Bug and its ilk in another very important way - it was discovered before it was exploited. This may be the normal course of events for many security-related bugs, but it is generally not the case with the kind of security problems that most Windows users deal with. These users generally become aware of a virus only when it's spreading wildly, or after they have been personally affected.

The Outlook/IE bug was discovered and reported to Microsoft by an Argentine security research team known as Underground Security Systems Research (or USSR Labs). Shortly after the group had reported the bug to Microsoft (and while a patch was still being prepared), the problem was inadvertently reported to the NTBugTraq list by an independent discoverer identified as Aaron Drew. Though the public announcement about the vulnerability was delayed long enough for some of the needed patches to be made available, people were scrambling to apply them.

White rabbit, pink eyes

A buffer overflow is a security hole that arises when a program fails to check the size of the data it handles, and it provides opportunities for exploitation. In the Outlook/IE bug's case, the opportunity was created because the length of the date/time field in the e-mail header was never properly checked before the field was processed.

E-mail headers identify the sender and recipient, and provide the subject line, date/time field, and other information that determines how each message is delivered and displayed in the user's inbox. The date/time field should look something like this:

Date: Sun, 20 Aug 2000 20:35:06 +0700

In a malformed message, a stream of characters might be tacked onto the end of this valid information.
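As a purely hypothetical illustration (the padding below is invented; it is not the actual exploit string), a malformed header might look something like this:

    Date: Sun, 20 Aug 2000 20:35:06 +0700AAAAAAAAAAAAAAAAAAAAAAAAAAAA... [several hundred more characters]

If the software reading this field copies it into a fixed amount of memory without first checking its length, all of that extra text has to go somewhere - and that is where the trouble begins.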

Two factors in the online reports added to the confusion. One of these involved the issue of when the problem could arise. The two reporters (USSR and Drew) seemed to hold slightly different opinions. The former claimed that the bug could be exploited when e-mail was being downloaded from the server - even before being added to the user's inbox. The latter seemed to believe that some action on the part of the user (such as opening an e-mail folder) might be necessary. When we examine the nature of the problem a little more closely, this difference seems very small.

In order to fetch e-mail from a mail server using POP or IMAP (two protocols for enquiring about and retrieving e-mail), client software has to make a connection, supply a username and password (to authenticate the user), and download the messages. Mail clients process messages to a certain extent before users have a chance to read them: among other things, they extract sender names and subject lines for display, and usually read the date/time field so that the messages can be sorted. Presumably, it is when this scanning is done that the Outlook client would be subject to subversion.
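To make that sequence concrete, here is roughly what a POP3 exchange looks like from the client's point of view (the username, message size, and server responses below are illustrative, not taken from a real session):

    C: USER alice
    S: +OK
    C: PASS xxxxxxxx
    S: +OK mailbox locked and ready
    C: LIST
    S: +OK 1 message (1842 octets)
    C: RETR 1
    S: +OK 1842 octets
       (the full message follows, headers - including the Date: field - first)

The client has the Date: field in hand, and typically parses it for sorting and display, before the user has clicked on anything at all. Seen this way, the difference between "while downloading" and "when a folder is opened" is largely a matter of timing, not of exposure.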

Contributing to the confusion over the bug was a statement that claimed corporate users weren't affected. There was no follow-up explanation as to why corporate users might be any more or less vulnerable than anyone else. Still, even this claim falls into perspective when we more closely examine what is known about the software flaw.

POP and IMAP (generally POP3 and IMAP4) are the most common protocols that people logging into the Internet from home, or in small ISP-supported offices, will use to fetch their e-mail. Other protocols for fetching e-mail from a server operate in different ways and don't use the same libraries. The Messaging Application Programming Interface (MAPI), which is most often used with Microsoft Exchange, was not affected by this bug. This is undoubtedly the source of the comment about "corporate" users.

You know what 'IT' means

Tucked into Microsoft's FAQ on the bug (MS00-043) is a description of the bug as a buffer overrun vulnerability. Also referred to as a buffer overflow, a buffer overrun is a condition that occurs when an operation, such as copying a string from one memory location to another, writes more data than the destination buffer can hold - the string is simply too large for the space set aside for it. When this happens by accident, the running process will often crash with a segment violation (that is, an attempt to use memory not assigned to it).
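A minimal sketch in C shows how easy it is to write this kind of flaw. The function name, field, and buffer size here are invented for illustration - this is not the code inside INETCOMM.DLL:

    #include <stdio.h>
    #include <string.h>

    /* Copy the value of a Date: header into a local, fixed-size buffer.
     * strcpy() keeps copying until it reaches the end of the source string;
     * it never asks whether the destination is big enough. */
    static void parse_date_header(const char *header_value)
    {
        char date_buf[64];
        strcpy(date_buf, header_value);   /* overflows if the value exceeds 63 characters */
        printf("Date: %s\n", date_buf);
    }

    int main(void)
    {
        char long_value[300];
        memset(long_value, 'A', sizeof(long_value) - 1);
        long_value[sizeof(long_value) - 1] = '\0';
        parse_date_header(long_value);    /* likely to die with a segment violation */
        return 0;
    }

Checking the field's length before copying it - or using a length-limited copy such as strncpy() - is all it takes to close a hole like this one.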

To understand how a buffer overflow can cause arbitrary commands to be executed, it helps to understand the concept of a stack. A stack is basically a segment of memory that is conceptually managed like a stack of dishes: items are put onto the stack (pushed) and taken off the stack (popped) in last-in, first-out order. One of the primary uses of the stack is to hold the return address when a subroutine is called, so that the process knows where to pick up again once the subroutine finishes.

Stuffing too much data into an unchecked data field is likely to crash a process, but it won't do any more harm than that. Inserting carefully crafted machine instructions, on the other hand, places that code on the processor stack, and the same overflow can replace the return address of the subverted routine with the address of those instructions, so that the code is executed when the routine returns.
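What makes the trick possible is that, on most systems, a function's local buffer and its saved return address sit near each other on the stack. Very roughly - the exact layout varies by processor, compiler, and operating system - the stack frame for a function like the hypothetical parse_date_header() above looks like this:

    higher addresses
    +--------------------------+
    |  caller's stack frame    |
    +--------------------------+
    |  return address          |  <- the overflow can overwrite this
    +--------------------------+
    |  saved frame pointer     |
    +--------------------------+
    |  date_buf (64 bytes)     |  <- the copy starts here and grows upward
    +--------------------------+
    lower addresses

If the attacker fills the buffer with code and then keeps writing until the return address points back into the buffer, the injected code runs as soon as the function returns.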

If the process is running under a privileged account, and the extra text within the string was designed with skill, a buffer overflow of this type can drop the attacker into a shell running with those privileges - a shell from which commands that seriously compromise the system can be issued.

A buffer overflow attack thus occurs when an attacker deliberately takes advantage of flaws in executables to gain access (preferably privileged access) to a computer system. For a buffer overflow attack to be successful, the flaw must first exist and the attacker must know how to exploit it.

The intention of those who take advantage of buffer overflow opportunities is to run code of their own creation, further compromising the system, or to drop themselves into a privileged account from which they can issue privileged commands.

One critical point should be made in recapping this description of the Outlook/IE bug: we didn't see it coming and we weren't looking for it. If I can be so bold as to speak for most computer users, our understanding of what is happening below the surface when we retrieve and open a piece of e-mail did not include the possibility of remote users causing damage to our systems by intentionally sending e-mail messages with malformed headers. In fact, many of us (myself included) pooh-poohed the idea that simply reading e-mail could cause damage back when the Good Times hoax was making its second and third rounds across the Internet.

Of course, now that we are prepared for such contingencies as having our systems attacked when we simply retrieve e-mail messages, we may never see this particular problem again. So, then, why am I writing this column? What do I think we can learn from this experience? Why is this a good point with which to embark on yet another security column?

First, we can come to a greater understanding of basic security problems - in this case, buffer overflow problems, the primary route by which systems on the Internet are compromised. Second, by gaining a better understanding of how software is vulnerable and how systems are attacked, we put ourselves in a better position to know what can and should be done in response. My intention in writing this is to expose some of the basic flaws and mechanisms that leave systems vulnerable, and to discuss the basics of tools and techniques that help secure them.

First, I want to dispel the notion that computer security is a black art. It is no more a black art than driving a car. Driving a car, stealing a car, and fixing a car all require the same skill set, though a mechanic would have a skill level different to that of a driver - or a thief. In the same way, using computers, breaking into computers, and fixing computers all require the same fundamental understanding, albeit to different degrees.

Second, security attacks are not random events. They are calculated and incremental, and they fall into easily definable categories. Just as clothing rips at the seams and tools rust on their surfaces, software breaks in ways that programmers and attackers understand.

The number of basic ways that a system can be attacked is not as large as one might expect. Just as there are only so many ways to break into a house, computer systems fail in a relatively small number of well-defined modes. Granted, enough bugs are discovered every day to make keeping track of them a full-time job. At the same time, these bugs are variations on a handful of basic themes. The best defence is to know as much as possible about how tools are built, what operations occur below the surface (without the user's awareness), and what fundamental insecurities exist.

Once you understand these things, you will be able to make smart decisions about what you should do to keep your systems as safe as possible.

Note: Microsoft recommends that even those users not directly affected by the Outlook/IE bug apply the patch, because other bugs are fixed in the same service pack.

* Columnist and author Sandra Henry-Stocker (aka S Lee Henry) has been administering Unix systems since 1983, when she took a job with the CIA. She's also served on the board of the Sun User Group, taught Unix and network security and now works for E-Trade and writes newsletters for ITworld.com.
