The Folly of Controlling Knowledge on Software Vulnerabilities
Published on the Politech mailing list (politechbot.com), Nov 14, 2001
Prof. Pedro Antonio Dourado de Rezende
University of Brasilia - Computer Science Department
Aug 13, 2001
Should security flaws like Code Red be disclosed publicly?
Date: Sun, 12 Aug 2001 18:41:49 -0400
[This is a debate from the Bugtraq list. Richard also forwarded me his
comments separately. My comments are near the end. --Declan]
**********
From: rms@privacyfoundation.org (Richard M. Smith)
Subject: Can we afford full disclosure of security holes?
Date: Fri, 10 Aug 2001 14:39:06 -0400
Hello,
The research company Computer Economics is calling Code Red the most expensive computer virus in the history of the Internet. They put the estimated clean-up bill so far at $2 billion. I happen to think the $2 billion figure is total hype, but clearly a lot of time and money has been spent cleaning up after Code Red.
For the sake of argument, let's say that Computer Economics is off by a factor of one hundred. That still puts the clean-up costs at $20 million.
This $20 million figure begs the question: was it really necessary for eEye Digital Security to release full details of the IIS buffer overflow that made the Code Red I and II worms possible? I think the answer is clearly no. Wouldn't it have been much better for eEye to give the details of the buffer overflow only to Microsoft? They could have still issued a security advisory saying that they found a problem in IIS and where to get the Microsoft patch. I realize that a partial disclosure policy isn't as sexy as a full disclosure policy, but I believe that a less revealing eEye advisory would have saved a lot of companies a lot of money and grief.
Unlike the eEye advisory, the Microsoft advisory on the IIS security hole shows the right balance. It gives IIS customers enough information about the buffer overflow without giving virus writers a recipe for exploiting it.
Thanks,
Richard M. Smith
CTO, Privacy Foundation
http://www.privacyfoundation.org
Links
Code Red Virus 'Most Expensive in History of Internet'
http://www.newsfactor.com/perl/story/12668.html
eEye security advisory -- All versions of Microsoft IIS Remote buffer overflow (SYSTEM Level Access)
http://www.eeye.com/html/Research/Advisories/AD20010618.html
eEye security advisory -- .ida "Code Red" Worm
http://www.eeye.com/html/Research/Advisories/AL20010717.html
Unchecked Buffer in Index Server ISAPI Extension Could Enable Web Server Compromise
http://www.microsoft.com/technet/treeview/default.asp?url=/technet/security/bulletin/MS01-033.asp
**********
From: "Marc Maiffret" <marc@eeye.com>
Subject: RE: Can we afford full disclosure of security holes?
Date: Fri, 10 Aug 2001 13:10:51 -0700
After about 3 weeks of little to no sleep, and after spending lots of my (and Ryan Permeh's) personal time researching CodeRed and its many variants, I have grown tired of the small number of people who so ignorantly point a finger at eEye and try to get people to think that we are somehow responsible. As an employee of a company I must hold back some of my feelings; however, as an individual I can tell you that this is all complete and utter crap.
| Hello,

Where the hell do you or anyone else get off saying that eEye's advisory made CodeRed possible? This sort of ignorance being spread in a public forum is just one of the many things wrong with the security industry. You're making claims that you have no data to back, other than "well, I think so."
| [...] This $20 million figure begs the question: was it really necessary for eEye Digital Security to release full details of the IIS buffer overflow that made the Code Red I and II worms possible? I think the answer is clearly no.
| Wouldn't it have been much better for eEye to give the details of the buffer overflow only to Microsoft? They could have still issued a security advisory saying that they found a problem in IIS and where to get the Microsoft patch. I realize that a partial disclosure policy isn't as sexy as a full disclosure policy, but I believe that a less revealing eEye advisory would have saved a lot of companies a lot of money and grief.

Let's get the facts straight. CodeRed is based on another worm that was written for a .htr ISAPI buffer overflow. CodeRed is an almost identical copy of that .htr worm, a worm which was released back in April. A worm which exploited an UNPUBLISHED vulnerability within IIS, one which was silently patched by Microsoft without notification to anyone. Therefore IDS vendors never had a signature, and the .htr worm went unnoticed. Too bad a security company had not found the flaw; then there would have been details, signatures would have been made, and IDS systems would have detected the first instance of CodeRed back in April.
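[For readers unfamiliar with the term: an IDS signature is just a pattern matched against traffic. A toy sketch in C, assuming a simplified view of the worm's request; the pattern, the threshold, the filler length and the request tail here are illustrative, not any vendor's actual rule:]

    #include <stdio.h>
    #include <string.h>

    /* Return 1 if an HTTP request line looks like a Code Red probe:
     * a GET for /default.ida followed by a long run of filler bytes.
     * An illustrative toy signature, not a production rule. */
    static int matches_code_red(const char *request)
    {
        const char *p = strstr(request, "GET /default.ida?");
        if (p == NULL)
            return 0;
        p += strlen("GET /default.ida?");
        size_t run = strspn(p, "NX");  /* filler seen in known variants */
        return run > 100;              /* arbitrary illustrative threshold */
    }

    int main(void)
    {
        char req[4096];
        size_t n = (size_t)snprintf(req, sizeof req, "GET /default.ida?");
        memset(req + n, 'N', 224);             /* simulate the worm's filler */
        n += 224;
        snprintf(req + n, sizeof req - n, "=X HTTP/1.0"); /* hypothetical tail */
        printf("alert: %d\n", matches_code_red(req));     /* prints: alert: 1 */
        return 0;
    }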
So the facts are: someone found an unknown buffer overflow vulnerability within the IIS .htr ISAPI filter, without any data from eEye. Someone exploited that unknown buffer overflow vulnerability in order to execute code on remote systems, without any data from eEye. Someone took that exploit even further and turned it into a worm (which is what CodeRed is explicitly based on) and launched it at the Internet, without any data from eEye.
Now, a few months later, someone took that .htr worm and modified it to attack the .ida vulnerability. They already had ALL of the knowledge they needed to turn the .htr worm into the .ida worm. There was nothing eEye gave them that made it any easier.
In fact, when it comes down to it technically, eEye's exploit information in the .ida ISAPI overflow advisory was actually put to shame by a skilled programmer by the name of hsj. hsj published a working .ida ISAPI overflow exploit which used a wide-character overflow technique that was far beyond (and nothing like) anything we talked about in our advisory. So the CodeRed worm and the hsj .ida exploit were both technically superior to anything that we (eEye) discussed in our .ida advisory. They did not use ANY technique that had anything to do with our advisory. If you, or any of the other small percentage of people pointing fingers at eEye, actually had any technical understanding of buffer overflow exploits, then you might have understood that and not sent an email to a public mailing list making harsh accusations which are totally inaccurate and untrue.
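[For context: the class of bug being discussed, an unchecked copy into a fixed-size buffer, fits in a few lines of C. This is a generic illustration of the plain byte-string variant, not the actual IIS code and not the wide-character technique hsj used:]

    #include <stdio.h>
    #include <string.h>

    /* The classic unchecked copy: if 'input' is longer than 255 bytes,
     * strcpy runs past 'buf' and overwrites adjacent stack memory,
     * including the saved return address an exploit would aim at. */
    void vulnerable(const char *input)
    {
        char buf[256];
        strcpy(buf, input);            /* no length check: the overflow */
        printf("copied %zu bytes\n", strlen(buf));
    }

    /* The bounded version: the copy can never exceed the buffer. */
    void fixed(const char *input)
    {
        char buf[256];
        strncpy(buf, input, sizeof buf - 1);
        buf[sizeof buf - 1] = '\0';
        printf("copied %zu bytes\n", strlen(buf));
    }

    int main(void)
    {
        char long_input[512];
        memset(long_input, 'A', sizeof long_input - 1);
        long_input[sizeof long_input - 1] = '\0';
        fixed(long_input);         /* safe: prints "copied 255 bytes" */
        /* vulnerable(long_input);    would corrupt the stack */
        return 0;
    }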
| Unlike the eEye advisory, the Microsoft advisory on the IIS security hole shows the right balance. It gives IIS customers enough information about the buffer overflow without giving virus writers a recipe for exploiting it.

This isn't the 70's. People are easily able to write exploits from nothing more than the data Microsoft gives within their advisories. To say that hackers are not able to write exploits based solely on a Microsoft advisory is to underestimate the underground, which is a _bad_ thing to do. Most of the hackers we know have automated tools that compare the files held within a Microsoft security patch to the system files being replaced; after running them through custom modules for IDA and the like, they have pinpointed overflows using ONLY the information held within a Microsoft security bulletin and its patch.
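[The patch-comparison workflow described above can be approximated crudely by diffing the pre-patch and post-patch binaries to see where code changed; real tooling works at the disassembly level, with IDA plugins. A toy byte-level sketch in C, with hypothetical file names:]

    #include <stdio.h>

    /* Report byte offsets where two files differ: a crude stand-in for
     * the binary-diffing step used to locate the code a patch changed. */
    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s old.dll new.dll\n", argv[0]);
            return 1;
        }
        FILE *a = fopen(argv[1], "rb");
        FILE *b = fopen(argv[2], "rb");
        if (!a || !b) {
            perror("fopen");
            return 1;
        }
        long off = 0;
        int shown = 0, ca, cb;
        while ((ca = fgetc(a)) != EOF && (cb = fgetc(b)) != EOF) {
            if (ca != cb) {
                printf("files differ at offset 0x%lx\n", off);
                if (++shown >= 10) {   /* report only the first few */
                    puts("...");
                    break;
                }
            }
            off++;
        }
        fclose(a);
        fclose(b);
        return 0;
    }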
There is a big bad world out there, far beyond the technical information seen on mailing lists like Bugtraq.
Signed,
Marc Maiffret
Chief Hacking Officer
eEye Digital Security
T.949.349.9062
F.949.349.9538
http://eEye.com/Retina - Network Security Scanner
http://eEye.com/Iris - Network Traffic Analyzer
http://eEye.com/SecureIIS - Stop known and unknown IIS vulnerabilities
**********
Those working on computer security on the Internet are painfully aware of the futility of the suggestion offered by Mr. Smith. Companies like MS and others in the proprietary software business, especially those with monopolistic power, don't bother to pay attention to vulnerability reports submitted directly and privately to them. The pattern is very well known, and inescapable. Like a hockey game, this business of reporting vulnerabilities in proprietary software unravels in three parts.
In the first stage of a proprietary software product's evolution game, the producer begins by ignoring private vulnerability reports. Why bother with the hassle, if action to patch the product only incurs extra costs and the possibility of negative publicity, with no increase in revenue? Besides, the budget for its testing cycle has already been blown. So long as nobody else knows about the problem, it is not a problem. As George W. Bush put it so well when he dumped the Kyoto protocol, the effort to clean up "doesn't make economic sense".
Then, while the issue resists dying away, and before its consequences hit big media in a big way, generating bad publicity for the product, the producer enters the second stage of the game. The posture moves from ignoring the reports to questioning their nature: "It's not a bug, it's a feature!"
A classic example of this second stage happened around the Melissa virus. MS's design decision to ignore eight-year-old RFCs on MIME and to ship new versions of its emailer with a default configuration that triggers automatic interpretation of scripts in MIME attachments was not the issue. And worse, scripts in a language which also controls communication processes within its native operating system! That was not yet considered the source of the problem. They got away with the strategy of sweeping dirt under the rug, steering the debate into the "bug versus feature" smokescreen controversy, because before Joel Klein no one in the media dared point fingers at sacred cows ruminating in Redmond.
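The design decision at issue here is a default. A minimal sketch in C of the safer default (with illustrative content-type names, not Outlook's actual logic) makes the point concrete:

    #include <stdio.h>
    #include <string.h>

    /* Illustrative list of attachment types that can carry executable script. */
    static const char *scriptable[] = {
        "text/vbscript", "text/javascript", "application/x-msdownload"
    };

    /* Safe default: scriptable attachments are never interpreted
     * automatically; the user must have opted in explicitly. */
    static int may_auto_run(const char *content_type, int user_opted_in)
    {
        size_t i;
        for (i = 0; i < sizeof scriptable / sizeof scriptable[0]; i++)
            if (strcmp(content_type, scriptable[i]) == 0)
                return user_opted_in;  /* known script type: opt-in only */
        return 0;                      /* anything else: never auto-run */
    }

    int main(void)
    {
        printf("%d\n", may_auto_run("text/vbscript", 0)); /* 0: blocked by default */
        printf("%d\n", may_auto_run("text/vbscript", 1)); /* 1: user opted in */
        printf("%d\n", may_auto_run("image/jpeg", 1));    /* 0: not a script type */
        return 0;
    }

MS shipped the opposite default: interpretation on, for everyone, whether they wanted it or not.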
Melissa was not enough of a warning about the company's arrogance and self-righteousness. We had to wait until the ILoveYou debacle for the company to wake up and humble itself a little, admitting to the very remote possibility of having made unwise decisions in its software projects, exposing most customers to unjustified risks. I remember seeing somewhere a report estimating that only 4% of MS customers could benefit from that automatic script interpretation "feature", and another about a widely disclosed vulnerability that took MS 13 months to patch.
The third stage is when the game is decided. This is the stage where full disclosure writes the bottom line. Full disclosure is the only effective path towards the evolution of any software, proprietary or free, in the direction of better quality. Only the prospect of negative media exposure of careless conduct in development and testing can drive software into, and through, a healthy natural selection process. It is the only tool able to keep software developers on the path of honesty.
This is why free software is, on average, of better quality than its proprietary counterparts. Where full disclosure is the norm, Darwinian forces act on the software evolutionary process unhindered. Full disclosure is the only force that can drive proprietary software agents to steer their products onto the evolutionary course towards higher quality. That is the correct path from the user's standpoint, but one that runs on a collision course with the managing path steered by the expectations of stockholders of proprietary-model software companies. It's Ecological versus Economic sense, revisited in cyberspace.
Therefore, if society chooses to put a bumpy price tag on its ride, through the choice it makes of which software business model to prefer while driving software through its evolutionary process, then the responsibility for full disclosure's consequences has to be ascribed to consumers' choice, and not to the second-guessed political or ideological standings of agents in the computer security field. Software, like biological species, has to evolve, one way or another. Yelling at the umpire to whistle the game to an end when it gets rough before the clock runs out is not really sexy either, we all have to admit.
Internet Information Server is a fundamentally flawed project, for its architectural features are incompatible with the security demands of its global operating environment. It is, ultimately, hopelessly unpatchable, for in it the line between public and private has been blurred by a decision to make its platform's process control language an "active content" scripting language. That decision seems based on greed, aimed at turning DOS programmers into webmasters, and not on prudent engineering. A language cannot be all things to all people without putting them inside a Tower of Babel. To propose the banning of full disclosure at this point becomes an attempt to sweep bad decisions under the rug. Full disclosure is with us, whether or not bumpy to the point of blowing our Explorer tires at high speed, because this is the only road for software to evolve in the ecosystem it is set up to evolve in: the system where economic logic, consumer choice patterns and social expectations about software reliability weave its course. If we don't like the bumps, we have to give up at least one of these three guiding threads. It is up to consumers to decide.
To blame the computer security community for the way full disclosure enters the software evolutionary scene, and the way it announces social costs, is an instance of human nature's tendency to shoot the messenger who brings bad news. Shooting the messenger won't balance what will ultimately have to be balanced by software's evolutionary process, only what Mr. Smith wants to see balanced, and at the cost of breeding one more monopoly, this time in the computer security field, with all the bad consequences that make its necessity arguable, defended with sophisms by those who can only reason with the logic of greed.
-----------------------------------------------------
v.2
November 14, 2001