Grey Hat Hacking Series Part 1 Chapter 3: Proper and Ethical Disclosure

This is the 3rd chapter of the Grey Hat Hacking series, but I am not getting many visitors on the 1st and 2nd chapters, which is discouraging; it feels like no one is responding to my hard work. Anyway, I am not here for appreciation. I have started this work and I'll finish it. This chapter covers:
  • Different points of view pertaining to vulnerability disclosure
  • The evolution and pitfalls of vulnerability discovery and reporting procedures
  • CERT's approach to working with ethical hackers and vendors
  • The Full Disclosure Policy (RainForest Puppy Policy) and how it differs from the CERT and OIS approaches
  • The function of the Organization for Internet Safety (OIS)
For years customers have demanded operating systems and applications that provide more and more functionality. Vendors have scrambled to continually meet this demand while attempting to increase profits and market share. The combination of the race to market and keeping a competitive advantage has resulted in software going to the market containing many flaws. 
The flaws in different software packages range from mere nuisances to critical and dangerous vulnerabilities that directly affect the customer’s protection level. Microsoft products are notorious for having issues in their construction that can be exploited to compromise the security of a system. The number of vulnerabilities that were discovered in Microsoft Office in 2006 tripled from the number that had been discovered in 2005. 
The actual number of vulnerabilities has not been released, but it is common knowledge that at least 45 of these involved serious and critical vulnerabilities. A few were zero-day exploits. A common method of attack against systems that have Office applications installed is to use malicious Word, Excel, or PowerPoint documents that are transmitted via e-mail. 
Once the user opens one of these document types, malicious code that is embedded in the document, spreadsheet, or presentation file executes and can allow a remote attacker administrative access to the now-infected system. The SANS Top 20 Security Attack Targets (2006 annual update) identified the following:
  • Operating Systems
      • W1. Internet Explorer
      • W2. Windows Libraries
      • W3. Microsoft Office
      • W4. Windows Services
      • W5. Windows Configuration Weaknesses
      • M1. Mac OS X
      • U1. UNIX Configuration Weaknesses
  • Cross-Platform Applications
      • C1. Web Applications
      • C2. Database Software
      • C3. P2P File Sharing Applications
      • C4. Instant Messaging
      • C5. Media Players
      • C6. DNS Servers
      • C7. Backup Software
      • C8. Security, Enterprise, and Directory Management Servers
  • Network Devices
      • N1. VoIP Servers and Phones
      • N2. Network and Other Devices Common Configuration Weaknesses
  • Security Policy and Personnel
      • H1. Excessive User Rights and Unauthorized Devices
      • H2. Users (Phishing/Spear Phishing)
  • Special Section
      • Z1. Zero Day Attacks and Prevention Strategies
One vulnerability is a Trojan horse that can be spread through various types of Microsoft Office files and programmer kits. The Trojan horse’s reported name is syosetu.doc. If a user logs in as an administrator on a system and the attacker exploits this vulnerability, the attacker can take complete control over the system working under the context of an administrator. 
The attacker can then delete data, install malicious code, create new accounts, and more. If the user logs in under a less powerful account type, the attacker is limited to what she can carry out under that user’s security context. 
A vulnerability in PowerPoint allowed attackers to install a key-logging Trojan horse (which also attempted to disable antivirus programs) onto computers that executed a specially formed slide deck. The specially created presentation was a PowerPoint slide deck that discussed the difference between men and women in a humorous manner, which seems to always be interesting to either sex.
NOTE: Creating some chain letters, cute pictures, or slides that appeal to many people is a common vector of infecting other computers. One of the main problems today is that many of these messages contain zero-day attacks, which means that victims are vulnerable until the vendor releases some type of fix or patch.
In the past, attackers’ goals were usually to infect as many systems as possible or to bring down a well-known system or website, for bragging rights. Today’s attackers are not necessarily out for the “fun of it”; they are more serious about penetrating their targets for financial gains and attempt to stay under the radar of the corporations they are attacking and of the press. Examples of this shift can be seen in the uses of the flaws in Microsoft Office previously discussed. Exploitation of these vulnerabilities was not highly publicized for quite some time. 
The attacks did not appear to be part of any larger global campaign, nor did they seem to strike more than one target at a time, yet they kept occurring. Because these attacks cannot be detected through the analysis of large traffic patterns or even voluminous intrusion detection system (IDS) and firewall logs, they are harder to track. If they continue in this pattern, it is unlikely that they will garner any great attention.
This does have the potential to be a dangerous combination.
Why? If an attack won't grab anyone's attention, especially when it is buried in the flood of alerts that security software and hardware generate for higher-profile attacks, then it can go unnoticed and unaddressed. While on the large scale it has very little impact, for the few who are attacked it can still be a massively damaging event.
That is one of the major issues with small attacks like these. 
They are considered to be small problems as long as they are scattered and infrequent attacks that only affect a few. Even systems and software that were once relatively unbothered by these kinds of attacks are finding that they are no longer immune. 
Where Microsoft products once were the main or only targets of these kinds of attacks due to their inherent vulnerabilities and extensive use in the market, there has been a shift toward exploits that target other products. Security researchers have noted that hackers are suddenly directing more attention to Macintosh and Linux systems and Firefox browsers. 
There has also been a major upswing in the types of attacks that exploit flaws in programs that are designed to process media files such as Apple QuickTime, iTunes, Windows Media Player, RealNetworks RealPlayer, Macromedia Flash Player, and Nullsoft Winamp. Attackers are widening their net for things to exploit, including mobile phones and PDAs. Macintosh systems, which were considered to be relatively safe from attacks, had to deal with their own share of problems with zero-day attacks during 2006. In February, a pair of worms that targeted Mac OS X were identified in conjunction with an easily exploitable severe security flaw. 
Then at Black Hat in 2006, Apple drew even more fire when Jon Ellch and Dave Maynor demonstrated how a rootkit could be installed on an Apple laptop by using third-party Wi-Fi cards. The vulnerability supposedly lies in the third-party wireless card device drivers. Macintosh users did not like to hear that their systems could potentially be vulnerable and have questioned the validity of the vulnerability. Thus debate grows in the world of vulnerability discovery. Mac OS X was once thought to be virtually free from flaws and vulnerabilities. 
But in the wake of the 2006 pair of worms and the Wi-Fi vulnerability just discussed, that perception could be changing. While overall Mac OS X does not have as many identified flaws as Microsoft products, enough have been discovered to draw attention to this previously ignored operating system. Industry experts are calling for Mac users to be vigilant and not become complacent.
Complacency is the greatest threat now for Mac users. 
Windows users are all too familiar with the vulnerabilities of their systems and have learned to adapt to the environment as necessary. Mac users aren’t used to this, and the misconception of being less vulnerable to attacks could be their undoing. Experts warn that Mac malware is not a myth and cite the creation of the Inqtana worm, which targeted Mac OS X by using a vulnerability in the Apple Bluetooth software that was more than eight months old, as an example of the vulnerability that threatens Mac users. 
Still another security flaw came to light for Apple in early 2006. It was reported that visiting a malicious website by use of Apple’s Safari web browser could result in a rootkit, backdoor, or other malicious software being installed onto the computer without the user’s knowledge. Apple did develop a patch for the vulnerability. This came close on the heels of the discovery of a Trojan horse and worm that also targeted Mac users. 
Apparently the new problem lies in the way that Mac OS X was processing archived files. An attacker could embed malicious code into a ZIP file and then host it on a website. The file and the embedded code would run when a Mac user would visit the malicious site using the Safari browser. The operating system would execute the commands that came in the metadata for the ZIP files. This problem was made even worse by the fact that these files would automatically be opened by Safari when it encountered them on the Web. 
There is evidence that even ZIP files are not necessary to conduct this kind of attack. The shell script can be disguised as practically anything. This is due to the Mac OS Finder, which is the component of the operating system that is used to view and organize the files. This kind of malicious file can even be hidden as a JPEG image. This can occur because the operating system assigns each file an identifying image that is based on the file extensions, but also decides which application will handle the file based on the file permissions. 
If the file has any executable bits set, it will be run using Terminal, the Unix command-line prompt used in Mac OS X. While there have been no large-scale reported attacks that have taken advantage of this vulnerability, it still represents a shift in the security world. At the writing of this edition, Mac OS X users can protect themselves by disabling the “Open safe files after downloading” option in Safari. With the increased proliferation of fuzzing tools and the combination of financial motivations behind many of the more recent network attacks, it is unlikely that we can expect any end to this trend of attacks in the near future. 
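The Finder behavior just described, an icon keyed to the file extension but a handler keyed to the permission bits, is easy to mimic. The following is a minimal Python sketch of that decision logic, not Apple's actual code; the file name and the two helper functions are hypothetical stand-ins for illustration.

    import os
    import stat

    def icon_for(path):
        # What the user sees: the displayed icon follows the file extension.
        return "JPEG image" if path.lower().endswith(".jpg") else "generic document"

    def handler_for(path):
        # What actually runs: handler choice follows the permission bits.
        mode = os.stat(path).st_mode
        if mode & (stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH):
            return "Terminal (runs the file as a script)"
        return "Preview (opens the file as an image)"

    # Hypothetical malicious file: a shell script saved as "vacation.jpg"
    # with its executable bit set.
    with open("vacation.jpg", "w") as f:
        f.write("#!/bin/sh\necho payload would run here\n")
    os.chmod("vacation.jpg", 0o755)

    print(icon_for("vacation.jpg"))     # JPEG image (what the user sees)
    print(handler_for("vacation.jpg"))  # Terminal (what would execute it)

The mismatch between the two answers is the whole attack: the user is shown an image icon while the executable bit routes the file to a shell.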
Attackers have come to understand that if they discover a flaw that was previously unknown, it is very unlikely that their targets will have any kind of protection against it until the vendor gets around to providing a fix. This could take days, weeks, or months. Through the use of fuzzing tools, the process for discovering these flaws has become largely automated. 
Another aspect of using these tools is that if the flaw is discovered, it can be treated as an expendable resource. This is because if the vector of an attack is discovered and steps are taken to protect against these kinds of attacks, the attackers know that it won’t be long before more vectors will be found to replace the ones that have been negated. It’s simply easier for the attackers to move on to the next flaw than to dwell on how a particular flaw can continue to be exploited.
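To give a feel for how mechanical this discovery process has become, here is a minimal mutation fuzzer in Python. It is a sketch under stated assumptions: sample.doc is a known-good seed input that exists on disk, and ./parser is a hypothetical program under test. Real fuzzers add coverage feedback, input grammars, and crash triage on top of this loop.

    import random
    import subprocess

    SEED_FILE = "sample.doc"   # assumed known-good input to mutate
    TARGET = "./parser"        # hypothetical program under test

    def mutate(data: bytes) -> bytes:
        # Flip a handful of random bytes in a copy of the seed input.
        buf = bytearray(data)
        for _ in range(random.randint(1, 8)):
            buf[random.randrange(len(buf))] = random.randrange(256)
        return bytes(buf)

    seed = open(SEED_FILE, "rb").read()
    for i in range(10000):
        case = mutate(seed)
        with open("fuzzed.doc", "wb") as f:
            f.write(case)
        result = subprocess.run([TARGET, "fuzzed.doc"], capture_output=True)
        if result.returncode < 0:  # process killed by a signal: likely a crash
            with open(f"crash_{i}.doc", "wb") as f:
                f.write(case)
            print(f"input {i} crashed the target (signal {-result.returncode})")

Each crash file saved by a loop like this is a candidate flaw that the attacker can then examine for exploitability.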
With 2006 being named "the year of zero-day attacks," it wasn't surprising that security experts were quick to start using the phrase "zero-day Wednesdays." This term came about because hackers quickly found a way to exploit the cycle in which Microsoft issues its software patches.
The software giant issues its patches on the second Tuesday of every month, and hackers would use the identified vulnerabilities in the patches to produce exploitable code in an amazingly quick turnaround time. Since most corporations and home users do not patch their systems every week, or every month, this provides a window of time for attackers to use the vulnerabilities against the targets. 
In January 2006, when a dangerous Windows Meta File flaw was identified, many companies implemented Ilfak Guilfanov's unofficial, non-Microsoft patch instead of waiting for the vendor. Guilfanov, a Russian software developer, had developed the fix for himself and his friends. He placed the fix on his website, and after SANS and F-Secure advised people to use this patch, his website was quickly overwhelmed by download requests.
Guilfanov's release caused a lot of controversy. First, attackers used the information in the fix to create exploitable code and attacked systems with their exploit (the same thing that happens after a vendor releases a patch). Second, some felt uneasy about downloading a third-party fix rather than waiting for the vendor's fix. (Many other individuals felt safer using Guilfanov's code because it was not compiled; thus individuals could scan the code for any malicious attributes.) And third, this opens a whole new can of worms pertaining to companies installing third-party fixes instead of waiting for the vendor. As you can tell, vulnerability disclosure has no single established process, which causes some chaos followed by a lot of debate.
NOTE: The Windows Meta File flaw uses images to execute malicious code on systems. It can be exploited just by a user viewing the image.

Evolution of the Process

Many years ago the majority of vulnerabilities were of a "zero-day" style because there were no fixes released by vendors. It wasn't uncommon for vendors to avoid talking about, or even dealing with, the security defects in their products that allowed these attacks to occur. The information about these vulnerabilities primarily stayed in the realm of those who were conducting the attacks. A shift occurred in the mid-'90s, and it became more common to discuss security bugs. This practice continued to become more widespread. Vendors, once mute on the topic, began to assume more and more active roles, especially in areas that involved the dissemination of information that provided protective measures. Not wanting to appear as if they were deliberately hiding information, and instead wanting to continue to foster customer loyalty, vendors began to set up security-alert mailing lists and websites. Although this all sounds good and gracious, in reality gray hat attackers, vendors, and customers are still battling with each other and among themselves over how to carry out this process. Vulnerability disclosure is better than it was, but it is still a mess in many aspects and continually controversial.

You Were Vulnerable for How Long?

Even when a vulnerability has been reported, there is still a window where the exploit is known about but a fix hasn’t been created by the vendors or the antivirus and antispyware companies. This is because they need to assess the attack and develop the appropriate response. Figure 3-1 displays how long it took for vendors to release fixes to identified vulnerabilities.
The increase in interest and talent in the black hat community translates to quicker and more damaging attacks and malware for the industry. It is imperative for vendors not to sit on the discovery of true vulnerabilities, but to work to get the fixes to the customers who need them as soon as possible.
For this to take place properly, ethical hackers must understand and follow the proper methods of disclosing identified vulnerabilities to the software vendor. As mentioned in Chapter 1, if an individual uncovers a vulnerability and illegally exploits it and/or tells others how to carry out this activity, he is considered a black hat. If an individual uncovers a vulnerability and exploits it with authorization, he is considered a white hat.
If a different person uncovers a vulnerability, does not illegally exploit it or tell others how to do it, but works with the vendor—this person gets the label of gray hat. Unlike other books and resources that are available today, we are promoting the use of the knowledge that we are sharing with you to be used in a responsible manner that will only help the industry—not hurt it.
This means that you should understand the policies, procedures, and guidelines that have been developed to allow the gray hats and the vendors to work together in a concerted effort. These items have been created because of the difficulty in the past of teaming up these different parties (gray hats and vendors) in a way that was beneficial.
Many times individuals identify a vulnerability and post it (along with the code necessary to exploit it) on a website without giving the vendor the time to properly develop and release a fix. On the other hand, many times when gray hats have tried to contact vendors with their useful information, the vendor has ignored repeated requests for communication pertaining to a particular weakness in a product. This lack of communication and participation from the vendor’s side usually resulted in the individual—who attempted to take a more responsible approach—posting the vulnerability and exploitable code to the world.
This is then followed by successful attacks taking place and the vendor having to scramble to come up with a patch and endure a reputation hit. This is a sad way to force the vendor to react to a vulnerability, but in the past it has at times been the only way to get the vendor’s attention. So before you jump into the juicy attack methods, tools, and coding issues we cover, make sure you understand what is expected of you once you uncover the security flaws in products today. There are enough people doing the wrong things in the world. We are looking to you to step up and do the right thing.

Different Teams and Points of View

Unfortunately, almost all of today’s software products are riddled with flaws. The flaws can present serious security concerns to the user. For customers who rely extensively on applications to perform core business functions, the effects of bugs can be crippling and thus must be dealt with. How to address the problem is a complicated issue because it involves a few key players who usually have very different views on how to achieve a resolution. The first player is the consumer.
An individual or company buys the product, relies on it, and expects it to work. Often, the customer owns a community of interconnected systems that all rely on the successful operation of the software to do business. When the customer finds a flaw, she reports it to the vendor and expects a solution in a reasonable timeframe. The software vendor is the second player. It develops the product and is responsible for its successful operation. The vendor is looked to by thousands of customers for technical expertise and leadership in the upkeep of the product.
When a flaw is reported to the vendor, it is usually one of many that must be dealt with, and some fall through the cracks for one reason or another. Gray hats are also involved in this dance when they find software flaws. Since they are not black hats, they want to help the industry and not hurt it. They, in one manner or another, attempt to work with the vendor to develop a fix. Their stance is that customers should not have to be vulnerable to attacks for an extended period. Sometimes vendors will not address the flaw until the next scheduled patch release or the next updated version of the product altogether. In these situations the customers and industry have no direct protection and must fend for themselves. The issue of public disclosure has created quite a stir in the computing industry, because each group views the issue so differently.
Many believe knowledge is the public’s right and all security vulnerability information should be disclosed as a matter of principle. Furthermore, many individuals feel that the only way to truly get quick results from a large software vendor is to pressure it to fix the problem by threatening to make the information public. As mentioned, vendors have had the reputation of simply plodding along and delaying the fixes until a later version or patch, which will address the flaw, is scheduled for release. This approach doesn’t have the best interests of the consumers in mind, however, as they must sit and wait while their business is put in danger with the known vulnerability. The vendor looks at the issue from a different perspective.
Disclosing sensitive information about a software flaw causes two major problems. First, the details of the flaw will help hackers to exploit the vulnerability. The vendor’s argument is that if the issue is kept confidential while a solution is being developed, attackers will not know how to exploit the flaw. Second, the release of this information can hurt the reputation of the company, even in circumstances when the reported flaw is later proven to be false. It is much like a smear campaign in a political race that appears as the headline story in a newspaper. Reputations are tarnished and even if the story turns out to be false, a retraction is usually printed on the back page a week later.
Vendors fear the same consequence for massive releases of vulnerability reports. So security researchers (“gray hat hackers”) get frustrated with the vendors for their lack of response to reported vulnerabilities. Vendors are often slow to publicly acknowledge the vulnerabilities because they either don’t have time to develop and distribute a suitable fix, or they don’t want the public to know their software has serious problems, or both. This rift boiled over in July 2005 at the Black Hat Conference in Las Vegas, Nevada.
In April 2005, a 24-year-old security researcher named Michael Lynn, an employee of the security firm Internet Security Systems, Inc. (ISS), identified a buffer overflow vulnerability in Cisco’s IOS (Internetwork Operating System). This vulnerability allowed the attacker full control of the router. Lynn notified Cisco of the vulnerability, as an ethical security researcher should. When Cisco was slow to address the issue, Lynn planned to disclose the vulnerability at the July Black Hat Conference.
Two days before the conference, when Cisco, claiming they were defending their intellectual property, threatened to sue both Lynn and his employer ISS, Lynn agreed to give a different presentation. Cisco employees spent hours tearing out Lynn’s disclosure presentation from the conference program notes that were being provided to attendees. Cisco also ordered 2,000 CDs containing the presentation destroyed. Just before giving his alternate presentation, Lynn resigned from ISS and then delivered his original Cisco vulnerability disclosure presentation.
Later Lynn stated, "I feel I had to do what's right for the country and the national infrastructure. It has been confirmed that bad people are working on this (compromising IOS). The right thing to do here is to make sure that everyone knows that it's vulnerable..." Lynn further stated, "When you attack a host machine, you gain control of that machine—when you control a router, you gain control of the network."
The Cisco routers that contained the vulnerability were being used worldwide. Cisco sued Lynn and won a permanent injunction against him, disallowing any further disclosure of the information in the presentation. Cisco claimed that the presentation “contained proprietary information and was illegally obtained.” Cisco did provide a fix and stopped shipping the vulnerable version of the IOS.
NOTE: Those who are interested can still find a copy of the Lynn presentation.
Incidents like this fuel the debate over disclosing vulnerabilities after vendors have had time to respond but have not. One of the hot buttons in this arena of researcher frustration is the Month of Bugs (often referred to as MoXB) approach, where individuals target a specific technology or vendor and commit to releasing a new bug every day for a month. In July 2006, a security researcher, H.D. Moore, the creator of the Month of Bugs concept, announced his intention to publish a Month of Browser Bugs (MoBB) as a result of reported vulnerabilities being ignored by vendors. Since then, several other individuals have announced their own targets, like the November 2006 Month of Kernel Bugs (MoKB) and the January 2007 Month of Apple Bugs (MoAB).
In November 2006, a new proposal was issued to select a 31-day month in 2007 to launch a Month of PHP bugs (MoPB). They didn’t want to limit the opportunity by choosing a short month. Some consider this a good way to force vendors to be responsive to bug reports. Others consider this to be extortion and call for prosecution with lengthy prison terms.
Because of these two conflicting viewpoints, several organizations have rallied together to create policies, guidelines, and general suggestions on how to handle software vulnerability disclosures. This chapter will attempt to cover the issue from all sides and to help educate you on the fundamentals behind the ethical disclosure of software vulnerabilities.

How Did We Get Here?

Before the mailing list Bugtraq was created, individuals who uncovered vulnerabilities and ways to exploit them just communicated directly with each other. The creation of Bugtraq provided an open forum for individuals to discuss these same issues and to work collectively. 
Easy access to ways of exploiting vulnerabilities gave rise to the script kiddie point-and-click tools available today, which allow people who do not even understand a vulnerability to successfully exploit it. Posting more and more vulnerabilities to the Internet has become a very attractive pastime for hackers and crackers. This activity increased the number of attacks on the Internet, networks, and vendors. Many vendors demanded a more responsible approach to vulnerability disclosure. In 2002, Internet Security Systems (ISS) discovered several critical vulnerabilities in products like Apache web server, Solaris X Windows font service, and Internet Software Consortium BIND software. ISS worked with the vendors directly to come up with solutions. 
A patch that was developed and released by Sun Microsystems was flawed and had to be recalled. In another situation, an Apache patch was not released to the public until after the vulnerability was posted through public disclosure, even though the vendor knew about the vulnerability. These types of incidents, and many more like them, caused individuals and companies to endure a lower level of protection, to fall victim to attacks, and eventually to deeply distrust software vendors. 
Critics also charged that security companies like ISS have ulterior motives for releasing this type of information. They suggest that by releasing system flaws and vulnerabilities, they generate good press for themselves and thus promote new business and increased revenue. Because of the resulting controversy that ISS encountered pertaining to how it released information on vulnerabilities, it decided to initiate its own disclosure policy to handle such incidents in the future. It created detailed procedures to follow when discovering a vulnerability, and how and when that information would be released to the public. 
Although their policy is considered “responsible disclosure” in general, it does include one important twist—vulnerability details would be released to paying subscribers one day after the vendor has been notified. This fueled the anger of the people who feel that vulnerability information should be available for the public to protect themselves. This and other dilemmas represent the continual disconnect between vendors, software customers, and gray hat hackers today. 
There are differing views and individual motivations that drive each group down different paths. The models of proper disclosure that are discussed in this chapter have helped these different entities to come together and work in a more concerted manner, but there is still a lot of bitterness and controversy around this issue.

NOTE: The amount of emotion, debates, and controversy over the topic of full disclosure has been immense. The customers and security professionals are frustrated that the software flaws exist in the products in the first place, and by the lack of effort of the vendors to help in this critical area. Vendors are frustrated because exploitable code is continually released as they are trying to develop fixes. We will not be taking one side or the other of this debate, but will do our best to tell you how you can help and not hurt the process.

CERT’s Current Process

The first place to turn to when discussing the proper disclosure of software vulnerabilities is the governing body known as the CERT Coordination Center (CERT/CC). CERT/CC is a federally funded research and development operation that focuses on Internet security and related issues. Established in 1988 in reaction to the first major virus outbreak on the Internet, the CERT/CC has evolved over the years, taking on a more substantial role in the industry that includes establishing and maintaining industry standards for the way technology vulnerabilities are disclosed and communicated. In 2000, the organization issued a policy that outlined the controversial practice of releasing software vulnerability information to the public. The policy covered the following areas:
  • Full disclosure will be announced to the public within 45 days of being reported to CERT/CC. This timeframe will be executed even if the software vendor does not have an available patch or appropriate remedy. The only exception to this rigid deadline will be exceptionally serious threats or scenarios that would require a standard to be altered.
  • CERT/CC will notify the software vendor of the vulnerability immediately so that a solution can be created as soon as possible.
  • Along with the description of the problem, CERT/CC will forward the name of the person reporting the vulnerability, unless the reporter specifically requests to remain anonymous.
  • During the 45-day window, CERT/CC will update the reporter on the current status of the vulnerability without revealing confidential information.
CERT/CC states that its vulnerability policy was created with the express purpose of informing the public of potentially threatening situations while offering the software vendor an appropriate timeframe to fix the problem. The independent body further states that all decisions on the release of information to the public are based on what is best for the overall community. The decision to go with 45 days was met with opposition, as consumers widely felt that this was too much time to keep important vulnerability information concealed. 
The vendors, on the other hand, feel the pressure to create solutions in a short timeframe, while also shouldering the obvious hits their reputations will take as news spreads about flaws in their product. CERT/CC came to the conclusion that 45 days was sufficient time for vendors to get organized, while still taking into account the welfare of consumers. A common argument that was posed when CERT/CC announced their policy was, “Why release this information if there isn’t a fix available?” The dilemma that was raised is based on the concern that if a vulnerability is exposed without a remedy, hackers will scavenge the flawed technology and be in prime position to bring down users’ systems. 
The CERT/CC policy insists, however, that without an enforced deadline the vendor will have no motivation to fix the problem. Too often, a software maker could simply delay the fix into a later release, which puts the consumer in a vulnerable position. To accommodate vendors and their perspective of the problem, CERT/CC performs the following:
  • CERT/CC will make good faith efforts to always inform the vendor before releasing information so there are no surprises.
  • CERT/CC will solicit vendor feedback in serious situations and offer that information in the public release statement. In instances when the vendor disagrees with the vulnerability assessment, the vendor's opinion will be released as well, so that both sides can have a voice.
  • Information will be distributed to all related parties that have a stake in the situation prior to the disclosure. Examples of parties that could be privy to confidential information include participating vendors, experts who could provide useful insight, Internet Security Alliance members, and groups that may be in the critical path of the vulnerability.
Although there have been other guidelines developed and implemented after CERT’s model, CERT is usually the “middleperson” between the bug finder and the vendor to try and help the process, and to enforce the necessary requirements for all of the parties involved. As of this writing, the model that is most commonly used is the Organization for Internet Safety (OIS) guidelines. CERT works within this model when called upon by vendors or gray hats. The following are just some of the vulnerability issues posted by CERT:
  • VU#179281 Electronic Arts SnoopyCtrl ActiveX control and plug-in stack buffer overflows
  • VU#336105 Sun Java JRE vulnerable to unauthorized network access
  • VU#571584 Google Gmail cross-site request forgery vulnerability
  • VU#611008 Microsoft MFC FindFile function heap buffer overflow
  • VU#854769 PhotoChannel Networks Photo Upload Plugin ActiveX control stack buffer overflows
  • VU#751808 Apple QuickTime remote command execution vulnerability
  • VU#171449 Callisto PhotoParade Player PhPInfo ActiveX control buffer overflow
  • VU#768440 Microsoft Windows Services for UNIX privilege escalation vulnerability
  • VU#716872 Microsoft Agent fails to properly handle specially crafted URLs
  • VU#466433 Web sites may transmit authentication tokens unencrypted

Full Disclosure Policy (RainForest Puppy Policy)

A full disclosure policy, known as RainForest Puppy Policy (RFP) version 2, takes a harder line with software vendors than CERT/CC. This policy takes the stance that the reporter of the vulnerability should make an effort to contact and work together with the vendor to fix the problem, but the act of cooperating with the vendor is a step that the reporter is not required to take, so it is considered a gesture of goodwill. Under this model, strict policies are enforced upon the vendor if it wants the situation to remain confidential. The details of the policy follow:
  • The issue begins when the originator (the reporter of the problem) e-mails the maintainer (the software vendor) with the details of the problem. The moment the e-mail is sent is considered the date of contact. The originator is responsible for locating the appropriate contact information of the maintainer, which can usually be obtained through its website. If this information is not available, e-mails should be sent to one or all of the addresses shown next. The common e-mail formats that should be implemented by vendors include:
      • security-alert@[maintainer]
      • secure@[maintainer]
      • security@[maintainer]
      • support@[maintainer]
      • info@[maintainer]
  • The maintainer will be allowed five days from the date of contact to reply to the originator. The date of contact is from the perspective of the originator of the issue, meaning that if the person reporting the problem sends an e-mail from New York at 10 A.M. to a software vendor in Los Angeles, the time of contact is 10 A.M. Eastern time. The maintainer must respond within five days, which would be 7 A.M. Pacific time five days later (a sketch of this deadline arithmetic follows the list). An auto-response to the originator's e-mail is not considered sufficient contact. If the maintainer does not establish contact within the allotted time, the originator is free to disclose the information. Once contact has been made, decisions on delaying disclosures should be discussed between the two parties. The RFP policy warns the vendor that contact should be made sooner rather than later. It reminds the software maker that the finder of the problem is under no obligation to cooperate, but is simply being asked to do so in the best interests of all parties.
  • The originator should make every effort to assist the vendor in reproducing the problem and adhering to its reasonable requests. It is also expected that the originator will show reasonable consideration if delays occur, and if the maintainer shows legitimate reasons why it will take additional time to fix the problem. Both parties should work together to find a solution.
  • It is the responsibility of the vendor to provide regular status updates every five days that detail how the vulnerability is being addressed. It should also be noted that it is solely the responsibility of the vendor to provide updates, and not the responsibility of the originator to request them.
  • As the problem and fix are released to the public, the vendor is expected to credit the originator for identifying the problem. This is considered a professional gesture to the individual or company for voluntarily exposing the problem. If this good faith effort is not executed, there will be little motivation for the originator to follow these guidelines in the future.
  • The maintainer and the originator should make disclosure statements in conjunction with each other so that all communication will be free from conflict or disagreement. Both sides are expected to work together throughout the process.
  • In the event that a third party announces the vulnerability, the originator and maintainer are encouraged to discuss the situation and come to an agreement on a resolution. The resolution could include the originator disclosing the vulnerability, or the maintainer disclosing the information and available fixes while also crediting the originator. The full disclosure policy also recommends that all details of the vulnerability be released if a third party releases the information first. Because the vulnerability is already known, it is the responsibility of the vendor to provide specific details, such as the diagnosis, the solution, and the timeframe.
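To make the five-day rule concrete, the deadline arithmetic from the second bullet can be written out in a few lines. This is our own illustration of the policy's example, not code from the RFP document itself:

    from datetime import datetime, timedelta
    from zoneinfo import ZoneInfo

    # Date of contact is fixed by the originator's clock: 10 A.M. in New York.
    contact = datetime(2007, 1, 8, 10, 0, tzinfo=ZoneInfo("America/New_York"))

    # RFP v2: the maintainer has five days from the date of contact to reply.
    deadline = contact + timedelta(days=5)

    # The same instant expressed on the Los Angeles maintainer's clock:
    print(deadline.astimezone(ZoneInfo("America/Los_Angeles")))
    # 2007-01-13 07:00:00-08:00, i.e., 7 A.M. Pacific five days later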
RainForest Puppy is a well-known hacker who has uncovered an amazing number of vulnerabilities in different products. He has a long history of successfully, and at times unsuccessfully, working with vendors on helping them develop fixes for the problems he has uncovered. The disclosure guidelines that he developed came from his years of experience in this type of work, and his level of frustration at the vendors not working with individuals like himself once bugs were uncovered. 
The key to these disclosure policies is that they are just guidelines and suggestions on how vendors and bug finders should work together. They are not mandated and cannot be enforced. Since the RFP policy takes a strict stance on dealing with vendors on these issues, many vendors have chosen not to work under this policy. So another set of guidelines was developed by a different group of people, which includes a long list of software vendors.

Organization for Internet Safety (OIS)

There are three basic types of vulnerability disclosures: full disclosure, partial disclosure, and nondisclosure. There are advocates for each type, and long lists of pros and cons that can be debated for each. CERT and RFP take a rigid approach to disclosure practices. Strict guidelines were created, which were not always perceived as fair and flexible by participating parties. 
The Organization for Internet Safety (OIS) was created to help meet the needs of all groups and it fits into a partial disclosure classification. This section will give an overview of the OIS approach, as well as provide the step-by-step methodology that has been developed to provide a more equitable framework for both the user and the vendor. OIS is a group of researchers and vendors that was formed with the goal of improving the way software vulnerabilities are handled. 
The OIS members include @stake, BindView Corp (acquired by Symantec), The SCO Group, Foundstone (a division of McAfee, Inc.), Guardent (acquired by VeriSign), Internet Security Systems, Microsoft Corporation, Network Associates (a division of McAfee, Inc.), Oracle Corporation, SGI, and Symantec. 
The OIS believes that vendors and consumers should work together to identify issues and devise reasonable resolutions for both parties. It is not a private organization that mandates its policy to anyone, but rather it tries to bring together a broad, valued panel that offers respected, unbiased opinions that are considered recommendations. The model was formed to accomplish two goals:
  • Reduce the risk of software vulnerabilities by providing an improved method of identification, investigation, and resolution.
  • Improve the overall engineering quality of software by tightening the security placed upon the end product.
There is a controversy related to OIS. Most of it has to do with where the organization’s loyalties lie. Because the OIS was formed by vendors, some critics question their methods and willingness to disclose vulnerabilities in a timely and appropriate manner. The root of this is how the information about a vulnerability is handled, as well as to whom it is disclosed. 
Some believe that while it is a good idea to provide the vendors with the opportunity to create fixes for vulnerabilities before they are made public, it is a bad idea not to have a predetermined time line in place for disclosing those vulnerabilities. The thinking is that vendors should be allowed to fix a problem, but how much time is a fair window to give them? 
Keep in mind that the entire time the vulnerability has not been announced, or a fix has not been created, the vulnerability still remains. The greatest issue that many take with OIS is that its practices and policies put the needs of the vendor above the needs of the community, which could be completely unaware of the risk it runs. 
As the saying goes, “You can’t make everyone happy all of the time.” A group of concerned individuals came together to help make the vulnerability discovery process more structured and reliable. While some question their real allegiance, since the group is made up mostly of vendors, it is probably more of a case of, “A good deed never goes unpunished.” The security community is always suspicious of others’ motives—that is what makes them the “security community,” and it is also why continual debates surround these issues.

Discovery

The OIS process begins when someone finds a flaw in the software. It can be discovered by a variety of individuals, such as researchers, consumers, engineers, developers, gray hats, or even casual users. The OIS calls this person or group the finder. Once the flaw is discovered, the finder is expected to carry out the following due diligence: 
  1. Discover if the flaw has already been reported in the past.
  2. Look for patches or service packs and determine if they correct the problem.
  3. Determine if the flaw affects the default configuration of the product.
  4. Ensure that the flaw can be reproduced consistently.
After the finder completes this "sanity check" and is sure that the flaw exists, the issue should be reported. The OIS designed a report guideline, known as a vulnerability summary report (VSR), that is used as a template to properly describe the issues. The VSR includes the following components (a structured sketch of this template follows the list): 
  • Finder's contact information
  • Security response policy
  • Status of the flaw (public or private)
  • Whether the report contains confidential information
  • Affected products/versions
  • Affected configurations
  • Description of flaw
  • Description of how the flaw creates a security problem
  • Instructions on how to reproduce the problem
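The OIS policy defines the VSR in prose rather than as a schema. Purely as an illustration, the components above map naturally onto a small data structure; every name below is our own hypothetical encoding, not an official OIS format.

    from dataclasses import dataclass, field

    @dataclass
    class VulnerabilitySummaryReport:
        finder_contact: str                      # finder's contact information
        security_response_policy: str            # finder's security response policy
        is_public: bool                          # status of the flaw (public or private)
        contains_confidential_info: bool         # whether the report is confidential
        affected_products: list = field(default_factory=list)
        affected_configurations: list = field(default_factory=list)
        flaw_description: str = ""
        security_impact: str = ""                # how the flaw creates a security problem
        reproduction_steps: list = field(default_factory=list)

    # Hypothetical example report:
    vsr = VulnerabilitySummaryReport(
        finder_contact="finder@example.org",
        security_response_policy="https://example.org/disclosure-policy",
        is_public=False,
        contains_confidential_info=True,
        affected_products=["ExampleSuite 2.1", "ExampleSuite 2.2"],
        flaw_description="Stack buffer overflow in the file import routine",
        security_impact="A crafted document yields code execution as the user",
        reproduction_steps=["Open crash.doc in ExampleSuite", "Observe the crash"],
    )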

Notification

The next step in the process is contacting the vendor. This is considered the most important phase of the plan according to the OIS. Open and effective communication is the key to understanding and ultimately resolving the software vulnerability. The following are guidelines for notifying the vendor. The vendor is expected to do the following: 
  • Provide a single point of contact for vulnerability reports.
  • Post contact information in at least two publicly accessible locations, and include the locations in its security response policy.
  • Include in the contact information:
      • A reference to the vendor's security policy
      • A complete listing of/instructions for all contact methods
      • Instructions for secure communications
  • Make reasonable efforts to ensure that e-mails sent to the following formats are rerouted to the appropriate parties:
      • abuse@[vendor]
      • postmaster@[vendor]
      • sales@[vendor]
      • info@[vendor]
      • support@[vendor]
  • Provide a secure communication method between itself and the finder. If the finder uses encrypted transmissions to send its message, the vendor should reply in a similar fashion.
  • Cooperate with the finder, even if the finder chooses to use insecure methods of communication.

The finder is expected to:
  • Submit any found flaws to the vendor by sending a vulnerability summary report (VSR) to one of the published points of contact.
  • If the finder cannot locate a valid contact address, send the VSR to one or more of the following addresses:
      • abuse@[vendor]
      • postmaster@[vendor]
      • sales@[vendor]
      • info@[vendor]
      • support@[vendor]
Once the VSR is received, some vendors will choose to notify the public that a flaw has been uncovered and that an investigation is under way. The OIS encourages vendors to use extreme care when disclosing information that could put users’ systems at risk. It is also expected that vendors will inform the finder that they intend to disclose the information to the public. In cases where the vendor does not wish to notify the public immediately, it still needs to respond to the finder. 
After the VSR is sent, the vendor must respond directly to the finder within seven days. If the vendor does not respond during this period, the finder should then send a Request for Confirmation of Receipt (RFCR). The RFCR is basically a final warning to the vendor stating that a vulnerability has been found, a notification has been sent, and a response is expected. 
The RFCR should also include a copy of the original VSR that was sent previously. The vendor will be given three days to respond. If the finder does not receive a response to the RFCR in three business days, it can move forward with public notification of the software flaw. The OIS strongly encourages both the finder and the vendor to exercise caution before releasing potentially dangerous information to the public. The following guidelines should be observed:
  • Exit the communication process only after trying all possible alternatives.
  • Exit the process only after providing notice to the vendor (an RFCR would be considered an appropriate notice statement).
  • Reenter the process once any type of deadlock situation is resolved.
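Taken together, the VSR and RFCR deadlines described above form a small state machine with two timers. The sketch below encodes that flow as we read it (seven days to answer the VSR, then three days after an RFCR); it is our own illustration, not OIS-published code, and it uses calendar days for brevity where the policy counts business days for the RFCR window.

    from datetime import date, timedelta
    from typing import Optional

    def next_action(vsr_sent: date, vendor_replied: bool,
                    rfcr_sent: Optional[date], today: date) -> str:
        # OIS notification flow: VSR -> (7 days) -> RFCR -> (3 days) -> public.
        if vendor_replied:
            return "proceed to validation with the vendor"
        if rfcr_sent is None:
            if today >= vsr_sent + timedelta(days=7):
                return "send RFCR (final warning) with a copy of the VSR"
            return "wait for the vendor's response to the VSR"
        if today >= rfcr_sent + timedelta(days=3):
            return "may move forward with public notification"
        return "wait for the vendor's response to the RFCR"

    print(next_action(date(2007, 3, 1), False, None, date(2007, 3, 9)))
    # send RFCR (final warning) with a copy of the VSR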
The OIS encourages, but does not require, the use of a third party to assist with communication breakdowns. Using an outside party to investigate the flaw and to stand between the finder and vendor can often speed up the process and provide a resolution that is agreeable to both parties. A third party can consist of security companies, professionals, coordinators, or arbitrators. Both sides must consent to the use of this independent body and agree upon the selection process. 
If all efforts have been made and the finder and vendor are still not in agreement, either side can elect to exit the process. Again, the OIS strongly encourages both sides to consider the protection of computers, the Internet, and critical infrastructures when deciding how to release vulnerability information.

Validation

The validation phase involves the vendor reviewing the VSR, verifying the contents, and working with the finder throughout the investigation. An important aspect of the validation phase is the consistent practice of updating the finder on the status of the investigation. The OIS provides some general rules regarding status updates: 
  • Vendor must provide status updates to the finder at least once every seven business days, unless another arrangement is agreed upon by both sides.
  • Communication methods must be mutually agreed upon by both sides. Examples of these methods include telephone, e-mail, or an FTP site.
  • If the finder does not receive an update within the seven-day window, it should issue a Request for Status (RFS).
  • The vendor then has three business days to respond to the RFS. The RFS is considered a courtesy to the vendor, reminding it that it owes the finder an update on the progress being made on the investigation.
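Because the validation-phase clock runs in business days, a finder tracking these windows needs a weekday-aware deadline calculation. Here is a small hedged helper, our own code with the seven- and three-day figures taken from the guidelines above:

    from datetime import date, timedelta

    def add_business_days(start: date, days: int) -> date:
        # Step forward one calendar day at a time, counting only weekdays.
        current = start
        while days > 0:
            current += timedelta(days=1)
            if current.weekday() < 5:  # Monday=0 ... Friday=4
                days -= 1
        return current

    last_update = date(2007, 4, 2)               # a Monday
    rfs_due = add_business_days(last_update, 7)  # no update by now: send an RFS
    reply_due = add_business_days(rfs_due, 3)    # vendor must answer the RFS
    print(rfs_due, reply_due)                    # 2007-04-11 2007-04-16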

Investigation

The investigation work that a vendor undertakes should be thorough and cover all related products linked to the vulnerability. Often, the finder’s VSR will not cover all aspects of the flaw, and it is ultimately the responsibility of the vendor to research all areas that are affected by the problem, which includes all versions of code, attack vectors, and even unsupported versions of software if they are still heavily used by consumers. The steps of the investigation are as follows: 
1. Investigate the flaw of the product described in the VSR.
2. Investigate whether the flaw also exists in supported products that were not included in the VSR.
3. Investigate attack vectors for the vulnerability.
4. Maintain a public listing of which products/versions it currently supports.

Shared Code Bases 

 In some instances, one vulnerability is uncovered in a specific product, but the basis of the flaw is found in source code that may spread throughout the industry. The OIS believes it is the responsibility of both the finder and the vendor to notify all affected vendors of the problem. Although their “Security Vulnerability Reporting and Response Policy” does not cover detailed instructions on how to engage several affected vendors, the OIS does offer some general guidelines to follow for this type of situation. The finder and vendor should do at least one of the following action items:

  • Make reasonable efforts to notify each vendor that is known to be affected by the flaw.
  • Establish contact with an organization that can coordinate the communication to all affected vendors.
  • Appoint a coordinator to champion the communication effort to all affected vendors.
Once the other affected vendors have been notified, the original vendor has the following responsibilities:
  • Maintain consistent contact with the other vendors throughout the investigation and resolution process.
  • Negotiate a plan of attack with the other vendors in investigating the flaw. The plan should include such items as frequency of status updates and communication methods.
Once the investigation is under way, it is often necessary for the finder to provide assistance to the vendor. Some examples of the help that a vendor would need include more detailed characteristics of the flaw, more detailed information about the environment in which the flaw occurred (network architecture, configurations, and so on), or the possibility of a third-party software product that contributed to the flaw. Because recreating a flaw is critical in determining the cause and eventual solution, the finder is encouraged to cooperate with the vendor during this phase.
NOTE: Although cooperation is strongly recommended, the only requirement of the finder is to submit a detailed VSR.

Findings

When the vendor finishes its investigation, it must return one of the following conclusions to the finder: 
  • It has confirmed the flaw.
  • It has disproved the reported flaw.
  • It can neither prove nor disprove the flaw.
The vendor is not required to provide detailed testing results, engineering practices, or internal procedures; however, it is required to demonstrate that a thorough, technically sound investigation was conducted. This can be achieved by providing the finder with:
  • A list of products/versions that were tested
  • A list of tests that were performed
  • The test results

Confirmation of the Flaw

In the event that the vendor confirms that the flaw does indeed exist, it must follow up this confirmation with the following action items: 
  • A list of products/versions affected by the confirmed flaw
  • A statement on how a fix will be distributed
  • A timeframe for distributing the fix

Disproof of the Flaw

In the event that the vendor disproves the reported flaw, the vendor then must show the finder that one or both of the following are true:
  • The reported flaw does not exist in the supported product.
  • The behavior that the finder reported exists, but does not create a security concern. If this statement is true, the vendor should forward validation data to the finder, such as:
      • Product documentation that confirms the behavior is normal or nonthreatening
      • Test results that confirm that the behavior is only a security concern when it is configured inappropriately
      • An analysis that shows how an attack could not successfully exploit this reported behavior

The finder may choose to dispute this conclusion of disproof by the vendor. In this case, the finder should reply to the vendor with its own testing results that validate its claim and contradict the vendor's findings. The finder should also supply an analysis of how an attack could exploit the reported flaw. The vendor is responsible for reviewing the dispute, investigating it again, and responding to the finder accordingly.

Unable to Confirm or Disprove the Flaw

In the event the vendor cannot confirm or disprove the reported flaw, it should inform the finder of the results and produce detailed evidence of its investigative work. Test results and analytical summaries should be forwarded to the finder. At this point, the finder can move forward in the following ways:
  • Provide code to the vendor that better demonstrates the proposed vulnerability.
  • If no change is established, the finder can move to release its VSR to the public. In this case, the finder should follow appropriate guidelines on releasing vulnerability information to the public (covered later in the chapter).

Resolution

In cases where a flaw is confirmed, the vendor must take proper steps to develop a solution. It is important that remedies are created for all supported products and versions of the software that are tied to the identified flaw. Although not required by either party, many times the vendor will ask the finder to provide assistance in evaluating if its proposed remedy will be sufficient to eliminate the flaw. The OIS suggests the following steps when devising a vulnerability resolution: 
1. Vendor determines if a remedy already exists. If one exists, the vendor should notify the finder immediately. If not, the vendor begins developing one.
2. Vendor ensures that the remedy is available for all supported products/versions.
3. Vendor may choose to share data with the finder as it works to ensure that the remedy will be effective. The finder is not required to participate in this step.

Timeframe

Setting a timeframe for delivery of a remedy is critical due to the risk to which the finder and, in all probability, other users are exposed. The vendor is expected to produce a remedy to the flaw within 30 days of acknowledging the VSR. Although time is a top priority, ensuring that a thorough, accurate remedy is developed is equally important.
The fix must solve the problem and not create additional flaws that will put both parties back in the same situation in the future. When notifying the finder of the target date for its release of a fix, the vendor should also include the following supporting information:
  • A summary of the risk that the flaw imposes
  • The technical details of the remedy
  • The testing process
  • Steps to ensure a high uptake of the fix
The 30-day timeframe is not always strictly followed, because the OIS documentation outlines several factors that should be contemplated when deciding upon the release date of the fix. One of those factors is "the engineering complexity of the fix." The fix will take longer if the vendor identifies significant practical complications in the process. 
For example, data validation errors and buffer overflows are usually flaws that can be easily recoded, but when the errors are embedded in the actual design of the software, then the vendor may actually have to redesign a portion of the product.
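To make the recoding point concrete, here is a minimal, hypothetical C sketch of the kind of one-line fix that an easily recoded flaw involves; the function and buffer names are invented for illustration, not taken from any real product.

    #include <stdio.h>
    #include <string.h>

    /* Vulnerable version: copies attacker-controlled input into a fixed
       buffer with no length check, a classic stack buffer overflow. */
    void greet_unsafe(const char *name) {
        char buf[16];
        strcpy(buf, name);               /* overflows buf if name is >= 16 bytes */
        printf("Hello, %s\n", buf);
    }

    /* Recoded version: same behavior, but the copy is bounded to the
       buffer size, with silent truncation of oversized input. */
    void greet_safe(const char *name) {
        char buf[16];
        snprintf(buf, sizeof(buf), "%s", name);  /* can never overflow buf */
        printf("Hello, %s\n", buf);
    }

    int main(void) {
        greet_safe("a string far longer than sixteen bytes");
        return 0;
    }

A design-level flaw, by contrast, cannot be patched by swapping one call for another, which is why the OIS allows the timeframe to stretch in those cases.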
CAUTION: Vendors have released “fixes” that introduced new vulnerabilities into the application or operating system—you close one window and open two doors. More than once, these fixes have also negatively affected the application’s functionality. So although it is easy to blame the network administrator for not patching a system, sometimes applying a new patch right away is the worst thing he could do.
There are typically two types of remedies that a vendor can propose: configuration changes or software changes. Configuration change fixes involve giving the users instructions on how to change their program settings or parameters to effectively resolve the flaw. Software changes, on the other hand, involve more engineering work by the vendor. There are three main types of software change fixes: 
  • Patches: Unscheduled or temporary remedies that address a specific problem until a later release can completely resolve the issue. 
  • Maintenance updates: Scheduled releases that regularly address many known flaws. Software vendors often refer to these solutions as service packs, service releases, or maintenance releases. 
  • Future product versions: Large, scheduled software revisions that impact code design and product features. 

Vendors consider several factors when deciding which software remedy to implement. The complexity of the flaw and the seriousness of its effects are major factors in the decision. The established maintenance schedule will also weigh into the final decision. For example, if a service pack is already scheduled for release in the upcoming month, the vendor may choose to address the flaw within that release. If a scheduled maintenance release is months away, the vendor may issue a specific patch to fix the problem.
NOTE: Agreeing upon how and when the fix will be implemented is often a major disconnect between finders and vendors. Vendors usually want to integrate the fix into an already scheduled patch or new version release. Finders usually feel it is unfair to make the customer base wait that long, and remain at risk, just so the fix does not cost the vendor more money.

Release 

The final step in the OIS “Security Vulnerability Reporting and Response Policy” is the release of information to the public. The release of information is assumed to be to the overall general public at one time, and not in advance to specific groups. OIS does not advise against advance notification, but realizes that the practice exists in case-by-case instances and is too specific to address in the policy.

Conflicts Will Still Exist

The reasons for the common breakdown between the finder and the vendor lie in their different motivations and some unfortunate events that routinely occur. Finders of vulnerabilities usually have the motive of trying to protect the overall industry by identifying and helping remove dangerous software from commercial products.
A little fame, admiration, and bragging rights are also nice for those who enjoy having their egos stroked. Vendors, on the other hand, are motivated to improve their product, avoid lawsuits, stay clear of bad press, and maintain a responsible public image.
Although more and more software vendors are reacting appropriately when vulnerabilities are reported (because of market demand for secure products), many people believe that vendors will not spend the extra money, time, and resources to carry out this process properly until they are held legally liable for software security issues. The possible legal liability issues software vendors may or may not face in the future are a can of worms we will not get into, but these issues are gaining momentum in the industry. The main controversy that has surrounded OIS is that many people feel the guidelines have been written by the vendors, for the vendors.
Critics have voiced concerns that the guidelines will allow vendors to continue to stonewall and deny specific problems. If the vendor claims that a remedy does not exist for the vulnerability, the finder may be pressured not to release the information on the discovered vulnerability. Although controversy still surrounds the OIS guidelines, they are a good starting point. If all software vendors use them as a framework and develop policies compliant with these guidelines, then customers will have a standard to hold the vendors to.

Case Studies

The fundamental issue that this chapter addresses is how to report discovered vulnerabilities responsibly. The issue has sparked considerable debate in the industry for some time. Along with a simple “yes” or “no” to the question of whether there should be full disclosure of vulnerabilities to the public, other factors should be considered, such as how communication should take place, what issues stand in the way, and what both sides of the argument are saying. This section dives into all of these pressing issues, citing case studies as well as industry analysis and opinions from a variety of experts.

Pros and Cons of Proper Disclosure Processes 

 Following professional procedures with regard to vulnerability disclosure is a major issue. Proponents of disclosure want additional structure, more rigid guidelines, and ultimately more accountability from the vendor to ensure the vulnerabilities are addressed in a judicious fashion. The process is not cut and dried, however. There are many players, many different rules, and no clear-cut winner. It’s a tough game to play and even tougher to referee.

The Security Community’s View

The top reasons many bug finders favor full disclosure of software vulnerabilities are:
  • The bad guys already know about the vulnerabilities anyway, so why not release the information to the good guys?
  • If the bad guys don’t know about the vulnerability, they will soon find out with or without official disclosure.
  • Knowing the details helps the good guys more than the bad guys.
  • Effective security cannot be based on obscurity.
  • Making vulnerabilities public is an effective tool for making vendors improve their products.
Maintaining their only stronghold on software vendors seems to be a common theme that bug finders and the consumer community cling to. In one example, a customer reported a vulnerability to his vendor. A month went by with the vendor ignoring the customer’s request. Frustrated and angered, the customer escalated the issue and told the vendor that if he did not receive a patch by the next day, he would post the full vulnerability on a user forum web page. The customer received the patch within one hour. These types of stories are common and are continually cited by proponents of full vulnerability disclosure.

The Software Vendors’ View

In contrast, software vendors view full disclosure with less enthusiasm, giving these reasons:
  • Only researchers need to know the details of vulnerabilities, even specific exploits.
  • When good guys publish full exploitable code, they are acting as black hats and are not helping the situation, but making it worse.
  • Full disclosure sends the wrong message and only opens the door to more illegal computer abuse.
Vendors continue to argue that only a trusted community of people should be privy to virus code and specific exploit information. They state that groups such as the AV Product Developers’ Consortium demonstrate this point. All members of the consortium are given access to vulnerability information so that research and testing can be done across companies, platforms, and industries. The vendors do not feel that there is ever a need to disclose highly sensitive information to potentially irresponsible users.

Knowledge Management 

A case study at the University of Oulu in Finland titled “Communication in the Software Vulnerability Reporting Process” analyzed how the two distinct groups (reporters and receivers) interacted with one another and worked to find the root cause of the breakdowns. The researchers determined that this process involves four main categories of knowledge:
  • Know-what
  • Know-why
  • Know-how
  • Know-who
Know-how and know-who are the two most telling factors. Most reporters don’t know whom to call and don’t understand the process that should be started when a vulnerability is discovered. In addition, the case study divides the reporting process into four different learning phases, known as interorganizational learning:
  • Socialization stage: When the reporting group evaluates the flaw internally to determine if it is truly a vulnerability
  • Externalization phase: When the reporting group notifies the vendor of the flaw
  • Combination phase: When the vendor compares the reporter’s claim with its own internal knowledge of the product
  • Internalization phase: When the receiving vendor accepts the notification and passes it on to its developers for resolution
One problem that apparently exists in the reporting process is the disconnect, and sometimes even resentment, between the reporting party and the receiving party. Communication issues seem to be a major hurdle to improving the process. The case study found that over 50 percent of the receiving parties who had received potential vulnerability reports indicated that less than 20 percent were actually valid. In these situations the vendors waste a lot of time and resources on issues that are bogus.

Publicity

The case study included a survey that centered on the question of whether vulnerability information should be disclosed to the public; it was broken down into four individual statements that each group was asked to respond to:
1. All information should be public after a predetermined time.
2. All information should be public immediately.
3. Some part of the information should be made public immediately.
4. Some part of the information should be made public after a predetermined time.
As expected, the feedback validated the assumption that there is a decided difference of opinion between the reporters and the vendors. The vendors overwhelmingly feel that all information should be made public only after a predetermined time, while the reporters feel much more strongly than the vendors that all information should be made public immediately.

The Tie That Binds 

To further illustrate the important tie between reporters and vendors, the study concludes that the reporters are considered secondary stakeholders of the vendors in the vulnerability reporting process. Reporters want to help solve the problem, but are treated as outsiders by the vendors. The receiving vendors often found it to be a sign of weakness if they involved a reporter in their resolution process. The concluding summary was that both participants in the process rarely have standard communications with one another. Ironically, when asked about improvement, both parties indicated that they thought communication should be more intense. Go figure!

Team Approach 

Another study, “The Vulnerability Process: A Tiger Team Approach to Resolving Vulnerability Cases,” offers insight into the effective use of teams comprising the reporting and receiving parties. To start, the reporters implement a tiger team, which breaks the functions of the vulnerability reporter into two subdivisions: research and management. The research team focuses on the technical aspects of the suspected flaw, while the management team handles the correspondence with the vendor and ensures proper tracking. The tiger team approach breaks the vulnerability reporting process down into the following life cycle (a minimal sketch of this life cycle as a state machine follows the list):
1. Research: Reporter discovers the flaw and researches its behavior.
2. Verification: Reporter attempts to re-create the flaw.
3. Reporting: Reporter sends notification to receiver, giving thorough details about the problem.
4. Evaluation: Receiver determines if the flaw notification is legitimate.
5. Repairing: Solutions are developed.
6. Patch evaluation: The solution is tested.
7. Patch release: The solution is delivered to the reporter.
8. Advisory generation: The disclosure statement is created.
9. Advisory evaluation: The disclosure statement is reviewed for accuracy.
10. Advisory release: The disclosure statement is released.
11. Feedback: The user community offers comments on the vulnerability/fix.
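Because the life cycle is strictly sequential, it can be modeled as a simple state machine. The following C sketch is illustrative only: the stage names come from the study, but the type and function names (VulnStage, next_stage) are invented for this example.

    /* The tiger-team vulnerability life cycle as a sequential state machine. */
    #include <stdio.h>

    typedef enum {
        RESEARCH, VERIFICATION, REPORTING, EVALUATION, REPAIRING,
        PATCH_EVALUATION, PATCH_RELEASE, ADVISORY_GENERATION,
        ADVISORY_EVALUATION, ADVISORY_RELEASE, FEEDBACK
    } VulnStage;

    static const char *stage_names[] = {
        "Research", "Verification", "Reporting", "Evaluation", "Repairing",
        "Patch evaluation", "Patch release", "Advisory generation",
        "Advisory evaluation", "Advisory release", "Feedback"
    };

    /* Advance a case to the next stage; Feedback is terminal. */
    VulnStage next_stage(VulnStage s) {
        return (s == FEEDBACK) ? FEEDBACK : (VulnStage)(s + 1);
    }

    int main(void) {
        VulnStage s = RESEARCH;
        for (;;) {
            printf("%2d. %s\n", s + 1, stage_names[s]);
            if (s == FEEDBACK) break;
            s = next_stage(s);
        }
        return 0;
    }

In a real tracking tool, each case would carry such a stage field, which makes the handoffs between the reporting and receiving parties explicit and auditable.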

Communication

When observing the tendencies of the reporters and receivers, the case study researchers detected communication breakdowns throughout the process. They found that factors such as holidays, time zone differences, and workload issues were most prevalent. Additionally, it was concluded that the reporting parties were typically prepared for all of their responsibilities and rarely contributed to time delays. The receiving parties, on the other hand, often experienced lag time between phases, mostly due to difficulties in spreading the workload across a limited staff. Secure communication channels between the reporter and the receiver should be established throughout the life cycle. This sounds like a simple requirement, but as the research team discovered, incompatibility issues often make this task more difficult than it appears. For example, if the two sides agree to use encrypted e-mail exchange, they must ensure that they are using compatible protocols. If incompatible protocols are in place, the chances of the receiver simply dropping the task greatly increase. 

Knowledge Barrier 

There can be a huge difference in technical expertise between a vendor and the finder. This makes communicating all the more difficult. Vendors can’t always understand what the finder is trying to explain, and finders can become easily confused when the vendor asks for more clarification. The tiger team case study found that the collection of vulnerability data can be very challenging due to this major difference. Using specialized teams who have areas of expertise is strongly recommended. For example, the vendor could appoint a customer advocate to interact directly with the finder. This party would be a middleperson between engineers and the finder.

Patch Failures 

The tiger team case also pointed out some common factors that contribute to patch failures in the software vulnerability process, such as incompatible platforms, revisions, regression testing, resource availability, and feature changes. Additionally, it was discovered that, generally speaking, the least experienced vendor security professionals tend to work in maintenance positions, and this is usually the group that handles vulnerability reports from finders. The study concluded that where this is the case, a lower-quality patch can be expected.

Vulnerability after Fixes Are in Place

Many systems remain vulnerable long after a patch/fix is released. This happens for several reasons. The customer is continually overwhelmed by the number of patches, fixes, updates, versions, and security alerts released every day. This is why a maturing product line and new processes are being developed in the security industry to deal with “patch management.” Another issue is that many previously released patches broke something else or introduced new vulnerabilities into the environment. So although it is easy to shake our fists at the network and security administrators for not applying the released fixes, the task is usually much more difficult than it sounds.

iDefense

iDefense is an organization dedicated to identifying and mitigating software vulnerabilities. Started in August 2002, iDefense employs researchers and engineers to uncover potentially dangerous security flaws that exist in commonly used computer applications throughout the world. The organization uses lab environments to re-create vulnerabilities and then works directly with the vendors to provide a reasonable solution. iDefense’s Vulnerability Contributor Program (VCP) has pinpointed hundreds of threats over the past few years within a long list of applications. This global security company has drawn skepticism throughout the industry, however, as many question whether it is appropriate to profit by searching for flaws in others’ work.
The biggest fear here is that the practice could lead to unethical behavior and, potentially, legal complications. In other words, if a company’s sole purpose is to identify flaws in software applications, wouldn’t there be an incentive to find more and more flaws over time, even if the flaws are less relevant to security issues? The question also touches on the idea of extortion. Researchers may get paid by the number of bugs they find—much like the commission a salesperson makes per sale.
Critics worry that researchers will begin going to the vendors demanding money unless they want their vulnerability disclosed to the public—a practice referred to as a “finder’s fee.”
Many believe that bug hunters should be employed by the software companies or work on a voluntary basis to avoid this profiteering mentality.
Furthermore, skeptics feel that researchers discovering flaws should, at a minimum, receive personal recognition for their findings. They believe bug finding should be considered an act of goodwill and not a profitable endeavor. Bug hunters counter these issues by insisting that they believe in full disclosure policies and that any acts of extortion are discouraged. In addition, they are paid for their work and do not work on a bug commission plan as some skeptics maintain.
Yep—more controversy. In the first quarter of 2007, iDefense, a VeriSign company, offered up a challenge to security researchers: for any vulnerability that allows an attacker to remotely exploit and execute arbitrary code on either Microsoft Windows Vista or Microsoft Internet Explorer v7, iDefense would pay $8,000, plus an extra $2,000 to $4,000 for the exploit code, for up to six vulnerabilities. Interestingly, this has fueled debate from some unexpected angles. Security researchers are up in arms because previous quarterly vulnerability challenges from iDefense paid $10,000 per vulnerability.
Security researchers feel that their work is being “discounted.” This is where it turns dicey. Because of the decrease in payment for gray hat vulnerability-finding work, there is growing dialogue among gray hatters about auctioning off newly discovered, zero-day vulnerabilities and exploit code through an underground brokerage system. The exploits would be sold to the highest bidders, and the exploit writers and the buyers could remain anonymous.
In December 2006, eWeek reported that zero-day vulnerabilities and exploit code were being auctioned on these underground, Internet-based marketplaces for as much as $50,000 apiece, with prices averaging between $20,000 and $30,000. Spam-spewing botnets and Trojan horses sell for about $5,000 each. There is increasing incentive to “turn to the dark side” of bug hunting. The debate over higher pay versus ethics rages on. The researchers claim that this isn’t extortion, that security researchers should be paid a higher price for this specialized, highly skilled work.
So, what is it worth? What will it cost? What should these talented, dedicated, and skilled researchers be paid? In February 2007, dialogue on the hacker blogs seemed to set the minimum acceptable “security researcher” daily rate at around $1,000. Further, from the blogs, it seems that uncovering a typical, run-of-the-mill vulnerability, understanding it, and writing exploit code takes, on average, two to three weeks. This sets the price tag at $10,000 to $15,000 per vulnerability and exploit, at a minimum. Putting this into perspective, Windows Vista has approximately 70 million lines of code. A 2006 study sponsored by the Department of Homeland Security and carried out by a team of researchers centered at Stanford University concluded that there is an average of about one bug or flaw in every 2,000 lines of code. This extrapolates to a prediction of about 35,000 bugs in Windows Vista. If the security researchers demand their $10,000 to $15,000 ($12,500 average) compensation per bug, the cost of identifying the bugs in Windows Vista approaches half a billion dollars—again, at a minimum. Can the software development industry afford to pay this? Can it afford not to? The path taken will probably lie somewhere in the middle.
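As a sanity check, the arithmetic above can be worked through in a few lines. The following C sketch simply restates the chapter’s estimates (70 million lines, one flaw per 2,000 lines, $12,500 per bug), so the figures are illustrative rather than measured.

    #include <stdio.h>

    int main(void) {
        double lines_of_code  = 70e6;     /* approximate size of Windows Vista      */
        double lines_per_bug  = 2000.0;   /* one flaw per 2,000 lines (DHS study)   */
        double payout_per_bug = 12500.0;  /* midpoint of the $10,000-$15,000 demand */

        double bugs = lines_of_code / lines_per_bug;  /* 35,000 bugs    */
        double cost = bugs * payout_per_bug;          /* $437.5 million */

        printf("Estimated bugs: %.0f\n", bugs);
        printf("Estimated cost: $%.1f million\n", cost / 1e6);
        return 0;
    }

At roughly $437.5 million, the total indeed approaches half a billion dollars, as the chapter puts it.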

Zero Day Initiative

Another rather unique method for reporting vulnerabilities is the Zero Day Initiative (ZDI). What makes it unique is the way in which the vulnerabilities are used. The company involved, TippingPoint (owned by 3Com), does not resell any of the vulnerability details or exploit code. Instead, it notifies the vendor of the product and then offers protection against the vulnerability to its clients. Nothing too unique there; what is unique, though, is that after it has developed a fix for the vulnerability, it offers the information about the vulnerability to other security vendors. This is done confidentially, and the information is provided even to competitors and other vendors that have vulnerability protection or mitigation products.
Researchers interested in participating can provide exclusive information about previously undisclosed vulnerabilities that they have discovered. Once the vulnerability has been confirmed by 3Com’s security labs, a monetary offer is made to the researcher. After an agreement on the acquisition of the vulnerability, 3Com works with the vendor to generate a fix. When that fix is ready, it notifies the general public and other vendors about the vulnerability and the fix. When TippingPoint started this program, it followed this sequence of events:
1. A vulnerability is discovered by a researcher.
2. The researcher logs into the secure ZDI portal and submits the vulnerability for evaluation.
3. A submission ID is generated, allowing the researcher to track the vulnerability through the ZDI secure portal.
4. 3Com researches the vulnerability and verifies it. It then decides whether to make an offer to the researcher. This usually happens within a week.
5. 3Com makes an offer for the vulnerability, and the offer is sent to the researcher via e-mail accessible through the ZDI secure portal.
6. The researcher accesses the e-mail through the secure portal and can decide to accept the offer. If so, exclusivity of the information is assigned to 3Com.
7. The researcher is paid via his or her preferred method of payment. 3Com responsibly notifies the affected product vendor of the vulnerability, and TippingPoint IPS protection filters are distributed to customers for that specific vulnerability.
8. 3Com shares advance notice of the vulnerability and its details with other security vendors before public disclosure.
9. In the final step, 3Com and the affected product vendor coordinate a public disclosure of the vulnerability, via a security advisory, once a patch is ready. The researcher is given full credit for the discovery or, if desired, can remain anonymous to the public.
That was the initial approach TippingPoint took, but on August 28, 2006, it announced a change. Instead of following the preceding procedure, the flaw bounty program would announce its currently identified vulnerabilities to the public while the vendors worked on the fixes. The announcement would be only a bare-bones advisory, issued at the time the flaw was reported to the vendor. The key here is that the early report mentions only the vendor the vulnerability affects, the date the report was issued, and the severity of the vulnerability. There is no mention of which specific product is affected. 
This move is to try to establish TippingPoint as the industry watchdog and to keep vendors from dragging their feet in creating fixes for the vulnerabilities in their products. The decision to preannounce is very different from many of the other vendors in the industry that also purchase data on flaws and exploits from external individuals. 
Many think that this kind of approach is simply a marketing ploy and has no real benefit to the industry. Some critics feel that this kind of advanced reporting could cause more problems for, rather than help, the industry. These critics feel that any indication of a vulnerability could attract the attention of hackers in a direction that could make that flaw more apparent. Only time will truly tell if this will be good for the industry or detrimental.

Vendors Paying More Attention

Vendors are expected to provide foolproof, mistake-free software that works all the time. When bugs do arise, they are expected to release fixes almost immediately. It is truly a double-edged sword. However, the common practice of “penetrate and patch” has drawn criticism from the security community as vendors simply release multiple temporary fixes to appease the users and keep their reputation intact. Security experts argue that this ad hoc methodology does not exhibit solid engineering practices. Most security flaws occur early in the application design process. Good applications and bad applications are differentiated by six key factors:
1. Authentication and authorization: The best applications ensure that authentication and authorization steps are complete and cannot be circumvented.
2. Mistrust of user input: Users should be treated as “hostile agents”; data should be verified on the server side, and strings should be length-checked and stripped of dangerous tags to prevent buffer overflows and injection attacks (see the validation sketch after this list).
3. End-to-end session encryption: Entire sessions should be encrypted, not just the portions of activity that contain sensitive information. In addition, secure applications should have short timeouts that require users to reauthenticate after periods of inactivity.
4. Safe data handling: Secure applications also ensure data is safe while the system is in an inactive state. For example, passwords should remain encrypted while stored in databases, and secure data segregation should be implemented. Improper implementation of cryptography components has commonly opened doors to unauthorized access to sensitive data.
5. Eliminating misconfigurations, backdoors, and default settings: A common but insecure practice for many software vendors is shipping software with backdoors, utilities, and administrative features intended to help the receiving administrator learn and implement the product. The problem is that these enhancements usually contain serious security flaws. These items should always be disabled before shipment, requiring the customer to enable them, and all backdoors should be removed from the source code.
6. Security quality assurance: Security should be a core discipline during product design, through the specification and development phases, and during testing. For example, some vendors create security quality assurance (SQA) teams to manage all security-related issues.
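To illustrate the second factor, mistrust of user input, here is a minimal C sketch of server-side whitelist validation. It assumes a hypothetical fixed-size username field; the length limit and allowed character set are invented for the example.

    #include <stdio.h>
    #include <string.h>
    #include <ctype.h>

    #define MAX_USERNAME 32

    /* Returns 1 if the input is an acceptable username, 0 otherwise. */
    int validate_username(const char *input) {
        size_t len = strlen(input);
        if (len == 0 || len > MAX_USERNAME)   /* bound the length up front */
            return 0;
        for (size_t i = 0; i < len; i++) {    /* whitelist, don't blacklist */
            unsigned char c = (unsigned char)input[i];
            if (!isalnum(c) && c != '_' && c != '-')
                return 0;                     /* rejects tags, quotes, etc. */
        }
        return 1;
    }

    int main(void) {
        printf("%d\n", validate_username("alice_01"));          /* 1: accepted */
        printf("%d\n", validate_username("<script>alert(1)"));  /* 0: rejected */
        return 0;
    }

The key design choice is whitelisting: rather than trying to enumerate every dangerous character, the code accepts only characters known to be safe and treats everything else as hostile.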

So What Should We Do from Here on Out?

There are several things we can do to help improve the situation, but it requires everyone involved to be more proactive, more educated, and more motivated. Here are some suggestions that should be followed if we really want to improve our environments:
1. Stop depending on firewalls. Firewalls are no longer an effective single countermeasure against attacks. Software vendors need to ensure that their developers and engineers have the proper skills to develop secure products from the beginning.
2. Act up. It is just as much the consumers’ responsibility as the developers’ to ensure that the environment is secure. Users should actively seek out documentation on security features and ask for testing results from the vendor. Many security breaches happen because of improper configurations by the customer.
3. Educate application developers. Highly trained developers create more secure products. Vendors should make a conscious effort to train their employees in areas of security.
4. Assess early and often. Security should be incorporated into the design process from the early stages and tested often. Vendors should consider hiring security consulting firms to offer advice on how to implement security practices into the overall design, testing, and implementation processes.
5. Engage finance and audit. Getting proper financing to address security concerns is critical to the success of a new software product. Engaging budget committees and senior management at an early stage is also critical.