In March of 1999, the Melissa virus, named after the programmer’s favorite stripper, multiplied rapidly across the world, causing an estimated $1.1 billion in damages. Disguised as a file containing passwords for paywalled adult content websites, the infected file LIST.doc was posted to the alt.sex Usenet newsgroup, where it quickly drew excited downloads. Once opened, the Melissa virus flooded its hosts with adult content websites while also leveraging Microsoft’s Outlook client to email additional infected files to the first 50 contacts in a victim’s address book. In just five days, Melissa infected thousands of computers worldwide, overwhelming the email servers of around 300 corporations and government agencies, some of which were forced offline entirely.
Over the next few years, additional malware such as the ILOVEYOU, Code Red, and Nimda worms caused damage around the globe. Businesses, government agencies, schools, financial institutions, and international organizations were brought to their knees as their networks were sabotaged. Billions more were lost, classified information was accessed, and critical system files were destroyed.
The common denominator among these threats was their exploitation of vulnerabilities in widely used systems, particularly Microsoft products. Officials began pressuring Microsoft, making it clear that if the company continued to turn a blind eye to security, the government would take its business elsewhere.
In response, Microsoft declared that security would be the company’s “highest priority” in a 2002 memo written by Bill Gates himself. The Trustworthy Computing initiative was born.
This sentiment, however, had not yet been embraced industry-wide. With no established vulnerability disclosure programs, some hackers instead posted their discoveries to the BugTraq mailing list, which was then distributed to the customers of a start-up called SecurityFocus. Others posted to the Full-Disclosure mailing list, an entirely unmoderated list open to the world.
SNOsoft vs. Hewlett-Packard
In 1998, Netragard’s very own Adriel Desautels and Kevin Finisterre founded Secure Network Operations, Inc., home to the SNOsoft Research Team, a collective of loosely affiliated security experts with a passion for vulnerability research and exploit development.
In July of 2002, only months after Microsoft took its stance, a SNOsoft member known online as “Phased” added an exploit for a privilege escalation vulnerability in the Tru64 Unix operating system to the BugTraq list. This was done in response to HP’s failure to act after being notified of the vulnerability many months prior. A week later, Kent Ferson, vice president of Hewlett-Packard’s Unix unit and the same person to whom SNOsoft had disclosed the issue (among many others), sent a letter containing the following:
“HP hereby requests that you cooperate with us to remove the buffer overflow exploit from Securityfocus.com and to take all steps necessary to prevent the further dissemination by SnoSoft and its agents of this and similar exploits of Tru64 Unix… If SnoSoft and its members fail to cooperate with HP, then this will be considered further evidence of SnoSoft’s bad faith.”
The executive also threatened legal action, citing the penalties carried by the Digital Millennium Copyright Act (DMCA):
“…[the SNOsoft researcher] could be fined up to $500,000 and imprisoned for up to five years…“
An aggressive interpretation of the DMCA, a law originally intended to thwart unauthorized duplication of copyrighted works, was being weaponized in an attempt to hide a critical Tru64 vulnerability from the rest of the world. Ultimately, SNOsoft contacted the Electronic Frontier Foundation and made the matter, including the threat, public. The resulting public backlash against HP led to the threat being rescinded.
However, the incident only exacerbated a growing issue within the security community: white-hat hackers, those motivated to bring security flaws to vendors’ attention, were losing any incentive to do so, especially when known vulnerabilities remained unpatched.
Some would argue that disclosure is inherently responsible, but many of those with extensive experience in the industry feel differently. Vulnerability disclosure would be responsible if businesses had a track record of fixing issues as they were reported. Instead, businesses waited months, sometimes even years, to apply fixes. Meanwhile, threat actors could weaponize disclosures in days, sometimes even hours.
Silent disclosure, in which vulnerabilities are reported privately to a vendor like Microsoft and patches are quietly pushed to customer systems, sounds like a better idea. However, when Patch Tuesday comes around, security researchers (good and bad) reverse-engineer the patches and create n-day exploits (where n represents the number of days since disclosure). While 0-day exploits fetch much higher payouts, n-day exploits are also salable, especially when they are fresh.
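To see why silent patching offers only limited protection, consider how quickly a patch betrays what it fixed. Below is a toy sketch in Python of the underlying idea, diffing the pre- and post-patch binaries to localize the changed code; the file names are hypothetical, and real n-day development relies on control-flow-aware differs such as BinDiff or Diaphora rather than raw byte comparison.

```python
import hashlib

def chunk_hashes(path: str, size: int = 4096) -> list[str]:
    """Hash fixed-size chunks of a binary so changed regions stand out."""
    with open(path, "rb") as f:
        return [hashlib.sha256(chunk).hexdigest()
                for chunk in iter(lambda: f.read(size), b"")]

# Hypothetical file names, for illustration only.
before = chunk_hashes("service-prepatch.bin")
after = chunk_hashes("service-postpatch.bin")

# Chunks that differ between the two builds point to the code the patch
# touched, which is exactly where an n-day developer looks first.
changed = [i for i, (a, b) in enumerate(zip(before, after)) if a != b]
print(f"{len(changed)} of {min(len(before), len(after))} chunks changed")
```

Even this naive comparison narrows millions of bytes down to a handful of changed regions, which is why the window between a patch’s release and widespread exploitation can be measured in days.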
Despite the turmoil, the public disclosure left HP with no choice but to develop a fix.
The Line
Following the SNOsoft incident, the legal landscape surrounding vulnerability disclosure began to evolve. The 2003 DMCA Triennial Rulemaking Hearings, part of a broader process mandated by the DMCA to review exemptions to its anti-circumvention provisions, provided a platform for discussing the legal challenges faced by security researchers and contributed to ongoing debates about the law’s impact on security research. Over time, these discussions helped shape the responsible disclosure practices we see today, including the development of bug bounty platforms like HackerOne and the adoption of safe harbor provisions by companies to encourage open reporting of vulnerabilities.
While responsible disclosure practices are better today, it is still difficult to define the line between responsible and irresponsible disclosure. In May of 2022, the U.S. Department of Justice announced a revision to its charging policy under the Computer Fraud and Abuse Act, stating that “good-faith security research should not be charged.” Even so, the definition of what is considered “good faith” is vague:
“Good faith security research means accessing a computer solely for purposes of good-faith testing, investigation, and/or correction of a security flaw or vulnerability, where such activity is carried out in a manner designed to avoid any harm to individuals or the public, and where the information derived from the activity is used primarily to promote the security or safety of the class of devices, machines, or online services to which the accessed computer belongs, or those who use such devices, machines, or online services.”
DeepSeek vs. Wiz
The timing was incredible! During the trading week of January 27th through 31st, 2025, AI’s golden child NVIDIA lost an estimated $590 billion in market value, at the time the largest loss on record for a single company (Tesla would surpass it a little later at roughly $700 billion).
Less than a week prior, on January 21st, the Trump administration had unveiled the Stargate Project, a $500 billion investment in AI infrastructure intended to position the United States as a leader in AI innovation. You would imagine that news of such a substantial commitment would have been good news for the stock market.
However, on January 20th, a then-unknown Chinese startup named DeepSeek had released the R1 model of its AI assistant. Within the week, it went viral, surpassing OpenAI’s ChatGPT to become the top-rated free application on the U.S. iOS App Store.
The usurping is understandable: the R1 model rivals, and in some benchmarks even outperforms, the leading closed-source competition.
Coupled with claims that training costs amounted to only $6 million (a fraction of the reported cost of OpenAI’s GPT-4), DeepSeek’s AI delivered substantial blows not just to NVIDIA but to semiconductor and AI companies across the board, leaving investors to question their investment theses.
The Wiz Finding
As DeepSeek was dominating the headlines, security researchers at the Israeli technology company Wiz discovered a publicly exposed database belonging to the startup, accessible to anyone who came across it because authentication was not required.
The database contained over a million lines of log entries, including sensitive information such as system logs, user chat histories, and API keys.
It is reported that the Wiz security researchers:
“… were unsure about how to disclose their findings to the company and simply sent information about the discovery on Wednesday to every DeepSeek email address and LinkedIn profile they could find or guess.”
Shortly after the mass email campaign, the database was locked down.
The Issue with the Wiz Finding
Another simple mistake was brought to the attention of the company, contributing to the ongoing effort to secure the Internet and protect end users. Right?
Those familiar with the vulnerability reporting process may have noticed that Wiz did not have prior authorization to test DeepSeek’s systems. For those who are unfamiliar: had Wiz been granted permission, there would have been established communication channels for reporting any discovered vulnerability.
One could argue that Wiz was indeed acting in good faith. However, there are caveats…
In established disclosure programs, ones in which a company has consented to freelance security testing, there are clear rules regarding the extent of permissible intrusion into its systems. All security researchers are aware of this, even if it is not written in stone. However, rather than ceasing as soon as unauthenticated access was confirmed, the Wiz researchers pushed further, mapping the backend structure and executing database queries in search of sensitive information. That crosses the line between acting in good faith and exceeding what would be considered acceptable conduct.
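To make that line concrete, here is a minimal sketch, assuming a ClickHouse-style HTTP interface (the database type Wiz reported finding) and a hypothetical host name, of a probe that stops the moment unauthenticated access is confirmed:

```python
import requests

# Hypothetical endpoint for illustration; not DeepSeek's actual host.
HOST = "http://db.example.com:8123"

# ClickHouse's HTTP interface answers GET /ping with "Ok." when it is up.
ping = requests.get(f"{HOST}/ping", timeout=5)
print(ping.status_code, ping.text.strip())

# A harmless read-only probe: an HTTP 200 response with no credentials
# supplied confirms the exposure. A good-faith check arguably ends here.
probe = requests.get(HOST, params={"query": "SELECT 1"}, timeout=5)
if probe.ok:
    print("Unauthenticated access confirmed; document and report.")
    # Going further (SHOW TABLES, reading log or chat-history tables)
    # is the backend mapping and querying described above.
```

Whether even the SELECT 1 probe is defensible without prior authorization is itself debatable, which is precisely the point: absent an established disclosure program, the researcher is drawing the line unilaterally.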
Additionally, Wiz disclosed this vulnerability publicly, using it as a marketing opportunity instead of quietly notifying DeepSeek and giving the company appropriate time to respond. News of this nature can have a severe impact on a company’s reputation and financial state. Perhaps that was the point?
Under normal circumstances, a free and ethically conducted security assessment would be welcomed by any company.
What can be done?
Responsible vulnerability disclosure remains a complex and nuanced issue. Recent high-profile cases, such as the vulnerabilities in the FreeHour student timetabling app and the MasterCard DNS configuration error, underscore the importance of careful consideration of, and adherence to, best practices in responsible disclosure. For guidance on navigating these responsibilities, resources like OWASP’s Vulnerability Disclosure Cheat Sheet and the Electronic Frontier Foundation’s (EFF) Coders’ Rights Project are invaluable references.
By embracing responsible disclosure, organizations can ensure that security vulnerabilities are identified and addressed in a way that protects users while respecting the rights of researchers. This involves establishing clear communication channels, providing timely responses, and offering safe harbor provisions to encourage open reporting. While financial rewards serve as a strong incentive, recognition can be just as powerful a motivator; Wiz itself cited DeepSeek’s meteoric rise in popularity as the driving factor behind its decision to target the company.
However, handling disclosure responsibly does not fall solely on the organization. Researchers must also exhibit patience and allow sufficient time for an issue to be resolved before publicly disclosing their findings. Yet SNOsoft did exactly that in the HP incident, and the vulnerability still went unaddressed. So how much time should a vendor be given before an exploit is made public to force their hand? Many cite Google’s Project Zero approach as best practice: full details of the vulnerability are published after 90 days, regardless of whether the organization has shipped a patch.
Is it also possible that delineating responsible from irresponsible disclosure is too binary, failing to account for the nuances of security? As many of the examples above show, following responsible vulnerability disclosure practices is no guarantee that legal action won’t be taken.
Ultimately, responsible vulnerability disclosure is about fostering a culture of transparency and cooperation, where security researchers and organizations work together to strengthen digital defenses and safeguard against potential threats. Security is not a spectator sport; it takes a team effort.