INTRODUCTION

When practicing cybersecurity, we must consider risk, vulnerabilities, and threats. Risk is the potential for something bad to occur. Vulnerabilities are discovered weaknesses in our systems, “enablers” for attackers, and threats are negative events (manifestations of risk) that can occur because of those vulnerabilities.

Vulnerabilities are a primary driver of any risk/threat model. Security vulnerabilities are often associated with the rapid evolution of technology. Many vendors prioritize product release schedules and profits over security, so as vendors release products faster, they increase risk by not properly testing for and remediating vulnerabilities. This prioritization model, albeit good if you own the company’s stock, raises risk for end users and provides a fertile hunting ground for hackers to threaten the ecosystem by exploiting the vulnerabilities that come with compressed development lifecycles.

DISCLOSURE CONSTRUCTS

When a security researcher discovers a vulnerability, they should take immediate, specific action to protect the ecosystem. The researcher must consider the total impact on all participants: hardware vendors, software vendors, and end users. The researcher must also formalize their findings by ensuring the discovery is both valid and repeatable.

Once the researcher has formalized their finding, they should proceed using the following steps:

  1. Communicate with the impacted vendor(s)
  2. Negotiate a remediation/patching timeline with the vendor(s)
  3. Release the information to the public

COMMUNICATE WITH IMPACTED VENDORS

The impacted hardware or software vendor(s) should be notified and afforded the opportunity to create remediation strategies BEFORE the public (and opportunistic hackers) are made aware of the security researcher’s findings. Some vendors offer “bug bounty” programs that financially compensate the security researcher based on the value of the finding. It is always a good idea for the researcher to investigate compensation opportunities for their work and to determine whether the program’s constraints are compatible with their moral compass.

NEGOTIATE TIMELINE FOR REMEDIATION

Ultimately, it is up to the researcher to decide how long they will give the vendor(s) to respond. This timeline should be carefully considered: long enough to allow reasonable development of a remediation, but short enough to make that development a high priority. The researcher must balance the need to disclose against the value of the vendor relationship. Additionally, if a bug bounty reward is available and the researcher desires compensation, they must adhere to the requirements of the program.

Regardless of any opportunity for compensation, it is up to the researcher to “keep the vendors honest”. History shows that many vendors leverage the principle of Responsible Disclosure to delay remediation; these delays can span months or even years. In some cases, irresponsible vendors have used Responsible Disclosure principles to “run out the clock” on the support lifecycle of their vulnerable products. Therefore, any negotiated public disclosure timeline should honor the constraints of any bug bounty program and, where possible, target the earliest feasible remediation release date. If the vendor offers no time estimate, a good rule of thumb is a public release date no more than 60 days from the date of notification.
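The 60-day rule of thumb is simple date arithmetic. A minimal sketch, assuming the clock starts on the day the vendor is notified (the function name and default are my own, not from any standard):

```python
from datetime import date, timedelta

# Rule-of-thumb disclosure window when the vendor offers no time estimate
# (60 days, per the guidance above); adjust to honor any bug bounty terms.
DEFAULT_WINDOW_DAYS = 60

def disclosure_deadline(notified_on: date,
                        window_days: int = DEFAULT_WINDOW_DAYS) -> date:
    """Return the planned public release date, counted from vendor notification."""
    return notified_on + timedelta(days=window_days)

# Example: vendor notified on 2024-03-01 -> planned disclosure 2024-04-30
print(disclosure_deadline(date(2024, 3, 1)))
```

Keeping the deadline computed from the notification date (rather than renegotiated ad hoc) makes it easy to communicate a firm, defensible date to the vendor up front.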

MANAGING UNRESPONSIVE VENDORS

So, what does the researcher do when the impacted vendor(s) ignore them? 

This is a difficult position for the researcher because, if they are truly a “white hat”, their first responsibility is the protection of the ecosystem. But if they disclose a vulnerability to the public before the vendor creates a patch or remediation, the researcher has personally increased the risk to the very ecosystem they claim to protect.

In this scenario, the researcher should not fall for the false argument that any researcher NOT following a responsible disclosure path is somehow being irresponsible. If the researcher attempts responsible disclosure and the vendor ignores the opportunity to remediate the vulnerability, the onus lies with the vendor, not the researcher. The “responsible” path may therefore lead to immediate disclosure when irresponsible vendors fail to provide timely remediation or patches.

RELEASE THE INFORMATION TO THE PUBLIC

On the agreed-upon date, or sooner with an unresponsive vendor, the security researcher shares their findings with the public. This can be accomplished through a personal blog and/or by offering the story to an industry publication or reporter. Whatever the method, the researcher MUST ensure the release is widely distributed. As the saying goes, “the story must have legs”: enough people must know about the findings for the news to endure and spread.

CONCLUSION

As with all things in this world, personal integrity should drive your motivations. Do what is right even when (you think) no one is looking. Using a structured vulnerability disclosure process is in the best interests of all impacted parties. Following these recommendations will establish and bolster a good reputation (“street cred”) for the researcher. It also allows the researcher to build meaningful relationships with vendors, provide value to the public (the ecosystem), and sometimes even generate revenue.

If anyone, then, knows the good they ought to do and doesn’t do it, it is sin for them. – James 4:17 (NIV)
