
Ethics of full disclosure concerning security vulnerabilities

Adrian Crenshaw

    I originally wrote this for a class, but decided to post it on my website (hell, content is content). I've edited it somewhat to more closely reflect my views. Business ethics is sometimes not ethics at all, but "public relations containment". The layout was specified by the class, so it may look a little odd to some of my readers. If nothing else, this could be useful for other students having to write a similar paper.

So, what are the dilemmas?

Is it ethical to reveal the details of a software/process vulnerability? On the one hand, it could teach "bad guys" how to attack systems; on the other, revealing the details could help system planners avoid making the same mistake in the future, as well as mitigate current problems.

Who are the stakeholders?

  • Software companies and system designers, henceforth referred to as "vendors" for brevity
  • Security researchers
  • Companies that purchase and implement software and systems (hereafter referred to as "business clients" for the sake of brevity)
  • End users of the software and systems; because of the potential size of this group, disclosing to it amounts to full public disclosure, and I will frequently refer to them as the "general public" throughout this paper
  • Criminals

Who are the decision-makers?

  • Security researchers (this will be the focus as that is who is asking the question)
  • Lawmakers
  • Software companies and system designers
  • Companies that purchase and implement software and systems
  • End users of the software and systems, but this category is assumed to encompass the whole world

What are the options/courses of action?

    As security researchers we have the choice to reveal vulnerabilities in software and systems in many different ways, and to different extents. There is a whole menu of options on how much to reveal about the vulnerability, whom to reveal it to, and when. For the sake of clarity, let's create such a menu:

Amount of disclosure:

  1. No disclosure (keep all details a secret)
  2. Partial disclosure (give enough information for end users and business clients to mitigate vulnerabilities, but hopefully not enough for criminals to attack systems)
  3. Full disclosure of all details

Who to disclose to:

  1. Software companies and system designers
  2. Other security researchers
  3. Business clients
  4. End users

When to disclose:

  1. Immediately
  2. After the vulnerability has been fixed
  3. After the software companies or system designers have had adequate time to address the problem, regardless of whether or not the problem is fixed

    Multiple choices can be made from each list, and choices on each list are not mutually exclusive. We could choose full disclosure for one category of stakeholder, and limited or no disclosure for another. Also, when the disclosures happen can be selected by stakeholder category. Under different circumstances, varying amounts of disclosure to different stakeholder categories at different times may be appropriate.
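To make the combinations concrete, here is a minimal illustrative sketch in Python of the menu as a data structure. The type and field names are my own invention for this example, not any industry standard:

    from dataclasses import dataclass
    from enum import Enum, auto

    class Amount(Enum):
        NONE = auto()     # no disclosure: keep all details secret
        PARTIAL = auto()  # enough to mitigate, hopefully not enough to attack
        FULL = auto()     # full disclosure of all details

    class Audience(Enum):
        VENDOR = auto()
        RESEARCHERS = auto()
        BUSINESS_CLIENTS = auto()
        END_USERS = auto()

    class Timing(Enum):
        IMMEDIATELY = auto()
        AFTER_FIX = auto()
        AFTER_DEADLINE = auto()  # after adequate time, fixed or not

    @dataclass
    class Disclosure:
        """One selection from the menu: what to reveal, to whom, and when."""
        amount: Amount
        audience: Audience
        timing: Timing

    # A researcher's overall policy is a set of such selections, e.g. full
    # details to the vendor now, partial details to end users now, and full
    # details to everyone once a fix ships:
    policy = [
        Disclosure(Amount.FULL, Audience.VENDOR, Timing.IMMEDIATELY),
        Disclosure(Amount.PARTIAL, Audience.END_USERS, Timing.IMMEDIATELY),
        Disclosure(Amount.FULL, Audience.END_USERS, Timing.AFTER_FIX),
    ]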

Apply tests, articles, analysis, theories, standards

    Because there are so many permutations, let's narrow the options down to four common courses of action in the security industry. Since there is some ambiguity, I will try my best to explicitly define the terms; other researchers may have differing definitions. Asking "What is responsible disclosure?" of security researchers is a little like asking "What is God?" of a bunch of philosophers/theologians: you will get all sorts of answers back.

Immediate Full and Public Disclosure

As soon as the vulnerability is known, all details are fully published to the public.

Complete Nondisclosure

    The security researcher keeps the vulnerability to themselves. One might wonder why a researcher would want to do this after spending all of that time finding the vulnerability. Two main reasons come to mind. First, there may be legal issues with disclosing it, and the researcher fears the reaction of the system developers. Second, the researcher may want a vulnerability in their arsenal that they can use and know has not been fixed. These are known as "zero days" or "0-days" in the industry, since the vendor has had zero days to patch the problem. It should be noted that there is some ambiguity with the term 0-day as well: it is sometimes used for a vulnerability that is known to the public/vendor but not yet patched, and sometimes for one that is unknown to anyone but a very small group. 0-days are quite useful to criminal attackers because of their high likelihood of working, and on the legal side, professional pen-testers (experts hired by a company to act as attackers and check for vulnerabilities) may use them as well.

Limited Disclosure

    Only the vendor, and perhaps some corporate clients, are told about the vulnerability.

Responsible Full Disclosure

There is much debate on how to define "Responsible Full Disclosure", but a few of the general principles include the following (a rough code sketch of the timeline follows the list):

    1. Tell the vendor about the vulnerability and give them time to fix it (sometimes referred to as giving them time to patch).
    2. Let the public know enough so that the risk can be mitigated, but hopefully not enough so that criminals can use the vulnerability in an attack.
    3. Once the vulnerability has been patched you may release the full details.
    4. If the vendor ignores the researcher during step one, a partial or full disclosure to the public may be in order to force their hand into fixing the issue.
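As a rough sketch of how these steps might order a researcher's actions over time, consider the following Python fragment. The 90-day patch window is a hypothetical convention of my own choosing, and the function and its parameters are invented for illustration; real deadlines are negotiated case by case:

    from datetime import date, timedelta

    # Hypothetical patch window; real deadlines are negotiated case by case.
    PATCH_DEADLINE = timedelta(days=90)

    def next_action(reported: date, today: date,
                    vendor_responsive: bool, patched: bool) -> str:
        """What a researcher following the four steps above might do next."""
        if patched:
            # Step 3: the fix is out, so the full details may be released.
            return "publish full details"
        if not vendor_responsive:
            # Step 4: the vendor is ignoring the report; force their hand.
            return "publish partial (or full) details to the public"
        if today - reported < PATCH_DEADLINE:
            # Step 1: give the vendor time to patch.
            return "wait; the vendor still has time to patch"
        # Step 2: the deadline has passed; let users mitigate the risk.
        return "publish mitigation details to the public"

    # Example: 120 days after reporting, vendor responsive but still no patch.
    print(next_action(date(2008, 1, 1), date(2008, 4, 30), True, False))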

Now let’s apply some of the models of ethical testing to these four options.

  • Immediate Full and Public Disclosure
  • Complete Nondisclosure
  • Limited Disclosure
  • Responsible Full Disclosure

It is my personal opinion that all these models fall far short of being really useful, but people involved in "business ethics" seem to love them.

Nash model
http://www.cs.bgsu.edu/maner/heuristics/1981Nash.htm

1. Have you defined the problem accurately?

    For this question, all four possible courses of action seem to have the same answer. As I understand the problem, yes, it seems to have been defined accurately. However, different types of vulnerabilities will add variance to the problem.

2. How would you define the problem if you stood on the other side of the fence?

    There are so many fences here that the answer is complicated. I’ll have to break it down by the four courses of action, and how they might see the other courses of action.

How Immediate Full and Public Disclosure proponents might see the other courses of action:

Complete Nondisclosure: The vendors made a mistake, and the world needs to know about it. Knowledge is power, and should be spread.

Limited disclosure: Why should knowledge about how a system works only be kept to a small elite few? Others may benefit.

Responsible Full Disclosure: While I agree with you that the information needs to be released, why should we have to wait for the vendors to get their act together?

How Complete Nondisclosure proponents might see the other courses of action:

Immediate Full and Public Disclosure: If you release all of the details, then I won’t have this nifty tool in my arsenal.

Limited Disclosure: If you release all of the details, then I won’t have this nifty tool in my arsenal.

Responsible Full Disclosure: If you release all of the details, then I won’t have this nifty tool in my arsenal.

How Limited Disclosure proponents might see the other courses of action:

Immediate Full and Public Disclosure: You are acting irresponsibly. Your spreading of the vulnerability information will lead to many compromises and attacks on systems.

Complete Nondisclosure: By keeping it to yourselves, the vendor never gets a chance to fix the problem, and the public can't mitigate the risks. You are only acting in your own self-interest.

Responsible Full Disclosure: Once we fix the issue, why does anyone else need to know the details?

How Responsible Full Disclosure proponents might see the other courses of action:

Immediate Full and Public Disclosure: You are acting irresponsibly. Your spreading of the vulnerability information will lead to many compromises and attacks on systems. Wait for the vendor to get a chance to fix the problem, and then release the details so developers in the future won’t make the same mistakes.

Complete Nondisclosure: By keeping it to yourselves, the problem never gets fixed. You are only acting in your own self-interest. Think about what is best for the community.

Limited disclosure: By not releasing the facts of the vulnerability after it has been fixed, the world misses an educational opportunity. Those that do not learn from their mistakes are doomed to repeat them.

3. How did this situation occur in the first place?

    With all courses of action, the answer to this question is pretty much the same. The vendors made a mistake or a compromise in the design of their system or software that led to a security vulnerability. Now the question is what is in the public's best interest with regard to keeping damage to a minimum.

4. To whom and to what do you give your loyalty as a person and as a member of the corporation?

    This would depend greatly on who is answering. The ethical answer would seem to be loyalty to the public good. The question is: how is the public best served by this decision? I'm not sure that all of the proponents of the four courses of action have the same loyalties. This will be made somewhat clearer in step 5. While it might be somewhat cynical, these would seem to be the loyalties of the different proponents:

Immediate Full and Public Disclosure: To the public.

Complete Nondisclosure: To themselves.

Limited Disclosure: To the company that created the vulnerable system.

Responsible Full Disclosure: To the public.

5. What is your intention in making this decision?

    The answer to this question is highly dependent on the course of action.

Immediate Full and Public Disclosure: To let the public know, to educate them, and perhaps to get a little fame for myself. Even security researchers have egos.

Complete Nondisclosure: To have exploits in my arsenal.

Limited Disclosure: To protect the company, and the end users.

Responsible Full Disclosure: To protect the public and to increase knowledge about such issues. Personal fame also becomes an issue.

6. How does this intention compare with the probable results?

    I can only judge this based on my past experience in the security field, but the following seem like likely outcomes.

Immediate Full and Public Disclosure: There will be a lot of havoc to start with, but the problem will get fixed in a short amount of time because of the pressure exerted by end users.

Complete Nondisclosure: The researcher will have a 0-day for a while that they can use, but if one researcher can discover it, so can others. The chances of it remaining undiscovered forever seem rather slim.

Limited Disclosure: The vendor will have more time to fix the vulnerability, but may drag their feet since fixing vulnerabilities does not directly add to the bottom line. Eventually, someone else will discover the bug, or perhaps they already have and are exploiting it secretly. In the meantime, the vulnerability exists and the public does not have the knowledge to mitigate it.

Responsible Full Disclosure: The vendor will have some time to fix the issue, though how much time should be given is up for debate. The public will get a chance to mitigate the issue. On the downside, the small amount of information given to end users so they can mitigate the problem may be just enough for attackers to figure out where the vulnerability lies and to exploit it. After the issue is fixed, full details will be released for the sake of advancing the field.

7. Whom could your decision or action injure?

Immediate Full and Public Disclosure: The vendor may suffer because of the bad press and having to pay developers to fix the vulnerability. The public may suffer because of attackers exploiting the vulnerability. The security researcher may suffer legal action from the vendor.

Complete Nondisclosure: The public may suffer because of attackers exploiting the vulnerability.

Limited Disclosure: The public may suffer because of attackers exploiting the vulnerability while waiting for the vendor to fix the issue.

Responsible Full Disclosure: The vendor may suffer because of the bad press and having to pay developers to fix the vulnerability. The security researcher may suffer legal action from the vendor. The public that is slow to patch may suffer because of attackers exploiting the vulnerability. It's also possible that the limited information given to end users so they can mitigate the problem will allow less ethical researchers to figure out how the vulnerability works and exploit it for personal gain before the vendor has a chance to patch.

8. Can you discuss the problem with the affected parties before you make your decision?

Yes, but how open a vendor will be to the issues varies. Two common complaints are that vendors will either ignore the problem, or threaten legal action.

9. Are you confident that your position will be as valid over a long period of time as it seems now?

    This question will have to be answered based on my personal experience in the field. It should be noted that actual proponents of each of the courses of action are likely to feel differently.

Immediate Full and Public Disclosure: I don’t believe proponents of this course of action would care much about what happens in the long view.

Complete Nondisclosure: I'm not confident at all that this course of action will be valid over a long period of time; history seems to show that if one person can find a vulnerability, so can others.

Limited Disclosure: If the vendor is quick about fixing the problem, then this course of action seems fine. However, some vendors in the past have procrastinated in fixing bugs because they did not see the need to fix a problem others did not know about.

Responsible Full Disclosure: I’m fairly confident that this course of action will be valid over a long period. There is a chance of problems, but it seems the least problematic of the four courses of action.

10. Could you disclose without qualm your decision or action to your boss, your CEO, the board of directors, your family, society as a whole?

    The following would be my reaction if I were to follow the course of action specified.

Immediate Full and Public Disclosure: I think this course of action would make me look reckless. I do not think I would like other people knowing that I went this route.

Complete Nondisclosure: I think this course of action would make me look selfish. I do not think I would like other people knowing that I went this route.

Limited Disclosure: How I would feel about telling the world depends on the outcome, and how responsible the vendor is about fixing the vulnerability. This course of action already presupposes that you would not tell society as a whole.

Responsible Full Disclosure: Yes, I would have no problem revealing this decision, though I might worry about litigation if I’m an outside researcher, or being fired if I were a researcher within the vendor’s firm.

11. What is the symbolic potential of your action if understood? Misunderstood?

Immediate Full and Public Disclosure: I'm not sure there is really a way to misunderstand the nature of this course of action. Either way, understood or misunderstood, it would likely be seen as reckless and irresponsible.

Complete Nondisclosure: Depending on the ethics of the person evaluating my decision, I will either be seen as a shrewd tactician for keeping the vulnerabilities to myself, or as a self-serving sociopath. Remember, being a sociopath means never having to say you're sorry. :)

Limited Disclosure: If understood, I might seem responsible and loyal to the company; if misunderstood, I could be seen as working only in the interest of the vendor without caring about the public at large.

Responsible Full Disclosure: If understood, I would be seen as someone working for the good of all of the stakeholders; if misunderstood, I would be seen as no different from the proponents of Immediate Full and Public Disclosure. Non-techies may not understand the differences, and PR folks would rather look good than be good.

12. Under what conditions would you allow exceptions to your stand?

    It should be noted that the threat of litigation is likely to change any of these groups’ stands. Barring that, these seem to be the likely circumstances under which the proponents of the different courses of action would change their selection.

Immediate Full and Public Disclosure: I cannot foresee this group making any exceptions; it's very much an all-or-nothing sort of proposition.

Complete Nondisclosure: This group may make exceptions if they think they can make some sort of gain by exposing the vulnerability. Researchers have been paid in the past for finding vulnerabilities, so there may be a monetary incentive to reveal the details.

Limited Disclosure: I cannot foresee this group making any exceptions; it's very much an all-or-nothing proposition.

Responsible Full Disclosure: If the risk is very great, and it is unlikely that there will be a fix for the issue, then perhaps saying nothing would be the best course of action.

Now let’s move on to a different model to test with.

Blanchard and Peale model
http://www.ericdigests.org/pre-9220/focus.htm

1. Is it legal?

    Ok, right here and now, I've got to say I dislike this model, since I see ethics and law as only tenuously connected. Still, let's go with it. The answer for all courses of action is a tentative yes; however, there are exceptions, which I will cover in the legal section of this paper. The specifics of the vulnerability, and how it was found, greatly change the legality of the matter.

2. Is it balanced?

Immediate Full and Public Disclosure: This is not really balanced, as this choice does not take into account the interests of the vendor or the general public.

Complete Nondisclosure: No, this is not balanced. The only stakeholder whose interest is addressed by this decision is the security researcher’s.

Limited Disclosure: This may be balanced if the vendor is responsible and fixes the vulnerability in a timely manner, which would serve the interests of both the vendor and the general public. However, future system developers may not benefit from the knowledge gained.

Responsible Full Disclosure: This seems the most balanced of the four courses of action. If things go as planned, the vendor has a chance to fix the problem, the end users get a chance to mitigate the problem before a fix is released, future system developers get another case example to learn from and the researcher can get credit for the discovery.

3. How does it make me feel?

Immediate Full and Public Disclosure: Choosing this course of action would make me feel irresponsible.

Complete Nondisclosure: Choosing this course of action would make me feel selfish, but safer from legal hassles.

Limited Disclosure: Depending on the outcome, I may feel like I am doing what is best for the company or I may feel like a company sellout.

Responsible Full Disclosure: I would feel good about this decision as I believe I’ve given the vendor time to fix the issue while letting the public know enough to mitigate the problem. I also get to educate the public about this sort of vulnerability after the problem has been fixed.

    Ok, so I've applied two of the common ethical models; now for my personal thought process on how to decide what is ethical when it comes to disclosure. At times, giving the world information about a vulnerability could be like giving a lighter and an espresso to a five-year-old: fun to watch if it's not in your neighborhood, but still a bad idea. That said, trying to keep secrets about vulnerabilities, or forbidden knowledge in general for that matter, is like the proverbial genie in the bottle. If one person can figure it out, so can another. I feel everyone knowing something helps even the playing field, especially from a mitigation standpoint, and is better than letting only an elite few have the knowledge. Still, there are some things I would not want to reveal to the world until I had time to mitigate the damage they could cause (making sure that hyped-up five-year-old was playing in an asbestos room, EPA notwithstanding).

Conclusion

    Of the four courses of action listed, only two seem to possibly be ethical by normal standards: "Limited Disclosure" and "Responsible Full Disclosure." Of the two, Responsible Full Disclosure seems the most balanced for its effect on the various stakeholders. Both have their potential ethical issues:

Limited Disclosure: The vendor may procrastinate on fixing the problem, or may decide to do nothing since no one knows about the problem. In the meantime, end users could be compromised if criminals have figured out the issue on their own.

Responsible Full Disclosure: The information given to the general public may be just enough for criminals to "reverse engineer" the vulnerability and attack end users.

    Of those two courses of action, I would choose Responsible Full Disclosure, but would want the following steps followed:

  1. After the vulnerability is understood, give the vendor a reasonable amount of time to fix the vulnerability.
  2. If the vulnerability is severe and likely to be exploited before the vendor has a chance to fix it, then give enough information to the public so they may mitigate the threat.
  3. If the vendor does not wish to fix the problem, or is taking an unreasonable amount of time, and the vulnerability is likely to be discovered by others in the mean time, then immediately give enough information to the public so they may mitigate the threat.
  4. Publish full information about the vulnerability so that future system designers can learn from it.

Legal Considerations

    While finding a vulnerability and publishing information about it is generally legal, there are exceptions. There is the general tort of slander/libel if the vendor claims the vulnerability does not exist, but this can be countered by simply proving the truth with "proof of concept" code for the vulnerability. Disclosure of trade secrets is also a worry. Injunctions have been issued in the past to keep researchers from disclosing results, even after the vulnerability had supposedly been patched (Cisco v. Lynn):

http://news.bbc.co.uk/2/hi/technology/4724791.stm
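As an aside on "proof of concept" code: a PoC need not be a weaponized exploit. The sketch below is entirely hypothetical (the service, host, port, and protocol are all invented for illustration) and shows how a researcher might document that a flaw is real, against their own lab machine, without handing attackers a finished exploit:

    import socket
    import time

    # Entirely hypothetical proof of concept: an over-long username field
    # crashes the (made-up) login service on the researcher's own lab box.
    HOST, PORT = "testbox.example.com", 4444

    def service_alive() -> bool:
        try:
            with socket.create_connection((HOST, PORT), timeout=5):
                return True
        except OSError:
            return False

    # Send the oversized field, then give the service a moment to fall over.
    with socket.create_connection((HOST, PORT), timeout=5) as s:
        s.sendall(b"USER " + b"A" * 5000 + b"\r\n")
    time.sleep(2)

    if not service_alive():
        print("service no longer answers: vulnerability confirmed")
    else:
        print("service survived: no crash observed")

Demonstrating a crash rather than full code execution is usually enough to prove the truth of a claim while giving attackers as little help as possible.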

    A bigger issue is certain laws that have been passed over the last decade and have caused a "chilling effect" on security research. Sometimes laws have been stretched to suppress research, as in Massachusetts Bay Transportation Authority v. Anderson, where the MBTA used the Computer Fraud and Abuse Act to stop MIT students who were preparing to give a presentation on the transit fare system at the Defcon security conference:

http://www.eff.org/files/filenode/MBTA_v_Anderson/mbta-v-anderson-complaint.pdf

    Also, the DMCA's anti-circumvention measures have been used to stymie publication of research into the vulnerabilities of DRM (Digital Rights Management) and other encryption systems:

http://www.aftenposten.no/english/local/article696330.ece

    Given the above and other examples, it might be best for security researchers to keep a lawyer on retainer. Even being a responsible and ethical security researcher can have its legal hassles.
