Evolving Threat series — Bug bounties and the Cobra Effect

Chetan Conikee · ShiftLeft Blog · Dec 14, 2019

Have you ever tried to fix your cybersecurity posture, only to make things worse?

That’s called the Cobra Effect: when a well-intentioned solution results in unintended consequences.

In a real-world system there are multiple reinforcing connections between events, and these often produce unpredictable feedback loops.

In colonial India, Delhi suffered a proliferation of cobras, a problem very clearly in need of a solution given the sorts of things that cobras bring, like death. To reduce the number of cobras slithering through the city, the local government placed a bounty on them. This seemed like a perfectly reasonable solution. The bounty was generous enough that many people took up cobra hunting, which led to exactly the desired outcome: the cobra population decreased. And that’s where things get interesting.

Credit: Sketchplanations (https://www.sketchplanations.com/post/186110724996/the-cobra-effect-the-story-goes-something-like)

As the cobra population decreased and it became harder to find cobras in the wild, the common folk became rather entrepreneurial. They started raising cobras in their homes, which they would then kill to collect the bounty as before. This led to a new problem: local authorities realized that very few cobras were evident in the city, yet they were still paying out bounties at the same rate as before.

City officials then assessed the situation and canceled the bounty. In response, the people raising cobras in their homes also did a reasonable thing: They released all of their now-valueless cobras back into the streets. Who wants a house full of cobras?

The unintended consequence of the cobra eradication plan was an increase in the number of cobras in the streets. This case has become the classic example of an attempt to solve a problem that ends up exacerbating the very problem the rule-makers intended to fix.

How does this anecdote relate to bug bounties?

When Bugs Become Cobras

Software security is difficult in large part because it is unprovable: it is practically impossible to find every bug in a software component. Hence we try to stack the odds in our favor by layering systems that discover vulnerabilities: we hire experienced engineers, choose a secure programming language, schedule formal design reviews, mandate peer code reviews during development, and follow up with automated static analysis, red teaming, blue teaming, penetration testing, integration testing, and so on.
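To make this concrete, here is a minimal, hypothetical sketch of the kind of flaw those layers exist to catch: a tainted value concatenated into a SQL query. The table, data, and function names are purely illustrative; the point is that a peer review or a taint-tracking static analyzer should flag the first function long before a bounty hunter does.

```python
import sqlite3

# Hypothetical example: the kind of injection flaw that peer review or
# static analysis is expected to catch before it ever reaches production.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_vulnerable(name: str):
    # Tainted input concatenated straight into SQL: a classic injection sink.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_fixed(name: str):
    # Parameterized query: the input is bound, never interpreted as SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

# An attacker-controlled value turns the vulnerable query into "return everything".
payload = "' OR '1'='1"
print(find_user_vulnerable(payload))  # leaks every row
print(find_user_fixed(payload))       # safely returns nothing
```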

Bounty programs sit at the end of this process, which means every bug found from the outside is one that slipped past everything mentioned above. In essence, bug bounties act as a conduit for security researchers in the public space to report inherent flaws and vulnerabilities in software. Bounty payouts are increasing sharply year over year; HackerOne has reported a staggering $42M in payouts to date.

The pragmatist in us hates, but cannot ignore, the fact that our time would be better spent analyzing our codebases to identify potential vulnerabilities than fielding bounty reports.

Why aren’t we spelunking through our codebases to discover conditions that can manifest as vulnerabilities?

According to GitLab’s global DevSecOps report: “68% of the security professionals surveyed believe it’s a programmer’s job to write secure code, but they also think less than half of developers can spot security holes.” And that’s a problem.

Nearly half of security pros surveyed, 49%, said they struggle to get developers to make remediation of vulnerabilities a priority. Worse still, 68% of security professionals feel fewer than half of developers can spot security vulnerabilities later in the life cycle. At the same time, nearly 70% of developers said that while they are expected to write secure code, they get little guidance or help. One disgruntled programmer said, “It’s a mess, no standardization, most of my work has never had a security scan”. Another problem is it seems many companies don’t take security seriously enough. Nearly 44% of those surveyed reported that they’re not judged on their security vulnerabilities.
~ excerpt from the report

This Twitter thread is a good example of the variance in opinion across the board.

As engineers, we vet our codebases for unexpected behaviors by incorporating static analysis and fuzzing. But if we are asked to pull our timeline in, what will we trade: new feature requests, performance, quality, or security?
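As a sketch of what that vetting looks like in practice, here is a naive fuzz loop over a hypothetical parser, using nothing but the standard library. A real campaign would reach for a coverage-guided fuzzer (AFL, libFuzzer, atheris), but the idea is the same: hammer the code with malformed input and treat any uncaught exception as a finding.

```python
import random
import string

def parse_record(data: str) -> dict:
    """Hypothetical parser under test: expects 'key=value;key=value'."""
    fields = {}
    for pair in data.split(";"):
        key, value = pair.split("=", 1)  # latent bug: blows up on a field with no "="
        fields[key] = value
    return fields

def fuzz(iterations: int = 10_000) -> None:
    # Naive random-input fuzz loop; a coverage-guided fuzzer would mutate
    # inputs based on execution feedback instead of sampling blindly.
    alphabet = string.ascii_letters + "=;"
    for i in range(iterations):
        sample = "".join(random.choices(alphabet, k=random.randint(0, 16)))
        try:
            parse_record(sample)
        except Exception as exc:
            # Any uncaught exception on attacker-controllable input is a finding.
            print(f"iteration {i}: {sample!r} raised {exc!r}")
            return

if __name__ == "__main__":
    fuzz()
```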

Are we going to keep our completed feature turned off for the next two or three sprints while we finish security analysis? Hell no. Why? Most would say that solving for security is a non-functional requirement with no near-term incentives bound to it.

So, incentivization to the rescue, eh?

Until we are incentivized, we will not witness culture change. But is culture change really a consequence of incentives?

Let me quote Alfie Kohn verbatim, since no one captures his commentary on the culture of “rewards” better than he himself can:

Do rewards work?

The answer depends on what we mean by “work.” Research suggests that, by and large, rewards succeed at securing one thing only: temporary compliance. When it comes to producing lasting change in attitudes and behavior, however, rewards, like punishment, are strikingly ineffective. Once the rewards run out, people revert to their old behaviors. Studies show that offering incentives for losing weight, quitting smoking, using seat belts, or (in the case of children) acting generously is not only less effective than other strategies but often proves worse than doing nothing at all. Incentives, a version of what psychologists call extrinsic motivators, do not alter the attitudes that underlie our behaviors. They do not create an enduring commitment to any value or action. Rather, incentives merely — and temporarily — change what we do.

Why do most executives continue to rely on incentive programs? Perhaps it’s because few people take the time to examine the connection between incentive programs and problems with workplace productivity and morale. Rewards buy temporary compliance, so it looks like the problems are solved. It’s harder to spot the harm they cause over the long term. Moreover, it does not occur to most of us to suspect rewards, given that our own teachers, parents, and managers probably used them. “Do this and you’ll get that” is part of the fabric of American life. Finally, by clinging to the belief that motivational problems are due to the particular incentive system in effect at the moment, rather than to the psychological theory behind all incentives, we can remain optimistic that a relatively minor adjustment will repair the damage.

By contrast, training and goal-setting programs had a far greater impact on productivity than did pay-for-performance plans.

The unintended consequence of lacking both incentives and culture is insecure software deployed to production, accompanied by large bounty payouts.

In a true sense, we are professing “ShiftLeft” in belief while practicing “SolveRight.”

Speaking of which, a keenly observant engineer on the inside might ask:

Why are bounty hunters incentivized to discover vulnerabilities while I am not? Can I exploit the “Cobra Effect” to benefit from this unintended consequence? Given that the majority of bounty hunters operate anonymously, can I forge an n-degree relationship with a bounty hunter and trade vulnerabilities for a split of the yield (if the bounty eligibility contract is not well defined)?

Course Correcting your Bounty Programs

At this point you need to pause and examine whether you have an iron-clad bounty-eligibility agreement in place.

One suggestion is to join Bugcrowd or HackerOne, as they have effective governance structures in place. If you decide not to, then define and publish an effective eligibility agreement of your own. Good examples are https://www.microsoft.com/en-us/msrc/bounty-terms and https://www.paypal.com/us/webapps/mpp/security-tools/reporting-security-issues

Critical call-outs in an effective agreement, excerpted from Microsoft’s bounty terms (each describes a party that is not eligible for a payout):

- You are currently an employee of XXX or a XXX subsidiary, or an immediate family (parent, sibling, spouse, or child) or household member of such an employee;

- Within the six months prior to providing us your Submission you were an employee of XXX or a XXX subsidiary;

- You currently (or within the six months prior to providing us your Submission) perform services for XXX or a XXX subsidiary in an external staff capacity that requires access to the XXX Corporate Network, such as agency temporary worker, vendor employee, business guest, or contractor; or

- You are or were involved in any part of the development, administration, and/or execution of this Program.

From a practice standpoint, examine the instruments that measure for vulnerabilities on the left and verify whether their findings correlate with the bounty reports discovered on the right. In an ideal case, the funnel should taper off at the far right/bottom or maintain a balance (as some classes of vulnerabilities are best discovered at runtime, without bias).
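A rough way to check that correlation is to tally the vulnerability classes surfaced by each side. The sketch below uses hypothetical CWE tallies; classes that show up only in bounty reports are your funnel gaps.

```python
from collections import Counter

# Hypothetical data: CWE categories of findings from internal ("left") tooling
# and from external ("right") bounty reports over the same period.
internal_findings = ["CWE-89", "CWE-79", "CWE-79", "CWE-22", "CWE-89"]
bounty_reports    = ["CWE-79", "CWE-918", "CWE-639", "CWE-918"]

left = Counter(internal_findings)
right = Counter(bounty_reports)

# Classes seen only in bounty reports point to a blind spot in the left-side funnel.
blind_spots = set(right) - set(left)
print("caught on both sides:", sorted(set(left) & set(right)))
print("bounty-only classes (funnel gaps):", sorted(blind_spots))
for cwe in sorted(right, key=right.get, reverse=True):
    print(f"{cwe}: {right[cwe]} bounty report(s), {left.get(cwe, 0)} internal finding(s)")
```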

Colin Greene from Facebook wrote a very interesting piece titled “Designing Security for Billions,” in which he explains how Facebook iteratively adjusted its practices to automate code analysis.

Reducing Programming Errors: Funnel (source: https://about.fb.com/news/2019/01/designing-security-for-billions/)

If your net spend on bounties begins to increase exponentially, it is time to pause and apply first-principles thinking to literally shift security practices to the left.
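A back-of-the-envelope check is enough to spot that trend. The numbers below are hypothetical; sustained quarter-over-quarter growth well above 1.0x is the signal to reinvest on the left.

```python
# Hypothetical quarterly bounty payouts (USD); replace with your own ledger.
quarterly_spend = [12_000, 18_500, 31_000, 54_000]

# Quarter-over-quarter growth ratios: sustained ratios well above 1.0
# suggest the funnel is leaking.
ratios = [later / earlier for earlier, later in zip(quarterly_spend, quarterly_spend[1:])]
print([f"{r:.2f}x" for r in ratios])  # e.g. ['1.54x', '1.68x', '1.74x']
if all(r > 1.3 for r in ratios):
    print("Bounty spend is compounding; revisit left-side controls.")
```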

Lastly, allocate sufficient time for your developers to assess and fix vulnerabilities when they are encountered. Do not integrate security tools into your CI/CD pipeline for namesake and consider yourself secure. Make it a point to understand the results produced during analysis and address them before they percolate downstream and become vulnerabilities.
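For instance, a gate as simple as the hypothetical sketch below (the report path and JSON shape are assumptions; adapt them to your tooling) makes the pipeline fail on high-severity findings instead of quietly archiving a report no one reads.

```python
import json
import sys

# Minimal CI gate sketch: fail the build when the scanner reports
# high-severity findings instead of letting the report rot in an artifact store.
REPORT_PATH = "scan-results.json"   # hypothetical path to your scanner's output
FAIL_ON = {"critical", "high"}

def main() -> int:
    with open(REPORT_PATH) as fh:
        findings = json.load(fh)  # assumed: a list of {"id", "severity", "title"} objects
    blocking = [f for f in findings if f.get("severity", "").lower() in FAIL_ON]
    for f in blocking:
        print(f"[{f['severity'].upper()}] {f.get('id', '?')}: {f.get('title', '')}")
    if blocking:
        print(f"{len(blocking)} blocking finding(s); failing the pipeline.")
        return 1
    print("No blocking findings.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```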

Thanks to Suchakra Sharma for peer reviewing/editing this article.
