Copyright 2020 Tyler Moore. This material is being shared for anyone to reuse in course materials for educational (non-profit) purposes. If you do use this material in your course, I would greatly appreciate it if you let me know by emailing me with the course title and institution for which it is being used.
Learning Objectives
Upon completing this module you should be able to:
- Articulate key concepts from economics, notably incentives and market failures, and understand how they apply to cybersecurity problems.
- Determine the promise and limitations of various policy interventions, notably certification schemes, information disclosure, and voluntary measures.
Why Economics?
In these videos, I’ll explain why economics offers a useful perspective to understand and overcome cybersecurity challenges. I'll introduce the concept of incentives and explain why it is crucial to understanding cybersecurity failures.
Market Failures
What is a market failure and why does it matter?
Economists use the term market failure to describe situations where the real world doesn't live up to the models of perfect competition. In fact, markets fail repeatedly in the same particular ways, so much so that economists have identified several distinct categories:
- Monopoly
- Oligopoly
- Public goods
- Information asymmetries
- Externalities
You’re probably familiar with some of these terms but not others. In a monopoly, a good or service has only one provider. Under these circumstances, the monopolist affects prices by controlling supply. In oligopolies, only a few providers are available.
Most goods can be privately consumed. Material possessions such as cars and houses have individual owners, and no one else can consume the good at the same time. But certain goods cannot be privately consumed, such as investments in national defense or even the air we all breathe. Public goods behave differently than normal private goods in two key ways. First, public goods are nonexcludable: there is no practical way to prevent people who don't pay from consuming the good. Second, public goods are nonrival: when someone consumes the good, this does not limit others from also consuming it. National security is a classic public good. It is nonrival because my benefiting from, say, the protection afforded by the U.S. military does not prevent others from experiencing the same benefits. It is nonexcludable because it is not feasible, let alone ethical, to exclude tax scofflaws from also receiving protection even though they didn't pay for it.
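The free-rider problem that nonexcludability creates can be made concrete with a toy calculation. This is a minimal sketch with illustrative numbers (the values are assumptions, not drawn from the text): provision is worthwhile for society as a whole, yet no individual actor will pay.

```python
# Toy free-rider model for a public good. All numbers are
# illustrative assumptions chosen to show the logic.
N = 10              # number of actors who benefit from the good
benefit_each = 30   # value each actor receives if the good is provided
cost = 100          # one-time cost of providing the good

# Socially, provision is worthwhile: total benefit exceeds cost.
total_benefit = N * benefit_each
socially_optimal = total_benefit > cost

# Privately, no single actor will pay: their individual benefit is
# below the cost, and nonexcludability means non-payers enjoy the
# good anyway once someone else provides it.
privately_provided = benefit_each > cost
```

Because everyone reasons this way, the good goes unprovided even though society values it at three times its cost — which is why public goods such as national defense are typically funded through taxation rather than voluntary payment.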
The last two market failures—information asymmetries and externalities—appear in many cybersecurity contexts. We dedicate several of the subsequent pages to explaining how these work. The presence of a market failure justifies a regulatory intervention and in turn informs how public policy should be designed. Even when public policy interventions are politically impractical, pointing out the existence of a market failure is still useful for two reasons. First, it helps explain why we have suboptimal investment in cybersecurity: some puzzles as to why things don't work can be explained in the context of these failures. Second, it can create opportunities and guidance for private actors to step in and correct the problem.
Information asymmetries and externalities in cybersecurity
These videos describe the two most important market failures afflicting cybersecurity in greater detail: asymmetric information and externalities.
Implications of externalities
Both positive and negative externalities lead to inefficient outcomes from an economic perspective.
Whenever you have a positive externality, you tend to get less of the good than society would like.
Whereas when you have a negative externality, you end up with more of the bad thing than society would like.
In other words, in a world rife with externalities, we end up with less security investment from the good guys and more harm emanating from the bad guys than would be socially optimal.
Policy Interventions
Available policy interventions
When is a policy intervention needed? The presence of market failures justifies policy interventions. What's particularly interesting here is that many traditional interventions don't work well for cybersecurity. So we're going to focus our efforts on three approaches that hold promise in correcting market failures in security:
- Certification schemes
- Information disclosure requirements
- Intermediary liability
To begin, though, we should talk about traditional regulatory approaches. There is a dichotomy between ex ante and ex post approaches. Ex ante means trying to do something about the problem before the bad thing occurs. Ex post approaches, by contrast, let activity proceed and then assign responsibility for failure after something goes wrong.
In safety regulation, you have compliance regimes that try to prevent harm. Ex ante approaches are preferred when the situation you are trying to prevent is so bad that you want to prevent it from occurring at all. You see ex ante approaches used, for example, in nuclear safety regulation, where no one wants to wait around for accidents to happen and then assign blame. It's also worth noting that ex ante approaches can be preferred whenever it is difficult to measure bad outcomes, as often happens in cybersecurity.
In ex post liability, you wait for the bad thing to occur, then assign liability to the party that caused the problem. Ex post liability has been used extensively in many industries, such as the auto industry, but it has not been adopted in the software industry.
Both approaches have significant drawbacks in the context of cybersecurity. Ex ante safety rules can certainly promote a compliance approach to security, which is fine so long as the right measures are being adopted. Unfortunately, sometimes there is drift between the measures that are adopted and those that actually prevent harm. Meanwhile, software liability has considerable downsides. In particular, it could hinder innovation in the development of free and open source software. Arguably, much open source software development would stop if those volunteering their efforts could be held liable for bugs they might introduce. Furthermore, introducing software liability may raise barriers to entry in software development, so that small firms may not enter the marketplace. So there's a real trade-off between innovation and security when it comes to the question of software liability.
Nonetheless, the proliferation of IoT devices raises the potential for product liability rules, long applied to physical products, to extend to their cyber-enabled equivalents.
Policy interventions for cybersecurity
The following videos discuss several mechanisms that can be tried to remedy information asymmetries affecting cybersecurity.
Additional Resources
The paper "The economics of cybersecurity: principles and policy options" supplements the material discussed above. If you are looking for more information on any one of these topics, please take a look.
You can also view the slides used in the videos directly here and here.
Assignments are also available to instructors who incorporate this material into their courses. Contact Tyler Moore if you are interested.