Left of the Boom: A Conversation with Malcolm Harkins, Part 2


In part one of my conversation with Malcolm Harkins, we discussed the CISO’s role as a “choice architect.” In part two, we discuss how CISOs misperceive risk and the innovation necessary to move security “to the left,” enabling security and privacy by design.  (With Harkins’s approval, I’ve edited the questions and answers for brevity and clarity.)

The Misperception of Risk

Jamie Lewis (JL): [In part one of this conversation], we discussed the need for CISOs to expand their scope. As the security scope broadens, what’s the biggest risk CISOs face? 

Malcolm Harkins (MH): The most significant vulnerability we face is the misperception of risk, which is driven by economics and psychology. The economic side is my P&L, my budget, all those things that drive a level of bias toward the goals I have and how my performance is measured. When Ford shipped the Pinto, they held a patent on a part for a safer gas tank that would have cost $11. But they were facing competition from Volkswagen in the low-end car market, so they brought the Pinto to market faster than any other car they had built. They didn't want to lose the opportunity, so the economics created a strong bias. And we all know the results of that.
The other aspect of this is psychology, and it can manifest in different ways. One is “shiny bauble” syndrome. When people perceive a benefit or an opportunity in something, or they get enamored with it, they psychologically discount the risks.

JL: What about CISOs and the people on their teams? How do they misperceive risk?

MH: If you go back to WWI, the fighter planes that were doing strafing runs had high crash rates because the pilots would get target fixation. They would forget to pull up in time, or they would lose situational awareness and not see the other planes coming at them. The risk team, the security team, the privacy team, the compliance team, they all get target fixation, which causes them to misperceive other risks.
We also misperceive risk because of how we calculate risk and the labels we use in the security industry. Say Microsoft issues what it calls a high-risk patch. Everybody reacts, thinking they have to apply the patch in a week or a day or whatever. But “highly vulnerable” doesn't necessarily mean “highly exploitable.” Patching reduces some risks, but it increases others, such as downtime and instability. And sometimes the high-risk thing isn't so high-risk because it's not easily exploitable. And sometimes, the low-risk item is highly exploitable. 

So sometimes it’s this world of opposites that you have to think about. You have to take on the attacker's mindset. Are you going to exploit a vulnerability that Microsoft and CERT have blasted to the world and loudly labeled as high-risk? Maybe not, because you know everybody is focusing on it and not paying attention to other things. So if I were the attacker, I might use vulnerabilities that are seen as “lower risk” but are more exploitable and perhaps not on everyone’s radar. The labeling we do as an industry is based on an appropriate risk calculation, but it’s still generalized. Those labels can distort our perception of risk in specific circumstances.
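To make Harkins’s point concrete, consider how a remediation queue changes when it is ranked by likely exploitation rather than by the published severity label alone. The Python sketch below uses made-up vulnerability records and an illustrative weighting; the fields and the formula are assumptions for the example, not a standard scoring method.

```python
# A minimal sketch of exploitability-aware patch prioritization.
# The records, the EPSS-style likelihood values, and the weighting
# are all hypothetical illustrations, not a standard formula.

from dataclasses import dataclass

@dataclass
class Vuln:
    cve_id: str
    cvss_severity: float       # published severity label, 0-10
    exploit_likelihood: float  # e.g., an EPSS-style probability, 0-1
    asset_exposure: float      # how reachable the affected asset is, 0-1

def priority(v: Vuln) -> float:
    """Rank by how likely the flaw is to be exploited here,
    not just by the published severity label."""
    return v.exploit_likelihood * v.asset_exposure * v.cvss_severity

inventory = [
    Vuln("CVE-A", cvss_severity=9.8, exploit_likelihood=0.02, asset_exposure=0.1),
    Vuln("CVE-B", cvss_severity=5.4, exploit_likelihood=0.85, asset_exposure=0.9),
]

for v in sorted(inventory, key=priority, reverse=True):
    print(f"{v.cve_id}: priority={priority(v):.2f}")
```

In this toy inventory, the “medium” finding that is actively exploitable on an exposed asset outranks the “critical” one that is hard to reach, which is exactly the inversion Harkins describes.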

JL: Short-term vs. long-term thinking is another misperception of risk.

MH: Absolutely. Many CISOs and cybersecurity folks get too focused on the incident cycle right in front of them. They’re just thinking tactically, at best a couple of quarters out. You’ve got to be a long-range strategic planner. I worked on pandemic response plans at Intel in 2002, three months after I started running business continuity. Some of the lack of preparation I see with COVID-19 today is just a failure of imagination.
From a risk perspective, if you can imagine it, it's possible. And if it's possible, even if you can't prevent it, how do you prepare for it? Even if it's just getting mentally prepared or taking your organization through an exercise, it can make a difference. Long-range planning gives you the ability to do something I call a “risk nudge”: a tiny change in the path of a distant risk makes a big difference. It’s like an asteroid that’s a gazillion miles out there; a slight nudge that far out pushes it off the extinction path it was on. If you've contemplated a risk, talked to people about it, and gotten them to understand it, they're going to make slightly different choices. Get people thinking about a distant risk, and they'll take a slightly different path. It may affect how they architect a future system.

Integrating Security and Development

JL: Policies are supposed to guide choices, but there’s often a wide gap between the policies and how well developers and other people who make choices understand them. How do you make that work? 

MH: A typical security and privacy policy document will run upwards of 150 to 200 pages. When you have that many rules to follow, how will a developer ever remember all that? Even the people who created it don't remember it all. And policies change, so you can’t just sit back and say, “I’m done. I gave them a set of rules to follow.” 

I shifted to a focus on principles a long time ago. When you think about a set of principles, there are five, maybe ten things you want to emphasize. It's much easier to train people on that set of principles. You can think of the principles as a compass, and those things don’t change. They’re more of a constant. And if you get people to understand and believe in the principles, they're always going to head in a directionally correct fashion, which means a substantial reduction in risk. When they're uncertain, they're going to seek guidance, and that’s where the specific policies come in. Make it easy for them to ask questions. The policies then become the equivalent of GPS coordinates, helping them get to a particular destination. But even if they're off by a couple of degrees on the GPS coordinates, think of all the risks that you reduced, because they were damn close.

JL: But isn’t it fair to say that developer tools and environments don’t make security or privacy by design easy for developers? Isn’t that a part of the choice architecture?

MH: Yes, there’s a real lack of innovation in this area. Everybody's focused on endpoints and networks, and those are obviously important. And we’ve got some conversations going on about the security development life cycle and privacy by design. But the lion's share of security products—and the lion's share of security spending—are all post-implementation. We’ve got to get left of the boom. The boom occurs in the design, development, and implementation cycle, and there’s too much friction in that cycle. 
If I’m in a development environment and I have questions, I've got to get out of what I’m doing, go to a separate system, read a bunch of policy documents, get trained on a bunch of things, and go through all these checks. It’s no wonder that things get sloppy. A developer’s job lives in a specific toolset and a particular workflow, and we're taking them somewhere else, five steps out of their process. We've got to figure out how the security development life cycle and privacy by design get built into the development process. Security and privacy can’t be disjointed things; they need to be connected and built into that process. It would be nice if we had some innovative frameworks with cross-checks built in. Some of this is happening. Tools such as Checkmarx have emerged, and they're good. But they're still bolt-ons to the development process instead of being built into it.
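One low-friction way to move a check into the developer’s existing workflow is a Git pre-commit hook, so the feedback arrives before code ever leaves the developer’s machine. Here is a minimal Python sketch of that pattern; the secret-matching patterns are illustrative placeholders, not a complete scanner, and a real deployment would wrap a dedicated tool instead.

```python
#!/usr/bin/env python3
# A minimal sketch of a pre-commit hook that runs a security check
# inside the developer's normal workflow instead of a separate system.
# The patterns below are illustrative, not a complete secret scanner.

import re
import subprocess
import sys

SUSPICIOUS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                     # AWS access key id shape
    re.compile(r"-----BEGIN (RSA|EC) PRIVATE KEY-----"), # embedded private key
    re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"), # hard-coded password
]

def staged_files() -> list[str]:
    """List files added, copied, or modified in the staged commit."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def main() -> int:
    findings = []
    for path in staged_files():
        try:
            text = open(path, encoding="utf-8", errors="ignore").read()
        except OSError:
            continue  # deleted or unreadable; skip
        for pattern in SUSPICIOUS:
            if pattern.search(text):
                findings.append((path, pattern.pattern))
    for path, pat in findings:
        print(f"possible secret in {path}: matches {pat}", file=sys.stderr)
    return 1 if findings else 0  # nonzero exit blocks the commit

if __name__ == "__main__":
    sys.exit(main())
```

Saved as .git/hooks/pre-commit and made executable, the script blocks any commit whose staged files match one of the patterns, putting the check in the developer’s own toolset rather than five steps out of their process.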

JL: That's part of the promise of DevOps and -- for lack of a better word -- DevSecOps. The mindset of “you build it; you run it” moved reliability assurance to the left. Can DevSecOps move security to the left? 

MH: Yes, but there's a lot of process and structure work to do, such as accountability frameworks. Even with all of those things, we need better security tooling for developers. Everyone’s moving to containers like Docker. But if you ignore how the container is constructed, you end up with an image that has more holes in it than Swiss cheese. We need tooling that helps developers write secure images. Developers could check boxes that implement specific security and compliance requirements, ensuring they build the container in accordance with those requirements, and the security and compliance teams could validate that the requirements are met. You improve the whole process because you made it easy for the developer to do the right thing.
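Harkins’s “checkbox” idea can be read as policy-as-code: each requirement becomes an automated check that runs where the image is built. The Python sketch below encodes three illustrative Dockerfile rules; the rules themselves are assumptions chosen for the example, and in practice the policy set would come from the security and compliance teams or from an established linter such as hadolint.

```python
# A minimal sketch of container build requirements expressed as
# automated Dockerfile checks. The rules are illustrative examples,
# not a complete compliance policy.

import sys

def check_dockerfile(path: str) -> list[str]:
    violations = []
    lines = open(path, encoding="utf-8").read().splitlines()
    has_user = False
    for line in lines:
        stripped = line.strip()
        if stripped.upper().startswith("FROM") and stripped.endswith(":latest"):
            violations.append("base image pinned to :latest (unpinned versions drift)")
        if stripped.upper().startswith("USER"):
            has_user = True
        if "curl | sh" in stripped:
            violations.append("piping a download straight into a shell")
    if not has_user:
        violations.append("no USER instruction; container will run as root")
    return violations

if __name__ == "__main__":
    problems = check_dockerfile(sys.argv[1] if len(sys.argv) > 1 else "Dockerfile")
    for p in problems:
        print(f"FAIL: {p}")
    sys.exit(1 if problems else 0)
```

Run in CI against each Dockerfile, a nonzero exit fails the build, so the requirement is enforced during construction and the compliance team can audit the rule set instead of inspecting every image after the fact.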

Innovation in Security Tooling

JL: It’s addressing problems, not just treating symptoms. I’ve long said that we need more innovation in tooling. 

MH: Look at the explosion of IoT devices, or the notion of a “minimum viable product,” which often means bolting on things like security later. We've got a long way to go, and a lot of it is structural and organizational. We've just got to do better. The software business is like the environment was 50 years ago, when we were polluting the world left and right because it wasn't worth the money to scrub the emissions. And look at what we created. We've got to shift the mentality: we have an obligation to do it right while pursuing business opportunities. That requires innovation that makes things easier for developers and implementers without stripping away profits or impeding business velocity. If we can innovate around that and get left of the boom, then we'll be able to drain much of the swamp that we've currently got.

But it's a bit of a chicken and egg problem. We got what we got because we weren’t forward-thinking enough over the past few decades. It’s one thing to say that as a nation, we’ve got to bite the bullet and spend in this area and pass a bunch of cybersecurity legislation. But I would rather have us invest in this side of cybersecurity to get in front of what’s coming. I think if we do, we would be way better off.

JL: So we need to incentivize the investment somehow?

MH: Yes. R&D tax credits, or the Cyber Shield Act that Senator Edward Markey introduced a couple of years ago, which didn’t pass. We have to deal with that side of the issue because all the stuff we’re doing, while necessary, is not sufficient to create strategic risk reduction in the long term.

Summary

As Harkins points out, the causes of many of the worst security failures organizations experience can be traced back to implementation mistakes. In theory, moving security to the left can address that problem more effectively. But there’s a lot of work to do before we can accomplish that goal in practice. CISOs will likely need to expand the scope of security and risk beyond traditional IT, and security teams will have to transform, both operationally and organizationally. And the IT industry must innovate, with a strong focus on how to integrate security into the development process.

Jamie Lewis