The CISO as a Choice Architect: A Conversation with Malcolm Harkins


Malcolm Harkins is well-known in information security circles. Early in his career at Intel, Harkins held positions in finance and procurement before moving into information security roles. After serving as Intel’s chief information security officer (CISO) for seven years, he became the company’s first Vice President and Chief Security and Privacy Officer (CSPO), responsible for managing the risk, controls, privacy, security, and compliance activities for all of Intel’s information assets, products, and services. Harkins left Intel in 2015, taking the chief security and trust officer position at Cylance. He’s currently the chief security and trust officer for Cymatic, a board member and advisor to other companies, and an executive coach to CISOs and others in information risk roles. The second edition of his book “Managing Risk and Information Security: Protect to Enable” was published in 2016. A frequent speaker and contributor to several publications, Harkins continues to focus on driving security industry accountability and the ethics around technology risk, social responsibility, and the total cost of controls.

I recently (and virtually) sat down with Harkins to discuss his view of the CISO’s role, which he describes as that of a “choice architect,” the risks CISOs face, and how they should manage and mitigate those risks. This post is the first of two covering the interview. (With Harkins’s approval, I’ve edited the questions and answers for brevity and clarity.)

The CISO Is a Choice Architect

Jamie Lewis (JL): I’m intrigued by your concept of a “choice architect.” When did you start thinking about your role that way?

Malcolm Harkins (MH): It goes back to the early days of my career. As a procurement person, I influenced spending, product prices, headcount, and all that stuff to hit a P&L target. There were some areas where I had decision authority. But in most areas, I couldn’t deny something outright. I could slow it down a little or nudge it in a different direction. But a corporate officer who wanted to spend the money could go up the chain and say, “It's in my budget. Get this idiot analyst out of my way.” It came down to how I could architect the choices, blend the perspectives enough to get the best outcome. Everybody's job is to optimize. Being a CISO is no different. I hear a lot of CISOs say they have all this accountability and responsibility, but they don't have the full control to make things happen. To be honest, it drives me nuts. Nobody has full authority. Even CEOs don’t have control over everything. I say quit whining and do your job. 

JL: So how and where do CISOs architect choices?

MH: CISOs have two battlefields. There's the external battlefield we talk about every day: the threat actors, agents, vulnerabilities, all that type of stuff. The other battlefield is internal: the budgets, bureaucracies, technologies, and behaviors. You have to be a choice architect to manage on that internal battlefield. 

Some decisions are yours to make. Other people make some decisions, and you work to influence those decisions in a different direction. And that’s often a question of how you tell a story. How can you evoke an emotional response? How do you portray the risks and choices in the right context? Not only the risk to the business but the risk to the customers and perhaps societal risks. Because they’re all different risk portraits, and depending upon how you tell that story, you'll evoke a different response, which will frame not just the immediate decision, but decisions people make later.

And then sometimes you're stiff-arming people. I’ve always said risk management is a contact sport. It's like playing hockey. Checking people into the glass is part of the game. A good player tries not to use the stick, but it is a contact sport. And if you're unwilling to have contact, you shouldn’t be in the role. You can politely bump somebody into the glass. It’s a way of saying, “Hey, I'm letting you know I'm here, and I could have planted you into it, but I didn’t.” And that's going to frame their choice the next time.

The Nine Boxes of Controls: The CISO’s Choice Tool

JL: Your “Nine Boxes of Controls” matrix deals with how security people choose to approach controls, and the impact those choices have on the business. 

MH: Exactly. How much friction are you creating on the customer experience and business velocity? Friction can generate a lot of resistance and make things worse. While I focus on security controls, I believe that control is an illusion. Security teams often put a high-friction control in place, thinking they’ve controlled for a risk. But all they’ve done is cause people to drive around the control. They’ve wasted money, slowed the business, and created unmanaged, unseen risks.

Instead, they should design the friction in a way that alters the path, enabling what the business needs while allowing the security team to manage the risk. You can design the physical layout of a site, guiding people down a particular path. The most effective and efficient traffic control mechanism is a roundabout. Traffic continues to flow. You're not burning fuel, and there’s less mortality, less damage to cars. Why is that? Roundabouts make people risk-aware, so they make better decisions. You can do that logically too. Security teams need to shape the path so that people become more risk-aware, not pretend they can control everything. 

There's another good transportation example. I think it was in Chicago along the Great Lakes. There's a long straightaway, and then a big curve. People would go down the straightaway too fast, wouldn't anticipate the curve, and crash. They put up all these screaming warning signs, and they had no impact on the crash rate. So they shortened the distance between the stripes in the middle of the road, which increased people’s perception of speed and caused them to take their foot off the gas. The signs failed, but an optical illusion got people to slow down. There are tons of examples like that in the world that have nothing to do with information security or IT. I look everywhere for them. It makes me think differently. It causes me to explain things differently. It causes me to architect technology, business processes, and choices differently.

Figure 1: The Nine Boxes of Controls


JL: If I’m a CISO looking at the Nine Boxes of Controls, moving down and to the left in that matrix to make myself less vulnerable makes perfect sense. But how does one do that?

MH: It’s a control philosophy and strategy dialogue. You can map your spending, both on people and technology, onto the matrix, and it will give you an idea of where you’re anchored. And let’s say you’re doing pretty well. Your friction is moderate; you're in the center, not the top right. Without an outcome-based strategy, inertia will always pull you up and to the right because of compliance regimes and other such things. You need a strategic and philosophical focus on continuing to shift controls left and down. You need to show executives what that's going to look like over time and what investment you need to make to get innovation going. And you need to show what the long-term returns on that investment will be. You have to show how, as technology spending grows, you’re going to ask for a smaller percentage of that spend, with lower friction on the business and the user experience, and with lower liability.

Even with an economic shockwave like COVID-19, remember that innovation comes through starvation. Why not figure out how to innovate your way out of it while shrinking your budget? But a lot of IT folks don't think that way. Some CIOs just focus on cutting costs, saying, “we're x percent of revenue, and that's way too high, so I'm going to cut it.” They look like heroes in the short term, but ten years out, it’s clear they screwed up the business. Someone who's more strategic isn’t looking at it as purely a cost line but is architecting in line with the business, toward a specific outcome. Yes, it's a cost center, but if you manage it correctly, you can do good things, which goes back to making choices. What are you prioritizing with your choices, and how do you influence the organization along those lines? Can you look at value differently?

JL: Can you give me an example of how you’ve done that in your career?

MH: Around 2004, when Intel launched the Centrino platform, rogue WiFi access points were popping up left and right, as they were at most companies. I had a budget of a few million dollars to solve the problem by deploying a product that would help us shut them down.

But I discovered that the IT organization, which was budget-constrained, was charging a business unit $50,000 to light up a building for wireless, and then $10 per user, per month. It was an economics problem, and we were creating it. Somebody from a business unit could go to Fry's, buy a cheap wireless router, and light up a few thousand square feet. So I took the money they gave me to fight rogue access points and gave it to the network team to deploy ubiquitous wireless.

My security team was freaking out, saying wireless was insecure and we shouldn’t do it. But my point of view was different. People wanted and needed wireless because it helped them do their jobs, and they were going around us. By shaping the path, we got device-level identity and control, with encrypted traffic. It would have cost a gazillion dollars to get that kind of capability implemented on our physical network. We enabled mobility, lowered costs, lowered friction, and got better security. And we didn't play whack-a-mole. Any rogue access point stood out like a beacon, and 99 percent of them came from the product R&D labs. We just had the labs register their experiments so we wouldn't react to them.

And we solved other risk issues by leaning into the change. Everyone at that time was worried about mobility risk and data protection with laptops. But I said, “If everybody has a laptop and a building blows up, they'll still be able to work unless they were in the building when it blew up.” People thought I was nuts. And then the SARS outbreak came. An employee contracted it, and we had to shut down the Hong Kong sales office to do a full clean. Everyone was able to work from home. We had ice storms in Oregon in 2007, and seven or eight thousand employees couldn’t get to the office. We didn't miss a beat. Our lost and stolen rate was low because people had their kids’ soccer photos on their laptops. They had their taxes on them. They saw the laptop as theirs and took care of it for the most part. We enabled the environment. We lowered costs. But I took a completely different philosophical view than the rest of my peers.

JL: In other words, don’t fight change. Lead it in a way that allows you to manage the risk.

MH: Exactly. It's our job to understand, manage, and mitigate risk. How can you do that if you aren’t the first mover on the riskiest thing in your company? The security teams need to be the risk-takers, to be at the forefront so they can shape the path. Manage the risk instead of being the ones always saying no. If you're the one experimenting with it, you can be the one to figure out how to manage the risk before everybody else gets there.

It's a philosophical thing, but it's a choice. I was running toward wireless. I was running toward mobility. I was running toward virtualization. The security team did the first big data implementation at Intel. We were working on the instrumentation and analytics, predictive intelligence, and the early machine learning stuff. I wanted to be at the forefront of it so I could understand it, capitalize on the benefits for my team, and be in front of the risk before everybody else got there.

Running Toward Risks

JL: What should security teams be running toward right now? 

MH: I think we should be running toward IoT. If we’re running toward it, we can find all the scary risk issues earlier. Then we can develop some level of mitigation for them and give feedback to the creators and developers on the technology side.

Artificial intelligence and machine learning are other areas where the security team should be at the forefront. Every company is going to start using them if it isn't already. Security teams also need to think more broadly about social responsibility and ethical implications, and go beyond the traditional scope of enterprise IT. If there's an unintentional bias in an AI algorithm, isn't that an integrity issue? And isn't an integrity issue with technology a part of the security scope? Some may say, “That’s not a cyber risk.” So they’re saying that the marketing team’s AI-based thing that has an inherent bias, that discriminates against people, or that exposes them to a privacy risk because of the inferences it can make, isn’t their problem?
Cement companies are putting sensors in concrete, and if the CISO of a cement company is only worried about the traditional IT stuff, then that company has a problem. Security teams need to broaden their scope beyond traditional IT into all aspects of technology development.

Z-Shaped Individuals

JL: Every company is becoming a technology company in one way or another, but other executives, such as the CEO, determine the security team’s scope, right? It’s at least to some degree a function of where the CISO sits in the organization and whether the executive team sees security as a part of the business’s strategic management capability.

MH: Structure certainly drives behavior. But the other aspect is the individual. You have to be able to punch above your weight, and you have to understand the business context, the technical context, and the risk, security, and controls context. That’s what it takes to operate at an executive level and be a part of those broader dialogues. You’ve got to think of yourself like the CFO, like the general counsel.

A lot of people talk about T-shaped individuals. The ideal technologist has a breadth of business acumen (the horizontal stroke of the T) and depth of technical acumen (the vertical stroke). But a CISO needs to be a Z-shaped individual. One horizontal stroke is the breadth of business acumen, and the other is the breadth of technical acumen. The diagonal stroke connecting them is the depth of knowledge around risk, security, and controls. And I've got to wrap around that Z shape a set of values: independence, accountability, and integrity, the willingness to challenge the status quo, and the ability to say “not on my watch” at times. Just like a general counsel or CFO would. But it's a quest, because the business, the technology, the risks, and the vulnerabilities are always changing. So you constantly have to pursue being Z-shaped. People who seek out those abilities will be given the opportunity because they have created it for themselves.

Speaking to the Board

JL: To your point about the CISO being like the CFO or general counsel, there’s an increased demand from company boards on the CISO to demonstrate the efficacy and maturity of their security programs. How should CISOs approach that?

MH: Well, I think some boards are paying lip service to security when they say it’s a significant concern for them, and I have two data points to back that up. First, in December of 2019, the National Association of Corporate Directors did a broad survey of corporate directors. One of the questions was about cyber risk, and 61 percent of the respondents said that they would compromise on cybersecurity for business objectives.
Second, if you look at every significant breach that's occurred, there's an emotional sell-off in the company’s stock at first. But six months, maybe a year later, the stock swings back, often above where it was before the breach. So if I'm on the board, or if I'm the CFO or the CEO, are there any long-term shareholder implications for a data breach? Do we have the economic incentives and accountability to change the behavior? The data suggest not.

JL: Fair points. But the CISO still has to demonstrate some level of program maturity, even if the board is just paying lip service to the issue.

MH: You need an outcome-based program, and you need to rethink “assessments.” Ed Amoroso recently published an excellent paper on security performance management. It argues that periodic assessments need to give way to a continuous process. Think about what CEOs and CFOs do. They have tools like Salesforce and Domo, and they're looking at metrics all the time. It's consistent and continuous performance management, not just a periodic assessment. If you have an outcome-based program and implement the right processes and tools, including a security performance management program, you can have that discussion with the executive team and board.

CISOs should also do something I learned from Dennis Carter, who was Intel’s CMO back when I was a young finance person. He said that, as an executive, you need to have your 30-3-30: your 30-second sound bite, your three-minute elevator pitch, and your 30-minute deep dive. As a security executive, you always need to have a 30-second sound bite on what's working and what's not. You need a three-minute elevator pitch, so if you happen to be with an executive or board member in an elevator, you’re ready. And always have your 30-minute deep dive in your hip pocket in case someone wants to have a more in-depth conversation.
But if you don’t have an outcome-based program and aren’t continuously working the measurements, metrics, and maturation program, you can’t do that. Getting your 30-second sound bite will take a week. Getting your three-minute elevator pitch together will take three weeks of work. And your 30-minute deep dive will take three months to prepare because your performance program isn’t continuous. You're not living in it.

Summary

As Harkins points out, CISOs must orchestrate choices that determine the overall security of their organizations. But the idea of running toward risk to enable what the organization needs runs contrary to the tendency of at least some CISOs to resist change. Given the inevitability of those changes, and the CISO’s responsibility to manage risk in a fashion that enables the business, Harkins’s approach is worth considering. Stay tuned for the second part of the interview.

Jamie Lewis