Risky Business: Can we really measure our cyber risk?


By the end of 2016, companies in North America are set to spend upwards of $85 billion on cybersecurity. That’s billion with a B. Despite that investment, the number of breaches and cyber attacks hasn’t decreased. In fact, cyber risk is accelerating: one recent study found that ransomware attacks increased by 172% in the first six months of 2016 compared to 2015, including attacks that operationally paralyzed at least 14 hospital networks.

The pressure on CISOs to measure and reduce cyber risk is real and mounting. But can you really measure something as uncertain as a cyber threat?

At Versus 16, Richard Seiersen, CISO and VP of Trust at Twilio and co-author of “How to Measure Anything in Cybersecurity Risk,” answered that question with a resounding and enthusiastic yes.

Kicking off his talk at Versus, Richard asked the audience, “What do these four things have in common?”

  1. The probability of a mine flooding
  2. Forecasting drought in the Horn of Africa
  3. Predicting fuel requirements for the Iraq War
  4. The likelihood of a cyber breach   

His answer: They can all be measured through probabilistic thinking, leveraging your team as “bookies” to assign probabilities to cyber risk.
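To make the “bookie” idea concrete, here is a minimal sketch of our own (the scenarios and numbers are hypothetical, not a model from the talk or the book): each expert assigns a scenario a probability of occurring this year and a 90% confidence interval for the loss if it does, and a quick Monte Carlo run turns those judgments into an annual loss distribution.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical calibrated-expert judgments: annual probability that each
# scenario occurs, plus a 90% confidence interval for the loss if it does.
scenarios = [
    {"name": "ransomware outage", "p": 0.10, "low": 200_000, "high": 3_000_000},
    {"name": "PHI record breach", "p": 0.05, "low": 500_000, "high": 10_000_000},
    {"name": "payment fraud",     "p": 0.20, "low":  50_000, "high":    750_000},
]

def simulate_annual_loss(scenarios, trials=100_000):
    """Monte Carlo over one year: each scenario either happens (with its
    assigned probability) or not; if it happens, draw a loss from a lognormal
    whose 5th/95th percentiles match the expert's 90% interval."""
    total = np.zeros(trials)
    for s in scenarios:
        occurs = rng.random(trials) < s["p"]
        mu = (np.log(s["low"]) + np.log(s["high"])) / 2          # lognormal median
        sigma = (np.log(s["high"]) - np.log(s["low"])) / (2 * 1.645)
        total += occurs * rng.lognormal(mu, sigma, trials)
    return total

losses = simulate_annual_loss(scenarios)
print(f"Chance of losing more than $1M this year: {np.mean(losses > 1_000_000):.1%}")
print(f"Median simulated annual loss: ${np.median(losses):,.0f}")
```

The payoff is a decision-ready answer to a question like “what is the chance we lose more than $1M this year?” rather than a color on a heat map.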

Richard’s session was so well received that he had the chance to expand on these ideas with Mahendra Ramsinghani at CSO.


We wanted to dig a little deeper into these ideas with Richard, so we followed up with a few questions of our own.

At Versus, you told us we first need to understand what we’re measuring and pass the Clairvoyant Test. What do you mean by that?

The Clairvoyant or “Clarity Test” is a means of getting clear about the object of our measurements. That “test” comes from Professor Ron Howard at Stanford.  He is considered the grandfather of “Decision Analysis.” When we are looking to decide on something risky, we want to get as clear as possible about the contents of the thing we are deciding on. To help students get “clear” he posited a clairvoyant that has perfect visibility into the future but with zero judgement. I prefer to say that the clairvoyant is an “idiot savant” who can only respond to explicit tangibles. So, if you asked him, “Will we be breached?”, you would get no response.  But, if you said, “Will we have one or more publicly disclosed, un-patched vulnerabilities that can be exploited by unauthorized users so that they become authorized users who can read 500 or more unencrypted records that include PHI?” then he will respond. This leads to clear thinking that can be used to make better forecasts about probable future loss.

Is being wrong about cyber risk predictions 20% of the time REALLY an improvement over today?

It’s actually 21% inconsistency. That is purely “humans being humans.” We have done the largest industry study on this topic particular to security. The 21% is an ideal situation, where the means (tool) for collecting the SME’s (subject matter expert’s) predictions is not flawed. Contrast that with the fact that almost all of the current means of collecting predictions are horribly flawed. These tools are based on a type of measurement called “ordinal scales.” You have all seen or used these if you are a security practitioner or vendor. “High, Medium, Low,” “Red, Yellow, Green,” “1-10,” “1-5,” etc. all represent common scales used to measure cyber risk. There is ample scholarly work showing that such approaches are “worse than doing nothing.” They amplify risk, and in particular, they are rocket fuel for inconsistency. We can work with 21% through modeling (making robots) that outperforms the collective SME predictions. This is at the heart of what expert systems do when they model physicians, epidemiologists, etc.
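To see the kind of problem he is describing, here is a toy example of our own (the 3x3 bucketing and the numbers are hypothetical): two risks whose expected losses differ by roughly 33x land in exactly the same “Medium/Medium” cell of an ordinal heat map, so the scale gives no basis for prioritizing between them.

```python
# A toy illustration (ours, not Richard's) of "range compression" in ordinal
# risk scales: very different risks can land in the same heat-map cell.

def ordinal_bucket(probability, loss):
    """Hypothetical 3x3 heat-map scoring: likelihood and impact each mapped
    to Low/Medium/High, as in a typical qualitative risk register."""
    likelihood = "High" if probability > 0.5 else "Medium" if probability > 0.1 else "Low"
    impact = "High" if loss > 5_000_000 else "Medium" if loss > 500_000 else "Low"
    return likelihood, impact

risks = [
    {"name": "Risk A", "p": 0.12, "loss":   600_000},   # expected loss ~$72k
    {"name": "Risk B", "p": 0.49, "loss": 4_900_000},   # expected loss ~$2.4M
]

for r in risks:
    cell = ordinal_bucket(r["p"], r["loss"])
    print(f'{r["name"]}: cell={cell}, expected loss=${r["p"] * r["loss"]:,.0f}')
# Both print cell=('Medium', 'Medium') even though B's expected loss is ~33x A's.
```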

How much investment would it take for a company to get from a 20% error rate to a 10% error rate?

Again, it’s really human inconsistency. We still need the human predictions, particularly when we lack empirical data. Our SMEs actually “know something.” That is why we hired them. Through calibration, good risk assessment tools, and statistical practices (like the lens model discussed in the book), we can approach bookie-level prediction skill in terms of consistency. The cost is not really all that much; it’s the will and the statistical know-how that seem to be the barrier. Statistics has a firewall of jargon that prevents the uninitiated from playing. But those barriers are eroding; that is the intent of our book. A small salvo against that firewall!
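The lens model Richard mentions has a simple intuition: fit a model to an expert’s own judgments, and the model applies that judgment policy with perfect consistency, stripping out the random noise that makes humans inconsistent. Here is a small simulation of our own (the cues and numbers are made up) showing the effect.

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy "lens model" sketch with simulated data: the expert rates scenarios
# from a few observable cues, but with random inconsistency mixed in.
n = 200
cues = rng.random((n, 3))                       # e.g. exposure, patch latency, data sensitivity
true_policy = cues @ np.array([0.5, 0.3, 0.2])  # the expert's underlying judgment policy
expert_ratings = true_policy + rng.normal(0, 0.1, n)  # "humans being humans"

# Fit a linear model of the expert's own ratings on the cues (least squares).
X = np.column_stack([cues, np.ones(n)])
weights, *_ = np.linalg.lstsq(X, expert_ratings, rcond=None)
model_ratings = X @ weights

# The model applies the same policy every time, so it tracks the expert's
# underlying judgment more consistently than the noisy ratings do.
print("expert vs. true policy:", np.corrcoef(expert_ratings, true_policy)[0, 1].round(3))
print("model  vs. true policy:", np.corrcoef(model_ratings, true_policy)[0, 1].round(3))
```

Because the fitted model reproduces the expert’s policy without the noise, its ratings track the underlying judgment more closely than the expert’s own noisy ratings do, which is the sense in which the “robot” can outperform its maker.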

We’ve posted a podcast with the full session from Versus. Share it with your team, and share your thoughts below. For audio only, click here.


Happy measuring!

Written on December 7, 2016 by Veliz Perez

Tags: Industry, Security