How to Measure Anything in Cybersecurity Risk
18%
Rule of Five
19%
Quantitative
23%
The organization can make its own list of minimal controls. If a control is considered a required minimum, then there is no dilemma, and where there is no dilemma, there is no value to decision analysis. So it is helpful to focus our attention on controls where having them or not are both reasonable alternatives.
25%
FAIR, as another Monte Carlo–based solution with its own variation on how to decompose risks into further components, could be a step in the right direction for your firm.
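To make the idea concrete, here is a minimal Monte Carlo sketch in the spirit of such decompositions (not the FAIR standard itself); the event frequency and the 90% impact interval below are invented placeholders:

```python
import numpy as np

rng = np.random.default_rng(42)
trials = 10_000

# Assumed inputs: expected loss events per year, and a 90% CI on per-event cost.
freq = 0.5                          # events per year (hypothetical)
impact_p05, impact_p95 = 5e4, 2e6   # 90% CI for one event's cost (hypothetical)

# Back out lognormal parameters from the 90% CI (z = 1.645 at each tail).
mu = (np.log(impact_p05) + np.log(impact_p95)) / 2
sigma = (np.log(impact_p95) - np.log(impact_p05)) / (2 * 1.645)

# For each simulated year, draw a number of events and sum their costs.
annual_loss = np.array([
    rng.lognormal(mu, sigma, rng.poisson(freq)).sum() for _ in range(trials)
])

print(f"Mean annual loss: ${annual_loss.mean():,.0f}")
print(f"P(loss > $1M in a year): {(annual_loss > 1e6).mean():.1%}")
```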
27%
We don’t have to be limited by looking just at ourselves.
30%
Our goal is actually to elevate the expert. We want to treat the cybersecurity expert as part of the risk assessment system.
31%
In every field tested so far, it has been observed that experts are highly inconsistent—both in stability and consensus—in virtually every area of judgment.
32%
Consistency is partly a measure of how diligently the expert is considering each scenario.
33%
If, however, we do not have less uncertainty about the variables we decompose the problem into, then we may not be gaining ground.
39%
Range compression is a sort of extreme rounding error introduced by how continuous values like probability and impact are reduced to a single ordinal value.
39%
No matter how the buckets of continuous quantities are partitioned into ordinal values, choices have to be made that undermine the value of the exercise.
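A quick arithmetic illustration of range compression (the bucket boundaries here are hypothetical): two risks that land in the same matrix cell can differ in expected loss by two orders of magnitude.

```python
# Suppose a matrix maps 2%-20% likelihood to "3" and $1M-$10M impact to "4".
risk_a = {"p": 0.02, "impact": 1_000_000}    # bottom of both buckets
risk_b = {"p": 0.20, "impact": 10_000_000}   # top of both buckets

for name, r in [("A", risk_a), ("B", risk_b)]:
    print(f"Risk {name}: expected loss = ${r['p'] * r['impact']:,.0f}")
# Both score "3 x 4" on the matrix, yet B's expected loss is 100x A's.
```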
39%
the ambiguity hides problems instead of communicating the lack of information.
40%
Common Vulnerability Scoring System (CVSS), the Common Weakness Scoring System (CWSS), the Common Configuration Scoring System (CCSS), and so forth. All of these scoring systems do improper math on nonmathematical objects for the purpose of aggregating some concept of risk. These don't have all the same problems as a risk matrix, but they introduce others, such as the mathematical no-no of applying operations like addition and multiplication to ordinal scales. As the authors have stated in presentations on this topic, it is like saying "Birds times Orange plus Fish times Green equals High."
40%
Thomas et al. found that any design of a risk matrix had “gross inconsistencies and arbitrariness” embedded within it.
48%
don’t forget that the reason you do this is to evaluate alternatives.
49%
Decompositions should be less abstract to the expert than the aggregated amount.
50%
if decomposition causes you to widen a range, that might be informative if it makes you question the assumptions of your previous range.
51%
In March 2015, another analysis by Trefis published in Forbes magazine indicated that while Home Depot did not actually lose business in the quarters after the breach, management reported other losses in the form of expenses dealing with the event, including “legal help, credit card fraud, and card re-issuance costs going forward.”
51%
What about the survey indicating how customers would abandon retailers hit by a major breach? All we can say is that the sales figures don't agree with the statements of survey respondents. This is consistent with the finding that what people say on a survey about how they value privacy and security does not appear to match what they actually do, as one Australian study shows.
52%
There are real costs associated with them, but we need to think about how to model them differently than with vague references to reputation. The actual “reputation” losses may be more realistically modeled as a series of very tangible costs we call “penance projects” as well as other internal and legal liabilities.
57%
Another one of these methods involves asking people to identify arguments against each of their estimates.
57%
Looking at each bound alone as a separate binary question of “Are you 95% sure it is over/under this amount?”
60%
80% of participants are ideally calibrated after the fifth calibration exercise.
63%
The claim has often been correctly made that Einstein's equation E = mc² is of supreme importance because it underlies so much of physics. . . . I would claim that Bayes' equation, or rule, is equally important because it describes how we ought to react to the acquisition of new information. —Dennis V. Lindley
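For reference, Bayes' rule is P(H|E) = P(E|H)P(H) / P(E). A small worked example with invented numbers (the alert rates and prior below are assumptions, not figures from the book):

```python
# Hypothetical: an alert fires for 90% of real intrusions, 5% of the time
# otherwise, and the prior probability of an intrusion is 1%.
p_h = 0.01              # prior: P(intrusion)
p_e_given_h = 0.90      # P(alert | intrusion)
p_e_given_not_h = 0.05  # P(alert | no intrusion)

# Total probability of the evidence, then the posterior via Bayes' rule.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
posterior = p_e_given_h * p_h / p_e
print(f"P(intrusion | alert) = {posterior:.1%}")  # about 15.4%
```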
66%
The beta distribution is useful when you are trying to estimate a "population proportion." A population proportion is just the share of a population that falls in some subset. If only 25% of employees are following a procedure correctly, then the "25%" refers to a population proportion.
66%
The alpha and beta parameters in the beta distribution seem very abstract and many stats texts don’t offer a concrete way to think about them. However, there is a very concrete way to think of alpha and beta if we think of them as being related to “hits” and “misses” from a sample. A “hit” in a sample is, say, a firm that had a breach in a given time period, and a “miss” is a firm that did not.
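A minimal sketch of that hits-and-misses reading, using SciPy and invented sample counts (a uniform prior corresponds to alpha = beta = 1, to which the sample's hits and misses are added):

```python
from scipy import stats

hits, misses = 4, 26   # hypothetical: 4 of 30 sampled firms had a breach
alpha, beta = 1 + hits, 1 + misses  # uniform prior plus observed hits/misses

dist = stats.beta(alpha, beta)
print(f"Mean estimate of breach rate: {dist.mean():.1%}")
print(f"90% credible interval: {dist.ppf(0.05):.1%} to {dist.ppf(0.95):.1%}")
```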
68%
We believe this may be a major missing component of risk analysis in cybersecurity. We can realistically only treat past observations as a sample of possibilities and, therefore, we have to allow for the chance that we were just lucky in the past.
70%
The log odds ratio (LOR) method provides a way for an expert to estimate the effects of each condition separately and then add them up to get the probability based on all the conditions.
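A rough sketch of the mechanics (the baseline and per-condition probabilities are invented for illustration): convert each estimate to log odds, sum the deltas, and convert back.

```python
import math

def to_logodds(p):
    return math.log(p / (1 - p))

def to_prob(lo):
    return 1 / (1 + math.exp(-lo))

baseline = 0.05                 # hypothetical baseline breach probability
base_lo = to_logodds(baseline)

# For each condition, the expert estimates the probability if that condition
# alone applied; its delta is the shift in log odds from the baseline.
deltas = {
    "no multifactor auth": to_logodds(0.12) - base_lo,  # assumed estimate
    "prior incident":      to_logodds(0.08) - base_lo,  # assumed estimate
}

combined = to_prob(base_lo + sum(deltas.values()))
print(f"Probability given all conditions: {combined:.1%}")  # about 18.4%
```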
70%
at www.howtomeasureanything.com/cybersecurity
73%
The product of the chance of being wrong and the cost of being wrong is called the Expected Opportunity Loss (EOL).
73%
the value of information is just a reduction in EOL.
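Putting those two highlights together in a tiny sketch (all dollar figures and probabilities are hypothetical): EOL is the chance of being wrong times the cost of being wrong, and since perfect information would drive EOL to zero, the current EOL bounds what any study is worth.

```python
p_wrong = 0.15           # chance the chosen alternative turns out wrong
cost_wrong = 4_000_000   # cost if it does

eol = p_wrong * cost_wrong
print(f"EOL of the current decision: ${eol:,.0f}")

# Value of information = reduction in EOL; with perfect information the
# remaining EOL is zero, so EOL itself is the ceiling on what to pay.
evpi = eol - 0.0
print(f"Expected value of perfect information: ${evpi:,.0f}")
```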
78%
Descriptive Analytics: The majority of analytics out there are descriptive. They are just basic aggregates, like sums and averages, computed against groups of interest, such as month-over-month burn-up and burn-down of certain classes of risk.
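For instance, a descriptive rollup might be nothing more than a grouped count (the incident log and its column names below are invented):

```python
import pandas as pd

df = pd.DataFrame({
    "month": ["2023-01", "2023-01", "2023-02", "2023-02", "2023-02"],
    "risk_class": ["phishing", "malware", "phishing", "phishing", "malware"],
})

# Month-over-month counts per risk class: a sum/average-style aggregate.
print(df.groupby(["month", "risk_class"]).size().unstack(fill_value=0))
```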
78%
Predictive Analytics:
78%
You are using past data to make a forecast about a potential future outcome.
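As a minimal example of forecasting from past data (the features, labels, and model choice here are all assumptions for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical history: (open ports, days since last patch) per asset,
# labeled 1 if the asset was later compromised.
X = np.array([[2, 10], [8, 120], [1, 5], [9, 200], [3, 30], [7, 90]])
y = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X, y)
new_asset = np.array([[6, 60]])
print(f"Forecast P(compromise): {model.predict_proba(new_asset)[0, 1]:.1%}")
```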
80%
Dimensional modeling is a logical approach to designing physical data marts. Data marts are subject-specific structures that can be connected together via dimensions like Legos. Thus they fit well into the operational security metrics domain.
81%
open source solutions like Apache Drill provide a unified SQL (Structured Query Language) interface to almost all data stores, including NoSQL databases, various file types (CSV, JSON, XML, etc.),
85%
The CSRM (cybersecurity risk management) function is a C-level function.
85%
The quantitative risk analysis (QRA) team consists of trained analysts with a great bedside manner.
86%
Analytics Technology This will be the most expensive and operationally intensive function. It presupposes big data, stream analytics, and cloud-deployed solutions to support a myriad of analytics outcomes.
86%
While there are many potential barriers to success like failed planning, there is one institutional barrier that can obstruct quantitative analysis: compliance audits. In theory, audits should help ensure the right thing is occurring at the right time and in the right way. To that extent, we think audits are fantastic. What makes compliance audits a challenge is when risk management functions focus on managing to an audit as opposed to managing to actual risks. Perhaps this is why the CEO of Target was shocked when they had their breach; he went on record claiming they were compliant to the …