Kindle Notes & Highlights
Rule of Five
Quantitative
The organization can make its own list of minimal controls. If a control is considered a required minimum, then there is no dilemma, and where there is no dilemma, there is no value to decision analysis. So it is helpful to focus our attention on controls where having them or not are both reasonable alternatives.
FAIR, as another Monte Carlo–based solution with its own variation on how to decompose risks into further components, could be a step in the right direction for your firm.
We don’t have to be limited by looking just at ourselves.
Our goal is actually to elevate the expert. We want to treat the cybersecurity expert as part of the risk assessment system.
In every field tested so far, it has been observed that experts are highly inconsistent—both in stability and consensus—in virtually every area of judgment.
Consistency is partly a measure of how diligently the expert is considering each scenario.
If, however, we do not have less uncertainty about the variables we decompose the problem into, then we may not be gaining ground.
Range compression is a sort of extreme rounding error introduced by how continuous values like probability and impact are reduced to a single ordinal value.
No matter how the buckets of continuous quantities are partitioned into ordinal values, choices have to be made that undermine the value of the exercise.
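As an illustration with made-up numbers and invented bucket boundaries (not from the book), two quite different risks can collapse into the same cell once a continuous likelihood and impact are reduced to single ordinal scores:

# Hypothetical illustration of range compression: continuous likelihood and
# impact are rounded into 1-5 ordinal buckets, erasing large differences.

def likelihood_bucket(p):
    # Example bucket boundaries (assumed for illustration only)
    if p < 0.02:  return 1
    if p < 0.10:  return 2
    if p < 0.30:  return 3
    if p < 0.60:  return 4
    return 5

def impact_bucket(loss):
    # Example dollar-loss boundaries (assumed for illustration only)
    if loss < 100_000:      return 1
    if loss < 1_000_000:    return 2
    if loss < 10_000_000:   return 3
    if loss < 50_000_000:   return 4
    return 5

risk_a = (0.11, 1_200_000)    # 11% chance of a $1.2M loss
risk_b = (0.29, 9_500_000)    # 29% chance of a $9.5M loss

for name, (p, loss) in [("A", risk_a), ("B", risk_b)]:
    print(name, likelihood_bucket(p), impact_bucket(loss),
          f"expected loss = ${p * loss:,.0f}")
# Both risks score (3, 3) even though risk B's expected loss (~$2.8M)
# is roughly 20x risk A's (~$0.13M).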
the ambiguity hides problems instead of revealing the lack of information.
The Common Vulnerability Scoring System (CVSS), the Common Weakness Scoring System (CWSS), the Common Configuration Scoring System (CCSS), and so forth: all of these scoring systems do improper math on nonmathematical objects for the purpose of aggregating some concept of risk. They might not have the same problems as a risk matrix, but they introduce others, such as the mathematical no-no of applying operations like addition and multiplication to ordinal scales. As the authors have put it in presentations on this topic, it is like saying "Birds times Orange plus Fish times Green equals High."
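A small sketch of why that is a no-no, using invented labels and scores: any order-preserving relabeling of an ordinal scale is an equally defensible encoding, yet it can reverse the ranking the arithmetic produces.

# Hypothetical sketch: arithmetic on ordinal scores is arbitrary, because any
# order-preserving relabeling is an equally valid encoding of the same labels,
# yet it can flip the rankings the arithmetic produces.

encoding_1 = {"Low": 1, "Medium": 2, "High": 3}     # one common encoding
encoding_2 = {"Low": 1, "Medium": 2, "High": 10}    # same order, different spacing

# Two findings scored on (likelihood, impact) labels
finding_x = ("High", "Low")
finding_y = ("Medium", "Medium")

def score(finding, enc):
    lik, imp = finding
    return enc[lik] * enc[imp]   # the "risk = likelihood x impact" arithmetic

for enc_name, enc in [("encoding 1", encoding_1), ("encoding 2", encoding_2)]:
    print(enc_name, "X:", score(finding_x, enc), "Y:", score(finding_y, enc))
# Under encoding 1: X = 3, Y = 4  -> Y ranks higher.
# Under encoding 2: X = 10, Y = 4 -> X ranks higher.
# The ranking depends on the arbitrary numbers chosen, not on anything measured.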
Thomas et al. found that any design of a risk matrix had “gross inconsistencies and arbitrariness” embedded within it.
don’t forget that the reason you do this is to evaluate alternatives.
Decompositions should be less abstract to the expert than the aggregated amount.
if decomposition causes you to widen a range, that might be informative if it makes you question the assumptions of your previous range.
In March 2015, another analysis by Trefis published in Forbes magazine indicated that while Home Depot did not actually lose business in the quarters after the breach, management reported other losses in the form of expenses dealing with the event, including “legal help, credit card fraud, and card re-issuance costs going forward.”
What about the survey indicating how customers would abandon retailers hit by a major breach? All we can say is that the sales figures don’t agree with the statements of survey respondents. This is not inconsistent with the fact that what people say on a survey about their value of privacy and security does not appear to model what they actually do, as one Australian study shows.
There are real costs associated with them, but we need to think about how to model them differently than with vague references to reputation. The actual “reputation” losses may be more realistically modeled as a series of very tangible costs we call “penance projects” as well as other internal and legal liabilities.
Another one of these methods involves asking people to identify arguments against each of their estimates.
Looking at each bound alone as a separate binary question of “Are you 95% sure it is over/under this amount?”
80% of participants are ideally calibrated after the fifth calibration exercise.
The claim has often been correctly made that Einstein’s equation E = mc² is of supreme importance because it underlies so much of physics. . . . I would claim that Bayes’ equation, or rule, is equally important because it describes how we ought to react to the acquisition of new information. —Dennis V. Lindley
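A minimal sketch of reacting to new information with Bayes' rule, using probabilities assumed purely for illustration: updating the chance that a system is compromised after an alert fires.

# Minimal Bayes' rule sketch with hypothetical numbers: updating the
# probability that a system is compromised after an alert fires.

p_compromise = 0.01          # prior: P(compromised)                  (assumed)
p_alert_given_comp = 0.90    # sensitivity: P(alert | compromised)    (assumed)
p_alert_given_clean = 0.05   # false positives: P(alert | clean)      (assumed)

p_alert = (p_alert_given_comp * p_compromise
           + p_alert_given_clean * (1 - p_compromise))

p_comp_given_alert = p_alert_given_comp * p_compromise / p_alert
print(f"P(compromised | alert) = {p_comp_given_alert:.2%}")
# ~15.4%: the alert raises a 1% prior to about 15%, not to "near certain".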
The beta distribution is useful when you are trying to estimate a “population proportion.” A population proportion is just the share of a population that falls in some subset. If only 25% of employees are following a procedure correctly, then the “25%” refers to a population proportion.
The alpha and beta parameters in the beta distribution seem very abstract and many stats texts don’t offer a concrete way to think about them. However, there is a very concrete way to think of alpha and beta if we think of them as being related to “hits” and “misses” from a sample. A “hit” in a sample is, say, a firm that had a breach in a given time period, and a “miss” is a firm that did not.
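A hedged sketch of that hits-and-misses view, assuming a uniform prior (alpha = beta = 1) and made-up sample counts, using scipy.stats.beta:

# Hedged sketch (hypothetical counts): estimating a population proportion
# with a beta distribution, treating alpha and beta as prior + hits/misses.
from scipy import stats

prior_alpha, prior_beta = 1, 1      # uniform prior (an assumption)
hits, misses = 4, 36                # e.g., 4 of 40 sampled firms had a breach (made up)

posterior = stats.beta(prior_alpha + hits, prior_beta + misses)

print("mean proportion:", round(posterior.mean(), 3))
print("90% credible interval:",
      [round(posterior.ppf(q), 3) for q in (0.05, 0.95)])
# With 4 hits in 40, the mean is ~0.12 and the interval is roughly 0.05 to 0.21,
# wider than the raw 10% rate alone would suggest.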
We believe this may be a major missing component of risk analysis in cybersecurity. We can realistically only treat past observations as a sample of possibilities and, therefore, we have to allow for the chance that we were just lucky in the past.
The log odds ratio (LOR) method provides a way for an expert to estimate the effects of each condition separately and then add them up to get the probability based on all the conditions.
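A bare-bones sketch of that idea, with all probabilities hypothetical and without the book's full procedure: convert the baseline probability to log odds, add the shift contributed by each observed condition, and convert the sum back to a probability.

# Hedged sketch of the log odds ratio idea (all probabilities hypothetical):
# each condition's effect is added in log-odds space, which keeps the
# combined result between 0 and 1.
import math

def log_odds(p):
    return math.log(p / (1 - p))

def from_log_odds(lo):
    return 1 / (1 + math.exp(-lo))

baseline = 0.05                       # P(event) with no conditions considered (assumed)

# For each condition, the expert estimates P(event | that condition alone).
conditional_probs = {
    "no multifactor authentication": 0.12,
    "externally facing system": 0.09,
}

total = log_odds(baseline)
for name, p in conditional_probs.items():
    total += log_odds(p) - log_odds(baseline)   # shift contributed by this condition

print(f"combined probability: {from_log_odds(total):.1%}")
# ~20%: both conditions together push the baseline 5% up to about 20%.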
at www.howtomeasureanything.com/cybersecurity
The product of the chance of being wrong and the cost of being wrong is called the Expected Opportunity Loss (EOL).
the value of information is just a reduction in EOL.
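A hedged example with made-up numbers tying the two statements together: the EOL of a decision to buy a control, and the value of information as the reduction in that EOL.

# Hedged example (made-up numbers): Expected Opportunity Loss (EOL) for a
# decision to buy a control, and information value as a reduction in EOL.

# Suppose we plan to spend $500k on a control that only pays off if a
# particular breach scenario would otherwise occur.
control_cost = 500_000           # wasted cost if the breach never happens (assumed)
p_breach = 0.30                  # current estimate that the breach would occur (assumed)

# If we buy the control, we are "wrong" when the breach would not have occurred.
eol_buy = (1 - p_breach) * control_cost

print(f"EOL of buying the control: ${eol_buy:,.0f}")
# $350,000. Perfect information would drive this EOL to zero, so a measurement
# that eliminated the uncertainty would be worth up to $350,000; a partial
# reduction in uncertainty is worth the corresponding reduction in EOL.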
Descriptive Analytics: The majority of analytics out there are descriptive. They are just basic aggregates, like sums and averages, for certain groups of interest, such as month-over-month burn-up and burn-down of certain classes of risk.
Predictive Analytics:
You are using past data to make a forecast about a potential future outcome.
Dimensional modeling is a logical approach to designing physical data marts. Data marts are subject-specific structures that can be connected together via dimensions like Legos. Thus they fit well into the operational security metrics domain.
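As a purely hypothetical illustration (table and column names invented), a tiny star schema for a security-event data mart can be sketched with Python's built-in sqlite3; the shared dimensions are what let subject-specific marts snap together:

# Minimal, hypothetical star-schema sketch of a security-event data mart:
# one fact table joined to shared dimension tables.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_asset  (asset_id INTEGER PRIMARY KEY, hostname TEXT, criticality TEXT);
CREATE TABLE dim_date   (date_id  INTEGER PRIMARY KEY, day TEXT, month TEXT);
CREATE TABLE fact_event (event_id INTEGER PRIMARY KEY,
                         asset_id INTEGER REFERENCES dim_asset(asset_id),
                         date_id  INTEGER REFERENCES dim_date(date_id),
                         severity INTEGER);

INSERT INTO dim_asset  VALUES (1, 'web-01', 'high'), (2, 'hr-02', 'low');
INSERT INTO dim_date   VALUES (1, '2023-01-05', '2023-01'), (2, '2023-02-10', '2023-02');
INSERT INTO fact_event VALUES (1, 1, 1, 7), (2, 1, 2, 9), (3, 2, 2, 3);
""")

# The shared dimensions are the "Lego studs": other marts reusing dim_asset
# and dim_date can be joined to this one along the same dimensions.
for row in con.execute("""
    SELECT d.month, a.criticality, COUNT(*) AS events, AVG(f.severity) AS avg_sev
    FROM fact_event f
    JOIN dim_asset a ON a.asset_id = f.asset_id
    JOIN dim_date  d ON d.date_id  = f.date_id
    GROUP BY d.month, a.criticality
"""):
    print(row)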
open source solutions like Apache Drill provide a unified SQL (Structured Query Language) interface to almost all data stores, including NoSQL databases, various file types (CSV, JSON, XML, etc.),
The cybersecurity risk management (CSRM) function is a C-level function.
The quantitative risk analysis (QRA) team consists of trained analysts with a great bedside manner.
Analytics Technology: This will be the most expensive and operationally intensive function. It presupposes big data, stream analytics, and cloud-deployed solutions to support a myriad of analytics outcomes.
While there are many potential barriers to success like failed planning, there is one institutional barrier that can obstruct quantitative analysis: compliance audits. In theory, audits should help ensure the right thing is occurring at the right time and in the right way. To that extent, we think audits are fantastic. What makes compliance audits a challenge is when risk management functions focus on managing to an audit as opposed to managing to actual risks. Perhaps this is why the CEO of Target was shocked when they had their breach; he went on record claiming they were compliant to the ...