Daniel Miessler's Blog, page 97

July 22, 2018

DNS Rebinding Explained

I recommend reading this in its native typography at DNS Rebinding Explained




A lot of people have questions about the concept of DNS Rebinding attacks, and many of the overviews dive too deep into the details. Here’s a simple explanation that should help those having trouble getting it.



DNS Rebinding lets you send commands to systems behind a victim’s firewall, as long as they’ve somehow come to a domain you own asking for a resource, and you’re able to run JavaScript in their browser.



Here’s how it works.




If you can get someone to make a request to a domain that you own, you can give them a DNS response that maps host.domain to an IP address—say, 1.2.3.4.
If you set the TTL of that response really low—like 10 seconds—you force the system to constantly check again to see what the IP is for host.domain.
If you know (or think) the victim has a given type of system on their internal network—like a router, or an IoT device—that you could control if you were on the same network, you can use a piece of malicious JavaScript running on their browser (because they came to your site) to make requests to that system, e.g., https://host.domain/set-dns-server?se....
When this command is first sent, it’ll be sent to IP 1.2.3.4, because that was the initial IP address that you sent the victim for host.domain.
When the client next updates the DNS record (in 10 seconds, because that’s what you set the TTL to), you then respond back with 192.168.1.1, so the victim’s browser then sends https://host.domain/set-dns-server?se... to 192.168.1.1!
If the router is vulnerable to what you send (perhaps because it uses default credentials, or no credentials at all), the request will update the router's DNS server to point to the bad guy, which is probably you again.
Repeat as desired to find the right IP internally, and/or to send different kinds of commands to different devices internally.
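The back-and-forth in the steps above can be modeled as a toy attacker-side resolver. This is purely illustrative (a real attack requires running an actual authoritative DNS server for the domain); the names and IPs are the ones from the example:

```python
# Toy model of the attacker-side DNS logic: the first lookup for
# host.domain gets the attacker's public IP, and once the short TTL
# expires and the client re-queries, the answer is rebound to an
# internal target like the victim's router.

class RebindingResolver:
    def __init__(self, public_ip, internal_ip, ttl=10):
        self.public_ip = public_ip      # served on the first lookup
        self.internal_ip = internal_ip  # served on every later lookup
        self.ttl = ttl                  # seconds; forces quick re-lookup
        self.queries = 0

    def resolve(self, name):
        """Return (ip, ttl) for a query against the attacker's domain."""
        self.queries += 1
        ip = self.public_ip if self.queries == 1 else self.internal_ip
        return ip, self.ttl

resolver = RebindingResolver("1.2.3.4", "192.168.1.1")
print(resolver.resolve("host.domain"))  # first answer: attacker's server
print(resolver.resolve("host.domain"))  # re-lookup: rebound to the router
```

The victim's JavaScript keeps requesting https://host.domain/... the whole time; only the IP underneath the name changes.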


The attacker doesn’t need to rebind to an internal IP, either; they could just as easily send the victim somewhere else on the internet to bypass the Same Origin Policy.



Basically, you have them request something from you, you give them a short-TTL name-to-IP mapping, you inject some JavaScript into their browser that makes malicious requests, and then you change the IP via DNS update on your side to point to all the target IPs behind their firewall.



It reminds me of what I speculated about in 2016, where one might use SSRF to do the same thing to exposed IoT device services.



What makes DNS Rebinding so interesting is that it takes advantage of two major features in the fundamental structure of the internet—which aren’t changing any time soon:




The fact that visiting browsers run your JavaScript by default (including things like BeEF hooks), and…
The ability to set low TTLs on DNS responses so that you can constantly rotate the mapped IPs


Brilliant.



Defenses

Because the attack takes advantage of these fundamental components of the internet, the defenses are non-trivial. They generally include:




Restrict the running of JavaScript (so the attacker can’t force requests).
Pin IPs to names (so they can’t rotate).
Reject TTLs below a certain size (so they can’t rotate).
Reject DNS responses from external domains that contain private addresses (so they can’t rotate to internal resources).
Likely others as well…
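The last of these defenses can be sketched with Python's standard ipaddress module: a resolver or firewall checks each answer before handing it to clients. This is a minimal, illustrative filter, and the internal-zone allow-list is a made-up example:

```python
# Minimal sketch of rebinding protection at the resolver: drop answers
# for external names that resolve into private (RFC 1918), loopback,
# or link-local space, so an outside domain can't be rebound to an
# internal target. INTERNAL_ZONES is a hypothetical allow-list of
# names that legitimately map to private IPs.
import ipaddress

INTERNAL_ZONES = {"corp.example.com"}

def accept_answer(qname, ip_str):
    """Return True if this DNS answer should be passed to the client."""
    ip = ipaddress.ip_address(ip_str)
    if any(qname == z or qname.endswith("." + z) for z in INTERNAL_ZONES):
        return True  # internal names may legitimately be private
    # External names must not resolve into the private address space.
    return not (ip.is_private or ip.is_loopback or ip.is_link_local)

print(accept_answer("host.attacker.example", "1.2.3.4"))       # True
print(accept_answer("host.attacker.example", "192.168.1.1"))   # False
print(accept_answer("intranet.corp.example.com", "10.0.0.5"))  # True
```

This is roughly what resolvers like dnsmasq do with their rebind-protection options: the rotation still happens on the attacker's side, but the rebound answer never reaches the victim's browser.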


Stay safe out there.



Notes


Image from Dark Web News.
The Wikipedia article on DNS Rebinding.



I spend between 5 and 20 hours on this content every week, and if you're someone who can afford fancy coffee, please consider becoming a member for just $5/month…


Start Membership


Thank you,


Daniel

Published on July 22, 2018 01:14

July 20, 2018

The Problem With Pinker’s Positivity





I recently finished Steven Pinker’s Enlightenment Now, which was remarkably well done. I particularly loved the later chapters on science and reason, which talked about how far these have brought our society, and how far they can continue to take us.



In general, the book argued—similar to his previous work The Better Angels of Our Nature—that we should be happy because things are just so much better than they used to be.



I think Pinker is clearly correct about the objective ways in which human society has improved. Unfortunately, that does not equate to happiness or stability in the way that he thinks it does. He does give statistics on happiness in the book, and it’s difficult to argue with the data he provided, but I think those studies are either out of date or were measuring the wrong things.



As a case in point, the CDC has reported that the suicide rate has risen by 30% since 1999. That to me indicates a clear drop in happiness. The US is also facing an extraordinary crisis around opioid use, which is killing unbelievable numbers of people. The CDC has overdose deaths from opioids rising from roughly 3 per 100,000 people in 2000 to 13 per 100,000 in 2016.



But Pinker doesn’t talk about increased suicides, or the opioid crisis, or the destruction of jobs by automation, robots, and AI. In other words, he’s focusing almost completely on the positive. It’s as if he’s collecting evidence to support his belief in how good things are, as opposed to asking if they are good or not.



I think his fundamental mistake is confusing reasons we could or should be happy (in his opinion), vs. whether we actually are.



People didn’t elect Donald Trump because they were happy. People aren’t committing suicide because they’re happy. People aren’t getting on disability in record numbers because they’re happy.



The country is in the middle of an economic and social revolution, where there is only a top 10%, who benefit from everything in society, and a bottom tier who work mostly service jobs. The middle is being destroyed by progress in the form of automation. And because this automation makes life easier for businesses and customers, it’s being aggressively embraced all over—which is effectively eliminating millions of jobs.



The clearest and most complete articulation of this depressing state of the country comes from a book by Andrew Yang called The War on Normal People.





You absolutely need to read this book.



This book is the exact opposite of Pinker’s. It’s telling us how much trouble America is in, and it conveys the severity of the problem with extreme clarity. I think it’s the most important book for America’s future right now.



I think Andrew Yang’s book, The War on Normal People, is the most important book in America right now.



Some of the topics it covers:




How automation will affect jobs
How lost jobs affect mental health
Men are affected the worst, and they’re most likely to cause problems when it happens
Women can’t find college-educated men
We’re separating into two groups: the top and the bottom
A massive number of people are on opioids and/or disability, and they seldom come off
The top will have highly creative jobs and the bottom will be in service jobs
The middle is what used to exist in manufacturing and retail (and soon transportation), but those are going away
The more routine your job the more it’ll be done by automation / AI
People keep saying this is no different than the industrial revolution, and we found solutions then, but this is far different
There are no new jobs for older, low-skill people to do that live outside of big cities
Retraining is not really effective, despite noble efforts


Ultimately the message of the book is that we’re heading towards catastrophe. Not a recession. Not a dip. Not a blip. Catastrophe—and it’s virtually inevitable. I agree with him, and have been writing about the same thing for a while now.



Pinker doesn’t even address these issues. He’s talking about larger scale trends over the course of thousands and hundreds of years. Yes, things have gotten far better at that scale, but that has nothing to do with the crisis we’re currently facing.



We’re watching the entire meaning infrastructure in this country be destroyed.



As an atheist I think there is a lot of good that comes from discarding religion, but as a humble realist I also need to acknowledge the downsides.




Religion is going away.
Most people will not have a good job or a good paycheck.
Being a homemaker is no longer considered meaningful for a lot of parents.


So where is the meaning coming from?



Nowhere.



And that’s why we’re on drugs, committing suicide, and even more scary—electing politicians who will scapegoat others.



We’ve directed, starred in, and watched this movie before, and it doesn’t end well.






So yes, Pinker’s book was fantastic, but I think it represents what he wishes people could understand and realize—not what they actually think about. Even the act of processing and accepting the points he makes requires that someone be educated and reading books for fun, which basically filters all but the top 5% of society. And of course those are the people who least need to hear the message.



America is suffering, and no amount of “it used to be much worse” is going to change that.



Both books are great, but right now we need Andrew’s a lot more than Steven’s.



Notes


Sam Harris had Andrew Yang on his podcast, which was great, and that’s how I learned about his work.




Published on July 20, 2018 17:32

July 14, 2018

Summary: Enlightenment Now







The world is far better than people think it is


These book summaries are designed as captures for what I’ve read, and aren’t necessarily great standalone resources for those who have not read the book. Their purpose is to ensure that I capture what I learn from any given text, so as to avoid realizing years later that I have no idea what it was about or how I benefited from it.




There is plenty of data to show this
There are many reasons to be optimistic
People really suck at predictions
There is a group of people called superforecasters who are great at it, and they have a very specific set of characteristics
They are in the top 20% of intelligence, but don’t have to be at the very top
Comfortable thinking in guestimates
They have the personality trait of Openness (which is associated with IQ, btw)
They take pleasure in intellectual activity
They appreciate uncertainty and like seeing things from multiple angles
They distrust their gut feelings
Neither left- nor right-wing
They’re not necessarily humble, but they’re humble about their specific beliefs
They treat their opinions as “hypotheses to be tested, not treasures to be guarded”
They constantly attack their own reasoning
They are aware of biases and actively work to oppose them
They are Bayesian, meaning they update their current opinions with new information
Believe in the wisdom of crowds to improve upon or discover ideas
They strongly believe in the role of chance as opposed to fate


Lessons


Don’t let the way you feel, which is influenced by your local surroundings and the media you consume, affect your overall opinion of how things are doing


My analysis and takeaways


I ended up loving this book, but thought the beginning was far too positive
It seemed to me that Pinker was (and perhaps still is) telling us reasons we should be happy, not reasons we are happy
I’m not confident his happiness data are going to hold up in the last 2 years or so, especially in the US
I’m not sure he’s taken into account the CDC number of 30% increased suicides since 1999, for example
Ultimately, I think all his optimism is predicated on people being educated enough to know how good the rest of the world is, when most people in the US have no idea who fought in World War II, what the three branches of government are, etc.
In short, we’re far too stupid to be made happy by world stats that say life is improving, because we don’t read world stats.


[ Find my other book summaries here. ]





Published on July 14, 2018 06:34

How to Become a Superforecaster





I just finished Steven Pinker’s latest book, Enlightenment Now, which turned out to be fantastic. I was frustrated in the first two-thirds of the book for reasons I won’t go into here (it seemed too positive while ignoring negatives), but the final several chapters were spectacular.



Those chapters mostly focused on science and reason, and why they’re so important and our best path forward—despite being flawed in their own ways.



My favorite gem in the book was a section on what Pinker calls Super-forecasters, which was within a section of the book that talked about how bad we are at making predictions. Here are a few captures from that section.




Though examining data from history and social science is a better way of evaluating our ideas than arguing from the imagination, the acid test of empirical rationality is prediction. ~ Steven Pinker




He talked about the work of Philip Tetlock, a psychologist who, starting in the 1980s, ran forecasting tournaments for hundreds of analysts, academics, writers, and regular people, asking them to predict the likelihood of possible future events.



Tetlock’s Book on Superforecasting



In general, the experts did worse than regular people. And the closer to their expertise the subject was, the worse their predictions.



Indeed, the very traits that put these experts in the public eye made them the worst at prediction. The more famous they were, and the closer the event was to their area of expertise, the less accurate their predictions turned out to be. ~ Steven Pinker



But there were some who predicted far better than the loud and confident experts and the average person, which he called superforecasters.




Pragmatic experts who drew on many analytical tools, with the choice of tool hinging on the particular problem they faced. These experts gathered as much information from as many sources as they could. When thinking, they often shifted mental gears, sprinkling their speech with transition markers such as “however,” “but,” “although,” and “on the other hand.” They talked about possibilities and probabilities, not certainties. And while no one likes to say “I was wrong,” these experts more readily admitted it and changed their minds. ~ Pinker / Tetlock




I absolutely love this. But it gets better. Here’s a list of characteristics that superforecasters have, according to Tetlock:




They are in the top 20% of intelligence, but don’t have to be at the very top
Comfortable thinking in guestimates
They have the personality trait of Openness (which is associated with IQ, btw)
They take pleasure in intellectual activity
They appreciate uncertainty and like seeing things from multiple angles
They distrust their gut feelings
Neither left- nor right-wing
They’re not necessarily humble, but they’re humble about their specific beliefs
They treat their opinions as “hypotheses to be tested, not treasures to be guarded”
They constantly attack their own reasoning
They are aware of biases and actively work to oppose them
They are Bayesian, meaning they update their current opinions with new information
Believe in the wisdom of crowds to improve upon or discover ideas
They strongly believe in the role of chance as opposed to fate


Remarkable! Especially the Bayesian part. And he gives an example:



[ Image from Steven Pinker’s Enlightenment Now. ]



As for the belief in fate vs. chance question, Tetlock and Mellers rated people on a number of questions that gave them a Fate Score. Questions were things like, everything working according to God’s plan, everything happens for a reason, there are no accidents, everything is inevitable, the role of randomness, etc.



An average American scored (probably not anymore, I’m guessing) somewhere in the middle, an undergraduate at a top university scored a bit lower (better), a decent forecaster (as empirically tested by predictions and results) scored even lower/better, and the superforecasters scored lowest of all—meaning they saw the world as the most random!



Holy crap. Phenomenal.



The forecasters who did the worst were the ones with Big Ideas—left-wing or right-wing, optimistic or pessimistic—which they held with an inspiring (but misguided) confidence. ~ Steven Pinker



As someone who already holds a lot of these cautious, tentative views on reality, and who is willing to update them often, this felt good to read. It also reminded me to reinforce my own tendencies to doubt myself, and to be even more loose with my ideas.



It reminded me of another great quote on this, which people like Marc Andreessen use often:




Strong opinions, loosely held.




Overall the book turned out to be excellent, but this is the most powerful concept I’ll take from it.



Summary


Most people are horrible at predictions, and most experts are even worse.
The closer the topic is to their field, the worse the expert usually is at predictions.
Top predictors have a particular set of attributes.
They are high in Openness, they enjoy intellectual activity, they are comfortable with uncertainty, they distrust their gut, they’re neither left or right, they treat their beliefs as hypotheses, they are Bayesian, they believe in the wisdom of crowds, and they strongly believe in chance rather than fate.
If you want to be better at predicting things, and generally not be the idiot that makes wild predictions that don’t come true, do your best to adopt and constantly reinforce these behaviors.




Published on July 14, 2018 06:04

July 13, 2018

Transitioning From “Everyone Must Like Me” to “If You’re Doing Life Correctly, at Least a Few People Will Dislike You”





My whole life I’ve cared greatly about how others view me.



I don’t think it’s been in a negative, egotistical way, but more of a kindness and friendship way. Like, I assume that if they didn’t like me it was because of something I should have done better, or some sort of pain they were going through that I should try to help fix.



This has always given me an advantage in life. I get along with everyone. I like everyone. And everyone seems to like me (or at least not dislike me).



I’ve recently had an experience that made me rethink this approach. Not the being nice part, but the part where I need to find and fix any instance of someone not liking me.



I found a case where no matter what I tried, they didn’t care.



It’s a fellow infosec person, actually, a super smart guy. Talented, doing very well in the industry. And he’s well respected.



We had a casual relationship—only online—and we would occasionally talk about stuff that bothered us in the industry, about random funny stuff, or whatever. But a few times I reached out to ping him about a viewpoint that he had, which I didn’t agree with. It usually involved him being upset about a given thing, and me thinking I had some sort of perspective that he could use to either not care about that thing, or at least care less about it.



So after like five years of this relationship, and like 3 or 4 of those more debate-like interactions among many others about random things, he basically tells me that every time we talk I’m criticizing him, and he doesn’t like it.



So he blocks me on Twitter.



It was my first time being blocked (that I know of anyway).



I was devastated, and from time to time I still am. Hence this post.



I’ve been debating about religion, politics, infosec best-practices, and many more emotional topics like vim vs. emacs, Android vs. iPhone, and other highly radioactive topics for almost exactly 20 years.



Never have I had someone just shut off the conversation. Not enemies. Not people who were close to enemies. Not people I disliked. Nobody.



But here I am just having a friendly discussion with someone I respect—and who I thought respected me—and all of a sudden I’m blacklisted.



And over the months I’ve tried a couple times to apologize, to set boundaries on discussion that could be considered confrontational, and to just be friends.



Nope.



Nothing.



I spend a great deal of energy trying to be nice to people. I’m a giving person. I like to give. I like to help people.



The idea that there is this nice person, who thinks I’ve wronged him enough to PROHIBIT me from speaking to him…it’s just extremely hurtful to me.



And that’s where this new idea comes in.



All throughout time people have been opposed. They try to do things, and people hate them. They try to have opinions, and people hate them. Maybe they just show up and people don’t like how they look, so they’re hated.



I’ve learned a lot about perseverance from biographies. Where people pour their hearts out in books or essays or whatever they’re trying to do, and nobody cares.



But the people we’ve heard of always continued. They just kept creating. Kept writing. Kept on keeping on.



Now I don’t place myself anywhere near the level of anyone whose biography I’ve read. I’ve not done shit yet. I’m a guy on the internet.



But maybe some people see me as something I’m not. They see me as having arrived in some measure, or as having taken something from them in some measure.



I don’t know if that’s true. And I don’t know how it could be true for this person.



But maybe that’s just how things work.



What I’m coming to understand is that I should try—and this will be very hard for me—to see someone disliking me as a sign that I’m doing something right.



Maybe it means I’ve made something. Maybe it means I’ve brought something into the world. Maybe it means I’ve said something true but uncomfortable. Maybe I’ve started an unpleasant conversation about religion, or security, or whatever.



It’s dangerous, though, because most people who are disliked are just assholes. I’m pretty sure I’m not an asshole. But it’s something I never want to be. It’s like the prime directive given to me by my father.




Don’t be an asshole.




Ok, that’s clear enough.



He also said if a bunch of people show up in a van and say,




Hey, we’re doing something super fun, you should come join us!




You should run away.



But I digress.



I think what I’m saying is that if I want to be great in the future I cannot obsess about being liked by every single person in the world. Maybe I could only pull that off by not doing anything noteworthy. Maybe I could only stay universally liked if I never made a difference.



So I need to find a way to figure out how to be my old self for 99% of situations, but for the other 1% who don’t like me for whatever reason, I need to just move on. I need to steel myself against the hurt it causes, and resist the urge to go and adjust what I did, or try to address what made them not like me.



The algorithm needs to be:




Did I do something I should not have done?
If so, apologize, and ask them to forgive.
If I didn’t do anything wrong, or if they decline my apology, try again after some time.
If that doesn’t work after 2-3 attempts, move on with my life.
Hope that they’ll come around in the future, but don’t think too much about it.


That’s where I am with him now.



It’s not like we’ve been best friends since childhood or something. We’ve only known each other through infosec and Twitter, so maybe he just hasn’t seen the real me yet, and he has some negative impression from somewhere (combined with me not being sensitive enough in our interactions).



I just wish he could know who I am, and that making him feel bad enough to block someone is something I’d never consider doing in 1,000 lifetimes. It’s not how I work.



I like people. I like friends. And I like helping, not hurting.



So I guess I’m wondering if anyone else has gone through this transition, and if it was the right thing to do for them in hindsight.



I don’t think I’ll ever stop caring, but I need a technique to turn off the sensitivity. Perhaps this is it.





Published on July 13, 2018 20:02

July 11, 2018

The Difference Between Ex-Ante, Post Hoc, Ex Post, A Priori, and A Posteriori





If you do a lot of (good) reading you have probably run into a few highbrow, Latiney-sounding phrases like those in the title of this post.



I’m guessing you almost looked them up several times, but if you’re here you finally did. Well done. Here they are.




Ex Ante means before the event, and is basically a prediction of something. In the financial world it’s often a prediction of a return on an investment.
Ex Post means after the event, and refers to something that is settled after the event actually happens. For investment companies it’s a look back at how the company actually did as opposed to how well it planned to do.
A Priori means from earlier, and refers to knowledge we have naturally, obviously, or before (and not requiring) testing or experience.
A Posteriori means from the latter, and refers to knowledge we must acquire by testing or evidence.
Ad Hoc means for this, and indicates something designed for a specific purpose rather than for general usage.
Post Hoc means after this, and refers to reasoning, discussion, or explanation that takes place after something has already transpired.
i.e. comes from id est in Latin, meaning that is, and signifies a restatement of what was just said. It’s a reiteration, not an example or case in point.
e.g. comes from exempli gratia in Latin, which means “for example”. So if you make a point and then say, e.g., you don’t want to restate your point, you want to provide an instance of that being true.


Discussion

I’ve been guilty of mangling these in the past as well.



It’s strange how people incorporate obscure language into their repertoire without knowing what it means, and I find that outside of deeply intellectual (both pseudo and legitimate) circles, these terms are almost always misused.



The one that surprised me most was ad hoc, which I had thought of more as meaning “without a plan”, as opposed to “custom for this situation”. The relationship between those two, and the fact that they’re likely to be coincident, is interesting by itself.



The most common terms you’ll encounter in scientific reading are a priori and a posteriori, which deal with the two types of knowledge:




That which you know without experience or experiment (a priori)
And that which you can only know after you gather evidence (a posteriori)


The distinctions between i.e. and e.g. are less exotic and are part of common grammar at this point, but many still confuse them.



I hope this has been helpful.





Published on July 11, 2018 10:38

July 10, 2018

The Evolution of the SOC and the CSIRT





We’re starting to see significant debate around the terms SOC vs. CSIRT, and which one companies should have.



As with most such debates in tech—which is moving almost as fast as we can lock down definitions—the issue is largely one of semantics.



To start, Wikipedia says a SOC is a centralized unit that deals with security issues on an organizational and technical level, and that a CSIRT is an expert group that handles computer security incidents.



Richard Bejtlich comes out with a strong position on Twitter, responding to Gartner analyst Augusto Barros’s essay on the topic.



Heard new Gartner research suggests “a CIRT should be part of a SOC.” No! The traditional “SOC” should ultimately disappear. The CIRT does detect/respond/inform/improve. A SOC is a stopgap until automation/orchestration and #secdevops flourish. CC @dinodaizovi @TryPhantom @splunk https://t.co/c2DWDBtkmx

— Richard Bejtlich (@taosecurity) June 28, 2018




So Richard’s position is that the idea of a SOC is outdated, and that CSIRT is the real thing. I agree with this, but I think we need to look at the history and some first principles to get the context.



The military had some of these capabilities much earlier than industry.




The only thing that matters—and that’s ever mattered—is preventing, detecting, and responding to bad things happening to your organization. That was the reason for the SOC in the past, and it’s the reason for the CSIRT now. It’s the foundation this entire conversation sits upon.
In the beginning, there was nothing. Enterprises didn’t have prevention, they didn’t have detection, and they definitely didn’t have response.
Given how bad things were, the first step in the 90’s was installing an IDS on the perimeter and watching incoming attacks. And since all the monitors were in one place, and the people they hired to do the job had similar backgrounds, titles, pay, and reported to the same management, why not put them all in one room so they could communicate better? That synergy of visualization, human resources, and communication led to the first SOCs.
Within the last five years or so it has started becoming obvious that detection by itself is like one hand clapping. It’s useless unless you’re responding as well.
Because response involves so many other parts of the organization, it became more common for the extended security team to not all work in the same room.
Then add the idea of proactive security to the mix, where we can actually automate a lot of this work, and do a lot of the testing while we’re building and deploying, and suddenly the majority of security is happening outside the SOC.


So we went from security being detection-based, using IDSes and manual follow-up within the security department, to security being response-based, using dozens of tools and leveraging multiple groups within the organization, including development, operations, legal, HR, and management.



SOCs didn’t become unfashionable because everyone needed to be in the same room. They are dying because the focus shifted from reactive to proactive, and from detection to response. Detection and response became a function done by teams, rather than a team performing a function.



Being proactive means involving development. And doing response correctly requires the involvement of many departments. Ultimately the only thing that killed the SOC is progress.



But we shouldn’t make fun of the Blackberry because the iPhone exists. Or look down on ESX when we compare it to AWS. These things had their time, and they performed the important role of bringing us to where we are now.



The SOC didn’t really die. Its soul was absorbed into the bigger picture of business resilience. Like the arm of an ancient Gundam.



And it’s not as if we’ve reached our full evolution—not by far.



Before too long we’ll be talking about how the CSIRT team is an outmoded idea because it implies that it’s a separate function from business resilience and business goals. In that world there’s no difference between quality and security, and automated testing is ubiquitous and continuous in every part of the organization.



We always look condescendingly at the past, not realizing we’re living it now as well. The SOC lives on in the CSIRT team. And the CSIRT team will live on in AI-powered automation and orchestration-based DEVSECOPS.



We are but a stone on the path. Respect the past, and look to the future.



Ultimately we’re just trying to make sure the business doesn’t stop making money under any circumstances. And both the SOC and CSIRT team have played—and are playing—their evolutionary roles in getting us to that point.




I spend between 5 and 20 hours on this content every week, and if you're someone who can afford fancy coffee, please consider becoming a member for just $5/month…


Start Membership


Thank you,


Daniel

 •  0 comments  •  flag
Share on Twitter
Published on July 10, 2018 00:46

SOC vs. CSIRT

I recommend reading this in its native typography at SOC vs. CSIRT




We’re starting to see significant debate around the terms SOC vs. CSIRT, and which one companies should have.



As with most such debates in tech—a field moving faster than we can lock down definitions—the issue is largely one of semantics.



To start, Wikipedia says a SOC is a centralized unit that deals with security issues on an organizational and technical level, and that a CSIRT is an expert group that handles computer security incidents.



Richard Bejtlich comes out with a strong position on Twitter, responding to an essay on the topic by Gartner analyst Augusto Barros.



Heard new Gartner research suggests “a CIRT should be part of a SOC.” No! The traditional “SOC” should ultimately disappear. The CIRT does detect/respond/inform/improve. A SOC is a stopgap until automation/orchestration and #secdevops flourish. CC @dinodaizovi @TryPhantom @splunk https://t.co/c2DWDBtkmx

— Richard Bejtlich (@taosecurity) June 28, 2018




So Richard’s position is that the idea of a SOC is outdated, and that CSIRT is the real thing. I agree with this, but I think we need to look at the history and some first principles to get the context.



The military had some of these capabilities much earlier than industry.




The only thing that matters—and that’s ever mattered—is preventing, detecting, and responding to bad things happening to your organization. That was the reason for the SOC in the past, and it’s the reason for the CSIRT now. It’s the foundation this entire conversation sits upon.
In the beginning, there was nothing. Enterprises didn’t have prevention, they didn’t have detection, and they definitely didn’t have response.
Given how bad things were, the first step in the '90s was installing an IDS on the perimeter and watching incoming attacks. And since all the monitors were in one place, and the people hired to do the job had similar backgrounds, titles, and pay, and reported to the same management, why not put them all in one room so they could communicate better? That synergy of visualization, human resources, and communication led to the first SOCs.
Within the last five years or so it has started becoming obvious that detection by itself is like one hand clapping. It’s useless unless you’re responding as well.
Because response involves so many other parts of the organization, it became more common for the extended security team to not all work in the same room.
Then add the idea of proactive security to the mix, where we can actually automate a lot of this work, and do a lot of the testing while we’re building and deploying, and suddenly the majority of security is happening outside the SOC.


So we went from security being detection-based, using IDSes and manual follow-up within the security department, to security being response-based, using dozens of tools and leveraging multiple groups within the organization, including development, operations, legal, HR, and management.



SOCs didn’t become unfashionable because everyone needed to be in the same room. They are dying because the focus shifted from reactive to proactive, and from detection to response. Detection and response became a function done by teams, rather than a team performing a function.



Being proactive means involving development. And doing response correctly requires the involvement of many departments. Ultimately the only thing that killed the SOC is progress.



But we shouldn’t make fun of the Blackberry because the iPhone exists. Or look down on ESX when we compare it to AWS. These things had their time, and they performed the important role of bringing us to where we are now.



The SOC didn’t really die. Its soul was absorbed into the bigger picture of business resilience. Like the arm of an ancient Gundam.



And it’s not as if we’ve reached our full evolution—not by far.



Before too long we’ll be talking about how the CSIRT team is an outmoded idea because it implies that it’s a separate function from business resilience and business goals. In that world there’s no difference between quality and security, and automated testing is ubiquitous and continuous in every part of the organization.



We always look condescendingly at the past, not realizing we’re living it now as well. The SOC lives on in the CSIRT team. And the CSIRT team will live on in AI-powered automation and orchestration-based DevSecOps.



We are but a stone on the path. Respect the past, and look to the future.



Ultimately we’re just trying to make sure the business doesn’t stop making money under any circumstances. And both the SOC and CSIRT team have played—and are playing—their evolutionary roles in getting us to that point.




I spend between 5 and 20 hours on this content every week, and if you're someone who can afford fancy coffee, please consider becoming a member for just $5/month…


Start Membership


Thank you,


Daniel

Published on July 10, 2018 00:46

July 9, 2018

Why I Use a Single-screen iPhone Configuration

I recommend reading this in its native typography at Why I Use a Single-screen iPhone Configuration




There are lots of ways to organize the apps on your phone. You can have lots of screens of single apps, or you can have lots of screens of app folders, or you can have something in-between.



I do something simpler. I have one screen of apps, and everything else I have to invoke via the down-swipe.



Not only do I have just one screen, but the bottom row is empty, so it’s actually less than one screen.



This does a few things for me:




It’s more efficient to swipe down and type one or two characters than to search through multiple screens or folders.
Often, when you swipe down to search for the app you want, it’s already there in the list at the top.
It constantly re-affirms my priorities. The apps I have on my (one and only) home screen are apps I am happy to spend time in. Everything else I want to have to spend effort to find and use.


A number of my time-wasting apps, like Reddit, News, and the New York Times, are not on the home screen, so it’s harder to mindlessly click on them and disappear into a time warp.



This is a good thing.



Having a single screen of apps is a way to ensure you’re spending time on what you care about rather than what comes naturally.



This setup is also just far more efficient. You think having a hundred apps right at your fingertips on various screens is easier than searching, but this stops being true once you have two or three screens. Once you hit that threshold it’s faster to search, especially when the OS is predicting what you’re probably looking for.



In short, this setup keeps me both focused on what I want to be doing and efficient when I do need to find an app I don’t often use.



Try it out.



Notes


The text at the bottom of my screen is part of a custom wallpaper I made to look like it’s part of the phone.



I spend between 5 and 20 hours on this content every week, and if you're someone who can afford fancy coffee, please consider becoming a member for just $5/month…


Start Membership


Thank you,


Daniel

Published on July 09, 2018 00:10

June 25, 2018

New Study Shows You Can Predict Credit Rating From Your Online Tech Fingerprint

I recommend reading this in its native typography at New Study Shows You Can Predict Credit Rating From Your Online Tech Fingerprint




For anyone who’s been worried that their online fingerprint was going to be used against them, this paper provides the vindication.



Here’s how they describe the analysis:




As a simple example, every website can effortlessly track whether a customer is using an iOS or an Android device; or track whether a customer comes to the website via a search engine or a click on a paid ad. In this project, we seek to understand whether the digital footprint helps augment information traditionally considered to be important for default prediction and whether it can be used for the prediction of consumer payment behavior and defaults.




Fair enough, but from there it just gets scary.



None of this is surprising to me. I’ve been arguing that this is what big data and machine learning are going to bring us for a long time. But it’s still startling to see it happen.



They proceed to look at all the various markers we leave behind online that companies could collect while you’re on their website (or buy from somewhere else) and add to your credit score to determine your chance of default.



What if a company wants to estimate your creditworthiness (or other types of worthiness) without pulling your credit? They can do this analysis, or something like it, for free by looking at all this public information you drop while using the internet.



Companies can do this while you’re on their website: they can easily tell your OS just from your showing up, along with the site you came from, and if they’re getting data from you they can learn where you live, what your email address is, and tons more they can use to rate you.
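To make the collection concrete, here is a minimal sketch of deriving this kind of footprint from a single visit. The feature names and parsing rules are my own illustrative assumptions, not the paper's actual variables.

```python
# Sketch: coarse digital-footprint features from data any website sees
# on an ordinary visit (User-Agent, referrer, and a submitted email).
import re

def fingerprint(user_agent: str, referer: str, email: str) -> dict:
    """Extract illustrative footprint features from one request."""
    ua = user_agent.lower()
    if "iphone" in ua or "ipad" in ua or "mac os" in ua:
        os_family = "ios/mac"
    elif "android" in ua:
        os_family = "android"
    else:
        os_family = "desktop/other"

    local, _, host = email.partition("@")
    return {
        "os_family": os_family,
        # crude proxy for "arrived via a paid ad" (tracking parameters)
        "came_from_ad": "utm_" in referer or "gclid" in referer,
        "email_host": host,
        "email_has_digits": bool(re.search(r"\d", local)),
    }
```

All of this falls out of headers and form fields the site already has; no credit pull is involved, which is exactly the point the paper makes.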



And these things evidently matter a lot in predicting default.




When combining information from both variables (“Operating system” and “Email host”), default rates are even more dispersed. We observe the lowest default rate for Mac-users with a T-online email address. The default rate for this combination is 0.36%, which is lower than the average default rate in the 1st decile of FICO scores. On the other extreme, Android users with a Yahoo email address have an average default rate of 4.30%, significantly higher than the 2.69% default rate in the highest decile of FICO scores.




In general, there were a few things that jumped out as predictors.




iOS vs. Android (iOS users were around half as likely to default)
Emails with a name in them were better
Desktops defaulted far less than mobile
People with numbers in their emails defaulted more
People with old-domain emails (Hotmail, Yahoo) defaulted more
People who ordered at night instead of in the afternoon defaulted more
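As a rough illustration of how predictors like these might combine, here is a toy linear score. The directions follow the list above, but the weights are invented for the sketch and are not the paper's coefficients.

```python
# Toy risk score over the listed footprint predictors. Signs match the
# post's directions (iOS/desktop/name lower risk; digits, legacy mail
# domains, and night-time orders raise it); weights are made up.
LEGACY_HOSTS = {"hotmail.com", "yahoo.com"}

def toy_default_risk(os_family, email_has_name, is_desktop,
                     email_has_digits, email_host, order_hour):
    score = 0.0
    if os_family == "ios":
        score -= 1.0   # iOS users defaulted around half as often
    if email_has_name:
        score -= 0.5   # a name in the address was a good sign
    if is_desktop:
        score -= 0.5   # desktop orders defaulted far less
    if email_has_digits:
        score += 0.5   # numbers in the address were a bad sign
    if email_host in LEGACY_HOSTS:
        score += 0.5   # old domains (Hotmail, Yahoo) defaulted more
    if order_hour >= 22 or order_hour < 6:
        score += 0.5   # ordering at night predicted default
    return score
```

The paper fits real coefficients, of course, but the mechanism is the same: a handful of free-to-observe signals, weighted and summed into a risk estimate.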


The most interesting thing about this is how easily this information can be (and is) gathered from users during a regular browsing session. Especially if you’re on the website of the company that will be making the decision.



These tech fingerprint ratings are as good as or better than actual credit scores.



To me, this is what big data is all about. It’s not what it should be about. But it is what it’ll be used for. And it already is.



Big data combined with machine learning has only one purpose, and that is to answer questions and make predictions.



The question and prediction of, “Will this person pay me back?” is one of the oldest in human history.



Expect AI and data science to focus on questions like those first.



And if you were worried that your internet droppings might one day be used to judge you, don’t worry anymore.



It’s absolutely true, and it’ll only become more so as the technology advances.




I spend between 5 and 20 hours on this content every week, and if you're someone who can afford fancy coffee, please consider becoming a member for just $5/month…


Start Membership


Thank you,


Daniel

Published on June 25, 2018 15:01
