Daniel Miessler's Blog, page 129
March 10, 2017
We’ve Reached Peak Prevention
As we all know, there are two main components to risk: 1) the chance that something will happen, and 2) how bad it would be if it did. Or, probability and impact. For the last 20 years, in both terrorism and information security, we have focused on prevention (probability) and this effort has yielded some decent returns. But no longer.
We’ve simply reached Peak Prevention — a wall of diminishing returns where we can multiply our prevention efforts many times over and get no reduction in risk (and perhaps even an increase, due to ever-advancing threats). 10 years ago we were at around 50% prevention maturity, and now we’re at roughly 90%. If we spend another 10 years and 10 trillion dollars we can maybe get to 95%. But all that effort would provide only a small fraction of what we could achieve by making successful compromises less costly.
Imagine if we’d said to the terrorists after 9/11 that we would start cleanup and rebuilding the following Monday. What if we had told them that we lost that many people in car accidents the year before, and that innocent civilians are easy to kill? What if we had told them that we would be just fine, that we’d pick ourselves up and continue on as if nothing had happened? No TV shows about the terrorists, no books, no attention. What if we had told them that they’d be dead soon, and that nobody would remember their names?
Had we done that, we would have spent a few billion dollars and had a tough couple of years. Instead, we reacted in the worst possible way, dealing a self-inflicted wound that has cost us trillions upon trillions. The attack didn’t hurt us that badly; our response did. What we need for terrorism is resilience, not more prevention, and the same is true for information security.
Imagine if we were to say that digital identities are easy to steal, and that social security numbers are already out there, and that they’re not as important as we thought they were. Or perhaps that corporate networks are too massive to perfectly defend, and that breaches are often inevitable. What then?
Answer: We would move from a paradigm of terror at the thought of a breach, and panic once one has been detected, to that of practiced, mature preparation and controlled response. In short, we may not be able to lower the probability value much more in the risk equation, but we can absolutely adjust the impact. And if the impact goes down, so does the risk.
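To make the risk equation concrete, here's a toy sketch in Python. All numbers are hypothetical, purely for illustration:

```python
# A toy version of the risk equation: risk = probability x impact.

def risk(probability: float, impact_dollars: float) -> float:
    """Expected loss: the chance of a breach times the cost if it happens."""
    return probability * impact_dollars

# Say prevention has already squeezed breach probability down to 10%,
# and a breach would cost $100M.
baseline = risk(0.10, 100_000_000)         # ~$10M expected loss

# Heroic extra prevention spend shaves probability to 5%...
more_prevention = risk(0.05, 100_000_000)  # ~$5M

# ...while resilience that cuts breach impact tenfold does far more.
more_resilience = risk(0.10, 10_000_000)   # ~$1M
```

With made-up numbers like these, a tenfold reduction in impact buys five times the risk reduction of halving an already-low probability.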
In this world, the negative publicity from getting hacked comes only from negligence with controls and/or a poorly handled incident response or notification. As it becomes understood that highly trained, asymmetrically resourced adversaries will penetrate highly complex global networks and do harm, the taboo of compromise is all but removed.
In fact, we’re already starting to see that happen. In the last decade we’ve seen literally hundreds of public breaches, with a staggering number coming in the last few months alone. Some of these companies have been rocked by their incidents, while others are virtually unscathed after just a few short weeks.
What’s the difference?
The Role of Controls
Many who make a living in security probably don’t want to hear that we’re about to switch to a resilience paradigm from one of prevention, as it seems to almost trivialize compromise.
Nobody will care if they get hacked!
But that’s not true.
The difference between a company that goes on to be successful after a breach and one that suffers immeasurably is that the former had the controls in place and the latter did not. And I’m not just speaking of a few technical controls: I mean a robust, highly mature information security program that has not just the technology but also the processes and training to respond properly when something does take place.
So the security industry will be just fine. The difference is that companies who are judged to have done everything right, but still got hacked, will not suffer the shame that is still associated with being compromised. This will become commonplace, and an accepted part of doing business in the 21st century. The stigma is falling away.
The only question will be whether or not you had your shop in order when it happened, and whether you responded appropriately. Consumer confidence in your company, and your stock price, will reflect this truth.
Two Approaches to Reducing Impact
Once we’ve accepted that the future path of risk reduction lies in reducing impact, we can start to look at ways to accomplish that. I see two primary ways to do so:
Significantly Reduce the Impact of Common Compromises
This portion of the solution will have many technological components, including an idea I got from recent password compromise issues. I believe the networks of the future will store their data in a decentralized way that makes common compromises virtually useless.
In other words, access to data as a result of a low to mid-level compromise will not yield anything of use to attackers because they’ll only have a tiny percentage of what’s required to make the data usable. And getting the other requisite pieces would require failures across multiple other areas in the company’s defenses.
Savvy readers will know that this will not thwart attackers completely, and that they will move their attacks to locations and users who can access the complete data set (someone has to have access to it, after all). We’re already seeing this today, actually, but it’s not a reason to abandon the approach. The fewer the systems that grant access to the real data, and the more effort it takes to reach it, the more time we have, and the better our odds, of finding and stopping attackers.
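One hedged sketch of what such decentralized storage could look like is simple XOR secret splitting, where any incomplete set of shares is indistinguishable from random noise. This is an illustration of the idea, not a description of any specific product:

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(data: bytes, n: int = 3) -> list[bytes]:
    """Split data into n shares; ALL n are required to reconstruct it.
    Any n-1 shares together reveal nothing about the original data."""
    pads = [secrets.token_bytes(len(data)) for _ in range(n - 1)]
    final = reduce(xor_bytes, pads, data)  # data ^ pad1 ^ pad2 ^ ...
    return pads + [final]

def combine(shares: list[bytes]) -> bytes:
    """XOR every share together to recover the original data."""
    return reduce(xor_bytes, shares)

# Store each share on a different system; compromising one or two
# of them yields nothing usable.
record = b"dummy sensitive record"
shares = split(record, n=3)
assert combine(shares) == record
```

A real deployment would likely use a threshold scheme such as Shamir's secret sharing instead, so that losing a single share isn't fatal, but the low-level compromise still comes away with noise.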
Reduce the Value of the Data That is Stolen
This one is harder, but it’s still possible if enough people are involved and enough energy is put into it. Examples here could include modifying the requirements for getting a credit card, procuring a mortgage, etc. If additional (stronger) factors are added to the equation, we could see the impact of stolen SSNs or CCNs plummet.
In short, not only make it less of an issue if you’re compromised, but make the leaked data less valuable as well. Again, this is something that’d have to be done at multiple levels, with multiple organizations helping, but any progress would be significant progress.
Conclusion
However it’s accomplished — and it’ll definitely be through a myriad of approaches — this shift is upon us. We’ve had a good run at chasing the prevention unicorn, and we need to hold our ground and continue to innovate in that area to some degree. But the true progress in future risk reduction will come from reducing the impact of breaches. The sooner we accept this the better.
Notes
This is a concept I wrote up many years ago, and a presentation that I’ve done a couple of times in the past. I’m simply consolidating the concept and the presentation in one place here.
__
I do a weekly show called Unsupervised Learning, where I collect the most interesting stories in infosec, technology, and humans, and talk about why they matter. You can subscribe here.
March 9, 2017
Computer Voice Interfaces Are a Combination of Voice Recognition and NLP
To a casual observer, it might appear that “voice interfaces” to computers—like Siri or Alexa—are a single technology space. In fact it’s useful to think of them as two problems combined.
First, the computer needs to fully understand exactly what you said. That means deciphering mumbling, removing background noise, handling different voices and accents, etc. That’s difficult, but we’re getting better at it.
Second, the computer needs to understand what you meant to do. This is difficult because it means translating and mapping that input to existing commands, and then executing them.
These are very different problems. The first one is called Voice Recognition, and the second is called Natural Language Processing.
[ NOTE: Natural Language Processing and Neuro-linguistic Programming share the NLP acronym, but they’re quite different. Most notably, Natural Language Processing is a real and developing science and Neuro-linguistic Programming is mostly debunked pseudoscience. ]
You can have a system that’s great at figuring out exactly what you said, even if you mumbled or have a thick accent, while speaking quietly on a subway, but has no idea how to turn your sentence into actions it can perform.
As an example, you might mumble:
Find a better song than this garbage.
If this system is limited to a few hardcoded commands, such as PLAY $ARTISTNAME, then it will respond with an error, or a request for clarification, because it didn’t hear the keyword PLAY.
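A hypothetical sketch of such a keyword-limited command layer (the PLAY grammar and the error message are invented for illustration):

```python
def parse_command(utterance: str) -> tuple[str, str]:
    """Toy language-understanding layer that only knows 'PLAY <artist>'."""
    words = utterance.strip().upper().split()
    if words and words[0] == "PLAY":
        return ("PLAY", " ".join(words[1:]))
    # Voice recognition may have heard every word perfectly, but with no
    # matching keyword the system can only error out or ask again.
    return ("ERROR", "Sorry, I didn't understand. Try: PLAY <artist>")

parse_command("play nine inch nails")
# -> ("PLAY", "NINE INCH NAILS")

parse_command("Find a better song than this garbage.")
# -> ("ERROR", "Sorry, I didn't understand. Try: PLAY <artist>")
```

The transcription can be flawless and the interaction still fails, which is the whole point: the two halves of the system succeed or fail independently.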
Conversely, you could have a system that could perfectly understand that sentence, except when you say it—even in a relatively quiet setting—it instead hears:
Fire the buttress log on the garage.
Again, one side of the system let down the other side, and the system as a whole responds with an error or an additional prompt.
Both sides evolve together
The key point here is that the system is generally only as good as the worst side of this equation. Voice interfaces continue to become more usable because they’re advancing in both of these areas simultaneously, and they’re incorporating the improvements of each into new iterations.
Summary
Voice interfaces to computers require both voice recognition and NLP.
These are quite separate and it’s possible to be good at one and bad at the other.
The system overall can only be as good as the worse of the two.
We’re seeing improvements in voice interfaces because both sides are improving simultaneously.
The next time you interact with a voice system, and it fails, think about which of these two components was responsible.
Notes
I’m not an expert in this field, but I am willing to wager that each of these two categories (Voice Recognition and NLP) likely breaks down into many sub-categories. I think it’s useful, however, to think about them as two components in many contexts.
March 7, 2017
We Need Better iTunes Podcast Data
I really wish iTunes gave better statistics (or any statistics at all) on podcast interaction.
Here’s the view for my podcast on iTunes for the last four episodes.
And here’s the same four episodes within my own software, which is probably giving dramatically incorrect numbers.
The most obvious discrepancy is that the latest episode, No. 68, is showing as maxed out in iTunes, whereas episode No. 64 only has half a bar, while in my podcast management software it’s the opposite. There I have around double the downloads for 64 that I do for 68.
Anyone out there have any insight into stats for iTunes podcasts?
I hope Apple does something in this space soon. They’re the default podcasting leader already, and they could take a few basic steps and improve things drastically.
March 6, 2017
Unsupervised Learning: No. 68
This week’s topics: Amazon’s S3 outage, Uber greyballing, fooling AI, DNS RATs, automating human jobs, suicide and ML, post-work IQ and creativity, greatness vs. imperfection, media choice, tools, projects, and more…
This is Episode No. 68 of Unsupervised Learning—a weekly show where I curate 3-5 hours of reading in infosec, technology, and humans into a 15 to 30 minute summary.
The goal is to catch you up on current events, tell you about the best content from the week, and hopefully give you something to think about as well.
The show is released as a Podcast on iTunes, Overcast, Android, or RSS—and as a Newsletter which you can view and subscribe to here or read below.
Infosec news
Amazon S3 had a major outage this week, which took down much of the internet. S3 is the backend for so many websites and applications that many call it "the internet's hard drive". What I found most fascinating about the outage was Amazon's post-mortem, which identified the cause of the issue as a typo. But rather than saying the sysadmins would be retrained, i.e., blaming the human, they said they'll be implementing tech that will make it impossible for anyone to do this in the future—even if the typo were repeated. I think that's a great answer. Now we just need that for development frameworks. Link
Uber is in (more) trouble because of its use of a technique called Greyballing, which is a play on Blackballing. It's alleged that in cities where Uber was not allowed to operate, Uber would identify city officials and potential investigators and push them a fake version of the app. When they called a car, drivers would appear to accept but then cancel immediately afterward, so the officials were never able to gather evidence against the company. Link
It's possible to fool a lot of AI systems using what are called Adversarial Examples. Basically they are purposely crafted inputs that cause the AI system to make a mistake, usually involving labeling. You might be able to convince a camera that someone has a gun, for example, or an autonomous car that there's a yield sign instead of a stop sign. The way I characterize this is that if you understand the limitations of the training data, you have a way to attack the system. Link
Security professionals everywhere are rejoicing in Marissa Mayer losing her multimillion dollar cash bonus because of the security issues at Yahoo!. They've felt for years that there could be egregious disregard for infosec but there were never any solid repercussions. Link
HackerOne is offering a free service for Open Source projects. The offering basically allows vetted projects to use the Hacker One platform to manage interaction with the community, but without customer support. Link
Cisco's Talos Intelligence group has found a RAT called DNSMessenger that uses DNS TXT records to run PowerShell commands and for C2, which means it never has to write files to disk locally. Link
A researcher found a vulnerability in Google Apps that allowed him to query internal Google domain names, including those for its Active Directory infrastructure. It was essentially an SSRF in their toolbox application, where if you rotated your queries you could pull all sorts of nasty stuff. The researcher received a bounty from Google and the issue has been fixed. Link
CloudPets, a smart stuffed animal that records voice conversations of children and parents, had its MongoDB database compromised, resulting in the exposure of 2 million voice conversations and data from around 800,000 registered users. The exposed database was then held for ransom. Link
Amazon is developing a Voice ID technology. Link
Google has increased all its bounty payouts by 50%, and Microsoft doubled theirs. Link
Google's ReCaptcha has been successfully attacked again. Link
Technology news
New software called Contract Intelligence (COIN) performs in seconds a task that used to take staff 360,000 hours. Link
YouTube has launched YouTube TV, which allows you to stream ABC, CBS, FOX, NBC, ESPN, regional sports, and dozens of other cable networks. Link
Chevrolet is about to offer an unlimited 4G LTE data plan on all cars sold in the U.S. for just $20/month. Link
Ford is exploring a mobile van full of drones for last mile delivery. Link
Human news
A researcher at Florida State University has used machine learning to predict whether someone will attempt suicide, with around 80% accuracy. This is stunning given the previous decades of work yielding no better than a 50/50 coin flip. The system looked at 2 million health records and identified 3,200 people it knew had committed suicide, and machine learning did its regular magic of finding what those people had in common that humans couldn't see. Around 120 Americans commit suicide daily. Link
Sweden has reinstated military conscription because of Russian moves in the Baltic. Link
Japanese universities are struggling to remain elite and relevant. Link
Babies evidently give their mothers stem cells that they can use to heal themselves if needed. Link
There's a new tech where you lock up your smartphone at parties. Link
SpaceX is sending two people on a trip around the moon next year. Link
Ideas
IQ and Creativity Bias in a Post-work World Link
The Mea Culpa Game: Analysis of IT Post-mortems Link
Greatness vs. Imperfection: How Should We Rate Our Leaders? Link
Governments, Markets, and Media Link
Companies Exist to Serve Customers, Not Employ People Link
Discovery
The Car Hacker's Handbook is now available for free. Link
GoPhish — An open source phishing framework that has just been updated. Link
A presentation on a car hacking tool called CANToolz. Link
A collection of red team related resources. Link
Hackr.io — A search engine for online programming courses and tutorials. Link
The rise of the Useless Class. Link
AWS Lambda best practices. Link
PaddlePaddle — An open and easy-to-use deep learning platform for enterprise and research. Link
The human body as a transit map. Link
My company, IOActive, released some new research on vulnerabilities in robots. Link
Advice Bill Gates would give his 19-year-old self. Link
Reflect — Design, publish, and share your data. A data visualization platform. Link
A pretty cool Critical Controls PDF. Link
An article on creating macros for Burpsuite. Link
Notes
This newsletter (and podcast) won #4 on a list of 35 security podcasts. It was particularly rewarding since the three that beat us are all super professional, highly produced, have tons of sponsors, etc. Over here it's just you and me, so I'm happy with our #4 spot. Thanks for reading! Link
I'm in the middle of making a new primer—this time on OSINT! It's going to be a fairly major one, and I'm going through hundreds of resources by hand to pick the best ones. I will hopefully release it within the next week or two.
I'm still reading Hamilton, but I took a break and am reading Sapiens. It's unbelievably good. Next up after that might be Homo Deus, another book by the same author.
I'm going to Stanford this week to speak about Cybersecurity and AI. Super excited about that.
My buddy Ty has me thinking about getting one of these. Link
Recommendations
If you're a parent, start thinking about what skills in the future are most resistant to AI and machine learning, because that's where you probably want to point them. It's about life skills, too, not just vocation. I'm going to be doing an essay on this soon.
Aphorism
"The problem with humanity is the following: we have Paleolithic emotions, medieval institutions, and godlike technology." ~ E.O. Wilson
Thank you for listening, and if you enjoy the show please share it with a friend or on social media.
March 3, 2017
Greatness vs. Imperfection: How Should We Rate Our Leaders?
Trump’s presidency has forced me to re-evaluate what it means to be a good or bad leader.
As someone who considers himself progressive, this should be easy enough on an emotional level. He’s said and done horrible things, so he must be horrible. But things become murky when you look at some of our most revered leaders in history.
Gandhi is widely considered to have been highly racist towards Africans, referring to them as Kaffirs, which is analogous to the n-word in today’s language. “Kaffirs are as a rule uncivilized—the convicts even more so. They are troublesome, very dirty, and live almost like animals.”
Thomas Jefferson and George Washington owned many slaves in a time when people like Alexander Hamilton refused to. Hamilton himself was an adulterer, as was Martin Luther King, Jr., who is also widely understood to have plagiarized significant pieces of his doctoral dissertation as well as his speeches.
Upon inspection it’s far more difficult than it should be to find great people throughout history who did not have some sort of glaring moral flaw, failure, or fumble during the course of their lives. So the obvious question presents itself:
What magnitude of flaw is required to mark a great person as no longer great?
Thomas Jefferson and Martin Luther King, Jr. are strong cases in point. Martin Luther King, Jr. was a preacher who regularly cheated on his wife and stole writing from his students. Jefferson was a populist who championed agrarian simplicity and thought Hamilton’s banking system would create a nation of borrowing materialists, yet he travelled everywhere with an elaborate entourage and died buried in debt.
Have these flaws undone their greatness? At present that answer is no, so the question becomes instead, “Should they have?”
When normal folk are prodded on topics like these about people they respect, we hear noises like, “best intentions”, or, “flaws in good people”. And when regular people already dislike the person, we hear things like, “Well now we know the truth.”, or “I always knew he was rotten.”
But these subjective reviews leave us no closer to an objective benchmark, and that’s what interests me.
A universal measurement
Many want to say that it’s the intentions that matter, more than the actions, and that good people mean well but might still behave poorly because they’re flawed. Others say it’s not about what you meant to do, or what you wish you had done, but rather what you actually ended up doing.
President Bush II, for example, might have had the best intentions in Iraq, but what he actually did led to the deaths of hundreds of thousands of Iraqis, the loss of more than 5 trillion dollars, and thousands of American lives.
Perhaps the cleanest judgement can be made using three separate measurements, each scored from 1 to 10:
The world this person would create if they were all-powerful. 1 is the worst possible world, and 10 is the best possible world.
Their wisdom, integrity, and effectiveness in bringing that world into existence. 1 means they were complete hypocrites about their lifestyle vs. what they proclaimed, or were completely ineffective at making it happen. 10 means they lived according to their words and/or were effective at bringing those changes into the world.
The overall good or evil that resulted from their actions. 1 is massive suffering of hundreds of thousands, millions, or billions of people. 10 is a massive improvement in happiness of similar numbers of people.
The minimum score is 1 (1 x 1 x 1), and the maximum score is 1000 (10 x 10 x 10).
So, as examples, George Bush II and Thomas Jefferson did well in the “ideal world” measurement, as Bush thought he was going to fix Iraq and everyone would love him, and Jefferson would have made slavery disappear overnight if it were easy and had no impact on his lifestyle. But the reality is that Jefferson kept slaves his whole life in order to maintain the luxury he desired, and Bush’s actions led to the suffering and death of hundreds of thousands.
Let’s try a few, and see what we get.
Person      Ideal  Adherence  Impact  Total
Jefferson     9        2         8      144
Hamilton      9        6         9      486
MLK Jr.      10        8        10      800
Hitler        2        8         2       32
Bush II       8        4         4      128
TABLE 1. — Judging good and evil in various world leaders.
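The totals in the table are just the product of the three factors; a quick sketch that reproduces them:

```python
def leader_score(ideal: int, adherence: int, impact: int) -> int:
    """Each factor is scored 1-10; the total is their product (1-1000)."""
    for factor in (ideal, adherence, impact):
        assert 1 <= factor <= 10, "each factor must be between 1 and 10"
    return ideal * adherence * impact

# The (debatable) factor values assigned in the table above.
leaders = {
    "Jefferson": (9, 2, 8),
    "Hamilton":  (9, 6, 9),
    "MLK Jr.":   (10, 8, 10),
    "Hitler":    (2, 8, 2),
    "Bush II":   (8, 4, 4),
}

totals = {name: leader_score(*scores) for name, scores in leaders.items()}
# {'Jefferson': 144, 'Hamilton': 486, 'MLK Jr.': 800, 'Hitler': 32, 'Bush II': 128}
```

One side effect of multiplying rather than adding is that a single very low factor drags the whole total down, which is arguably the point.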
As you can see, it’s quite difficult to assign the various values, as that process operates on its own set of assumptions and biases. I definitely need some guidelines for assigning each value so we can approach some sort of consistency, but even with these complications I think the process is illuminating.
Another thought I have, looking at the outputs, is that the worldview and impact numbers should be weighted more heavily than adherence.
Anyway, I think it’s an interesting exercise, if nothing else.
How would you adjust the factors and/or set the values for the leaders? Send me recommendations and I’ll include them in the list.
March 2, 2017
The Mea Culpa Meta Game
I love the psychology that goes into post-outage write-ups. Amazon just had a doozy, with S3 going down and crippling much of the internet for a day. The image above captures their approach to the narrative, which I would classify as dense and opaque.
Key attributes of the write-up include:
Small text
Formal language
Massive paragraphs
No bullets or numbered lists
No images
The message they’re trying to send to your brain without you knowing is:
You probably don’t have time to read it all the way through.
It’s lots of text, so it’s thorough.
It’s small text, so it’s probably deep and technical.
The response was legitimate and my confidence in them is restored.
The Etsy counter-example
And here’s a near-opposite approach, by Etsy.
Here we have:
Larger, more readable text
Shorter, more digestible paragraphs
Relaxed language
An intro followed by a numbered list
What they’re trying to communicate is quite different:
We’re honest and approachable.
We will never lie to you.
We take our lumps when we deserve them.
You can trust us to do the right thing for you.
Summary
Different customers will respond to different messaging, and situations might also call for different approaches. Maybe when you take down half the internet you need to be a bit more formal than when there’s a hiccup in some API.
Post-outage announcements are clearly more art than science, but I find the difference in approaches fascinating. I personally prefer the Etsy approach to Amazon’s, as honesty and transparency rank high on my list of desired traits in a partner.
Either way, the next time you read one of these, think about the non-textual communication that’s being used through text size, paragraph density, tone, and similar meta. It might tell you more about the company you’re dealing with than the text itself.
March 1, 2017
Governments, Markets, and Media
I was thinking the other day about an analogy between government services vs. private services and Walter Cronkite vs. cable news channels.
It’s weird, but hear me out.
Walter Cronkite was a lot like the BBC. His newscast was private, but between him and Dan Rather and all those types, there were very few sources of news. It was almost state-provided, like the British do it.
The popular narrative is that we have the same thing we used to have, but now we have a worse version of it. Basically the advertisers got their hands on it, and now it’s all newstainment, or whatever the hybridization is.
But I was just thinking that maybe something else happened entirely.
What if we went from zero choice to lots of choice?
One could argue that news isn’t actually worse today, but rather that there are so many options and many of them are worse.
It’s true that the de facto standard, Fox News, along with CNN and MSNBC and the like, are probably much worse than the zero-choice models from 1950-1980 or whatever, but those aren’t our only options.
There are so many top quality newspapers today. Magazines. Special news shows. Semi-private journalism that goes super-deep into stories. Etc.
And rather than searching for a lack of bias, perhaps what we’re getting is a spectrum of biases to choose from. Some left. Some right. Some not very biased at all.
Choice transfers the burden to the individual, and that’s ultimately why we have a media failure today. It’s not the media that’s failing us, it’s the people who choose to watch garbage.
We’re not living in a world where there aren’t good options. It’s the opposite. We have the best options ever in history. We have failed ourselves.
There are many calling for the garbage news shows to go away. They say they’re harming the country. But it’s not true. The channels we see are nothing but mirrors. All the sensationalist news, Nancy Grace, Ghosthunters, etc.—it’s there because it’s what we want to see.
So for those who are calling for better news, what you’re really saying is we need our choices restricted. We need a BBC to control the quality of the content and force some proper education.
And maybe that’s true.
But you have to think about what it means to say that. It’s saying we’re too stupid to think for ourselves, to make good choices about what to watch.
Ultimately the issue isn’t a decline in media quality, it’s a decline in our quality. And I’m not sure limiting the options at this point would be either possible or effective.
We’ve regressed into children who need entertainment, and we’re in a market economy where we have 300 different channels to browse. The only way to fix the market problem is to improve the quality of the consumer and their choices.
I’m not optimistic on that point. But let’s stop blaming the media. We’ve gone from government control to market control, guided by the blade-handled weapon of choice. And now we have to live with choices made by a nation of idiots.
February 27, 2017
Political Extremes Produce Relative Heroes
Bush is currently enjoying a surge in popularity. The second Bush, not the first. He’s the one who got us into Iraq and oversaw the Katrina disaster.
Anyway, there are lots of people saying very nice things about him. And for good reason, I think. I think the guy has a genuinely good heart, and that he was honestly trying to do the right thing in most cases. That’s just my opinion.
But he was in charge while a significant amount of pain and suffering were injected into the world. And that’s leading to tweets like these:
Never thought I'd 1 day look back with anything less than rage and derision on Bush & his criminal legacy. But Trump has made me miss Dubya! https://t.co/1Enw3nNTHq
— Mehdi Hasan (@mehdirhasan) February 27, 2017
Contrast
I think the key ingredient here is contrast. When you swing so wildly in various ideological and policy directions, it’s easy to realize how good you had it before. And when it swings the other way you get more hot and cold feelings in different parts of your body.
It’s like rappelling near an erupting volcano and then diving into an ocean full of icebergs. The body doesn’t know how to properly measure and think about the inputs because it’s too busy being shocked.
And then you have people who are the ice and the lava, and can’t stand anything that doesn’t match.
Today, people who hate Bush will talk about how great he is and people who loved Bush will talk about how horrible he is.
— Ben (@BenHowe) February 27, 2017
Then you have the people who see my point here, and realize that better than Trump doesn’t equate to good.
Like any president, he wasn't all bad but the Strange New Respect for George W Bush that Trump has inspired is pretty crazy.
— Matthew Yglesias (@mattyglesias) February 27, 2017
As I talked about in Exploring the Nature of Evil, it’s not easy to say who’s good and bad, especially when you’re dealing with great people who yield great power.
But all things being equal, I think Bush was a better person than Trump, and that I’d perhaps rather have his negatives than Trump’s. And that’s what it comes down to: you have a collection of negatives and a collection of strengths, and you have to weigh the two, and that weighing comes out differently for different people.
It’s a strange world.
Unsupervised Learning: No. 67
This week’s topics: CloudBleed, SHA-1, White House Leaks, Planets, Satellites, Drones vs. Eagles, InfoSec Jobs, ExFil, IQ and Creativity in a Post-work World, Weaponized Narrative, Security Tools, Tons of Great Links, and more…
This is Episode No. 67 of Unsupervised Learning—a weekly show where I curate 3-5 hours of reading in infosec, technology, and humans into a 15 to 30 minute summary.
The goal is to catch you up on current events, tell you about the best content from the week, and hopefully give you something to think about as well.
The show is released as a Podcast on iTunes, Overcast, Android, or RSS—and as a Newsletter which you can view and subscribe to here or read below.
Infosec news
Tavis Ormandy of Project Zero discovered a major flaw in Cloudflare this week, which is being called CloudBleed. The best way to describe it is that Cloudflare was randomly injecting content from its protected sites into the browsing sessions of visitors to other websites it hosted. So if they were protecting OkCupid, for example, and you visited any other site behind Cloudflare, you might get random data from OkCupid injected into the page you got back. Project Zero and Cloudflare worked together to fix the issue quickly. Link
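To make the mechanics concrete, here’s a toy Python model of the bug class involved — an overread past a document boundary in shared memory. This is not Cloudflare’s actual parser code, and the buffer contents and names are invented for illustration:

```python
# Simplified model of CloudBleed's bug class: an edge server parses a
# document out of shared memory, but a broken end-of-document check
# copies extra bytes -- which may belong to another customer's site --
# into the response it sends back.

# One shared buffer standing in for the edge server's memory.
MEMORY = b"<html>public page</html>" + b"session=abc123; other-site secret"

def render_response(memory: bytes, doc_end: int, overread: int = 0) -> bytes:
    """Copy the current document out of shared memory.

    A correct parser stops at doc_end; the buggy path (overread > 0)
    also copies whatever happens to sit next in memory.
    """
    return memory[:doc_end + overread]

doc_end = MEMORY.index(b"</html>") + len(b"</html>")

safe = render_response(MEMORY, doc_end)                 # just the page
leaky = render_response(MEMORY, doc_end, overread=20)   # page + leaked data
print(b"session=abc123" in leaky)
```

The leaked bytes here are a session token from a neighboring tenant, which is exactly why the real bug was so dangerous: the extra memory could contain cookies, passwords, or private messages from unrelated sites.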
A large number of Google users reported being mysteriously logged out of their accounts last Thursday, which was concerning timing given the situation with the Cloudflare vulnerability. Google said, however, that it was a maintenance issue on their side, and was unrelated to the Cloudflare bug. Link
Google researchers have demonstrated the first successful collision attack on SHA-1 by creating two different PDF files that produce the same SHA-1 hash. Contrary to what much of the media is saying, this is not an extremely practical or realistic attack vector right now. Google worked for two years to produce this collision, so it's pretty unlikely to be used against you. It should, however, slightly speed up your migration to a stronger option. Link
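For context, a collision means two different inputs produce the identical digest. A minimal sketch of how you'd check for that condition with Python's hashlib (the input bytes here are placeholders, not Google's colliding PDFs):

```python
import hashlib

def digests(data: bytes):
    """Return (SHA-1, SHA-256) hex digests for a blob of bytes."""
    return (hashlib.sha1(data).hexdigest(),
            hashlib.sha256(data).hexdigest())

a = b"first document contents"
b = b"second document contents"

sha1_a, sha256_a = digests(a)
sha1_b, sha256_b = digests(b)

# For ordinary distinct inputs, both digests differ. For the two
# colliding PDFs Google produced, the SHA-1 digests match while the
# SHA-256 digests differ -- which is why migrating to SHA-256 or
# stronger closes off this attack.
print(sha1_a == sha1_b)
```

This is also a handy way to audit your own artifacts: if a signing or integrity check in your pipeline compares only SHA-1 digests, it's now relying on a broken guarantee.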
Hayvn is IBM Watson, but for information security analysis. People would think it was less awesome if they realized that IBM Watson has already replaced a decent number of Information Security related jobs. In the short term, though, it'll free security analysts up to do other things. Link
Sean Spicer has inspected his aides' mobile phones for apps like Signal and Confide to make sure they weren't communicating with reporters. He then ordered them not to talk about the fact that he was checking for leaks, which was then leaked. Link
With its 88 new satellites, Planet is about to become the world's largest space surveillance company. Link
Terrorists are building drones, and France is using trained eagles to counter them. Link
Over half of infosec job openings take 3-6 months to fill, and less than 1/4 of applicants are qualified for the jobs they apply for. Link
A new covert data extraction technique has been developed by having malware blink a light on a computer, which is then monitored by a drone. Link
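As a rough illustration of how such an optical covert channel works (the encoding and timings below are my own invention, not the researchers'), malware serializes bytes into LED on/off intervals, and a camera sampling the light demodulates them back into data:

```python
def bits_from_bytes(data: bytes):
    """Yield individual bits from a byte string, most significant first."""
    for byte in data:
        for shift in range(7, -1, -1):
            yield (byte >> shift) & 1

def blink_schedule(data: bytes, unit_ms: int = 100):
    """Map each bit to an (led_on, duration_ms) pair.

    A camera sampling faster than unit_ms can recover the bit stream
    from the recorded brightness of the LED.
    """
    return [(bit == 1, unit_ms) for bit in bits_from_bytes(data)]

def decode(schedule) -> bytes:
    """Invert blink_schedule: turn observed on/off states back into bytes."""
    bits = [1 if on else 0 for on, _ in schedule]
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)
```

At 100 ms per bit this channel moves only about 10 bits per second, which is the usual trade-off with optical exfiltration: extremely slow, but it crosses air gaps that network defenses never see.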
Netflix released a fascinating new tool called Stethoscope, which is a user-focused security recommendations system for employees. Link
Technology news
Nokia appears to be trying anything, and has relaunched its used-to-be-popular 3310 phone. I have to admit it does look somewhat attractive, but I don't see a legacy form factor device like this selling well until we have separate displays and digital assistants, i.e., until the device isn't the center of the world. Link
Waymo is suing Uber, saying an employee stole around 14,000 files from them and took them to Uber. The content in the files allegedly led to innovations that have produced around half a billion dollars in revenue. Link
Facebook has open sourced Prophet, a data science forecasting tool for Python and R. Link
Google is about to start adding a "fact checked" tag to certain stories in their results. Link
Android Nougat was released in August of 2016 but fewer than 1% of devices are running it. Link
Linode is evidently losing customers in droves as a result of their repeated DDoS outages. I'm about to be another one who's leaving. Probably heading to AWS. Link
Tesla is looking to sell cars complete with insurance and maintenance. Link
Human news
Bruce Lee used to write letters to himself about authenticity and personal development, and they've been released for the first time. Link
NASA found 7 Earth-like planets, just 40 light years away. Link
Kim Jong-Nam was killed by the VX nerve agent, rubbed on his face by a woman at the airport. The entire story is some beyond-fiction spy stuff. Link
Fantastic hand-drawn infographics by Wendy Macnaughton. Link
Travel Press is reporting a massive drop in tourism to the U.S. Link
Ideas
IQ and Creativity in a Post-work World Link
Weaponized Narrative is the New Battlespace Link
Companies Exist to Service Customers, Not to Employ People Link
You Should Have Two Different Kinds of Hiring Interview Link
Discovery
Troy Hunt's analysis of the Cloudbleed bug. Link
20 security startups worth paying attention to this year. Link
Analyzing botnets with Suricata and Machine Learning. Link
A list of sites affected by CloudBleed. Link
If you haven't read about GDPR (the European data privacy law) you should look into it. The short summary is that it gives European citizens back control of their own personal data, and protects that data from being exported and misused without their knowledge. It includes fines for companies who fail to protect the data of EU citizens of up to 4% of worldwide turnover. Link
Evaluator — An open source tool for strategic information security risk assessment. Link
A fantastic piece on the history of Trump, Putin, and a potential new Cold War. Link
MacOS WiFi Cleaner — A tool by Rob Fuller to remove open wireless hotspots from MacOS. Link
Amazon has launched a new blog dedicated to AI. Link
PayloadsAllTheThings — A list of appsec related attack payloads, coming soon to SecLists as well! Link
Google's API design guide. Link
pURL — An API testing tool written in Python. Link
The ICS/SCADA Top 10 List Link
Notes
If I could do any university program today I'd do the Philosophy, Politics, and Economics degree from Oxford. Link
Still working through Hamilton, and my next book will either be The Federalist Papers or Sapiens.
I'll be going to London in the middle of June, so if you're going to be there we should get together.
I'm thinking about doing a live Twitch stream of something I'm calling Office Hours, where people can hit me up on Twitter, YouTube, Facebook, whatever, and ask me anything on the topic of infosec. I'll probably do my first session on my Information Security Career guide, and anyone can ask for more detail on any section, etc. If you're interested let me know on Twitter or via email. Link
Recommendations
Read history. I have learned so much about myself by reading the Hamilton biography. I've seen flaws in Hamilton and Jefferson that I could easily see myself making, and their experience might be able to help me in my own life. Reading does this for you. It lets you live multiple lives. No matter how much you're reading, you can probably benefit by reading more.
Aphorism
"Never confuse movement with action." ~ Earnest Hemingway
Thank you for listening, and if you enjoy the show please share it with a friend or on social media.
February 26, 2017
Companies Exist to Serve Customers, Not to Employ People
I’m not sure when it happened, but somewhere along the American history timeline people became convinced that employers owe people jobs.
They don’t.
Companies have employees for one reason alone: it helps them serve their customers better. The moment this stops being true is the same moment the company will get rid of employees.
This is what people don’t get about the automation / artificial intelligence / robots situation. There’s a common misconception that companies won’t fire most of their employees because…well…because there’s no place else to work! Or because if they fire all the employees then nobody will have money to buy their products!
This is a completely juvenile way of looking at business. Does anyone really think that company A will maintain a massive employee burden, which harms its profits, all because it believes that company B and company C are looking out for them by taking massive business losses in the name of national altruism?
It’s pure fantasy.
Companies do what’s best for them, and they don’t gamble their livelihood on the assumption that everyone is looking out for their best interest, because they know this isn’t true.
If the business makes more money by firing everyone in the short and medium term, but they have a chance of being harmed in some potential long-term future, what do you think they’ll do? They’ll fire everyone. Every time.
Manufacturing is a great example. People keep talking about the fall of American manufacturing, but if you look at the numbers you’ll notice that the U.S. is making as much now as we ever have, but we’re producing that output with a tiny fraction of the workforce.
So manufacturing doesn’t have a problem at all. The problem is manufacturing jobs.
This same trend will hit industry after industry until the industries are doing great and some massive percentage of the country is unemployed.
At that point we’ll start to see that there’s a limit to how much the economy can grow, even with the best products, because you can’t sell things to people who don’t have money.
This will take some time, however, since many products will still sell to those who do have money. The money isn’t going away; it’s just going to fewer and fewer people.
But yes, this will eventually become a problem, and we’ll need something like Basic Income to solve it.
But don’t think for a moment that companies are already anticipating this, and holding onto their employees so that they have salaries to buy what they’re making. It’s fiction. Companies work on the timescale of quarters and fiscal years, not generations.
If your company can make more money without you, they will fire you. Period. But that’s not actually the interesting part.
The interesting part is that we should expect them to. And if you don’t, then you don’t understand business.
Companies are there for themselves, for shareholders, and for their customers. Those who put altruism before a healthy business become ex-companies.
Machine learning, AI in all forms, and robots are making it increasingly possible to fire humans and dramatically improve productivity and profits. That means more unemployed humans, and there aren’t many businesses who will avoid upgrading because of what might happen to the greater economy in 20 years as a result.
Prepare yourself for the future that’s coming, not the future you wish were coming.