Daniel Miessler's Blog, page 86
March 31, 2019
Defining the Values of the Intellectual Dark Web
The gravity around the Intellectual Dark Web (IDW) continues to build, with the New York Times doing a piece on it recently, and many other publications weighing in to cast its members as either saviors or villains.
The term was coined by Eric Weinstein on Sam Harris's podcast.
In no particular order, the top echelon of members include Sam Harris, Eric Weinstein, Joe Rogan, Ben Shapiro, and Jordan Peterson, but the exact membership is neither clearly defined nor restricted.
I’ve listened extensively to the work of all these core members—with the exception of the conservative Ben Shapiro who I’ve only watched as a guest with Sam Harris—and in my view the unifying characteristic was the willingness to honestly discuss controversial topics in Good Faith™. The New York Times piece had a different take, saying it was about going against your own in-group.
Good Faith™ here means assuming your discussion partner fundamentally has good intentions.
There is no direct route into the Intellectual Dark Web. But the quickest path is to demonstrate that you aren’t afraid to confront your own tribe.
The New York Times Piece
That doesn’t quite fit for me, because I don’t see people like Ben Shapiro or Dave Rubin angering their fans with their rhetoric. I see them saying exactly what their fans already agree with, but perhaps I’m wrong and they are being criticized for being too friendly with liberal types like Sam and Eric.
I’d almost characterize the IDW as Radical Centrists—which alloys ideas from both liberals and conservatives into a kind, practical, and unconventional middle.
This is what separates people like Rubin and Shapiro from the rest of the group.
If we were to take the top issues that define one as a liberal or conservative, such as abortion, healthcare, drug laws, gun control, climate change, LGBT rights, dislike of Trump, etc.—I’m pretty sure Sam Harris, Eric Weinstein, Joe Rogan, Steven Pinker, and even Jordan Peterson would score extremely liberal (as would I).
Even Jordan Peterson feels to me like a disgruntled progressive that’s upset the left won’t harvest good ideas from tradition.
It’s perplexing to me that people like Sam Harris, Eric Weinstein, Joe Rogan, and Steven Pinker are commonly called out as right-wingers when they are extremely liberal by most standards.
The dangers of undefined membership
To me the IDW becomes troubling when the extreme right starts using it as a platform for racism, fascism, and nationalism.
“We’re just having honest conversation”, they’ll say. “We’re not racist! We’re part of the IDW”, they’ll say. That makes it difficult to know what the group truly stands for—especially when there are no official membership boundaries.
To a liberal outside this IDW conversation, it’s hard to see the difference between someone who says that we’re going to have to look hard at the immigration topic in Western Europe and the United States—for various liberal reasons—and someone who says that they believe Jews or Muslims are the problem, and the world would be better off if white people were in charge.
The second position does not represent openness, or honesty, or arguing in good faith—that’s racism. Full stop. Same with the white separatists, ethno-nationalists, and all their various incarnations.
There’s a difference between wanting to have a good-faith conversation about a controversial topic in order to further humanist ideals, and wanting to use a guise of openness to spread racism, sexism, and nationalism.
Defining values
This is why I think if the IDW wants to survive—and not be hijacked by extremists on the right—it needs a set of values. I’d propose something like the following.
1. The pursuit of a humanist ideal that emphasizes the well-being of all humans, individually and collectively, and that prefers critical thinking and evidence over dogma or superstition.
2. The belief that Good Faith conversation—including on sensitive or controversial topics—is critical to making progress towards #1.
Some may balk at the humanist and “well-being” pieces, or even the “all humans” part. Perhaps they’ll say it sounds too liberal. Good. That sounds to me like a great way to scare away the racists, sexists, and xenophobes.
I have a few conservative friends and most would agree with these two tenets.
These tenets allow for us to harvest truth and beauty and utility from religious traditions, but to also criticize such traditions where they are out of line with the core values.
These tenets allow us to talk about race, gender, class, and culture—openly, and without fear—but also to reject people who conjure and wield hatred around these topics.
And most importantly, these tenets allow us to unify good people who happen to call themselves, for whatever reason, conservatives, liberals, or libertarians.
If we can unify kind and thoughtful people, who genuinely want to make the world better for everyone, and who believe strongly that the way to do that is by talking to each other, then this IDW might be just what we need to survive 2020.
If so, I’m in.
Notes
Mar 31, 2019 — A reader corrected me that Dave Rubin is actually much more like Sam and Eric in that he has always been liberal, and isn’t a conservative like Ben Shapiro.
—
Subscribe for one coffee a month ($5) and get the Unsupervised Learning podcast and newsletter every week instead of just twice a month.
March 28, 2019
Unsupervised Learning: No. 170 (Member Edition)
This is a Member-only episode. Members get the newsletter every week, and have access to the Member Portal with all existing Member content.
Non-members get every other episode.
Four Components of Free Speech Risk — Analysis of Sam Harris’ Podcast With Roger McNamee
I just finished listening to an extraordinary episode of Making Sense with Sam Harris, where Roger McNamee was the featured guest. He is a long-time tech analyst and investor, and actually used to be an advisor for Facebook before becoming an outspoken critic.
Roger’s points were legion, and covered the now familiar ground regarding the collection of our data by Facebook, Google, and the hordes of data brokers who assemble and sell curated profiles on all of us. It was a good summary of the scope and impact, but it was covered territory for anyone in infosec or privacy.
I’m not sure he was correct about Google’s malicious motives in creating GMail, but it sure was an interesting narrative. I took it with a bit of skepticism, though.
He talked about how Google had search, but realized they needed to know more about the people searching—so they invented GMail and scanned every email for data about you and your preferences. He says they then realized they needed your location too, so they created Google Maps. It was quite a Mr. Burns-esque picture he painted.
To me, the most interesting point he made was how strange it was that nobody is challenging these companies for doing what they’re doing. He asked how it was that this became their data in the first place. Who gave all these data brokers authorization to sell your data? How did it become theirs somehow? When we never entered into an agreement with them. It was a fascinating question.
“Data is the new oil” is a popular sentiment in tech right now.
His argument reminded me of oil. Oil comes from the sun’s energy stored in ancient plankton and other organic material. Who owns the sun’s energy? Well—according to current laws—whoever has the resources to find it first, and to claim ownership of a location and method of extraction.
What if I told you that the vast majority of your privacy risk comes not from the seedy darkweb, but from completely legal data brokers?
— ᴅᴀɴɪᴇʟ ᴍɪᴇssʟᴇʀ (@DanielMiessler) March 4, 2019
That’s precisely what Google, Facebook, and all these various data brokers have done with our data. They’ve taken our collective sun—which is the fundamental data about our lives—and claimed it as their own.
With the first-hop services like Google and Facebook you probably sign away your rights in the agreement, but I wouldn’t know because I’ve never read it. And neither have you.
They own it. They can sell it. And they can do so without us even knowing. If you want recourse, sure—take some time off of work and they’ll show up with 37 lawyers prepared to battle you for 20 years if necessary, which would cost you thousands of dollars a month.
The point isn’t that we can’t fight big companies—the point is that we’re not even realizing it’s profoundly strange for someone to own sunlight, or the personal data about billions of humans.
The (destructive) power of free speech
Anyway, all this was interesting, but Sam (thankfully) kept pulling Roger out of the weeds and back to the important questions. Namely, what went wrong in 2016? What is wrong with Facebook claiming it’s a platform and not a media company?
Sam was operating with his default state of Good Faith to both sides, even though only one was present. He could see—as can I and many others—that there’s a difference between being a platform for conversation and being a media company, and he was reluctant to blame the platforms fully for the misinformation campaigns that have become so common.
The real question that Sam was asking was excellent, which is basically:
This was specifically his question for Twitter.
Why not draw the line at the First Amendment and be done with it? Why tie yourself in knots over infinite nuance and interpretation when there’s already a (somewhat) clear backstop in the form of the Constitution?
I think that’s brilliant, and I think the answer is obvious from the way Twitter is handling these issues. They see the impact of not taking action against harmful memes and viral hate speech as being far worse than taking action, and I have to say that in many of the cases I’ve heard about, I agree.
What this highlights is simply that things have changed. It reminds me a lot of Sam’s recent conversation with Nick Bostrom, actually. These platforms might end up being the first black ball that we pull from the urn, and it might require that we get a whole lot more controlling over what can be said. That frightens me greatly as well, since I think we’re equally unready to exert that level of protective control without it becoming more of a threat than dangerous speech.
The Four Components of Free Speech Risk
So that brings me to the idea I had during a 45-minute walk while listening to the podcast.
I think that laws like the First Amendment might have to change based on the evolution of human society and technology. It’s depressing in a similar way to what Yuval Harari talks about in some of his work, i.e., that religion and Capitalism and all these various ideas that have served us in the past might eventually become outdated to the point of becoming useless. And at that point we’ll need something new.
That isn’t to say that it’s time to discard the First Amendment, but rather that it’s possible for something so sacred and so pure as the First Amendment—or Capitalism, or fill_in_the_blank that we’ve always loved—to become such a bad fit with our current reality that we have to modify it to survive.
The way I tried to capture this was to look for elements of risk in free speech, and imagine those element values at various stages of human history. I think the components might look something like:
Platform Reach
Audience Gullibility
Mob Potential
Harm Potential
Platform Reach is how many people can hear you when you exercise a bit of free speech. Audience Gullibility is how susceptible the audience is to bullshit. Mob Potential is how quickly others can be rallied around a given argument or sentiment. And Harm Potential is how bad the outcome could be if a piece of free speech were harmful or malicious.
I’m not a historian, so apologies if I’m being sloppy here.
In the late 1700s the platform was the voice, the letter, and the book, which are either small, slow, or limited in penetration. And even though people were far less educated in the past, they were also more indoctrinated with a government or religion’s dogma—which likely immunized them against other brands of mental debris. And the harm potential back then of a dangerous stump speech or a letter or a book was definitely significant at the top end, but in modern times the potential to change voting patterns, cause social strife, and disrupt herd immunity is arguably even more severe.
Really, check out the Nick Bostrom podcast with Sam.
The point here is that we may be at an inflection point where ideas can be weaponized in a way that’s so bad we need regulation to assist. I’m not saying I want this to happen—I’m saying it might be happening regardless of what we want.
The Supreme Court has also recognized that the government may prohibit some speech that may cause a breach of the peace or cause violence.
Legal Information Institute, Cornell Law School
We have to think about all the various combinations of these four variables, and imagine worst-case combinations.
In the worst case, you have maximally gullible people, who are most open to believing a new narrative, who are being force-fed a false truth, on a platform that reaches hundreds of millions, where it’s trivially easy to form a mob, where it’s relatively easy to cause significant harm either in the short or long-term.
That’s kind of where we are.
Never before have we had this specific combination of these risk variables. And that might mean it’s time to start adjusting how we think about ideas.
Of course, if we can improve any of these variables it greatly reduces the risk. If we have a smarter population, if nobody’s able to be malicious on the platform, or if viral movements get shut down quickly, or if it’s somehow not easy to cause true harm to people—all those factors can make things better.
But I don’t see how we’re going to fix any of them, let alone all of them.
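The interaction between these four components can be sketched as a toy model. This is purely illustrative: the multiplicative combination and every scenario number below are my own guesses, not anything measured or proposed in the podcast.

```python
# Toy model of the four free-speech risk components described above.
# All weightings and scenario values are illustrative guesses, not data.

def speech_risk(reach, gullibility, mob_potential, harm_potential):
    """Combine the four components (each scored 0.0 to 1.0) multiplicatively,
    reflecting the idea that risk explodes only when all are high at once."""
    return reach * gullibility * mob_potential * harm_potential

# A late-1700s stump speech: tiny reach, slow mob formation.
pre_industrial = speech_risk(reach=0.05, gullibility=0.7,
                             mob_potential=0.2, harm_potential=0.6)

# A viral post on a modern platform: enormous reach, instant mobs.
social_media = speech_risk(reach=0.9, gullibility=0.7,
                           mob_potential=0.9, harm_potential=0.8)

print(f"1700s: {pre_industrial:.4f}  today: {social_media:.4f}")
```

A multiplicative model also captures the point about improving any single variable: because the components multiply, halving just one of them (say, audience gullibility) halves the total risk.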
Analysis
I think what will end up happening is that either Twitter is going to move towards the Constitution—since they can’t possibly police everything without some major breakthroughs in ML—or the Constitution (and similar law elsewhere) is going to have to move towards Twitter.
That means instead of having a specific list of things you can’t say within the protection of free speech, the scope of what’s considered dangerous (like yelling fire in a theater) will expand greatly.
Such laws might say something like:
People who act in bad faith, with the intent to harm either individuals or groups, and use platforms designed to reach over 1,000 people, where the outcomes can conceivably result in harm—will be in violation of the Conscious Harm and Communication Act (CHACA).
A thing that we hopefully won’t need
Imagine how much interpretation there will be in there. Imagine how much controversy there will be about what applies and what doesn’t.
We don’t really have to imagine. Just look at Twitter.
Summary
Listen to the podcast.
The coolest point from Roger was the fact that we’re passively accepting the ownership of our data, and we shouldn’t.
Sam’s point was that we already have a line in the sand in the form of the Constitution, so why not use that?
I think Harari’s point is salient here, i.e., that we might have simply evolved past that being a useful protection anymore. Ideas + Maliciousness + Gullibility + Global Tech Platforms might be the first black ball we pull from Bostrom’s urn.
What will laws look like that protect global health in this realm? I think we can expect them to be broad and open to interpretation—much like we see today with Twitter.
March 23, 2019
The Insane Reaction to Renée DiResta on the Joe Rogan Podcast
I heard Renée DiResta on the Sam Harris podcast a while back, and was excited to learn that she just appeared on Joe Rogan as well.
Her work is focused on misinformation campaigns, and she works at a place that tries to combat the problem.
Anyway, I was listening to her and Joe talking, and I somehow started reading some YouTube comments, which I took a screenshot of above.
What in the actual fuck.
I’m completely blown away. I’ve been reading about misinformation campaigns, and specifically the efforts by the Russians, since 2015 or so, and she’s been focused on it full-time for even longer. I’ve also read several books on the topic. I also have a military background with some dabbling in intelligence work during that time.
Some of the campaigns launched against us during the 2016 election.
Every fiber of my being tells me that the misinformation threat from Russia is absolutely real, and the research that’s documented this fact is overwhelming.
So that raises the question: why is almost every single comment on Joe Rogan’s podcast talking about Renée as if she’s the Clinton-Email-Antichrist? And why is it that every time I talk to my conservative friends about Russian influence, they downplay it massively and claim it’s fake news?
Why? What’s the unifying thread?
Especially when they went out of their way to say not everything is Russian influence, there are real people just being idiots, etc. Their discussion was remarkably measured and cautious, actually. It in no way matched the commentary about it.
The simplest solution might be the best one
One of the most powerful dialectic lessons I learned in the last 10 years was that people reject what they don’t want to accept the implications of.
If liberals were shown evidence that concealed carry made places safer, they would reject that evidence because they wouldn’t want to see more guns. Conservatives deny climate change data because they don’t want climate change regulations pushed down from a bloated, untrustworthy government.
And it looks like we have the same thing here.
Another option is that these comments come from actual trolls, and are therefore not representative of his actual audience.
It appears that conservatives (which Rogan’s YouTube is evidently full of) hate the Russian Influence narrative because it implies that an enemy exists that’s more dangerous than the liberals. Or, put another way, if they accepted that Russia really was actively tampering with us in a very Cold War type of way, they’d have to shift their focus off of Clinton’s emails. Or maybe become concerned with whether Trump has some unsavory entanglements with the same people.
Just as with Climate Change, the natural result is too much for them to handle, so their move is to reject the evidence as fake news.
That’s the world we live in—a world where smart people deny obvious truth because they don’t like what they think the implications are of accepting it.
And the right isn’t the only side doing this. The left is being devoured by this as well.
The only possible escape is something like the Intellectual Dark Web, which is mostly progressive people who are willing to talk about uncomfortable topics, accept truth, interact with one another in good faith, and then come together to pursue solutions.
You’d think that Rogan’s fans would be into that, but their position on the Russia stuff (again, the comments could be misleading) is telling me they’re just blindly following their own religion without doing any independent thinking.
It’s becoming so strange for me at this point. All I can do is read books and listen to people who have IDW mindsets, because everyone else seems horribly lost on either the left or right.
The center is gone. Nuance is gone. Good faith is gone. And pressure is only increasing.
As I said before, this is taking us towards dangerous territory for 2020 and beyond.
March 20, 2019
Unsupervised Learning: No. 169
Unsupervised Learning is my weekly show that provides collection, summarization, and analysis in the realms of Security, Technology, and Humans.
It’s Content Curation as a Service…
I spend between five and twenty hours a week consuming articles, books, and podcasts—so you don’t have to—and each episode is either a curated summary of what I’ve found in the past week, or a standalone essay that hopefully gives you something to think about.
Subscribe to the Newsletter or Podcast
Become a member to get every episode
March 17, 2019
The Bifurcation of Elite Education
I think the bottom is starting to rot out of the education racket. Elite education today is essentially two different things:
The prestige of having gotten into that school, and
The education you receive there.
These two things are separating from each other, and I think that separation is about to accelerate.
Better education elsewhere (or at least as good)
A number of studies have shown that the level of content at regular universities is often very similar to that of elite institutions, yet people who graduate from the top schools still make more money over their lifetimes.
I think what’s going to happen is that more and more professors are going to become disillusioned with the drama and friction and politics, and will start teaching classes themselves or via loose collectives online.
A high-quality video series—with some interaction for paying students—could reach tens of millions online, as opposed to a few thousand inside an elite college. And there are already efforts to get this type of thing going.
We’re already seeing regular institutions offer free online courses, but imagine a deeper version of that—just like a regular course—for a reasonable price. And importantly, this would be a direct relationship between the people paying and the experts teaching the classes. So they wouldn’t have to watch everything they say for fear of angering a university.
Status indicators
If the education itself became available for less money, and to more people, through a system like the one above, that would raise the question of how companies and society could tell the elites from the normals (you know, because that seems to matter), and I think the answer might come in the form of various clubs and associations.
The more tech we use the easier it gets to validate certain types of activities. People might form clubs based on their salaries, or their net worths, or their amount of social media influence. Or the number of people who read their content on websites they write for.
Whatever.
The point is that evolution makes us want to give ourselves elite labels, and form small and selective groups. So if elite colleges stop being an avenue for doing that, due to some combination of cost and unremarkable education, then people will find other ways to draw those distinctions.
China has a social credit system. We have credit scores. Black Mirror had some ideas as well.
I think we’ll see many iterations of such ranking and reputation scoring platforms grow in popularity, even if they’re only popular in certain small or elite crowds because they’re gross to talk about in public.
Summary
Education is becoming too expensive, and the quality of the education isn’t growing at the same pace.
Education from other sources is improving in quality, and technology might enable decentralized options of extraordinary quality very soon.
Once the education component is separated from the status of going to an elite school, we’ll find new ways to get a validated indicator of status assigned to people at various stages of their lives.
—
Subscribe for one coffee a month ($5) and get the Unsupervised Learning podcast and newsletter every week instead of just twice a month.
March 14, 2019
The Need for Post-Capitalism
When I saw Yuval Harari live with Sam Harris in San Francisco, I heard a lot of interesting things. But the most interesting thing I heard that night was from Harari, when he said something like:
Forms of government have periods where they’re best suited, based on the evolution of the people at that moment. And as the people change, so must how we manage ourselves.
An imperfect paraphrasing of live comments by Yuval Harari in 2018
This shook me when I heard it, because I think the example he used was democracy.
It’s hard to know how much of this was his thought vs. mine after the fact, but I’m giving credit regardless.
He talked about older forms of government, which worked at the time but became outmoded, and then said that democracy is about to go out of style as well. Not just because it’s not fashionable, but because it’s no longer functional.
That got me thinking about a scene in Star Trek: The Next Generation where Picard lectures someone about money. They were asking about it, and Picard explained that money was no longer a priority for them, because now they care about exploration.
I wish we were at that point, but we’re not.
Post-capitalism doesn’t mean Socialism
The problem we have right now is that many have diagnosed the disease correctly, but have become obsessed with the wrong medicine. Marx had the same issue. He nailed the fact that Capitalism had problems, but he thought—and many in 2019 are thinking—that the answer is to give the poor the resources of the rich.
Marx and many today were right that Capitalism is failing the masses, but they’re wrong in thinking that Socialism is the answer.
The fix isn’t redistribution, or Socialism—it’s actually much harder and simpler than that. We have to change what we value.
We have to move from valuing the amassing of wealth to valuing the thriving of humanity as a race. We have to move from consumerism to experienism. We have to move from epitomizing power to epitomizing creativity.
And we have to do it fast, because we’re about to be looking at a world where 85% of the U.S., and 95% of the world, is not terribly needed for regular work. (That’s a loose estimate based on reading a number of books on the topic.)
This isn’t a work problem though—it’s a meaning problem.
We need new ways for people to find meaning in life, besides the ones given to us by evolution and the toiling difficulty of everyday life.
You can’t take away people’s jobs and their value to society—or have that be taken away naturally by the efficiencies of automation and AI—and then expect them to be happy with a monthly stipend.
People don’t need money; they need to feel valued. People don’t need payment; they need respect. And people don’t need handouts; they need to earn their way. That’s what evolution rewards, and it’s what our societies have always been based on.
Post-capitalism
We had monarchies. We had socialism. We had totalitarian regimes. And we had religions, like Catholicism and Capitalism (Harari).
But now it’s time for the next thing, and it’s not Socialism.
Socialism is an (inferior) peer to Capitalism. It’s suited to a certain stage in our human development: a stage before we could automate most of the work out of our hands.
Once the algorithms and machines can do most of the work, we’ll need something way different.
Andrew Yang thinks it’ll be social capital, i.e., doing nice or useful things for others, which will be traded as currency.
I think the answer is something like that, but that truly immersive VR video games will be a major part of the solution. I think the solution will be to recreate real-world value systems (but hopefully less nasty) within the game world, and to trade capital based on what you do in-game.
So you can still be a cop, or a firefighter, or a scientist—all in-game, and you’ll still get all the benefit as if it were the real world.
But this is so far away!
And that’s assuming we make it there.
We have to make it through the phase where only 5-15% of the world is thriving while the rest struggle and suffer. And that situation brings with it the very real possibility of turmoil, revolution, and backward steps in civilization.
The solution is moving to Post-capitalism before that happens.
We have to find a way to give meaning and a sense of value to the billions of people who are losing it as we speak.
And we don’t have much time.
March 10, 2019
Unsupervised Learning: No. 168 (Member Edition)
This is a Member-only episode. Members get the newsletter every week, and have access to the Member Portal with all existing Member content.
Non-members get every other episode.
March 6, 2019
My RSA 2019 Summary
RSA was good this year, but I didn’t really notice any major new trends. Nothing on the scale of—say—AI, or blockchain. But there were some disruptions that looked quite interesting.
Primary themes
The overall themes I saw this year were largely the same as last year, with a few notable changes.
AI talk has become a lot more tempered and realistic. People are realizing that claiming AI is now like claiming you have a database: you have to describe HOW you use it, not just say you have it.
Lots of threat intelligence stuff.
Lots of focus on orchestration.
Lots more OT stuff.
I’m disappointed not to see much about Asset Management, though I suppose the S1 Ranger thing (below) qualifies. Maybe next year, when the Linux desktop becomes popular.
Chronicle Releases Backstory
The Backstory release by Chronicle appears to be groundbreaking.
They’re doing a cloud-based offering that is priced by your employee count rather than data usage, and that’s tens, hundreds, or even thousands of times faster than existing solutions.
It’s basically using all the Google magic secret sauce regarding scalability and speed, to do super fast correlation of malicious behavior for an enterprise’s data.
They just launched, but they’re already getting a ton of partnerships.
The key is the ability to go backwards in time, hence the names Chronicle and Backstory, which is cute.
They are keeping all your data (I think indefinitely?) and letting you say things like,
We just learned about this APT, which uses this one domain, which we happened to notice that someone else on your network went to 14 months ago, and it was Julie, and here’s everything else she’s done since then, and everyone else who’s been to that domain.
Oh, and in 250ms.
This and the next tool are definitely the biggest disruptors I saw at the show.
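To make the retrospective idea concrete, here’s a toy sketch in Python. This is my own illustration of the concept, not Chronicle’s actual API or architecture: keep all historical telemetry around, and when a new indicator of compromise (IOC) arrives, look backwards instantly to see who touched it and when. The names and data are hypothetical.

```python
from collections import defaultdict

class RetroLookup:
    """Toy model of retrospective IOC lookup: store all telemetry,
    then query it backwards when a new indicator is learned."""

    def __init__(self):
        # domain -> list of (timestamp, user) visits, kept indefinitely
        self.visits_by_domain = defaultdict(list)

    def record(self, timestamp, user, domain):
        """Ingest one network event as it happens."""
        self.visits_by_domain[domain].append((timestamp, user))

    def who_touched(self, ioc_domain):
        """A new APT report names a bad domain; return every
        historical visit to it, however long ago."""
        return list(self.visits_by_domain.get(ioc_domain, []))

# Months of telemetry accumulate...
logs = RetroLookup()
logs.record("2018-01-03", "julie", "evil-c2.example")
logs.record("2019-02-14", "bob", "news.example")

# Then a report lands naming evil-c2.example, and the lookup is instant.
print(logs.who_touched("evil-c2.example"))  # [('2018-01-03', 'julie')]
```

The real product presumably does this over petabytes with Google-scale indexing; the point here is just that the expensive work (ingesting and indexing everything) happens up front, so the backwards-looking question is cheap.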
SentinelOne Previews Ranger
SentinelOne is—according to what I’ve seen with multiple customers—the top endpoint protection product, and what they showed at RSA is a new tool called Ranger that allows their installed agents to look laterally at what else is on the network.
So it’s asset discovery using their existing sensors as opposed to installing a bunch of taps or gateways.
It’s super interesting because it’s getting directly into Tanium’s world, which is all about visibility and management.
Ghidra release by NSA
I was in the talk where NSA released Ghidra, and I thought it was quite interesting.
As I wrote after the announcement for the talk, I thought the whole thing was basically a well-meaning PR stunt. That is, a PR stunt for all the right reasons. So, more like a gesture of kindness.
And that was spot on.
What I found interesting about the tool—and the thing that made all the difference—is that Ghidra was not a new tool that they just released for some good press. Oh, no. It’s the primary tool they themselves use, and have been using for years.
The undisputed king of reverse engineering tools has been IDA Pro forever, but with this release the market has instantly changed.
Not only is Ghidra free, while IDA Pro is multiple thousands of dollars, but it actually has many unique features that even IDA doesn’t have.
There’s a back button for changes that won’t mess up your entire session
There is support for many platforms
There’s a decompiler that can go from binary to C pseudocode
There are collaboration features
…and these are just a few of the differences.
Ghidra instantly became the one and only true competitor for IDA Pro, and in many ways it’s far superior.
This couldn’t have come at a better time, because I’m about to learn some basic reverse engineering myself. It’s quite an impressive tool, and I can’t wait to dive into some RE CTF challenges with it.
Summary
Solid show, for what it is.
If you come to RSA thinking you’re at Gartner Security, or re:Invent, or DEFCON, you’ll be sad.
But if you see it as a chance to see old friends and learn what the industry is doing, it can be enjoyed.
Think of it as the Momentum Partners PDF in real life.
Notes
NSA also has other open source tools, including an SDR framework called REDHAWK.
Axonius is also another Asset Management play, which takes the asset inventories from tons of vendor products and unifies them into one.
Inky (which I’ve advised for in the past) is also super cool tech, if you’ve not seen it.