Daniel Miessler's Blog
May 14, 2017
Unsupervised Learning: No. 78
This week’s topics: The WannaCry ransomware worm, the president’s EO, Macron hacking, HP backdoors, laptop bans, Amazon releases, Chinese online commerce, CRISPR, Germany and renewable energy, beetles, dental health as social indicator, Reading superpowers, Net Neutrality, serverless, deep learning black box, The Three Body Problem, you can now support the site, The Mechanical Universe, TrueCaller, and more…
This is Episode No. 78 of Unsupervised Learning—a weekly show where I curate 3-5 hours of reading in infosec, technology, and humans into a 15 to 30 minute summary.
The goal is to catch you up on current events, tell you about the best content from the week, and hopefully give you something to think about as well.
The show is released as a Podcast on iTunes, Overcast, Android, or RSS—and as a Newsletter which you can view and subscribe to here.
Newsletter
Every Sunday I put out a curated list of the most interesting stories in infosec, technology, and humans.
I do the research, you get the benefits. Over 5K subscribers.
Recent Newsletters
05/14/2017 – Daniel’s Unsupervised Learning Newsletter: No. 78
05/07/2017 – Daniel’s Unsupervised Learning Newsletter: No. 77
04/23/2017 – Daniel’s Unsupervised Learning Newsletter: No. 76
04/23/2017 – Daniel’s Unsupervised Learning Newsletter: No. 75
04/16/2017 – Daniel’s Unsupervised Learning Newsletter: No. 74
04/09/2017 – Daniel’s Unsupervised Learning Newsletter: No. 73
04/02/2017 – Daniel’s Unsupervised Learning Newsletter: No. 72
03/26/2017 – Daniel’s Unsupervised Learning Newsletter: No. 71
03/19/2017 – Daniel’s Unsupervised Learning Newsletter: No. 70
03/13/2017 – Daniel’s Unsupervised Learning Newsletter: No. 69
03/06/2017 – Daniel’s Unsupervised Learning Newsletter: No. 68
02/27/2017 – Daniel’s Unsupervised Learning Newsletter: No. 67
The podcast and newsletter usually go out on Sundays, so you can catch up on everything early Monday morning.
I hope you enjoy it.
For a Life Upgrade, Swap TV Time for Reading Time
I have a lot of really smart friends, but few friends who read.
Most of my friends watch TV. And when I say “TV”, I mean all its modern forms—actual broadcast television, DVR’d shows, Netflix, whatever. They watch lots of it. Probably 10-30 hours a week, if I had to guess, which ends up being hundreds or thousands of hours per year.
But they don’t read.
And I’ve started to notice a yawning gap between their understanding of the world and my own. When we talk they offer me little clips of wisdom that were interesting five years ago when they first came out, but they have no awareness of the actual research behind them. The only reason they know the story is that it went viral on Facebook, or hit CNN.
So when we hang out, I hear about the inane TV shows they’re watching, and I tell them about the remarkably interesting concepts I’m learning about on a weekly basis through reading.
It makes me feel like we’re sparring while I’m in a futuristic, 28-foot battle mech, and they’re naked after being malnourished for a week.
It’s not a fair fight, and that’s what’s frustrating me. They’re choosing to fight—not against me—but against the world, without upgrading themselves.
It’s not me, it’s the upgrades
Then I hear from others how smart I am. Or how productive I am. Or how I’m always doing interesting projects, and writing, and creating things. And I get questions about how I do it.
Well, that’s the thing: I didn’t do anything. I just happen to be in a 28-foot battle mech called reading.
Reading is a genetic upgrade. Every book I read enhances me, like a battle mech, like a smart drug, like an IQ implant, or like a CRISPR upgrade of my creativity. I am so vastly superior to my 10-years-ago-self that I can barely remember being that limited and ignorant.
And it’s all because of reading.
Reading makes you creative, just like exercise gives you energy. When I stop reading, I stop having ideas. It’s a very simple causal relationship described by numerous other people (which you would know if you read more). In short, I’m not smart. Reading makes me smart.
And because I know this, I make choices to keep this IQ & Creativity engine working consistently. I haven’t watched a TV show in months. I spend all that time reading instead. I read on planes. I read in the car (Audible). I read in bed. I go to coffee shops on nights and weekends and read. I read everywhere, constantly.
The Concentration of Wisdom
It’s true that there is good TV out there. John Oliver’s show is quality. Bill Maher has solid discussions on a regular basis. VICE is spectacular. And I’m sure there are many others.
But I don’t think they compare to reading books because reading has a naturally higher Quality Density, or Wisdom Concentration.
I honestly believe that reading a good book is ten to twenty times as useful as watching a quality show. I think it’s something about the purity of the thought that’s coming into the brain, and how much work your brain does to consume, model, and structure that input.
It’s almost as if reading is working out in a gym with weights, and watching TV (even good TV) is like watching someone else work out.
I think the “weight” in this metaphor is the creation of the world that’s being described, the placement of characters and concepts within it, and the maintaining of that world in your mind as the story progresses. With reading, your mind is doing all this work itself. It’s engaging in terraforming and world-building on its own, every moment of every book. With TV, all that work is done for you, and you’re just watching from the outside.
Making the switch from TV to reading
So what I recommend for everyone—and especially my friends—is that you consciously make the choice to exchange some of your TV time for reading time.
You might think you don’t have any time to read, but that’s because you’re probably spending close to 100 hours a month watching 11 different shows. And let me just say again: every hour of reading you swap will likely give you the value of 5, 10, 20, or even 100 times as much TV.
So it’s not a matter of not having the time; it’s a matter of making the time.
A side note about video games: they should be treated much like TV, and not the good kind of TV either. I like video games, and play a few myself, just like I watch a few TV shows. But I never treat either of those as my primary inputs, as they both equate to entertainment as opposed to enrichment.
So bottom line there is this: add up all your TV and video game time that you spend per week. That’s your enrichment / upgrade time budget. That’s the time you have to spend on improving yourself.
You can literally go to the gym. You can engage in some sort of physical activity like running, or walking, or whatever your sport of choice is. You can read. Or you can watch TV and play video games.
My recommendation is that you spend 80% of your time reading, 10% of your time exercising, and 10% of your time on video games or watching TV. Maybe that’s too extreme. Maybe you want to get to 70/15/15. Or 60/20/20. I don’t know what the best mix is, but what I’m arguing is that the more reading you add the better off you’ll be.
And just to be clear, I know this will be harder. Watching TV is extremely easy. And so is playing video games. They provide all the hooks and incentives themselves, so you basically just eat fistfuls of M&Ms until you can’t feel your face anymore.
Reading is harder to start because of the cognitive load I talked about earlier. For those who don’t read, or who aren’t currently reading, it’s harder to pick up the book and get started than it is to sit on a couch and stare at something.
I get it, and I’m asking you to push through it, for your benefit.
My Plea to You
So rather than a summary I’m going to make a plea.
Start reading.
This is not a matter of changing entertainment types—it’s a matter of upgrading yourself, your creativity, your imagination, and your intelligence.
It will make you more prolific in your own personal projects, more effective at work, and more interesting to talk to, whether with strangers or at social events.
But let’s not be vague about it. Let’s make it practical. I recommend you do the following starting next week.
Pick a book from my Unsupervised Learning book list.
Spend an hour a day reading it.
That’s it. An hour a day.
No matter what, you keep reading. It’ll be hard to get into the schedule at first, because your mind will rebel. It’ll say:
Um, this is hard. Why are you making me concentrate? Can’t we do something fun? Let’s watch TV instead. Let’s play a video game.
No. We’re reading this book right now. That’s what we’re doing.
After a week of doing this, and possibly much sooner, it’s no longer going to feel like work to make yourself read. It’ll come naturally, and it’ll become just as enjoyable as watching TV or playing a video game, if not more so.
And I’ll take it a step further. If you want to read a fantasy book, go and download The Name of the Wind and read/listen to that. For sci-fi, get The Three Body Problem. And if you’d prefer non-fiction, start with Homo Deus.
An hour a day.
Make the change.
You’ll love it.
Notes
I have another group of friends, much smaller in number, who read maybe 1-5 books a year, mostly fiction, which almost counts as being a non-reader.
“Reading” also includes other types of high-quality content, such as the Waking Up, Intelligence Squared, and a16z podcasts.
I’m purposely being a bit of a dick in this post because I think it’s worth it to get you to read. It’s a trick called “being an asshole”. I hope it’s working.
I make TV exceptions for Game of Thrones, Black Mirror, and a couple of other shows, but in an average month I watch virtually no TV at all. It’s either binging a few top-quality shows or nothing.
The Wisdom Concentration, or Quality Density, of non-fiction is also extraordinarily high, because good non-fiction books tend to be dense and concise.
It’s also worth mentioning that this isn’t about people who are already readers and just have lots of other positive activities going on, and who don’t watch much TV. This is for people who have lots of free time but spend it watching TV or playing video games instead of reading.
May 11, 2017
Some Quick Takeaways from the 2017 Verizon DBIR
For those who lack the time to read the entire report, here are some of the key findings, along with some quick analysis.
Attackers
75% of breaches were perpetrated by outsiders.
25% involved internal actors.
18% involved state actors.
51% involved organized crime actors.
I see 25% involving internal actors as quite high, but that depends on the definition of “involved”.
Targets
24% of breaches affected financial organizations.
15% of breaches affected healthcare.
The public sector was third at 12%.
Retail and hospitality combined for another 15% of breaches.
Tactics
62% of breaches used “hacking”.
51% of breaches used malware.
81% leveraged stolen or weak passwords.
43% were social-engineering-based.
What does “hacking” mean? And how much hacking did or did not involve malware?
Other findings
66% of malware got in via email.
73% of breaches were financially motivated.
21% of breaches were espionage-related.
27% were discovered by third parties.
Analysis
I find the 1/4 insider involvement to be high. Not saying it’s wrong. Just saying it seems high.
I think they could use a better term than “hacking” to describe their most common type of tactic. Perhaps “manual intervention”?
I’d love to see some sort of controls analysis in this report, or a similar report. Basically: which controls from, say, the CIS set are most recommended this year based on the DBIR findings?
That’s not a bullseye because every company is different, but maybe they could do a recommended controls list for each industry or something.
Anyway, solid stuff as usual from the team. And I enjoyed the summary as well.
Notes
I imagine a lot of these questions were answered in the full version of the report. This is an analysis of the executive summary.
Robert Graham is Wrong About John Oliver Being Wrong About Net Neutrality
For example, he says that without Net Neutrality, Comcast can prefer original shows it produces, and slow down competing original shows by Netflix. This is silly: Comcast already does that, even with NetNeutrality rules. Comcast owns NBC, which produces a lot of original shows. During prime time (8pm to 11pm), Comcast delivers those shows at 6-mbps to its customers, while Netflix is throttled to around 3-mbps. Because of this, Comcast original shows are seen at higher quality than Netflix shows.
Comcast can do this, even with NetNeutrality rules, because it separates its cables into “channels”. One channel carries public Internet traffic, like Netflix. The other channels carry private Internet traffic, for broadcast TV shows and pay-per-view.
Source: Errata Security: John Oliver is wrong about Net Neutrality
Rob has come down on the wrong side of another issue—this time Net Neutrality.
He’s arguing here that Net Neutrality is a wide open issue, with smart people on both sides, and that John Oliver’s treatment was liberal unfairness. That wouldn’t be unprecedented, of course, and I think it’s something to remain vigilant against, but this isn’t a case in point.
Here are two different ways Rob got this violently wrong.
First, Net Neutrality isn’t about cable channels; it’s about the Internet. It’s about ISPs providing an internet connection and then throttling, tweaking, adjusting, blocking, and otherwise tampering with that connection based on their varied and constantly evolving business associations. In short, they say they’re giving you one thing, which is unfettered access to the Internet, and then give you something else entirely because it suits them financially.
Second, he claims nothing negative has happened that would have required a Net Neutrality rule to be in place. But this is just false. There have already been numerous abuses that directly show the need for such legislation. We’ve had traffic throttled and outright blocked by ISPs because that traffic competed with services provided by the ISP’s parent company. It was blatant and indisputable.
Rob is smart as hell, and he’s right about so many things.
Net Neutrality isn’t one of them.
May 8, 2017
My Current Predictions for Thinking Machines
I’ve been thinking a lot about what I called “Getting Better at Getting Better” in my book. It’s the idea of accelerating machine intelligence, where computers aren’t just getting better at solving problems, but the pace at which they get better increases drastically. I think this comes in two forms:
Machine learning that improves as we provide higher quantities of quality data.
Evolutionary algorithms that use evolution to innovate.
I’m also reading a book called What to Think About Machines That Think, which is a collection of short thoughts by dozens of experts in various fields on whether computers will soon be able to think like—or better than—humans.
This spawned a few ideas of my own on the topic, but not being an expert in the area I was at first reluctant to capture them. But then I remembered that it’s ok to have raw thoughts as long as you have an appropriate respect for your limitations. So here are my current ideas on the topic of Thinking Machines.
First, I don’t think human intelligence is all that special. I think it’s a matter of complexity, connection counts, etc., and this seems to be what we’re observing with our massive breakthroughs in neural nets and Deep Learning. So it’s mostly a matter of complexity, which is now becoming technologically approachable.
Second, consciousness, as many experts have alluded to in neuroscience, philosophy, etc., is not a single special thing that sits on top of a mountain, but rather an emergent property of multiple, segmented components in the human brain that reach a certain level of complexity. Or as Daniel Dennett says, it’s simply a bag of tricks. Further, it’s my belief that this strange emergent property provided advantage by allowing one to experience and assign blame and praise, which provided tremendous advantage to early adopters who were creating communities. It’s also quite distinct from intelligence.
Third, the core game to be considered when looking at whether AI will become human-like is not intelligence or consciousness, but rather goals. Humans are unique in that our goals come from evolution. At their center they are survival and reproduction, and every other aspiration or ambition sits on top of and secondary to those drives. So in order to make something like a human, it seems to me that you’d have to create something where every component of its being is steeped in a similar sauce.
In other words, we were made over millions of years, step by step, with the goals of survival and reproduction guiding all successful iterations. So if we don’t want to end up with something extremely foreign to ourselves, we’ll need to somehow replicate that same process in machines. Failing to somehow emulate this process will likely result in a painted-on vs. baked-in feel to their goals and ambitions.
So when we talk about the mystery of human intelligence, or thinking machines (which usually means something that reminds us of ourselves), we’re really talking about three things:
Something smart.
Something conscious.
Something with a recognizable goal structure.
The key is realizing how distinct these three things are, and that our “humanness” seems to emanate from the combination of these things, not from one of them in particular.
Summary
So, human intelligence is just a matter of sufficient complexity, which we’re quickly approaching and will soon exceed. Consciousness is separate from intelligence, and will turn out to be a rather unremarkable hack caused by different parts of the brain working independently from each other. And the most difficult component of this entire “replicate humans” equation—instead of super-intelligence or consciousness—will actually end up being the creation of human-like (and human-aligned) goals.
OSS: Intelligence is easy, consciousness is a red herring, and the hard problem is actually goal creation.
This is my current, non-expert prediction for how the “Thinking Machines” story will play out in coming years and decades.
Notes
Some of these ideas were inspired by Waking Up, by Sam Harris, multiple essays by Daniel Dennett on the nature of consciousness, and dozens of other books I’ve read on various orthogonal topics.
OSS = One Sentence Summary. I think we should be able to take anything interesting and make it a 1,000 page book, a 100 page book, a 1,000 word essay, or a one-sentence summary. I strive to keep this flexibility of explanation in anything I’m learning or trying to understand.
Because this is a prediction, and I love tracking and learning from being wrong, I’ll be updating the text in update sections below the original content and not changing the prediction itself.
Some Thoughts on Thinking Machines
I’ve been thinking a lot about what I called “Getting Better at Getting Better” in my book. It’s the idea of accelerating machine intelligence, where computers aren’t just getting better at solving problems, but the pace at which they get better increases drastically. I think this comes in two forms: 1) improved machine learning that improves as we provide higher quantities of quality data, and 2) evolutionary algorithms that use evolution to innovate.
I’m also reading a book called What to Think About Machines That Think, which is a collection of short thoughts by dozens of experts in various fields on whether computers will soon be able to think like—or better than—humans.
This spawned a few ideas of my own on the topic, but not being an expert in the area I was at first reluctant to capture them. But then I remembered that I do that all the time, and that I just need to have an appropriate respect for my limitations. So here are some random ideas about the nature of human intelligence, whether machines will be able to achieve it, and similar topics.
First, I don’t think human intelligence is all that special. I think it’s absolutely a matter of the number of connections, and this seems to be what we’re seeing as we improve the complexity of our neural nets, which has yielded extraordinary results in Deep Learning.
Second, consciousness, as many experts have alluded to in neuroscience, philosophy, etc., is not a single special thing that sits on top of a mountain, but rather an emergent property of multiple, segmented components in the human brain that reach a certain level of complexity. Or as Daniel Dennett says, it’s simply a bag of tricks. Further, it’s my belief that this strange emergent property provided advantage by allowing one to experience and assign blame and praise, which provided tremendous advantage to early adopters who were creating communities.
Third, the core game to be considered when looking at whether AI will become human-like is not intelligence or consciousness, but rather goals. Humans are unique in that our goals come from evolution. At their center they are survival and reproduction, and every other aspiration or ambition sits on top of and secondary to those drives. So in order to make something like a human, it seems to me that you’d have to create something where every component of its being is steeped in a similar sauce. In other words, we were made over millions of years, step by step, with the goals of survival and reproduction guiding all successful iterations. So if we don’t want to get something extremely foreign to ourselves, we’ll need to somehow replicate that same process in machines. The alternative would be a painted-on vs. baked-in feeling to their goals and ambitions, which I’m not sure would feel as authentic.
May 7, 2017
A Look at Application Testing in a Near Future
For some silly reason I was just awoken by the following thoughts about how application tests will be done in the future. I’m thinking this starts phasing in over the next 3-10 years.
The application will be decomposed using an automated tool to break it into the binary, the source, the network interfaces, the application-layer interfaces, and any other components / surface area.
The binary will be published to a harness location that allows approved tools and partners to access it in a standard way. This will include third party services, third party paid algorithms, as well as a host of free automated fuzzing algorithms.
The source code is published to a source code testing harness that works in a similar way. A set of free automated tools starts working on it immediately, a beacon is published to the trusted consultancies that have paid for access to the test interface, and vetted individual testers are notified that they can begin testing as well.
Network and application layer interfaces are enrolled into similar testing harnesses, which also spawn notifications to the tools, consultancies, and individuals who are partnered to test the application.
Submitted vulnerabilities, and the contracts governing who gets paid what based on what is found, are all handled by Ethereum, i.e., a smart-contract-based blockchain. This handles who found what, which submissions are duplicates, and how much was paid for each vulnerability.
So it becomes all about the inventory of the components of the application and their arranging into a standard testing harness.
From there, tens, hundreds, or thousands of automated testing algorithms start firing to find vulnerabilities. They range from simple, rules-based systems to deep learning algorithms trained on terabytes of testing data from extremely similar apps—and those algorithms are updated continually.
Those algorithms are also competing with the human element, which might have its own algorithms to leverage as well. So human testers/researchers will be signed up for multiple contracts, using Ethereum or something similar. And when the notification goes out that an app has become available in a contract you’re part of, you can immediately launch any of your automated tools as well as sit down to test it yourself.
So imagine a binary sliding into a ship hangar, and a number of cables come from all sides and slide into their ports on the binary. 14 million people are notified that this binary is available for testing, and tens of thousands of algorithms start their work, which discovers vulnerabilities instantly and reports them to the blockchain.
Same for the remote components. The running application is slid into a similar hangar, and connectors touch the TCP/IP stack, the app layer in various ways, etc., and automated probes start instantly. The best common rulesets. The best ML/DL algorithms for that particular application type. A set of fuzzing algorithms kicks off that was just updated 12 minutes ago and is the best in the world at crashing these particular services.
For many applications that do or will have external public interfaces, notifications are also sent out to millions of independent testers simultaneously, with the only criteria being that they must be part of the Ethereum network that manages the integrity of contracts, submissions, and payment.
And then everyone goes to work on the app. All its components. Humans and algorithms battling it out. With findings streaming in over a high-integrity, high-transparency blockchain.
Insurance companies will require that applications be continuously tested using these types of harnesses, and they’ll be rated with various levels of security based on the type and duration of scrutiny the application has endured without finding additional vulnerabilities.
A number of these elements are already starting, or are already in place, but what I find most interesting about the model is breaking the app into components, using common interfaces for testing those components, and then a common contract and reporting mechanism based on the blockchain.
—
Ok, now coffee and breakfast.
Notes
I say applications here, but this is really any kind of security testing. And actually, not even just security. It’s a testing and validation framework, and the testing can have multiple forms.
A number of things will be needed to get this going, including some sort of mechanism for instancing. You can’t have everyone touching the same system at the same time. Especially for dynamic testing.
Side note: I really dislike posts titled, “The future of”, or “In the future”. It’s cliche. It’s pompous. It’s just bad. Need to figure out better ways to convey these types of interesting conjectures / predictions of what’s to come.
May 6, 2017
How to Get Extracted Fields into Your Splunk Alerts
I’ve been continuing to improve my Splunk> game, and part of that has been improving the information that comes in email alerts.
The image above shows the final result I am looking for, where you get custom information that’s contextual to the alert that was generated.
The problem
The issue is that this doesn’t work by default. You can’t just save the search above as an alert and have it give you the results in the email above. What you’ll get instead is empty fields.
Here’s what the template looks like:
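The original screenshot isn’t reproduced here, so this is only a rough sketch of the kind of message template being described, entered in the alert’s “Send email” action. The $result.<fieldname>$ token syntax is Splunk’s; the wording and layout below are placeholders built from the two fields mentioned in this post:

    SSH threshold exceeded.

    User: $result.SSHExceededUser$
    City: $result.City$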
The issue is that those $result.City$ and $result.SSHExceededUser$ fields may not show up at all in the email, even if you make sure those fields are included in the search result.
The fix
The fix is easy enough, although I wish it weren’t necessary.
What you do is send your search result to the fields command, followed by the fields you want to be able to use in your email template.
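Here’s a minimal sketch of what that looks like in SPL. The base search, index, and sourcetype are placeholders (and it assumes SSHExceededUser is already an extracted field, with City coming from iplocation); the part that matters is the trailing fields command listing exactly the fields your email tokens reference:

    index=auth sourcetype=linux_secure "Failed password"
    | iplocation src_ip
    | fields SSHExceededUser, City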
Once you do that you’ll get the extracted results, as seen in the email above.
Hope this helps someone.
May 3, 2017
Embracing Female Oppression as a Sign of Feminism
The regressive left, or Lupus Liberals as I like to call them, are heroically confused about the Hijab.
The source of the confusion seems to be that there are western people, especially men, who don’t like the Hijab for various reasons. Some are good, some are bad. Good reasons include not liking the fact that this is a religious practice that outright limits the rights of women. Bad reasons include bigots associating the hijab with everything else they hate in their xenophobic little brains.
But either way, because the Hijab is opposed by many in the west, it’s now become impossible for these far-left types to see the issue clearly.
Basically, it almost doesn’t matter what the practice is—it simply becomes legitimate if it’s 1) ethnic or religious, and 2) opposed by people in the west.
The concept of the Hijab is unbelievably anti-liberal, and anti-feminist. That’s why it was protested by so many women in Iran when the law went into effect. They didn’t want the Hijab. They saw it for what it was—a suppressive and oppressive force that affected only one gender.
And its source is equally non-liberal. The argument is that women need to cover their hair because it’s overtly sexual, and that men shouldn’t be expected to restrain themselves around women who are showing their hair openly. It’s basically a veiled form of, “If you dress like that you deserve to get raped.” It’s quite sickening.
And that’s the origin. That’s the religious and cultural backing for the practice. And, crucially, in many countries it’s required. Women who don’t wear it are looked down upon as whores and/or arrested in many countries that believe such things.
The counterargument goes something like this.
You don’t understand Islam. Islam loves its women, and it wants to protect and cherish them. That’s why you’re not supposed to show your hair, or talk to or shake hands with men who aren’t in your family. It’s because Islam loves women. And since I love Islam, and you’re attacking its practices, then that means you’re anti-Muslim, and anti-me. And I will protest your oppression.
So many American feminists are accepting and defending this line. The concept can be simplified as this: we already know white westerners are bad, so if white westerners dislike the Hijab then it must be worth defending.
The problem, of course, is that this also applies to other, extremely common practices in countries that embrace Islamic law. Honor beatings, honor killings, genital mutilation. These all spring from the exact same well as the Hijab. And in any normal light the feminist would oppose them outright. But since the west dislikes them, they must be worth defending.
Summary
As I said in the Lupus Liberalism piece, there’s a way out of this labyrinth.
Simply keep in mind that intolerance of the oppression of women and intolerance of female equality are not the same. They’re both intolerance—and that’s what’s confusing the far left—but one is being intolerant of attacks on women’s rights, while the other is actually limiting those rights.
Those are not the same.
Don’t allow the labels and shapes of the debate participants to confuse this point.