Daniel Miessler's Blog, page 131

February 5, 2017

An Exploration of Human to Computer Interfaces



I read and think a lot about how humans interact with computers, and what that interaction will look like at various points in the future.



I was going to call this a hierarchy of human to computer interfaces, but quickly realized that it’s not a hierarchy at all. To see what I mean, let’s explore them:



Input interfaces

This is what most people think of when they hear “interface”, i.e., how you interact with the computer.




Manual Physical Interaction: original key-based keyboards, physical switches, etc.
Manual Touchscreen Interaction: smartphones, tablets, etc.
Natural Speech: Voice: Siri, Google Assistant, Alexa
Natural Speech: Text: messaging, chatbots, etc.
Neural: You think, it happens. No mainstream examples yet exist.


Output interfaces

A key part of that interaction, however, is how the computer returns content or additional prompts to the human, which then leads to additional inputs.




Physical or Projected 2D Display: standard computer monitor, LCD/LED display, projectors, etc.
Physical or Projected 3D Display: augmentation of vision using glasses, or projection effects that emulate three dimensions.
Audible: The computer tells you its output.
Neural Sensory: You “see” or “hear” what’s being returned, but it skips your natural hardware of eyes and ears.
Neural Direct: You receive the understanding of having seen or heard that content, but without having to parse the content itself (NOTE: I’m not sure if this is even possible).


Technology limitations vs. medium limitations

Given our current technology levels, we’re still working with Manual Touchscreen Interaction and Display output for the most part, and we’re just starting to get into Voice input and output.



But like I mentioned above, this isn’t a linear progression. Voice isn’t always better than visual displays for conveying information to humans, or even for humans giving input to computers.



Benedict Evans has a great example:



@Rotero try choosing a flight on the phone

— Benedict Evans (@BenedictEvans) February 5, 2017




My favorite example is Excel. Imagine working with a massive dataset like so:




Read row one-thousand forty-three, column M…




…and your dataset has 300,000 rows and 48 columns. Seeing matters in this case, and voice might be able to help in some way, but it won’t replace the visual. It simply can’t, because of bandwidth limitations. When you look at a 30″ monitor with massive amounts of data on it, you can see trends, anomalies, etc.
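A back-of-envelope calculation makes the bandwidth point concrete. The speaking rate here is an assumed figure, purely for scale:

```python
# Rough scale check on reading the example dataset aloud.
# The reading rate is an assumption, not a measured figure.

ROWS, COLS = 300_000, 48
cells = ROWS * COLS                  # 14,400,000 values

CELLS_PER_MINUTE = 150               # assumed rate for speaking values aloud

minutes = cells / CELLS_PER_MINUTE   # 96,000 minutes
hours = minutes / 60                 # 1,600 hours
days = hours / 24                    # roughly 67 days of nonstop narration

print(f"{cells:,} cells -> about {days:.0f} days of continuous speech")
```

Even if the assumed rate is off by an order of magnitude, the conclusion holds: a glance at a screen delivers what speech would take days to read out.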



And that doesn’t even include the concept of visuals like graphs and images that can convey massive amounts of information very quickly to the human brain. Voice isn’t ever going to compete with that in terms of efficiency, and that’s not a limitation of technology. It’s just how the brain works.



Hybrids mapped to use cases

The obvious answer is that various human tasks are associated with ideal input and output methods.




Voice input is great if you’re driving.
Text input is great if you’re in a library.
Voice output is great if you’re giving your computer basic commands at home.
Visual output is ideal if you need to see lots of data at once, or if the content itself is visual.
Neural interfaces are basically hardware shortcuts to all of these, and it’s too early to even talk about them much.
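The mapping above is essentially a lookup from task context to an ideal interface pair. A minimal sketch, with task names and labels that are purely hypothetical:

```python
# A toy lookup table mapping situations to ideal (input, output) methods,
# following the list above. All entries are illustrative, not definitive.

IDEAL_IO = {
    "driving":       ("voice input", "audio output"),
    "library":       ("text input", "visual output"),
    "home commands": ("voice input", "audio output"),
    "large dataset": ("text input", "visual output"),
}

def ideal_interface(task: str) -> tuple[str, str]:
    """Return the (input, output) pair for a task, defaulting to touch/visual."""
    return IDEAL_IO.get(task, ("touch input", "visual output"))

print(ideal_interface("driving"))       # ('voice input', 'audio output')
print(ideal_interface("unknown task"))  # ('touch input', 'visual output')
```

The point of the sketch is the shape of the problem: the ideal interface is a function of the task, not a single winner across all tasks.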


Voice vs. text

One way of looking at voice and text that I’ve not heard anywhere else is to imagine them as different forms of the same thing, i.e., natural language. In both, you’re using mostly natural language to convey ideas or desires.




Show me this. I’ll be right there. Tell him to pick me up. I can’t talk now, I’m in a meeting. Order me three of those.




These are all things that you could do vocally or via text. There are of course conventions that are used in text that aren’t used in vocal speech, but they largely overlap. Text, in other words, is a technological way of speaking naturally. You’re not sending computer commands; you’re emulating the same speech we had 100,000 years ago around the campfire.



Common reasons to use text vs. voice include lower social friction, the ability to do it without being as disruptive to others around you, etc. But again, they’re very similar, and in terms of human to computer interface I think we can see them as identical save for implementation details. In both cases the computer has to be good at interpreting natural human speech.



Goals

The key is being able to determine the ideal input and output options for any given human task, and to continue to re-evaluate those options as the technologies for each continue to evolve.



Summary


There are many ways for humans to send input to, and receive output from, computers.
These methods are not hierarchical, meaning voice is not always better than text, and audible is not always better than visual.
Voice and text are different forms of “natural language” that computers need to be able to parse and respond to correctly.
Human tasks will map to one or more ideal input/output methods, and those will evolve along with available technology.

---

I do a weekly show called Unsupervised Learning, where I collect the most interesting stories in infosec, technology, and humans, and talk about why they matter. You can subscribe here.

Published on February 05, 2017 12:52

February 4, 2017

The Clash of Extreme Left and Extreme Right Will Create a New Centrism



Right now we’re seeing the extreme right clash with the extreme left, and people in the middle are being forced to choose.



It’s getting ugly.



There’s basically no place for people who think the following:




Both sides are messed up and wrong
The left has gone too far with their fetishization of being offended
Too much of Trump’s right has embraced white nationalist ideas
Too much of Trump’s right has decided to discard evidence and truth and replace it with fantasy


Even worse, most mistake the few centrists who remain for one extreme or the other. You’re either a crazy liberal (as judged by Trump people), or a non-thinking Trump supporter (if you’re talking to crazy liberals).



And those pressures, of being confronted by various raving masses, then force even more centrists to one side or the other, or to abstain completely because there’s no place for them.



But I hope this will soon change.



In reading a number of books by Charles Wheelan recently, I came upon a political philosophy that could be exactly what we will need once we finish with the current ideological war.



Radical Centrism.



From Wikipedia:




The “radical” in the term refers to a willingness on the part of most radical centrists to call for fundamental reform of institutions. The “centrism” refers to a belief that genuine solutions require realism and pragmatism, not just idealism and emotion. Thus one radical centrist text defines radical centrism as “idealism without illusions”.

Most radical centrists borrow what they see as good ideas from left, right, and wherever else they may be found, often melding them together. Most support market-based solutions to social problems with strong governmental oversight in the public interest.




We’re about to see a nasty clash between extremes in this country, and it’s going to appear as if there’s no middle left.



But there is, and they’re tired of being forced to choose between different sets of emotional, short-sighted, and harmful ideas.



From the ashes we’ll assemble something new.




Something that refuses to be offended for fun
Something that stops seeing the past as something to worship
Something that embraces evidence and changes its mind when it’s shown to be wrong by data
Something that puts humanist wellbeing above all else


Until we have something better, maybe we can point to Radical Centrism as some sort of beacon.



In the meantime, buckle in.



Notes


The image above is not mine, and if you know the creator I’d love to give them credit.

Published on February 04, 2017 21:08

January 31, 2017

What is Mobile 2.0?



Today, ten years after the iPhone launched, I have some of the same sense of early constraints and assumptions being abandoned and new models emerging. If in 2004 we had ‘Web 2.0’, now there’s a lot of ‘Mobile 2.0’ around. If Web 2.0 said ‘lots of people have broadband and modern browsers now’, Mobile 2.0 says ‘there are a billion people with high-end smartphones now’*. So, what assumptions are being left behind?

Source: Mobile 2.0



Benedict Evans makes some great points in this piece, and it got me thinking about how I would characterize both Web 2.0 and Mobile 2.0 if I were asked to do so.



I think I’d say the following:




Web 2.0 was the conversion from flat, link-based experiences to application-like experiences.


Mobile 2.0 will be the conversion from primarily using applications to using digital assistants via voice and text. In other words, it’ll be the transition from direct interaction to brokered interaction, with the brokers being your digital assistant and chat bots.




As I write about in The Real Internet of Things, this brokering is inevitable for several reasons. Here are two of them:




Computers (i.e., digital assistants) will be able to interact with the hundreds or thousands of daemons that surround us on our behalf, whereas humans will not be able to.
Voice and text (and eventually gesture, eye-tracking, and neural interfaces) are far more natural than poking at glass in different ways for different apps. The human will just express desires, and it’ll be up to the tech to sort it out, as opposed to the human explicitly poking buttons in the way demanded by the app.


Benedict does make a great point about a limitation of voice in replacing applications. If you have 20 applications on your mobile device, how are you supposed to remember them all? And how are you supposed to use them all with pure voice/text?



I think the answer will come from a combination of high-quality digital assistants that make quality assumptions about what you want to do (and thus require you to be explicit less often), and advances in eye-tracking that can let you make selections from options more naturally than poking glass.



But to support his point, those will take a while. If we can’t remember everything Alexa can do for us, that same problem will follow us on mobile as we try to move to voice. Icons on glass will become reminders that you have the functionality as much as anything else.



Summary


Web 2.0 was the transition from links to web apps.
Mobile 2.0 is the transition from poking glass to naturally expressing desires to digital assistants and bots.
Because it’s hard to remember all the different capabilities of a strong digital assistant, there will still be a use case for displaying functionality—in whatever form—at least for the foreseeable future.


Notes


Even further out is the digital assistant using deep context to anticipate desires, curate choices, and otherwise remove the need for exhaustive choice selection.
I love his comment about visual sensors vs. cameras, which I also talk about in my book. The idea applies to all kinds of sensors and all kinds of machine learning algorithms. The game is sensor-to-algorithm; humans looking at snapshots will soon be extremely old thinking.
If you’re not subscribed to Benedict’s newsletter, I recommend it strongly. It served as the inspiration for the reboot of my own newsletter, especially around the simple text-based design.

Published on January 31, 2017 20:56

Exploring the Nature of Evil



Recent events have led me to contemplate the nature of evil—specifically as it pertains to government leaders.



I feel like there are two different types of ruler.




Those who believe they’re doing unpleasant but necessary things that will ultimately make things better and lead to them being loved by the people, and…
Those who don’t care what the people think and just want control and all the advantages that come with it.


I’m not an expert on either, but it seems like Qaddafi just wanted to rule and didn’t care about the people’s opinion, whereas Hitler thought he was doing good for Germany and wanted its love and respect.



What I’m trying to untangle is whether the difference matters or not.



Let’s say you hated Obama’s presidency and you believe that even though he was trying to do the right thing he actually caused extreme harm to the country. This shouldn’t be hard to imagine, since millions of Americans clearly believe that.



And let’s say you hated George W. Bush’s presidency because you believe the Iraq war was unjustified, that it was based on a boy trying to impress his father and to become respected like Ronald Reagan.



In both cases, from each perspective, evil was done. Obama weakened our country, weakened our conservative values and our strength in the world, etc. And Bush lost us over 5 trillion dollars, killed hundreds of thousands of Iraqis and thousands of Americans, and Iraq is now more of a mess than when Saddam was there.



So is Obama evil? Is George W. Bush evil?



I’d say no.



I’d say they’re misguided, or that they were at the time, and that their flawed understanding of the world caused them to make decisions that ultimately caused harm.



But then I think about Trump.



What does he want?



Is he someone who cares about America and who wants to be loved and respected for helping it succeed, like Bush and Obama and Hitler? Or is he someone who’s pretending those things simply for the purpose of gaining influence and wealth?



Does he really care, in other words, or would he happily cause pain and suffering to the entire country if he could become a supreme ruler with no risk of overthrow?



I think it’s the former. I think he deeply cares about the country and is actually trying to fix it.



But so was Jimmy Carter. And Reagan. And yes, Hitler.



I’m obviously not equating any of these people in any other way than intentions and motivations, but I’m starting to wonder if it matters at all.



Let’s assume that Carter and Obama were philosopher kings who were too good for the presidency. Let’s assume they tried too hard to be nice, and the result was harm to the country.



And let’s assume that Reagan and Bush and Hitler thought the answer was force and unpleasantness, but they truly believed that once it was all done they’d be left with a healthy, thriving country that remembered them as the leader who got them through.



Does it matter?



Does it matter what your intentions are? Or where your heart is? If Santa Claus, Sean Hannity, and Ted Bundy would all be bad world leaders, does it matter what would make them bad?



When I see Trump playing Celebrity Apprentice: White House, obsessing over his perceived popularity, insisting that people laugh at his jokes during official public addresses, making clumsy and dangerous policy decisions with no understanding or regard for implications, and attacking media who don’t report on him favorably, I am hit with multiple signals and thoughts.




He’s trying to do the right thing and he’s just inexperienced.
He’s a raving lunatic and we should start impeachment hearings immediately.
The liberals really did mess things up, and maybe when the dust settles we’ll see some actual positives out of all this.
The guy is 70 years old and everyone he cares about is a billionaire. He’s not in this for money.
It doesn’t matter what he’s doing it for; he’s fucking everything up.


Honestly I’m not sure where I’m going with this. I was hoping for some resolution or clarity.



In the past when I saw what I perceived to be evil acts I always asked the question:




What is the person trying to do? What’s their goal?




And now I’ve realized it isn’t actually a good benchmark.



Hitler might have wanted lots of art galleries and Christmas music during the holidays. I like those things too. And maybe he had a great sense of humor and could do really good animal impressions.



I don’t care.



Maybe he wanted a united and vibrant Germany where everyone loved each other and smiled when they passed each other on the street.



I don’t care. He slaughtered millions of people.



And maybe Bush wanted to be a hero, and earn his Dad and brother’s respect, by doing the thing that everyone said he couldn’t do. And maybe he thought Iraq would love him just like America.



I don’t care. His war led to the deaths of hundreds of thousands, destabilized an entire region, and created ISIS.



And maybe Obama thought being nice solves problems, and that closing Guantanamo was complicated, and that giving Iran most of what they wanted and letting Putin walk all over us was best in the long run.



I don’t care. The result is that we have an alt-right revolution in this country right now because he let hyper-liberals hijack all the narratives.



Maybe the only things that matter are actions, and whether those actions lead to better or worse outcomes.



And maybe that’s completely relative, based on who’s making the judgements of good and bad.



So we’re lost.



You can’t judge by intentions because you can have the best of intentions and produce the worst of outcomes. And you can’t judge by desired outcomes because nobody can agree on what the goals should be.



So I have no solutions for Trump, or even any good ways to analyze the problem. He probably wants to do good things. He’s had good ideas. He is also disconnected from reality in a frightening way, has shown at the very least leniency toward extremely non-humanist ideologies, and appears prone to very random behavior.



In many of my essays this is where I give some sort of solution, or at least a direction to look for one.



In this case I have neither.


Published on January 31, 2017 08:10

January 30, 2017

Unsupervised Learning: No. 63

This week’s topics: Peak Prevention at AppSec Cali, Austrian Hotel Ransomware, Russian FSB Drama, WordPress Issues, AV Conflicts, Uber Pays Another Company’s Bounty, Data Science, Rules for Rulers…





This is Episode No. 63 of Unsupervised Learning—a weekly show where I curate 3-5 hours of reading in infosec, technology, and humans into a 15 to 30 minute summary.



The goal is to catch you up on current events, tell you about the best content from the week, and hopefully give you something to think about as well.





The show is released as a Podcast on iTunes, Overcast, Android, or RSS—and as a Newsletter which you can view and subscribe to here or read below.
















Thank you for listening, and if you enjoy the show please share it with a friend or on social media.





Published on January 30, 2017 05:13

January 29, 2017

We Have an Idiocracy Problem, Not an Orwell Problem



People are extremely fond of referencing Orwell right now, and specifically 1984. So much so in fact that it’s just become a #1 bestseller again.



I think the analysis is a bit off.



The key characteristic of Orwellian society is oppression. The people are vibrant and full of life, and the government is a well-organized force that keeps them down. The key element is that the government actually controls information: it determines what people hear and therefore what they believe.



That’s not what we have in America right now.



What we have is more like what Huxley warned about, which is a situation where the people no longer care what’s true. They simply go about their business, chase shiny things, amuse themselves with recreation, and let the government do whatever it wants.



We’re closer to that today than we are an Orwellian society, but there’s another model that’s even more applicable.



Idiocracy.



In Idiocracy, the people are so ignorant and gullible that they celebrate the wrong things. They respect bragging, power, bling, and all the other lower forms of signaling strength. And because they’re so ignorant, they can’t tell the difference between what’s true and what’s not.



That’s what got us here—not an Orwellian level of control and deception.



We can see the differences in a few different ways:




First, the current administration is not that organized or coordinated. They’re more like a Magic 8-ball of mostly bad ideas.
Second, they don’t have control over what’s being heard. They’re just blasting their narratives at full volume and hoping it’ll confuse and convince some percentage of the masses—which they have.
Third, there isn’t a singular goal that they’re pursuing. It’s many different and opposing groups emotionally pushing their own individual agendas.


This isn’t Orwell, and it’s not even Huxley. Huxley’s dystopia, like Orwell’s, was maliciously designed to make people not care. It was engineered to distract the people and put them to sleep so that the real world order could reign.



Again, that requires a lot of organization, very long-game thinking, and meticulous planning and execution.



We don’t have any of that in this mess. What we have are Idiocratic opportunists shouting slogans and getting the ignorant masses riled up. They’re tapping into emotions, obscuring facts, and taking full advantage of their audience’s distaste for subtlety and evidence in discussion.



I’m never one to discourage people from reading Orwell, but it’s a bit of a waste to build a mental defense against an ailment that we’re not actually facing.



Quite simply, our problem isn’t the government; our problem is the people who brought it to power.



As someone recently pointed out at a protest,




Don’t blame Trump. He repeatedly demonstrated that he was unfit to lead the country, and we elected him anyway.




Pointing to the government as the problem, which is the message with both Orwell and Huxley, doesn’t help us much right now.



If we want to find our way out of this mess we need to find a way to address the millions of people who wanted him here and think he’s doing a good job. If you can’t fix that, you can’t fix anything else.



That’s an Idiocracy problem, not an Orwellian one.



Notes


And no, I’m not saying anyone who voted for Trump is an idiot or part of the Idiocracy. But as a rule I would say that he absolutely perpetrated a mass-deception that required significant ignorance in his followers. That doesn’t mean he didn’t make good points, or that people couldn’t have voted for him for good reasons. It simply wasn’t the majority of what happened.

Published on January 29, 2017 13:03

A Simplified Definition of “Data Scientist”



There is a lot of controversy around the definition of a Data Scientist.



Some think it means being a statistician, others think it means being a technologist, and others have still other requirements.



I think the best definitions are more general and goal-based, and look something like these:






Data Scientist


/’dadə sīən(t)əst’/


noun
1. Someone who specializes in collecting, massaging, and/or displaying data in order to tell a story that results in a positive outcome.

2. Someone who can technically extract meaning from information in a way that enables decision makers to make better choices.

3. Someone who can extract business value from data using mathematics and technology.




Importantly, this could be someone with a triple Ph.D. in statistics, mathematics, and computer science, or a talented graphic designer with some decent Python skills.



The key is that they’re able to use data to illuminate how the world works and facilitate progress.



So you can break down the definitions into 49.6 different categories and sub-categories, or you can use this approach and focus on outcomes.



I think this approach is more resilient, especially given how quickly the field is changing.



Notes


The definitions above assume both good faith and possession of requisite talent/skills. Manipulation and incompetence are not in scope.
There’s a humorous alternative definition which says, “A data scientist is someone who’s better at statistics than any software engineer, and better at software engineering than any statistician.”

Published on January 29, 2017 00:42

January 27, 2017

Hitting Peak Prevention

These are my slides from AppSec Cali 2017, where I delivered a conceptual talk called Peak Prevention. It was a crap presentation/delivery, but the idea is pretty solid I think.



In retrospect, that’s not the conference for this type of talk. I knew that already, but when it comes time to submit I tend to just submit whatever’s on my mind at the time. I need to get better at matching content to conference, since I like to do both technical stuff and idea stuff.



I’ve been thinking about this idea of Peak Prevention for many years, and the concept is quite simple:



Risk is made up of probability and impact, and we have hit a point of diminishing returns with preventing bad things from happening. If we want to significantly reduce risk at this point we need to lower the other side of the equation (impact), which equates to resilience. In short, the future of risk reduction in an open society will, in many, many cases, come from resilience, not from prevention.
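The probability-times-impact framing can be sketched in a few lines. The figures below are made-up illustrations, not estimates from the talk:

```python
# Risk as the product of probability and impact, per the framing above.
# All numbers are invented purely for illustration.

def risk(probability: float, impact: float) -> float:
    """Expected loss: the chance of a bad event times its cost when it happens."""
    return probability * impact

baseline = risk(0.10, 1_000_000)  # 10% chance of a $1M incident

# Prevention has already driven probability down, so further gains are
# small and hard-won (diminishing returns).
after_more_prevention = risk(0.08, 1_000_000)

# Resilience attacks the other term: halving impact (backups, isolation,
# fast recovery) halves the risk regardless of probability.
after_resilience = risk(0.10, 500_000)

print(baseline, after_more_prevention, after_resilience)
```

The arithmetic just restates the argument: once probability is already low, the impact term is where the remaining leverage is.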



It really should have been a 15-minute talk with an associated essay. That’s the direction I’m starting to head for these types of things. Crisp, concise concepts, delivered in a way that doesn’t waste anyone’s time. Instant value, instant takeaways.



Anyway, those are the slides. It’s not very textual so you’ll have to sort of imagine the flow, but I’ll do a standalone essay on the topic soon.



Notes


The conversion to PowerPoint borked my super sick fonts. Don’t hate on my typography; it got mangled.

Published on January 27, 2017 12:12

January 26, 2017

The Future of Education



The future of education is fascinating. I see a few things happening simultaneously with it, and I want to capture them here.




Traditional brick-and-mortar schooling is fading in reputation because the quality has been massively diluted over the last two (or so) decades. A degree is less likely to guarantee a level of quality or competence than it used to.
Non-Traditional schooling is gaining prominence, both because traditional approaches are getting worse and because the notion of remote anything/everything is becoming more accepted.
Because the overall penetration of higher education is increasing, and the quality is falling as a result, many are thinking more about assessment-based vetting rather than giving credit for simply having endured a period of time.
This means hiring companies and schools are likely to start thinking about validation through testing (or accepting accreditation from schools that use fairly standardized testing as a criteria for completion).
Many top-end institutions, with the best professors in the world, are now starting to give away their lectures and exercises for free.
The content is also being put online for everyone in the world with an internet and a mobile device to consume.
The combination of these effects could start to act like a prism, breaking traditional education into its components: 1) presentation of high-quality material, 2) hands-on learning/building/practicing, and 3) validation and verification that the student has a level of competency in the subject.


It’s a modular approach, and modular approaches are great for the internet.



Imagine some organizations just creating or curating the best possible lecture and presentation material for whatever subject. Then another group that creates and/or facilitates the best possible exercises and hands-on activities. Building, breaking, implementing, practicing. And finally another group of organizations that excels at ensuring that individuals have mastered the material in question.



The fourth piece of the mix would be the groups that want to hire and use educated / trained people. They would of course go to the testing organizations and figure out which of them had the credentials that were most correlated with success in a particular subject.



Longer term

The long game here is—you guessed it—machine learning.



You’ll tell the system what kind of job you want, and an education service will build you a customized curriculum. The more data you give it, the better that curriculum will be customized for you.



Same for employers. They’ll know the best combination of exposure to ideas, practice, and credentials work best for their jobs, and they’ll have recommendations built right into their openings.



The class element

There is another element that I left out of the equation. Ivy League education doesn’t just proxy intelligence; it also proxies family quality and class. Not always, but often.



There will be education components that focus on this as well. They may integrate with the lecture content, hands-on components, and certification pieces from above, but they’ll emphasize the elitist piece of the mixture.



Top-end facilities at a beautiful physical campus, lots of exclusive sports and activities, and most importantly, lots of networking with other successful people. It’s like prep school, private school, and every other exclusive type of educational organization that we’ve had for hundreds of years. The difference is that it’ll have the best of the actual education components integrated in from outside, overlaid on what’s basically training for country clubs and executive boardrooms.



Summary


Education gets broken out into its component parts of: a) the world’s best lectures delivered online, b) a hands-on component, c) the testing/validation/certification, and d) any social/class-based exclusivity.
The different entities in the equation care about different pieces of that mix, with the most important piece being the employer who wants to validate that people have a level of competency in the thing they’re being hired for. Everything else is secondary to that from a practical standpoint.
Machine Learning will ultimately be the death of traditional education because it will dissect where true talent and capability actually derive from. Some people might just need to listen to some lectures and practice. Others might need years of disciplined study. Still others might be unable to gain the requisite level of competency no matter what they do. And Machine Learning will be scary good at telling the difference.


TL;DR: Expect education to become far more modular and results-based. The only thing that will matter is predictive power for how well you’re likely to be able to perform a given task.


---

I do a weekly show called Unsupervised Learning, where I collect the most interesting stories in infosec, technology, and humans, and talk about why they matter. You can subscribe here.

Published on January 26, 2017 20:01

Does Next-gen AppSec Require Next-gen Developers?



I just got back from AppSec Cali, which is quickly becoming my favorite infosec conference. The venue is fantastic. I know so many of the people, and they tend to be super humble and laid back. And the content is decent as well.



It’s just a great conference.



Anyway.



One of the big themes that was discussed this year (it’s been talked about in previous years, but it’s getting much stronger now) is the concept of developer enablement in top-performing organizations.



I’ve been tracking this trend across multiple industries for a couple of years now, and it’s heavily concentrated in places like Netflix and Facebook, i.e., the top end of development shops, where functionality is primary and the product is deeply respected.



These are characteristics that tend not to exist in other types of organizations:




Developers are given massive amounts of responsibility and leeway.
They tend to be able to push code to production pretty easily, often whenever they want.
They do what works, and have very few forced boundaries in terms of languages, frameworks, specific rules to follow, etc.
But they are ultimately responsible for the quality of what they produce, so if they produce insecure or otherwise feeble code, that’s on them.
They’re responsible for whatever harm they cause.
They can be fired easily.


That’s a fascinating combination of elements, and it also produces some interesting behaviors from security.



Security at the high-end

Security in an organization like this becomes the waterboy in a football game. They’re there to facilitate. Do you have everything you need? Is there anything I can get for you? What can I do for you to make your job easier?



Imagine if the waterboy walked up to the quarterback during a big game and was like:




Hey, so listen, we talked about you doing that short route to the right. I don’t like it. It’s bad. I’m going to need you to run more because it’s safer. And if you don’t I’m going to tell the assistant coach, and you’ll get in big trouble.




The quarterback would look at this person like they had a head injury, and then they’d gesture slightly and that person would be banished from the stadium.



That’s an appsec group in a high-end development shop.



Security is not in charge. Their purpose is to enable the athletes to perform well. That means giving them the tools they need to be their best, and to be as safe as possible while doing so.



The athlete analogy

The athlete comparison continues to bear fruit, actually, because star athletes are gods until they’re kicked off the team.



Being able to push code to production whenever you want is a lot like having the freedom during a game to run in the wrong direction with the ball in order to get around a defender. It’s all fun and games until you get caught in the backfield.



You can do whatever you want, and you can make a couple of mistakes. But when you make one too many you’ll be tapped on the shoulder, and that’ll be it for you.



That’s true whether you’re showboating and dropping balls or you’re pushing crap code to production and causing outages during peak times.



The benefits

So why are elite companies heading in this direction? What’s so attractive about it?



Simple: companies with developers who are empowered in this way are able to produce a better product.



As I’ve written about a number of times before, Evolution beats Design when creativity is the goal.



Old style organizations are Design Based. The “good” ideas come from above. From on high. From the mountain. The lowly engineers simply implement the plans of their betters.



And the result is often mediocrity.



The new model is bottom-up. Evolution style. With evolution the power comes from the bottom, from the people. And the developers are the people. They’re the artists. They’re the creators. They’re the producers.



When they are enabled they can produce more ideas which then mix powerfully with other ideas. Mutation occurs, tests are performed, and outcomes are created that blow away most anything produced by top-down teams.



That’s the benefit. That’s what Netflix, Facebook, and places like Riot Games have figured out. And they’re embracing it.



So why doesn’t everyone do it?

Now we arrive at the point of this piece.



I engage with many very large organizations in my consulting work—Global 50-100 companies often with thousands or tens of thousands of developers.



So the question is,




Can we just train up 10,000 developers from ACMECORP and turn them into these super high-speed Netflix types?




I think the answer is (mostly) no.



When Netflix, Facebook, and Riot Games do their hiring for developers (and their security team) they’re filtering for a special combination of tech and culture. You have to be in the top n percent in terms of tech skills, AND have this spectacular ability to take responsibility, exercise good judgement, be a team player, etc.



Most companies don’t have these kinds of standards. Not anywhere close. Much of their developer workforce is actually contractors: giant swarms of low-paid resources that get dragged and dropped onto projects with very little vetting.



It’s not even the same sport as what the elite groups are doing in terms of hiring.



My worry

I’m somewhat concerned about the gap between mainstream, corporate development and the Holy Grail of Netflix/Riot Games (as it relates to developer productivity/responsibility/etc.).



It’s one thing to talk about DevOps and Agile and all this new high-speed kung fu, but it’s quite another to roll it out in organizations full of developers who simply can’t handle it.



I do believe that there are many in a pool of say 10,000 that CAN handle it. But we don’t know who those people are because we didn’t filter for those characteristics when they were hired. And it’s somewhat reckless to simply create projects, throw non-vetted folks onto the project after some training, and tell them to act like Netflix developers.



The more you move to new-style development (empowered/continuous/low-friction/etc.), the more you move responsibility downwards towards the developer, and my feeling is that this shift is going to require a corresponding increase in developer quality.



Let’s play with some numbers.



Let’s say that top-tier orgs hire 1 of every 100 applicants that could get a job at other mainstream development companies, like Accenture or whatever. 1%. They discard 99% due to not being technical enough or not having the right mindset.



[ NOTE: I’ve no idea if that’s a reasonably accurate percentage or not, but I wouldn’t be surprised if it was even lower. ]



And let’s say you have a regular, corporate organization with 1,000 developers, and they’ve been told to “Move to a Netflix model,” or to otherwise get to some nebulous approximation of the elite dev shops we’ve been talking about.



What happens when you give average developers maximum autonomy? What happens when you give them all the superpowers of ultimate responsibility, and the keys to production?



Now obviously this isn’t some switch you’re going to throw, and suddenly give a bunch of people access they’re not used to. You’re not going to do this overnight.



But my point is that we’re likely to have to go through a similar vetting process as these other companies used, to find the developers—out of that 1,000—who are capable of handling the new model.



And we’re most likely going to have to filter using an evolutionary model rather than a design one. You’re going to have to try people out, in other words, and be willing to discard them if they don’t work out. That’s likely going to be a big switch for traditional companies.



We probably won’t have to get to the 1% filter level to make major progress. Maybe we can take the top 10%. Or maybe the top 25%. I don’t know where that bar is, but it’s definitely not going to be the entire pool, and it will very likely be way less than half.
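The back-of-the-envelope math here can be sketched in a few lines. Note that the filter rates are the illustrative guesses from the discussion above, not real hiring data:

```python
# Back-of-the-envelope: how many developers from a 1,000-person pool
# survive various (hypothetical) vetting thresholds.
pool = 1000

# (label, pass rate) pairs -- illustrative guesses, not real figures.
filters = [("elite-tier (1%)", 0.01), ("top 10%", 0.10), ("top 25%", 0.25)]

for label, rate in filters:
    survivors = round(pool * rate)
    print(f"{label}: {survivors} of {pool} developers make the cut")
```

Even at the loosest of these guessed thresholds, three quarters of the existing pool wouldn’t qualify, which is the core of the argument.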



Summary


The new model for high-output and high-quality development is based on empowering developers with massive amounts of creative control, power, and responsibility.
This requires a different type and quality of developer.
The cutting-edge companies in the new model spend massive amounts of effort finding these people from within already qualified talent pools.
Because developers of this quality are so hard to find, it’s going to be far harder than people think to move the software industry over to the new style of product development.


TL;DR: The more creative freedom and responsibility you give to developers, the higher quality they need to be, and only a small percentage of the overall developer pool will make that cut. For this reason, we might want to temper our expectations for any mass migration to Netflix-style software development in the corporate world.



Notes


When I say that Netflix or Facebook or Riot Games does this or that, I’m speaking fairly generally in a way I think is safe based on what I know from various contacts I have in these companies. But I’m not claiming to be some authoritative resource on their exact hiring criteria. If you have insider information that runs counter to any of this, do let me know.
I am also a strong believer in the concept that giving responsibility to people makes them smarter and better in MANY ways, so this effect will definitely help convert many of the lower-quality developer resources into rockstar artist types. In truth those people always were that; they just never had a chance to show it. This will help with the move, but even then I think we’ll have massive percentages of developers who simply can’t make the transition.
I’m a security guy and could very well be wrong about some of these difficulty levels. Happy to have my models corrected by someone who knows better.

---


Published on January 26, 2017 00:40
