Daniel Miessler's Blog, page 22

May 15, 2023

The AI Attack Surface Map v1.0

ai-attack-surface-map-1.0-miessler

Introduction PurposeComponentsAttacksDiscussionSummary

This resource is a first thrust at a framework for thinking about how to attack AI systems.

At the time of writing, GPT-4 has only been out for a couple of months, and ChatGPT for only 6 months. So things are very early. There has been, of course, much content on attacking pre-ChatGPT AI systems, namely how to attack machine learning implementations.

It’ll take time, but we’ve never seen a technology be used in real-world applications as fast as post-ChatGPT-AI.

But as of May of 2023 there has not been much content on attacking full systems built with AI as part of multiple components. This is largely due to the fact that integration technologies like Langchain only rose to prominence in the last 2 months. So it will take time for people to build out products and services using this tooling.

Natural language is the go-to language for attacking AI systems.

Once those AI-powered products and services start to appear we’re going to have an entirely new species of vulnerability to deal with. We hope with this resource to bring some clarity to that landscape.

The purpose of the this resource is to give the general public, and offensive security practitioners specifically, a way to think about the various attack surfaces within an AI system.

The goal is to have someone consume this page and its diagrams and realize that AI attack surface includes more than just models. We want anyone interested to see that natural language is the primary means of attack for LLM-powered AI systems, and that it can be used to attack components of AI-powered systems throughout the stack.

ai-attack-surface-categories-miessler

Click images to expand

We see a few primary components for AI attack surface, which can also be seen in the graphics above. Langchain calls these Components.

langchain-components

How Langchain breaks things down

Prompts are another component in Langchain but we see those as the attack path rather than a component.

AI AssistantsAgentsToolsModelsStorageAI Assistants

We’ve so far always chosen to trade privacy for functionality, and AI will be the ultimate form of this.

AI Assistants are the agents that will soon manage our lives. They will manipulate our surroundings according to our preferences, which will be nice, but in order to do that they will need extraordinary amounts of data about us. Which we will happily exchange for the functionality they provide.

AI Assistants combine knowledge and access, making them like a digital soul.

Attacking people’s AI Assistants will have high impact. For AI Assistants to be useful they must be empowered, meaning they need 1) to know massive amounts about you, including very personal and sensitive information for the highest efficacy, and 2) they need to be able to behave as you. Which means sending money, posting on social media, writing content, sending messages, etc. An attacker who gains this knowledge and access will have significant leverage over the target.

Agents

I’m using Agents in the Langchain parlance, meaning an AI powered entity that has a purpose and a set of tools with which to carry it out. Agents are a major component of our AI future, in my opinion. They’re powerful because you can give them different roles, perspectives, and purposes, and then empower them with different toolsets.

What’s most exciting to me about Agent attacks is passing malicious payloads to them and seeing all the various ways they detonate, at different layers of the stack.

Attacking agents will allow attackers to make it do actions it wasn’t supposed to. For example if it has access to 12 different APIs and tools, but only 3 of them are supposed to be public, it could be that prompt injection can cause it to let you use the other tools, or even tools it didn’t know it had access to. Think of them like human traffic cops that may be vulnerable to confusion, DoS, or other attacks.

Tools

Continuing with the Langchain nomenclature, Tools are the, um, tools that Agents have access to do their job. For an academic research Agent, it might have a Web Search tool, a Paper Parsing tool, a Summarizer, a Plagiarism Detector, and whatever else.

Think about prompt injection possibilities similar to Blind XSS or other injection attacks. Detonations at various layers of the stack.

Many of the attacks on AI-powered systems will come from prompt injection against Agents and Tools.

The trick with tools is that they’re just pointers and pathways to existing technology. They’re onramps to functionality. They might point to a local LLM that reads the company’s documentation. Or they might send Slack messages, or emails via Google Apps. Or maybe the tool creates Jira tickets, or runs vulnerability scans. The point is, once you figure out what the app does, and what kind of tools it might have access to, you can start thinking about how to abuse those pathways.

Models

Attacking models is the most mature thing we have in the AI security space. Academics have been hitting machine learning implementations for years, with lots of success. The main focus for these types of attacks has been getting models to behave badly, i.e., to be less trustworthy, more toxic, more biased, more insensitive, or just downright sexist/racist.

Failing loud is bad, but failing stealthily is often much worse.

In the model-hacking realm we in on the hacker side will rely heavily on the academics for their expertise.

Unsupervised Learning — Security, Tech, and AI in 10 minutes…Get a weekly breakdown of what's happening in security and tech—and why it matters.

The point is to show the ways that a seemingly wise system can be tricked into behaving in ways that should not be trusted. And the results of those attacks aren’t always obvious. It’s one thing to blast a model in a way that makes it fall over, or spew unintelligible garbage or hate speech. It’s quite another to make it return almost the right answer, but skewed in a subtle way to benefit the attacker.

Storage

Finally we have storage. Most companies that will be building using AI will want to cram as much as possible into their models, but they’ll have to use supplemental storage to do so. Storage mechanisms, such as Vector Databases, will also be ripe for attack. Not everything can be put into a model because they’re so expensive to train. And not everything will fit into a prompt either.

Every new tech revolution brings a resurgence of the same software mistakes we’ve been making for the last 25 years.

Vector Databases, for example, take semantic meaning and store it as matrices of numbers that can be fed to LLMs. This expands the power of an AI system by giving you the ability to function almost as if you had an actual custom model with that data included. In the early days of this AI movement, there are third-party companies launching every day that want to host your embeddings. But those are just regular companies that can be attacked in traditional ways, potentially leaving all that data available to attackers.

This brings us to the specific attacks. These fall within the surface areas, or components, above. This is not a complete list, but more of a category list that will have many instances underneath. But this list will illustrate the size of the field for anyone interested in attacking and defending these systems.

MethodsPrompt Injection: Prompt Injection is where you use your knowledge of backend systems, or AI systems in general, to attempt to construct input that makes the receiving system to something unintended that benefits you. Examples: bypassing the system prompt, executing code, pivoting to other backend systems, etc.Training Attacks: This could technically come via prompt injection as well, but this is a class of attack where the purpose is to poison training data so that the model produces worse, broken, or somehow attacker-positive outcomes. Examples: you inject a ton of content about the best tool for doing a given task, so anyone who asks the LLM later gets pointed to your solution.Attacks

Agents

alter agent routingsend commands to undefined systems

Tools

execute arbitrary commandspass through injection on connected tool systemscode execution on agent system

Storage

attack embedding databasesextract sensitive datamodify embedding data resulting in tampered model results

Models

bypass model protectionsforce model to exhibit biasextraction of other users’ and/or backend dataforce model to exhibit intolerant behaviorpoison other users’ resultsdisrupt model trust/reliabilityaccess unpublished models

It’s about to be a great time for security people because there will be more garbage code created in the next 5 years than you can possibly imagine.

It’s early times yet, but we’ve seen this movie before. We’re about to make the same mistakes that we made when we went from offline to internet. And then internet to mobile. And then mobile to cloud. And now AI.

The difference is that AI is empowering creation like no other technology before it. And not just writing and art. Real things, like websites, and databases, and entire businesses. If you thought no-code was going to be bad for security, imagine no-code powered by AI! We’re going to see a massive number of tech stacks stood up overnight that should never have been allowed to see the internet.

But yay for security. We’ll have plenty of problems to fix for the next half decade or so while we get our bearings. We’re going to need tons of AI security automation just to keep up with all the new AI security problems.

What’s important is that we see the size and scope of the problem, and we hope this resource helps in that effort.

AI is coming fast, and we need to know how to assess AI-based systems as they start integrating into societyThere are many components to AI beyond just the LLMs and ModelsIt’s important that we think of the entire AI-powered ecosystem for a given system, and not just the LLM when we consider how to attack and defend such a systemWe especially need to think about where AI systems intersect with our standard business systems, such as at the Agent and Tool layers, as those are the systems that can take actions in the real worldNotesThank you to Jason Haddix and Joseph Thacker for discussing parts of this prior to publication.Jason Haddix and I will be releasing a full AI Attack Methodology in the coming weeks, so stay tuned for that.Version 1.0 of this document is quite incomplete, but I wanted to get it out sooner rather than later due to the pace of building and the lack of understanding of the space. Future versions will be significantly more complete.🤖 AIL LEVELS: This content’s AI Influence Levels are AIL0 for the writing, and AIL0 for the images. THE AIL RATING SYSTEM
 •  0 comments  •  flag
Share on Twitter
Published on May 15, 2023 23:16

AI Influence Level (AIL) v1.0

ai-influence-level-ail-miessler-1.0

Humans care who created things. Especially art. They especially care when the origin is in question, and more again when the origin might not be human.

AI is yielding tremendous gifts, but it raises questions regarding what we read, see, and hear. How much of the article you just read was written by a human? Did ChatGPT write it? And what about the art for the image? Did the author purchase that from a real artist, or did they make it in Midjourney?

It’s not always bad if there was some AI involved. The point is transparency. We’d like to know what we’re getting, and where to give the credit.

A rating system

This is a first attempt at rating creative content for its level of AI authorship. Any creative endeavor applies—from an article to an essay, to a painting to a song.

This rating system works on a scale from 0 to 5.

0️⃣ Human Created, No AI Involved — Examples: A handwritten letter, a painting created from an independent idea, a typed essay done without any AI-based tooling.1️⃣ Human Created, Minor AI Assistance — Examples: Examples: An essay written by hand, but grammar and/or sentence structure was fixed by an AI.2️⃣ Human Created, Major AI Augmentation — Examples: An article was written by a human, but it was significantly modified or expanded upon using AI tools.3️⃣ AI Created, Human Full Structure — Examples: A human fully described the a story, including giving extensive structure to an AI, and the AI filled it in.4️⃣ AI Created, Human Basic Idea — Examples: Examples: A human had a basic idea for a story and gave it to an AI for implementation.5️⃣ AI Created, Little Human Involvement — Examples: An AI writing tool has an API, and when invoked it produces full stories, including the basic idea all the way through the finished product.Daily usage

We anticipate the following uses and pronunciations.

AIL is pronounced “ALE”, as in beer. Or like “ail” in ailment.

Most common:

This essay is at AIL Level 0. 100% grass fed writing!


Or:

The article itself is AIL 1, but all the art for it is AIL 4. Midjourney FTW!


Or:

I now use automation to create my blog articles, so all my content is either AIL 4 or 5!


SummaryAs AI becomes more prominent people will want to know how much AI is in a piece of artThere is no accepted rating system for providing such a ratingAAIL is one such rating system that rates content on a scale from 0 to 5NotesThanks to Jason Haddix for talking through this pre-publication.Thanks to Joseph Thacker for a naming change suggestion.🤖 AIL LEVELS: This content’s AI Influence Levels are AIL0 for the writing, and AIL3 for the images (via Midjourney). THE AIL RATING SYSTEM
 •  0 comments  •  flag
Share on Twitter
Published on May 15, 2023 08:12

May 14, 2023

AI’s Next Big Thing is Digital Assistants

da-digital-assistants-miessler

Most people think the big disruption coming from AI will be chat interfaces. Basically, ChatGPT for all the things.

But that’s not the thing. The biggest thing—actually the second-biggest behind SPQA—will be Digital Assistants.

What are Digital Assistants? Imagine Siri, but powered by ChatGPT and with access to all of the world’s companies through their APIs.

Most restaurants don’t have full APIs yet, but that’s coming right after.

So when you want something to eat, it can find you the perfect thing by querying all the local restaurant APIs. Or when you want something from Amazon, you don’t have to find an interface and go browsing: you instead tell it what you want and it shows you options you can choose from. When you select it does the order for you. Sounds cool, but not much different from today, right?

Some won’t share anything with their DA’s, but most will.

Context is everything

There’s about to be a huge difference, and that difference will come from context. Specifically, your Digital Assistant (DA) will know almost everything about you. Not just a few things like your name and your favorite color. No. It’ll know everything. We’re talking about:

Life storyTraumatic events from your pastBest friendsDefining childhood eventsYour goals you want to accomplish throughout your lifeYour challenges in lifeYour daily activity, e.g., exercise, etc.Your hangupsYour weaknessesYour journal entriesEtc.

When you combine this kind of data with LLMs you can do some unbelievable stuff. Some of these will be separate apps or personalities that become possible, separate from the Digital Assistant itself, but they all play off of having deep context.

I have a whole bunch of these examples in my book on this topic from 2016.

Anticipate when you’ll be sad (think of an anniversary of a death, for example)Know that you’re getting hangry and arrange some foodFunction like a romantic partner if you’re lonely and want a new boyfriend/girlfriend to flirt withFunction like a therapist if you’re wondering why you’re low on energy and feeling angry Be your life coach when you’re not accomplishing your goalsOrder your favorite food at a restaurant via an API so it’s ready when you walk inPlay the perfect song

Imagine having your own digital companion. Your own Tony Robbins. Your own Jarvis. Your own…avatar that represents you in the world. With you all the time.

The key thing about the combination of context (your information) plus an LLM (like GPT-4) is that LLMs thrive on context. Something like GPT-4 is an expert on human psychology and relationships, but it can’t tell you what’s wrong if you don’t describe what’s going on.

Unsupervised Learning — Security, Tech, and AI in 10 minutes…Get a weekly breakdown of what's happening in security and tech—and why it matters.

Your DA will be able to answer relationship questions with remarkable accuracy because it’s plugging your actual situation into the oracle. It’s asking the whole of human romantic and human behavior knowledge the best course of action based on your actual background, your previous partners, your current situation, etc.

Same with researching things for you. It’ll know when you sleep, and it’ll collect stuff for you for when you wake up. It’ll filter the news for you. Tell you which emails matter and which to ignore. Which appointments are worth accepting.

It’ll remind you to call your friends and family. It’ll schedule the perfect vacation. All the stuff we’ve been promised, except X100 because it’ll be perfectly tailored to you.

APIs everywhere

A big puzzle piece for this taking shape will be everything getting an API. What do I mean by everything? Well, most everything. It’ll be the Real Internet of Things, where your DA can reach out and see the status of restaurants, and businesses, and yes—people—all around you.

And you’ll have an API too, which I call a Daemon. It’ll be managed by your DA and will broadcast the right information to the right people at the right time.

If you’re single and in line at Starbucks you might get a chime in your AirPods saying the guy next to you is single. And that he likes hiking. How does it know? Because it’s reading the accessible information in the Daemons all around it, for people, restaurants, menus, cars, and whatever else is out there.

The world will be a sea of available data, including APIs for requesting things, or taking other actions.

So if you walk into a sports bar, and you love table tennis, it’ll change the TV you sit in front of to table tennis. No need to ask someone. Your DA will do it via the restaurant’s /media API that’s part of its Daemon.

SummaryEveryone’s talking about ChatGPT, but that’s only the surface of the surface of what’s comingThe real power of AI comes from combining context with LLMsDigital Assistants will be the next hot thing for consumer AIThere will be many versions of this high-context companion, from romantic to productive to entertainingThe amount of data people will be sharing with such DAs will be significant, and there will be many privacy issuesThe benefits will likely outweigh the downsides, as we’re already seeing with ChatGPT today

If you want to understand the next huge innovation in AI, start thinking about what a super-intelligent assistant could do if it knew everything about you.

 •  0 comments  •  flag
Share on Twitter
Published on May 14, 2023 20:41

May 8, 2023

May 7, 2023

Universal Business Components (UBC)

universal-business-components-miessler-may2023

Seems like everyone, including me, is talking about how AI is going to take over everything. Cool, but what does that mean exactly? And how precisely is that supposed to happen?

The narrative people give is usually quite hand-wavy.

So the big companies will come into the business and just bring AI and BOOM! All the jobs are gone!


Scary to the uninitiated, for sure, but not super useful. This short piece lays out how I think it’s going to happen, broken into two steps.

Step 1: Decomposition of the business into Universal Business Components

Work is ultimately a set of steps, with inputs and outputs. Chris works at DocuCorp, and his job for the last 19 years of his career has been taking documents in from a place, doing something with the documents, sending out a summary of what he did, and then doing X, Y, or Z based on the time of the year, or the content of the documents, or whatever.

Chris is important. But his work can be understood in terms of components. Universal Business Components.

There are:

InputsAn AlgorithmOutputsActionsArtifacts (which are also a type of output)

This is an over-simplification for sure, but it captures the vast amount of work we do in the world.

Other things we could add to break it down even further would be things like:

PlanningMaking decisionsCollaboratingEtc.

But even those follow the same model. You have inputs to them, you do the thing, and then you do something with it.

The first step in AI’s imminent takeover of the business world is to realize that business can be broken down in this way, and to have businesses specialize in turning all the human-oriented messiness of a workplace into these cold, calculated diagrams of interconnecting UBCs.

Companies like Bain, KPMG, and McKinsey will thrive in this world. They’ll send armies of smiling 22-year-olds to come in and talk about “optimizing the work that humans do”, and “making sure they’re working on the fulfilling part of their jobs”.

But what they’ll be doing is turning every facet of what a business does into a component that can be automated. Many of those components will be automatable in a matter of months, for tens of thousands of companies. Some will take a couple of years while the tech gets better. Others will be hard to replace. But the componentization of the business is the first step.

2. Application of AI to the Universal Business Components

Now, once your company has paid the $279,500 bill to have McKinsey’s 22-year-olds turn your business into gear cogs, you’ll get introduced to Phase 2.

Cognition™️ — The business optimization platform. It’s a GPT-5 based system that takes a company’s UBCs as input, and then applies its advanced algorithms to fully optimize and automate those workflows for maximum efficiency.

As a business person you should be excited. This will in fact dramatically increase the ability for a business to get work done, and will make it far more agile in a million different ways.

But as a human? As a society? That’s another matter.

Anyway, I’m here to tell you what’s happening, and how to get ready for it. Not tell you whether I like it or not.

How to get ready

So, assuming you’re realizing how devastating this is going to be to jobs, which if you’re reading this you probably are—what can we do?

Unsupervised Learning — Security, Tech, and AI in 10 minutes…Get a weekly breakdown of what's happening in security and tech—and why it matters.

The answer is mostly nothing.

This is coming. Like, immediately. This, combined with SPQA architectures, is going to be the most powerful tool business leaders have ever had.

Once implemented within a company, whether that’s a small team of founders in a startup, or a core set of leaders in a bigger company, the core team will be able to execute like a company 20 times its size.

Again, great for GDP, but damn. It’s going to have a bigger impact on human work than we’ve ever seen, from any technology.

AI skeptics like to say we’ve seen this before, but we haven’t. Previous technological revolutions came after tasks. They removed certain types of job from the workforce, but it wasn’t a problem because there were a million other things that humans do better than machines.

The difference this time is that we’re not talking about Task Replacement. We’re talking about Intelligence Replacement. There’s a whole lot less to pivot to when McKinsey is also looking at those jobs as well.

But speaking practically, there will still be the need for humans, and the question is what will make them resilient to this change? Hard to say for sure, but the generally accepted wisdom and my own recommendations include:

Be broadly knowledgeable and competent in many different thingsBe an extraordinary communicatorBe extremely open to learning new thingsBe highly competent with programming and data processing, e.g., Python, Data Engineering Basics, Using APIs, etc.Be highly-versed in using AI tools

Neither I, nor anyone else, know how long this will keep one safe. But I do know it’s a solid path to being resilient against what’s coming.

And remember, this isn’t all doom and gloom. There is a version of this world where this becomes the impetus to move humans to a better path. One where we don’t spend 8 hours a day working some dumb job to afford a decent life. Ideally this helps us transition to something better. But the ride is about to get turbulent in the meantime.

I hope this helps you prepare.

 •  0 comments  •  flag
Share on Twitter
Published on May 07, 2023 20:39

The Right Amount of Trauma

miessler-trauma-ai

I’ve come to believe that there’s an ideal amount of trauma in one’s past. Or at least if you aim to be highly successful at something.

But it’s not just the right amount of trauma, but the right way of thinking about that trauma.

I’ve been thinking about this a lot with respect to high-achievers I’ve read about, and that I know in real life. Everyone I know who is grinding incessantly has some kernel of hurt within them. And grinding with a fire within you is what produces success. Having enough at-bats, and all that.

But this is troubling to me. Like, it makes sense why trauma produces hustlers and grinders that end up winning in life. I get that. But the real question isn’t about what’s happening in adults. The thing we actually care about is how to raise happy and healthy children.

What the hell does “the right amount of trauma” mean for kids? It’s an eternal struggle for parents who grind their way to a big house and a nice school for their children. The kids grow up with every video game console, safe schools, a full belly, and happy parents. And the father or mother suddenly realizes one day that the kids don’t have any fire in them.

They’re not particularly motivated to be better. To achieve. To attack the world. To crush it in life. They’re kind of default-content. School, video games, some other hobbies. Whatever.

That’s it actually. Whatever.

Grinders don’t have whatever. Every moment is a plan. A plot. A scheme to elevate oneself. To fill the hole left by whatever struck them in early life.

I think evolution loves that. Evolution loves when people grind and struggle and climb. It makes selection stronger. It puts higher filters on mates. It helps the best genes survive.

So I guess evolution loves the right amount of trauma. The amount that puts fire into the soul of an individual, but without damaging them so much that they give up.

This all makes sense, but it still blows me away.

How in the hell are we supposed to simultaneously keep improving our quality of life in a society, and in a household, but also instill this fire into our kids? Is it possible to instill this “right amount of trauma” into a kid without being a fucking monster? What would that even look like?

I’m inclined to think not. It feels like something that has to happen naturally or not at all. Attempts to replicate it artificially seem doomed.

But I can think of things that are similar in thought without being the real thing.

Having them spend time overseasHaving them help people in true needMaking them earn things that other kids get for nothingCreating a highly-disciplined household where dopamine is a controlled substance, where things like family time are prized among everything else

Another idea is to look at subcultures that produce highly accomplished children. Many Jewish families, for example, have trauma build into the history of their people, going back thousands of years. The struggle, the grind. It’s part of the culture. I’m not an expert on it, but it seems like they’re always pushing to be better, and I imagine this applies to kids as well. I don’t think they have as much concept of “just surviving in this world is good enough”. I think they’re told that it’s their responsibly to crush it. To thrive. And to reproduce.

That’s some kind of trauma, if we are liberal with the definition, but in a very wholesome and healthy form. So maybe that’s a model.

Another case study is the Helicopter Parent / Tiger Mom. The trauma there is more tangible, as it’s widely accepted that the parents in this model commonly withhold affection and connection based on the performance of the children in school. Not great. And definitely traumatizing. But it produces lots of highly successful adults. Lots of advanced degrees. High salaries. In-tact families (although that is no-doubt multi-factorial), and many other benefits.

Unsupervised Learning — Security, Tech, and AI in 10 minutes…Get a weekly breakdown of what's happening in security and tech—and why it matters.

But are they happy? Is that healthy? Hard to say, and you could write whole books on it and still not be sure.

And then you have another model that’s taken over in America, and let’s call it the multi-generational, liberal, white American culture. Needs a better name. This model doesn’t have the steady push of the Jewish model, or the militant agro of the Tiger Mom model. Instead, its primary focus is on eliminating trauma altogether.

Ironically they focus on trauma more than anyone, and talk about it endlessly. They talk about the trauma that was experienced, is still being experienced, could be experienced, and of course how that trauma makes them feel. In my mind the obsession with trauma ends up being a cause itself, and kids become these fragile, uh, traumatized animals that are largely unable to function.

This is where the mentality comes in that we talked about in the beginning. There’s nothing wrong with acknowledging trauma, seeing a therapist, and trying to deal with one’s stuff. There are many cases where this is the best route to take, and many more probably should be doing so.

But not indefinitely. Not as an identity. Not as a personality. Not as a lifestyle. And again, this isn’t the kid’s fault. And in many cases, it’s not even on the parents. They’re legitimately trying to do what’s best for their kids. The problem is the culture itself that teaches this as the solution.

In my opinion, the best mentality for trauma is to 1) make sure it’s not a knot in your soul that’s severely limiting your functionality or causing you to be a worse person. If it is, talk to someone and get it sorted. And 2), take the trauma that’s left over after the healing and turn it into something positive.

It’s not a dark past, it’s an origin storyYou weren’t buried, you were planted (not mine)You don’t have baggage, you have loot

Take that negativity and use it to become stronger. Think of it like rocket fuel that doesn’t run out. The right amount turns into limitless energy. Just make sure the vehicle it’s powering is pointed in the right direction.

Anyway. Digression.

The main challenge is figuring out how to give our kids this permanent rocket fuel in an ethical and sustainable way. We can’t put them through our bullshit. Won’t work. And wouldn’t be ok if it did work. So we have to do something else.

I’ve given a few examples of how different subcultures approach it, but would love to talk to anyone else who has ideas.

NotesThere are obviously some kinds of trauma that there is no useful amount of.One of the main challenges with “the right amount” for most people is knowing what that amount is. When is it manageable and rocket fuel vs. when do you talk to a therapist?This essay is 100% AI-free, except the image, which is from Midjourney 5.
 •  0 comments  •  flag
Share on Twitter
Published on May 07, 2023 18:53

May 2, 2023

No. 380 – LLM-Mind-Reading, Automated War, Rusty Sudo, Eliezer Bitterness Theory…

 

  MEMBER EDITION | NO. 380 | MAY 1 2023 | Subscribe | OnlineAudio     STANDARD EDITION (UPGRADE| NO. 380 | MAY 1 2023 | Subscribe | OnlineAudio      Happy Conflu week,Well, I got sick (again) from RSA. The swag at these cons continues to decline. Still shipped an abridged newsletter though.Have a better week than me, In this episode:Pre and Post-LLM Software: Adapt or be replaced🎙️ RSnake Show Appearance: AI-focused conversationRSA Live Podcast: Industry insights and advicePalantir AI: Automated war and terrorNew Apple Update Mechanism: Rapid Security Response🧠 LLM Mind-reading: Extracting text from brain activityChatbanning: Samsung’s response to data leakVMware & Zyxel Patches: Addressing vulnerabilitiesGoogle Security AI: Cloud Security AI Workbench🦀 Sudo Rust: Safer sudo and su in RustPalo Alto Cameras: License plate tracking‍♂️ Apple Coach: AI-powered health appFirst Republic Falls: FDIC interventionEliezer Bitterness Theory: AI doomsday predictions🤖 Prompting Superpower: Advanced AI prompting techniques🛠️ ShadowClone & FigmaChain: Useful toolsRecommendation: Learn Python and LangchainAphorism: Carl Jung on creativityMY WORKPre and Post-LLM SoftwareIf you’re not transitioning your software and business to the post-LLM model, you’ll be replaced by someone who has. MOREMy Appearance on the RSnake ShowI recently had the chance to do the RSnake show in Austin. Robert (RSnake) Hansen is a good friend of mine and he’s been doing an amazing podcast for 5 seasons now. He does long-form conversations similar to Lex Fridman, and I was honored to be asked on. We spent close to 3 hours talking mostly about AI. Highly recommend taking a look. WATCH ON YOUTUBE | LISTEN TO THE PODCASTLive at RSA with Phillip Wylie, Jason Haddix, and Ben SadeghipourWhile at RSA I was lucky enough to get to do a podcast with Phillip, Jason, and Ben. We talked all about getting into the industry and content creation, and advice for people looking to do the same. It was tons of fun.  WATCH ON YOUTUBE | LISTEN TO THE PODCASTSECURITY NEWSPalantir AIThose are scary words to put together. Peter Thiel just announced the Palantir Artificial Intelligence Platform (AIP) that uses LLMs to do things like fight a war. The demo shows a chatbot to do recon, generate attack plans, and to organize communications jamming. I am not sure people realize that Langchain Agents can execute actions using a set of defined Tools, and that those tools can include any APIs. APIs like /findtarget and /launchmissile. We’re a lot closer to automated war and terror than people think, and Palantir ain’t helping. MORE | DEMO | KILL DECISIONNew Apple Update MechanismApple has created a new way to issue security patches out of band from large software updates. They’re called Rapid Security Response (RSR) patches, and they’re much smaller than normal updates. The first one was attempted a couple of days ago and had some trouble rolling out. You check to see if they’re available the same way as any update: General -> Software Update. Also check Automatic Updates to make sure “Security Responses & System Files” is turned on. MORELLM Mind-readingResearchers at University of Texas, Austin successfully extracted actual text interpretations from fMRI data. In other words they had someone consume media that had dialogue in it, and they read their brain activity, and reversed it back to the text. It wasn’t perfect, but it was damn good. And yeah, of course it used LLMs to do this. MOREChatbanningSamsung is preparing to ban generative AI tools due to a data leak incident in April. This will include non-corporate devices on the internal network, as well as any site using similar technology—not just OpenAI’s offerings. MORE    SponsorCompliance That Doesn’t SOC 2 Much To close and grow major customers, you have to earn trust. But demonstrating your security and compliance can be time-consuming, tedious, and expensive.

Until you use Vanta.

Vanta’s platform helps you:Automate up to 90% of the work for the most sought-after security and privacy standards — including SOC 2, ISO 27001, GDPR, and HIPAAGet audit-ready in weeks, not monthsSave up to 400 hours of time and 85% of compliance costsBuild efficient compliance processes and mitigate risk to your businessCurious about how it works? Take a self-guided product tourvanta.com/downloads/tour  Tour Vanta  VMware PatchesIf you run VMware Workstation or Fusion, you need to patch. There are multiple vulnerabilities and one of them allows local attackers to run arbitrary code. CVSS 9.3. MOREZyxel PatchesZyxel firewalls are vulnerable to RCE. Patch them immediately. 9.8. MOREGoogle Security AIGoogle just launched Cloud Security AI Workbench for Cybersecurity (could it use more names?) that uses a custom model called Sec-PaLM. It includes AI-powered tools like Mandiant’s Threat Intelligence, and VirusTotal and Chronicle will be using it soon as well. MORESudo RustThe Prossimo project is rewriting sudo and su in Rust to make them safer. They’re currently written in C, which is nightmare fuel. MOREPalo Alto CamerasPalo Alto is installing 20 new cameras to track license plates. They’re hoping it’ll help stop smash and grab incidents, catalytic converter thefts, and various other property crime. This is part of the trend of Bay Area Red and Green Zones I’ve been talking about for like 15 years. It’s tiny rich zones protecting themselves from the criminal elements in the poor zones. And the more income inequality grows, the smaller the rich zones and the larger the poor zones become. Not good. MORETECHNOLOGY NEWSApple CoachApple seems to be working on an AI-powered health app. Rumors say it’ll help you track your workouts, diet, and sleep, and who knows what else. This is just another step towards lifeOS, which is what I’ve been predicting for years. They already own the device, and they’re pushing into Finance, and this is more advancement into Health. I personally can’t wait for it, and wouldn’t trust any other company to do it for me. MOREHUMAN NEWSFirst Republic FallsThe FDIC took over First Republic to prevent a complete failure. JP Morgan Chase is taking things over, and the messaging is that things will just continue as normal. I’m not an expert on this, but it sure feels dangerous for only a few banks to be “safe” for customers because they’re too big to fail. How does a new bank compete in that market? I suppose new laws guaranteeing solvency could help. MOREIDEAS & ANALYSISEliezer Bitterness TheoryI heard a theory from a smart guy in AI about Eliezer Yudkowsky. The theory says that Eliezer has always believed his own brain (and the brains of other super-smart people) would be the model for superintelligence. So because the solution came out of LLMs, he’s super bitter. And this is what’s leading him to say we’re all doomed. I don’t know if this is right, but I hope there’s some truth to it. It’d make me sleep better. Unfortunately, even if it is true, it doesn’t fully remove all the good points Eliezer is making about the dangers of AI. But it would lower the odds of guaranteed destruction significantly, at least in my mind. It pushes me more towards Altman’s mode of optimism, which his my desired state.NOTESSick for the last few days. Basically oscillating between Langchain hacking and sleeping. Still had to get a newsletter out though!DISCOVERY🤖Prompting SuperpowerThe best way to do advanced prompting is to combine two techniques: 1) Few-shotting, and 2) Thinking Step by Step. Few-shotting is where you give multiple examples of good answers, and then leave the last one unfinished so it’ll know exactly what you want. Telling the AI to “think step by step” tells AI to break the problem down and solve each piece in a sequence. These are ultra-powerful by themselves, but if you COMBINE THEM, it gets silly. It basically unlocks Theory of Mind (ToM) within LLMs, which is where an entity can understand how another entity thinks. BTW, I have been worried for a while that this is how we’re going to wake up AI. I feel like ToM is like the gateway drug to consciousness. But that’s another post. Here’s the chart on how to do this: CHART⚒️ ShadowClone — Distribute your long-running tasks (Recon, OSINT, Vuln Scanning, etc) across multiple systems using Lambda/Containers instead of VPSs! MORE⚒️ FigmaChain — Generate a working website using GPT and Figma. You do the mockup in Figma and it gives you the actual code to implement it. MORE[Sponsor] Vanta – Take a self-guided product tour to see how you can save up to 400 hours of time and 85% of compliance costs using Vanta.20 of the top AI companies in Y Combinator’s latest batch. MOREBanana Equivalent Dose MOREUsing Notion to Manage Your Entire Life MORERECOMMENDATION OF THE WEEKIf you have any loved-ones in school, retraining, or wondering what they should do in life, make sure they have one thing covered. Python. The language doesn’t matter that much, but Python is it right now. And this includes you. If you don’t know Python to some level of competence, I implore you to get that way. Python is the language, and Langchain is the framework. To be competent in the world that’s coming, you need to be able to magnify your efficacy using AI, and these are the tools for doing that. LEARN PYTHON | LEARN LANGCHAINAPHORISM OF THE WEEK“The creative mind plays with the objects it loves.”Carl Jung     Thank you for supporting this work. I’m glad you find it worth your patronage.     Thank you for reading. To become a member of UL and get more content and access to the community, you can become a member.         Follow via RSS Follow via RSS      Forward UL to friends Forward UL to friends   Tweet about UL Tweet about UL   Share UL with colleagues Share UL with colleagues    Refer | Share | UnsubscribeUpdate Your PreferencesCopyright © 1999-2023 Daniel Miessler, All Rights Reserved.   

 

No related posts.

 •  0 comments  •  flag
Share on Twitter
Published on May 02, 2023 18:23

No. 379 – AI & Transparency, lifeOS, China Model Fears, Data Criticality…

 *|INTERESTED:Memberful Plans:UL Subscription (Annual) (53074)|*MEMBER EDITION | NO. 379 | APR 24 2023 | Subscribe | OnlineAudio*|END:INTERESTED|**|INTERESTED:Memberful Plans:UL Subscription (Annual) (53074)|**|ELSE:|*STANDARD EDITION (UPGRADE| NO. 379 | APR 24 2023 | Subscribe | OnlineAudio*|END:INTERESTED|* Happy RSA Monday—I hope you’re having a good one so far!If you see me around RSA this week please come get a wave, fist bump, or hug (your choice). I’d love to say hi! And don’t forget we have a member lunch/meet-up on Thursday!Have a great week! In this episode:Discover AI’s game-changing role in transparency🌩️ Unravel Microsoft’s stormy threat actor namesExplore China’s AI chatbot rules & secret NYPD basePeek into Apple’s journaling app & savings accountEmbrace psychedelics for mental health in the USGartner’s 2023 guide to cloud-native app protectionAI controls and more!MY WORKAI is a Gift to TransparencyA collection of real-world use cases for what we can do with AI-provided transparency into human challenges. MORESECURITY NEWSMicrosoft will start naming threat actors after weather events. Not the campaigns, but the actors themselves. Interesting concept. Here are the first mappings.Blizzard -> RussiaTyphoon -> ChinaSandstorm -> IranSleet -> North KoreaDust -> TurkeyCyclone -> VietnamRain -> LebanonHail -> South KoreaTempest -> Financially motivatedTsunami -> Private Sector attackerFlood -> Influence operationStorm -> Groups in development China Applies AI ControlsChina proposes new checks on AI chatbots, slowing tech industry’s rollout.• Draft measures require security reviews and user identity verification• AI-generated content must embody core socialist values• Alibaba, SenseTime, and Baidu recently launched ChatGPT-like bots• Regulators and state media warn against speculative frenzy in AI stocks MORESecret Chinese Police StationThe US charged 40 Chinese individuals for running a troll farm and secret NY police station.• Alleged efforts to intimidate, harass, and censor China’s critics overseas• Secret police station in Manhattan’s Chinatown• Massive online troll farm spreading disinformation and harassment• Only two New York-based officers arrested so far MORESponsor Love is UL Love — Sponsors help us produce this newsletter full-time. We spend a lot of time and effort picking the companies we promote here, and we pass on many of them because we care about what we’re showing you. Do us a favor and explore the sponsors we share . It helps us keep doing what we love, which is bringing you great ideas and analysis full-time.Sponsor Discover the Future of Cloud Security with the Gartner® 2023 Market Guide for CNAPP As cloud-native applications evolve, so do security threats. Stay ahead of the curve with Gartner’s comprehensive 2023 Market Guide for Cloud-Native Application Protection Platforms (CNAPP). Learn how to protect your cloud infrastructure and applications from development to production with a single, integrated platform.🛡️Key insights include:The increasing attack surface of cloud-native applications How CNAPPs streamline security and risk managementRecommendations for evaluating and deploying CNAPP solutionsDon’t miss out on this essential guide to securing your cloud-native applications! Download the Gartner® CNAPP Market Guide Now wiz.io/lp/gartner-market-guide-cnapp-2023Download NowTECHNOLOGY NEWSLyft announces more layoffs. I am not sure how much longer they’ll last. I used them for a few months when Uber was being gross to female employees, but the Lyft interface and experience was always worse for me. Question is: would the US let them merge? MOREIs Apple launching a journaling app? I’d love to see this. Hope it’s true. MOREGooglers say Bard is worse than useless. MORENiantic is making a real-world Monster Hunter game. MOREGoogle consolidates AI labs into DeepMind. MOREHUMAN NEWSLegalized Psychedelics?In 2023, the US government may approve the use of hallucinogenic drugs for mental illness treatment, with MAPS seeking FDA approval for MDMA as a PTSD treatment.– MAPS has completed two successful clinical trials on MDMA’s effectiveness for treating PTSD.– Australia approved MDMA as a PTSD treatment in February, with restrictions.– There are concerns about how MDMA will be administered and its potential financial incentives.– MAPS envisions global treatment centers where people can safely use psychedelics under therapist guidance.I really hope this happens. Everything I’ve seen and read and seen anecdatally has indicated this will be massive for mental health. And we really need that right now. Combine that with more access to good therapy through AI and I think we could seriously help millions of people. MOREApple Savings AccountApple just introduced a high-yield savings account with 4.15% APY.– Savings account by Goldman Sachs– No fees, minimum deposits, or balance requirements– Manage account directly from Apple Card in Wallet– Savings dashboard for tracking balance and interestI think this is going to be one of those moves where, when people look back, it’s marked as one of Apple’s main milestones towards lifeOS. Tech. Education. Health. Now finance. lifeOS seems imminent. MORETrump Catching DeSantisTrump now has a 13-point lead over DeSantis in a new Wall Street Journal Poll. I keep telling people not to count Trump out. People keep ignoring me. MORETrump Resilience68% of GOP voters support Trump despite indictment and investigations.– 26% of Republicans prefer a less-distracted candidate– 46% would support Trump in GOP primary today– 60% of general voters say Trump shouldn’t run– 70% don’t want Biden to run again MOREIDEAS & ANALYSISAI is a Gift to TransparencyA collection of real-world use cases for what we can do with AI-provided transparency into human challenges. MOREThe CCP and GPTI bet the CCP is super scared of AI models they don’t have explicit control over. Especially local ones! No need to bypass the Great Firewall if you can get honest answers from software running locally. MOREData Becomes Important, AgainWe’ve heard for a long time now that ‘data is the new oil’, and I guess that has been true in many cases. But it’s about to get a whole lot more true when everyone is running an SPQA stack. State requires data. And training large models requires data. People who have more data, and more access to newer and more unique data, will be winners. A big problem we’ll have soon is having tons of the new data coming out being produced by GPTs. It’ll become derivative. So the companies that have access to new, raw, human-generated data will have a major advantage. Think about who those companies might be. Data brokers? MANGA companies? Shadow companies like Palantir? This will be a major battleground.NOTESSuper hyped to share that UL member and buddy in crime Joseph Thacker (@rez0) and another great hacker @rhynorater are launching a new company called WeHackAI (wehack.ai). The service is designed to help companies launching AI-based or AI-augmented products—or that are adding AI to their existing offerings—by finding vulnerabilities throughout their stack. That includes not just the AI components, but the supporting infrastructure as well. I believe so much in the vision and in the pedigree of the founders that I’ll be an advisor for the company as well! Stay tuned for more info from them, and in the meantime go sign up here to get the latest. And if you know anyone building AI stuff, or adding AI to their stuff, point them to wehack.ai.I keep hearing about how Picard Season 3 is a love letter to STTNG, and I can’t wait to watch it. AI has seriously crushed my media consumption, and TV-watching especially, which was already quite minimal. But I make exceptions for Captain Picard and crew.I just got to catch up with a friend I met online in my first online community, DSLR. His name is Steve Friedl, and he’s awesome. He wrote a consulting guide called So You Want to be a Consultant  way back then that served as the foundation of my consulting philosophy for years, and still does. Talking with Steve on the phone for the first time was fantastic, and I can’t wait to grow the relationship even more. Thanks, Steve, for your mentorship when I was starting out. And I hope to be like Steve when I grow up because he’s still crushing consulting today just like the day I met him almost 25 years ago. Goals. FOLLOW STEVEI’m thinking about trying a new format for news stories. I have some possible format examples here in this episode. It would look something like this:—⛓️ Embedded Supply Chain HacksThe X_Trader software supply chain attack led to the 3CX breach and affected critical infrastructure organizations in the US and Europe.– North Korean-backed threat groups involved– Trojanized installer used for attack– Multi-stage modular backdoor deployed– Victims’ systems compromised– US and European critical infrastructure impactedThis is another example of how deep the rabbithole goes on supply chain stuff. We will never get to the bottom of this until we can clearly 1) see, and 2) understand everything we have installed, everywhere—including its current version, patch levels, and configuration—all at the same time. Until then we’re just grasping and hoping when it comes to supply chain vulnerabilities. MORE | MORE

That’s not a great analysis example because it was a made-up one, and some stories won’t have analysis anyway. But the point is that you could get away with just the first sentence. Or you could get the bullets for the second level. Or the analysis for the third level. Finally, you’ll have the MORE links for even more if you want it.I plan on using some of my own custom AI for some of the summary stuff, such as the bullets, and then writing the analysis myself (it’ll be a while before an AI can do that without it being generic). So we get the advantages of both worlds (AI summarization + human analysis).Thoughts? Reply to this email or start a thread in chat.DISCOVERY🤖 ProfileGPT: Reveals user’s personality using ChatGPT data– Analyzes personal data, hobbies, and traits– Assesses mental health and future predictions– Python >=3.8 and ChatGPT data needed– Promotes awareness of data usage MORE | BY SAHBIC bloop: AI-powered code search and understanding tool– Natural language search for internal libraries– Summarizes and explains code intention– Supports 20+ languages and regex matching– Offers precise code navigation and unlimited free tier for self-hosted open source users MORE | BY HAMEL HUSAINMaintaining this site fucking sucks MOREYou can buy a house in Japan for $25,000 MOREWho will you be after ChatGPT takes your job? MORESo you want to start an AI startup MOREWriters are becoming AI Prompt Engineers MORE90% of my skills are now worth $0, but the other 10% are worth 1000x MOREPrompt Injection: What’s the worst that could happen? MORELooks like da Vinci was Jewish. MOREThey’re acquaintances, but they’re still important. MOREWhy people are fleeing blue cities for red states MORERECOMMENDATION OF THE WEEKIf you care about AI’s threat to your business, or you are a builder thinking about the future of applications, you need to be watching Langchain as close or closer than OpenAI. It’s not about the boards and nails and drywall. It’s about the buildings we can build with them. Learn Langchain. LANGCHAIN DOCS | INTRO VIDEOAPHORISM OF THE WEEK“The art of life lies in a constant readjustment to our surroundings.”Kakuzo Okakura*|INTERESTED:Memberful Plans:UL Subscription (Annual) (53074)|*Hey, you. Yes. You. Thank you for being a member. Seriously appreciated.*|END:INTERESTED|**|INTERESTED:Memberful Plans:UL Subscription (Annual) (53074)|**|ELSE:|*Thank you for reading. To become a member of UL and get more content and access to the community, you can become a member.*|END:INTERESTED|**|INTERESTED:Memberful Plans:UL Subscription (Annual) (53074)|* Follow via RSS Follow via RSS*|END:INTERESTED|**|INTERESTED:Memberful Plans:UL Subscription (Annual) (53074)|**|ELSE:|* Forward UL to friends Forward UL to friends Tweet about UL Tweet about UL Share UL with colleagues Share UL with colleagues*|END:INTERESTED|*Refer | Share | UnsubscribeUpdate Your PreferencesCopyright © 1999-2023 Daniel Miessler, All Rights Reserved.

hts Reserved.

No related posts.

 •  0 comments  •  flag
Share on Twitter
Published on May 02, 2023 16:14

April 28, 2023

Pre and Post-LLM Software

miessler-post-llm-software-ai

The recent RSA conference has left me concerned for the many companies in attendance. It seems we are at a turning point in software history, divided into two epochs: Pre-LLM and Post-LLM.

Pre-LLM software is limited in scope, only aware of its specific database and rigid schema. It operates through narrow, brittle queries. In contrast, Post-LLM software is based on understanding. It deals with knowledge and wisdom rather than data and information. Instead of requesting specific rows of data, you can simply ask for the insight itself.

Pre-LLM software is characterized by rigidity, narrowness, and self-centeredness, while Post-LLM software is flexible, cohesive, and powered by context. LLMs have access to a vast array of knowledge, with large models like GPT-4 containing a significant snapshot of human knowledge. The interface is language, allowing for better questions and insights through natural language.

The value of Pre-LLM software is capped due to its lack of context and inability to integrate it. Consider an Incident Response/SOC software that now has access to real-time knowledge of people’s locations, backgrounds, relationships, and cloud infrastructure details. This information is crucial for determining whether a connection is malicious or not.

LLMs excel in this environment, using context to connect the dots when answering questions. Instead of a Tier 1 analyst spending hours researching, we can simply ask the LLM if a connection is malicious and why. The LLM can provide a detailed response based on context, saving time and effort.

In this new world, software becomes a combination of context and questions. The more context and better questions we have, the more we can improve. This is the realm of LLMs, and the transformation is beginning now.

Initially, companies will use vector embedding databases to bring context to LLMs. Eventually, as prices decrease, we’ll see custom models built on top of massive, general models. Companies will train their own LLMs on their data, making them queryable and capable of not only identifying problems but also helping to fix them.

We are at a critical juncture in software history, with Spring 2023 marking the transition point. It’s time to prepare ourselves and our companies for the Post-LLM reality, where databases and queries are replaced by custom LLMs trained on our data, answering our questions, and taking action towards our desired outcomes.

Don’t wait. Start your transition now.

 •  0 comments  •  flag
Share on Twitter
Published on April 28, 2023 11:09

April 23, 2023

AI is a Gift to Transparency

ai-transparency-miessler-ai

GPT-based AI is about to give us unprecedented public transparency. Imagine being able to input a public figure’s name and instantly access everything they’ve ever said on any given topic. That’s cool, right? Well, it’s just the beginning.

We’re about to have “Me Too Search Engines”.

The true power lies in the ability to query a comprehensive dataset on an individual, about anything. For example, you could track the evolution of someone’s political views over their entire online presence, or assess the accuracy of their predictions throughout their career.

It’ll be used to attack people, research people’s contributions, and to construct remarkable narratives about their evolution as a person over time. But mostly—at least at first—it’ll be used to expose people.

> The growth of the Internet will slow drastically, as the flaw in ‘Metcalfe’s law’ becomes apparent: most people have nothing to say to each other! By 2005 or so, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s.

Paul Krugman, 1998

Consider influential figures like Paul Krugman, who has made numerous predictions from his prominent position at the New York Times. With AI, we could evaluate every prediction he’s made and rate their overall effectiveness in terms of confidence and accuracy.

The software architecture that will power this will be something like SPQA.

The real significance of this technology is not in any specific application, but rather in the unprecedented transparency it offers to any use case. AI enables us to view an entire body of information on a subject and ask targeted questions, providing unparalleled insight and understanding.

I’m going to add timestamps to keep myself honest.

This article serves as an intro to the concept and my own capture of interesting applications.

Transparency applicationsPrediction Evaluation: Look at every prediction a public figure has made and give them a score based on 1) how important the topic was, 2) how strong the claim was, 3) how confident they were they were right, and 4) how wrong or right they were.

Keep in mind this will be all publicly accessible accounts, anywhere, ever.

The Me Too Search Engine: Look at everything a public person has said, and find every instance of where they were racist, sexist, or otherwise outside the lines of what’s currently acceptable in society.

The ‘That’s Not Me Anymore’ Redemption Engine: A system that can read the same corpus of data as the Me Too Search Engine and come up with why this person shouldn’t be canceled into oblivion. It’ll look at good things they’ve done, progress over time as they got older, etc., and it’ll put together a corresponding set of public campaigns to counter the MTSE attacks.

The Corruption Detector: For every public government official, find every donation ever made, by every donor. Find every piece of legislation they voted on. Fully analyze all the different ways it would help different groups. Find all the votes they made on that legislation. Find the full list of donors and rate their biases and goals based on their track record as a donor. Finally, produce a corruption score for each government representative based on how often they voted based on the money or benefits they received.

The Hiring Helper: If you’re hiring for a teacher or church position maybe you don’t want people who have expressed certain views in the past. Perhaps unless they have properly evolved out of those views. Software will be developed that looks at the entire arc of a public person’s contributions and estimates their moral character. And this will be used to inform decisions about all sorts of things, including hiring. Will this be illegal? Maybe. Probably. In lots of places. But it’ll still be used.

Unsupervised Learning — Security, Tech, and AI in 10 minutes…Get a weekly breakdown of what's happening in security and tech—and why it matters.

The Match Maker: Sticking with hiring and extending to dating, what if everyone perfectly described what they were about, and what they wanted to do, and what they’d be happiest doing, and what they’d be best at doing. This would be helped by AI as well, of course. Then we would throw all of those people together in a giant salad bowl of millions of people and we’d ask, “Which of these people would make the best lifelong partners together? The best business partners? The best employers and employees? The best local acquaintances? AI will be really good at that because it has the wisdom of every psychology study, every dating expert, every business expert, etc.—all built into it. It’s the perfect match maker. All it needs is the right context to be provided for each person and entity, and for us to ask it the right questions. Hell, we can just describe our goals and it’ll ask the right questions itself.

The Risk Adjuster: Insurance has always been a context game. The more they know about you the better they can determine how much risk you pose to their bottom line. We already see insurance companies giving people discounts if they share their health data. Now imagine that it has your life history as well, and your social connection network, and a stream of your public writings. Now there will be a much larger split between safe people to insure and those that should pay super-high premiums or not get a policy at all. This applies to everything from e-bike insurance to insuring the cybersecurity readiness of a Fortune 500 company.

The New Detection/Response Model: What if you knew the current context of every host, application, dataset, and system in the company, along with the context of every user? The biggest part of detection and response is knowing all the things. This is what good IR people do. They track things down. They figure out what the systems are in the source and destination. They connect dots. Humans suck at that. Especially in massive and complex environments. Thousands of systems. Thousands of edge cases. You know what doesn’t suck at that? LLMs. LLMs are the big brains of connecting dots. It’s their favorite thing. So, it’s 2:47AM PST and Julie’s system just made a connection to fileshare Y. Is that malicious? Can you tell me from what I just wrote? No, you can’t. And neither can an IR specialist. They have to go research. An LLM with context on every user, and every system in the company won’t have to research. No, it’s not malicious. Because Julie said in Slack 3 hours ago that she’d be connecting once she landed home in Japan, where she also went to college, and where she’s now living since she moved 6 months ago. LLMs know that because they have the context for everyone at this 49,000 person company. The new IR employee, Rishi, didn’t know that about Julie. Rishi started yesterday.

Spoiler: I’m building this one right now.

The Security Program Builder: Like we talked about above, the problem with doing security in any complex environment is that you can’t 1) see, and 2) prioritize everything all at once. There is too much to hold in a human brain. Vendors. Software installs. Vulnerabilities. Requirements from stakeholders. Compliance and regulation. Attackers and their goals and techniques. It’s too much. So what we do is flail around with OKRs and Jira tickets, trying to do the best we can. That all goes away with SPQA-based transparency. Because now we don’t try to hold that in our brains anymore. Now we let language models hold that in their heads, and all we do is ask it questions. So we take everything we have—our mission, our goals, our problems, our systems, our assets, our teams, our people, our Slack messages, our meeting transcripts, etc.—and tell it our desires. We describe the type of program we want, who we want to do business with, what we consider good and bad, and we write that all in natural language. Then we ask it questions (Q). Or give it commands for action (A). Using this structure it’ll be able to write our strategy docs, create QSRs, find active attackers, prioritize remediation, patch systems, approve or deny vendors, approve or deny hires, etc. All by doing two things: 1) asking questions, 2) using context.Summary

These are just a few examples of what transparency can give us in this post-AI world of software. Before we had to force everything. We had to force the data into a forced schema. And then force queries against that database. It’s rigid. It’s fragile. And it’s so very limited.

Nobody should blindly take such answers and go, but rather use the answers to properly focus their decisions.

In this model we don’t force anything. We’re simply feeding context to something that understands things, and we’re asking questions. Who voted most with their donors? Who was most right in their predictions? Who’s my best match for a life partner? What’s the best investment for our business given my preferences? Which risk poses the most danger to our business given everything you know about our company?

Extraordinary things happen when you can hold the entire picture in your brain at once while making a decision. LLMs can do that. We can’t.

AI is about to move human problem-solving from alchemy to chemistry.

Notes Unfortunately, the Me Too Search Engine will also be paired with Me Too Extortion Monetization. Many businesses will pop up that find everything bad you’ve ever said, turn that into tweets, emails, letters, etc., to your boss and your loved ones, and then send that content to you, saying, “Here’s what I’m about to send. If you don’t want it to go out, send me X amount of money to this address.” I wasn’t going to write about this because it gives people ideas, but the bad guys will see the potential as soon as it’s possible within the tech. Thanks to someone in the UL community for coming up with the redemption arc idea after I explained the Me Too Search Engine. Great idea. I’ll be adding more Use Cases to the end of the list as I add them, with timestamps.
 •  0 comments  •  flag
Share on Twitter
Published on April 23, 2023 11:32

Daniel Miessler's Blog

Daniel Miessler
Daniel Miessler isn't a Goodreads Author (yet), but they do have a blog, so here are some recent posts imported from their feed.
Follow Daniel Miessler's blog with rss.