
You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It's Making the World a Weirder Place

"You look like a thing and I love you" is one of the best pickup lines ever... according to an artificial intelligence trained by scientist Janelle Shane, creator of the popular blog "AI Weirdness." She creates silly AIs that learn how to name paint colors, create the best recipes, and even flirt (badly) with humans--all to understand the technology that governs so much of our daily lives.

We rely on AI every day for recommendations, for translations, and to put cat ears on our selfie videos. We also trust AI with matters of life and death, on the road and in our hospitals. But how smart is AI really, and how does it solve problems, understand humans, and even drive self-driving cars?

Shane delivers the answers to every AI question you've ever asked, and some you definitely haven't--like, how can a computer design the perfect sandwich? What does robot-generated Harry Potter fan-fiction look like? And is the world's best Halloween costume really "Vampire Hog Bride"?

In this smart, often hilarious introduction to the most interesting science of our time, Shane shows how these programs learn, fail, and adapt--and how they reflect the best and worst of humanity. You Look Like a Thing and I Love You is the perfect book for anyone curious about what the robots in our lives are thinking.

272 pages, Hardcover

First published November 5, 2019


About the author

Janelle Shane

3 books · 73 followers
While moonlighting as a research scientist, Janelle Shane found fame documenting the often hilarious antics of AI algorithms.

Janelle Shane's humor blog, AIweirdness.com, looks at, as she tells it, "the strange side of artificial intelligence." Her upcoming book, You Look Like a Thing and I Love You: How AI Works and Why It's Making the World a Weirder Place, uses cartoons and humorous pop-culture experiments to look inside the minds of the algorithms that run our world, making artificial intelligence and machine learning both accessible and entertaining.

According to Shane, she has only once made a neural-network-written recipe -- and discovered that horseradish brownies are about as terrible as you might imagine.

Ratings & Reviews



Community Reviews

5 stars: 1,305 (39%)
4 stars: 1,410 (42%)
3 stars: 490 (14%)
2 stars: 80 (2%)
1 star: 12 (<1%)
Displaying 1 - 30 of 562 reviews
Sleepless Dreamer
842 reviews · 199 followers
November 1, 2019
Thanks to NetGalley and the publisher for providing me with a copy in return for my unbiased review!

This book provides an excellent summary of AI and how it works. It's written in a funny and easygoing style, with absolutely adorable sketches. Seriously, it's worth reading this for the AI doodles alone. They made me burst out laughing a few times.

Moreover, as someone with almost no knowledge about AI, I can say confidently that this book manages to be clear and understandable even if you don't know anything. It gets ideas across without being too technical or using too much professional jargon. The author uses hilarious metaphors and real-life examples to highlight important points, and it definitely makes it all clear and interesting!

AI is such a buzzword nowadays, so I enjoyed getting more facts about what AI can actually do. Realizing that AI is best at solving narrow problems, that AI develops through mistakes, that AI struggles when it sees the unknown, and that it doesn't have a long-term memory was all new to me.

I found the parts that talked about creativity and AI absolutely fascinating. It's very cool to think about how AI isn't bound by our human thoughts and therefore can go to places and connections we usually don't. Like I'd struggle to think about original cat names but an AI with enough input can just list thousands (and yeah, most won't be relevant but still).

I loved reading about AI shortcuts ("how do I gamble the best? Simply don't gamble"). Of course, it's concerning (like AI assuming there are no diseases because they are rare) but it's neat to think of how far this can go and how AI sees our world differently.

All in all, if you're up for a short, funny and informative book about AI, this is a good read for you.

What I'm Taking With Me
• The knowledge of an AI very much depends on its data bank, which makes me feel we need philosophers and other humanists involved when creating AI for real-life applications; you've got to have someone thinking about the social repercussions and the ethical implications of representation.
• Companies often claim to use AI but in fact use people because it's cheaper. Combining AI with human help works well, such as advertising bots that redirect complicated questions to human workers.
• Man, I'm just here waiting for AI to come up in a conversation so I can talk about this book.

First Week Uni Adventures
• My Peruvian roommate said that I'm so dramatic, I could be from Latin America. To be fair, she said this after she walked into the room and found me lying on my bed and saying, "math will be the death of me, I'm doomed". But really, math is freaking hard and I am scared.
• People from my degree are so smart and so serious and all of them have so many life goals and I'm just here like, "idk man, I'll probably go back to being a graphic designer after this".
• Econ is so confusing, what the heck
• If one more person tells me I seem like I'm from Tel Aviv, I'm going to cry.
• I need to stop signing up for things and I feel physically unable to because everything is so cool and interesting and I want to do it all.
• Comparative Politics is the best thing ever and I am in love with our professor and really, it's just a wild class.
• A guy in my PPE classes is convinced he saw me in a left wing propaganda video and like, I'd like to be confident enough to say that couldn't be me but I am scared it might be and that I don't know of it.
• My dorm floor is about 50% international students. It's fun because I was considering studying abroad and well, I feel like I'm getting the dorm room experience of studying abroad.
Blair
1,728 reviews · 4,078 followers
September 29, 2019
A fun, irreverent guide to the world of artificial intelligence from the woman behind the fantastic AI Weirdness blog. The book's central premise can be summed up in a sentence: artificial intelligence is more widespread than we think... but it's also pretty stupid. Hence the many funny, charming and even cute examples of machine-generated oddness throughout: recipes that call for 'liquid toe water'; a list of Halloween costumes that includes 'Panda Clam' and 'Failed Steampunk Spider' (I actually want to see that one); and the book's title, which was the result of an AI being tasked with devising chat-up lines. Shane's light-hearted style is very accessible – there are loads of laugh-out-loud anecdotes, but you'll learn quite a bit too.

I received an advance review copy of You Look Like a Thing and I Love You from the publisher through NetGalley.

Alex Sarll
5,610 reviews · 223 followers
September 19, 2019
I have to be very careful when I check Janelle Shane's AI Weirdness blog, because it has more than once left me laughing so much I couldn't breathe with its lists of an artificial intelligence's efforts to generate new entries in a given category – if you've somehow not seen any, I'd particularly recommend the paint colours and the names for guinea pigs. This book does draw from those lists, not least in the title – an AI-suggested chat-up line, and TBH one which would probably work on me. But more than the blog it tries to restrain itself to using them as examples, while educating the general reader in how AI works in the real world, as against the bolder projections of science fiction (a category which includes much mainstream media coverage of AI).

As Shane is at pains to remind us, for the moment most AI has approximately the cognitive capability of a worm, rather than Skynet, and when it goes wrong even the dangers are more likely to stem from stupidity than omniscience. That can be human stupidity too, though, whether in terms of machines replicating the biases of the lamentable species which created them, or being given a bad initial dataset from which to learn, or simply not having the nature of the question properly spelled out for them. Google researcher Alex Irpan* says he's found it helpful to picture AI as a demon deliberately trying to misinterpret any instructions it's given, which while amusing is also one of the more alarming moments in the book – see also the bit where the NPCs in Oblivion had to be toned down a bit because they were getting up to the sort of mischief only players were supposed to be able to do. More often, though, this results in robots which fall over because it's easier than walking, or conclude that the best way to stop a car crashing is to immobilise it, or just start claiming there are giraffes everywhere (a more common failure mode than you might have expected).
I didn't find the algorithmically created recipes significantly more nonsensical than the ones humans perpetrate, though, and given my feelings on sport, I love that one task simple enough for AI to handle reliably is match reports. I may or may not remember the difference between a Markov chain and a GAN by the end of next month (assuming, of course, that technological civilisation in Britain lasts beyond the end of next month anyway), but the general understanding of how to spot ludicrous overclaiming for the powers of AI, and why some tasks really don't suit it, will definitely remain. I also have a newfound respect for their determination to solve many problems either by strategic laziness, or rewriting the laws of the universe.

*Does nominative determinism include initials too? There's also a Karl Sims working on simulations.

(Netgalley ARC)
Anna
1,631 reviews · 600 followers
January 3, 2021
I've been teaching masters students about data mining and machine learning for a couple of years now, so the main points of 'You Look Like a Thing and I Love You' were familiar. I was really reading it for the entertaining examples, which were much more fun than my own. I liked the repeated cockroach factory motif and laughed several times at neural net-generated recipes, names, and general nonsense. Moreover, I learned much more about Markov chains and Generative Adversarial Networks than I knew before. Shane is a really engaging and fun writer, who makes complex concepts easy to understand. Most importantly, and I also tried to do this in my teaching, she demystifies narrow AI and deflates the hype around it. As neatly summarised at the end:

On the surface, AI will seem to understand more. It will be able to generate photorealistic scenes, maybe paint entire movie scenes with lush textures, maybe beat every computer game we can throw at it. But underneath that, it's all pattern matching. It only knows what it has seen and seen enough times to make sense of.

Thus the book spends many chapters explaining the mistakes that machine learning makes, which can be very different to the mistakes humans make. It often replicates and amplifies human biases as well, a very important point. I appreciated Shane's scepticism about fully automating cars, as driving involves responding to an incredibly wide range of different situations. It's hard to see how training data could ever cover them all adequately.

Personally, I think using the term Artificial Intelligence for machine learning is highly misleading. A so-called narrow AI may be able to optimise a very specific task, but it is not intelligent in any useful or meaningful sense. I grew up reading cyberpunk, in which AIs are godlike incomprehensible beings, not irritating bits of glitchy code that keep showing you ads for life insurance. AI has become an empty buzzword, as this book makes clear. Shane notes that many so-called AI startups never get machine learning to do the intended tasks, so humans end up doing it instead. There's even the phenomenon of bot farms, in which humans pretend to be automated algorithms on social media. We certainly live in a cyberpunk reality, just not quite the one that 80s and 90s sci-fi led me to expect. For one thing, I anticipated wearing sunglasses a lot more often.

Anyhow, the fact that I read this book in one sitting without intending to demonstrates that it's an accessible, amusing treatment of an important and interesting topic. If you enjoyed it and fancy a much more worrying book about the economic implications of machine learning, may I recommend The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power.
Shane
421 reviews · 8 followers
December 16, 2019
This book was fun and informative enough to be worth reading, but it was also a little thin and repetitive at times. I do feel like I know more about how algorithms work and how they can go wrong, so there we go - mission accomplished.
Wreade1872
674 reviews · 132 followers
June 19, 2020

Four stars plus a bonus star for relevancy. I mean, they need to teach this in schools. AI is like the evil genie/devil/monkey's paw that gives you exactly what you ask for but never what you want.

So many fascinating and disturbing examples of the types of AIs already being used around the world. This is by no means a book of doom and gloom, but most of my personal takeaways focused on AI's ability to amplify bias and its almost hysterically evil penchant for taking shortcuts.

This book delivers solid, useful (more like essential) info on the real state of AI and claims about AI, and does so in a straightforward, easy-to-follow manner with some humour to make the medicine go down. HIGHLY recommended.
Ann
84 reviews · 35 followers
February 29, 2020
This was fun and actually made me laugh out loud several times. I skimmed some sections as I'm not trying to become an AI scholar, just wanted to get a better idea of what those sneaky AIs are doing out there. The cartoons are great -- I would recommend this as a print book not an audiobook.
Jerzy
467 reviews · 104 followers
March 11, 2020
I crack up every time I read Shane's ridiculous tumblr posts about neural-net-generated paint colors and recipes and pie names. So I asked for this book for Christmas, expecting merely a few more silly jokes.

Instead, I got an incredibly well-written and thorough (but still funny!) overview of the realistic possibilities and limitations of what's currently being hyped as "Artificial Intelligence"*... It's not what I expected, but definitely wonderful.

I was looking for a resource like this to give to my students & colleagues who think the singularity is looming (it ain't!)... or who think that progress in Machine Learning and AI means that you don't need Statistics anymore (statistical thinking is exactly what you need to address AI's shortcomings, from biased data to inadequate testing/evaluation and beyond). Much of it was stuff I already know but phrased more effectively and humorously than I ever could---however, some of the particular foibles of neural nets were new to me, and I really enjoyed learning about them from Shane.

My college would like to be a leader in AI education among small liberal arts colleges. I can't imagine a better way to start than by requiring *each* of our students to read this book.
(I'd also like them to read O'Neil's Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy and Maciej Cegłowski's talks, but Shane somehow manages to be a gentler, friendlier intro while also going deeper into technical details. Genius.)

*(One quibble: the term "Artificial Intelligence" has a long history, and it used to mean the kind of rules-based programming that Shane explicitly *excludes* from her definition of AI. This is fine---she's clear in the book that she uses "AI" as shorthand for the methods that are being hyped in today's AI revival---just be aware that not everyone who works on AI would use the same definition.)

Notes to self / favorite parts:
E.M. Swift-Hook
Author · 47 books · 190 followers
November 29, 2019
Secrets Snowmen Won't Tell You

In a time when we are all being told about the terrors of sentient AI taking over the world, AIs inventing their own languages and having to be turned off, and other such terrifying prospects, discovering that an AI lists among its top ten favourite animals 'razorbill with wings hanging about 4 inches from one's face and a heart tattoo on a frog' is the perfect antidote!

This book is full of such hilarious AI misunderstandings, but it is also an excellent survey of what AI can and can't do - and what it might and might not be expected to do in the future. For someone like me, who has only the vaguest of sci-fi-show ideas about what AI really is, this is a great introduction to the topic. You will close the book feeling reassured but also very aware of the real dangers of allowing AI to make decisions. While its ability to spot anomalies in cells is already helping to make our lives better and safer by assisting with medical diagnoses, there are many areas in which it is less helpful.

When it can't tell the difference between a sheep and the field it is in, a puppy and the child who holds it, is it really a good idea to be thinking of allowing the military to use such tech to choose targets on a battlefield? When the AI is trained on a 'previous successful candidates' list, is it surprising that it throws out the resumes of women and those from ethnic minorities? When it is allowed to use postcode as a guide, is it really going to be an impartial aid to policing?

The book explores such ethical issues as it looks at how AI learns what it learns and what can be done to make it learn better. It offers an ultimately optimistic view of what AI has to offer and an absolutely hilarious insight into how it does what it does.

I loved this book. The title of this review is a quote from an AI in it, by the way. Since we all interact with it on a daily basis, everyone needs to understand the limits and strengths of AI. So I thoroughly recommended this demystifying book - especially to technophobes!
Nicky Drayden
Author · 37 books · 791 followers
November 14, 2019
I'm legit scared of murderbots now, so thanks? Great read. Fascinating insight into the best and worst AI has to offer.
Bandit
4,395 reviews · 441 followers
October 2, 2019
Well, first of all, You Look Like A Thing and I Love You is a pick-up line an AI came up with, and as far as pick-up lines go it's actually pretty good. And hilarious. Pretty good and hilarious is an apt way to describe this entire book, actually. Especially if, like me, you're interested in AI and find autocorrect hysterical. Because, as it turns out, advancements in robotics, specifically robotic intelligence, are nowhere near as... well, as advanced as you might think. Or hope. Which, personally, I find very sad; I'm always hoping and wishing for some artificially intelligent company, since the alternative leaves so much to be desired. But no, feet are being dragged and there are still so many limitations. To be fair, we can get AI to do narrow, limited tasks pretty well. But independence of thinking on the Turing-test-passing level is still but a fantasy, mostly.

This book started off as a blog, and I'm so glad it was turned into a book, because I don't read blogs, but a book with this title, description and cover is certain to grab my attention. And so chapter by chapter the author subjects AI to test after test to produce recipes, pick-up lines and dessert flavors. The results are laugh-out-loud funny; I don't think I've ever laughed that much while reading a work of nonfiction. The robots are pretty adorable, much like the author's accompanying drawings. And it isn't just fun and games either: you do get a fair amount of information and science behind the AI development, which I found very interesting. Robots, much like us, can be quirky, random and have a penchant for shortcuts. They are just not quite ready yet for the complexity of tasks science fiction has them perform. That's pretty much the gist of the book. It's the sort of thing where you can read the final summarizing chapter and get it, but if you read the entire thing, you get the lovely drawings and the comedy, so it's totally worth it. Plus it's a very quick read.
Thoroughly entertaining book, albeit sad on a personal level for someone who can't wait for a sci-fi future with super-intelligent robots. Even if they might take over the world. Recommended. Thanks, Netgalley.
Emily
1,697 reviews · 37 followers
January 12, 2020
I would have loved to have this on my kindle, because there was plenty of highlight-worthy material: lots of interesting facts to remember and lots of hilarious AI-generated lists.
I’m not sure why I developed such a fascination with AI, but it’s probably Hannah Fry’s fault. Her delightful Hello, World certainly encouraged it. People who enjoyed that book would enjoy this one too.
Janelle Shane based it on her blog aiweirdness.com, and sections of it made me laugh so hard a coworker threatened to ban it from the break room.
It’s an informative book too, and I learned a lot about how neural networks process datasets to generate their own original—and often super weird—output. The title of the book is from a list of pick-up lines an AI generated after the author trained it on a large dataset of actual pick-up lines.
It was surprising to see what AI came up with in the early stages of learning, such as the lines of k’s that it thought were knock-knock jokes in a different training scenario.
The author’s explanations got a little mathy at times, but for the most part, I understood what she was saying, and I understood more than I ever have the limits to what AI can do. As the author said in her last chapter, “Will it get smart enough to understand us and our world as another human does—or even to surpass us? Probably not in our lifetimes. For the foreseeable future, the danger will not be that AI is too smart but that it’s not smart enough...it’s all pattern matching. It only knows what it has seen and seen enough times to make sense of.”

Chrystopher’s Archive
529 reviews · 32 followers
December 1, 2019
"For the foreseeable future, the danger will not be that AI is too smart but that it’s not smart enough."

This was a really fun read. It's not the overly optimistic tech-utopia book I was afraid it would be, but it does have a lot of optimism in it. I also really liked how thoroughly it covered the problem of bias in tech and how that translates to AI.

The material itself was fascinating and often hilarious, and if I have a complaint it's that a lot of the information is repeated in what seemed like needless detail.

Would definitely recommend.
Merc Rustad
Author · 57 books · 78 followers
November 26, 2019
A delightful, hilarious, fascinating look at what AI can (and can't) do; the illustrations are like icing, so sweet and perfect. I loved every page of this book! :D A readable, cheerful voice and entertaining anecdotes about the weirdness of AI and machine learning makes this a fast-paced, completely absorbing read. It's wonderful and highly recommended!
Andrew Breza
329 reviews · 20 followers
January 4, 2020
You Look Like a Thing and I Love You is to deep learning what Nate Silver's The Signal and the Noise is to predictive modeling: a must-read for everybody with even a passing interest in the topic. I run a data science department and spend much of my time in the weeds of building models, cleaning data, and attending meetings. It's easy to lose the big picture. This book offers an urgently needed high-level view of the field of AI. Beginners and experts alike can benefit from Shane's insight and humor.
Shannon (That's So Poe)
921 reviews · 104 followers
June 23, 2021
This book does something rare and manages to be a great read for both people who know a ton about AI, and people who know nothing. I have a video review where I talk about all the reasons why I loved this book, but the main reasons are that it's hilarious to read about all the AI strange behavior, and that it does an excellent job of explaining exactly how AI works and what its limitations are to a non-technical audience. If you have any interest in AI at all, I strongly recommend this one!
Aaron Mikulsky
Author · 2 books · 20 followers
November 8, 2019
This book was a quick read without a lot of meat. I’ve captured the nuggets below that highlight my takeaways from the book. I would not spend much time on the full book, but it’s OK if you know nothing about AI and ML.

More and more of our lives are being governed by algorithms.
Sometimes AI is only a small part of a program while the rest is rules-based scripting. Other programs start out AI-powered but switch control over to humans if things get tough (pseudo-AI) - e.g., customer-service chat handing off from bot to humans, or self-driving cars.
“People often sell AI as more capable than it actually is.”
Flawed data will throw an AI for a loop or send it off in the wrong direction. Since in many cases our example data is the problem we’re giving AI to solve, it’s no wonder that bad data leads to a bad solution.
Machine Learning (ML) is a part of AI; it includes deep learning, neural networks, Markov chains, random forests, etc.
The difference between ML algorithms and traditional rules-based programs is ML figures out the rules for itself via trial and error. As AI tries to reach the goals its programmers specify, it can discover new rules and correlations. All it needs is a goal and data set to learn from.
Algorithms are good at finding trends in huge data sets but not good with nuance. ML algorithms are just lines of computer code.
Researchers are working on designing AIs that can master a topic with fewer examples (i.e., one-shot learning), but for now a ton of training data is required.
While a human driver may only need to accumulate a few hundred hours of driving experience, Waymo’s cars have collected data from driving more than 6M road miles plus 5B more miles driven in simulation.
“Many AIs learn by copying humans. The question they’re answering is not ‘What is the best solution?’ But ‘What would the humans have done?’”
“It’s often not that easy to tell when AIs make mistakes. Since we don’t write their rules, they come up with their own...Instead, the AIs make complex interdependent adjustments to their own internal structures.”

“A monkey writing randomly on a typewriter for an infinite amount of time will eventually produce the entire works of Shakespeare.”

AI to generate new recipes - called for handfuls of broken glass.
AI to generate pickup lines - the title of the book.
AI to generate ice cream flavors - “Beet Bourbon” and “Praline Cheddar Swirl.”
AI shapes our online experience and determines the ads we see. AI helps with hyperpersonalization for products, music and movie recommendations.
Commercial algorithms write up hyperlocal articles about election results, sports scores, and recent home sales. The algorithm, Heliograf, developed by the Washington Post, turns sports stats into news articles. This journalism algorithm translates individual lines of a spreadsheet into sentences in a formulaic sports story; it works because it can write each sentence more or less independently.
Google Translate is a language-translating neural network.
ANNs = Artificial Neural Networks, aka cybernetics or connectionism. They’re loosely modeled after the way the brain works. In the 1950s, the goal was to test theories about how the brain works. The power of a neural network lies in how its cells are connected. The human brain is a biological neural network made of some 86 billion neurons.
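The cells those notes mention can be sketched as a single artificial neuron: a weighted sum of inputs pushed through a squashing function. This is a toy illustration of the basic building block, not any particular library's implementation; the weights and inputs are made up.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'neuron': weighted sum of inputs, then a sigmoid
    squashing function that maps the result into the range (0, 1)."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Example: inputs [1.0, 0.5] with weights [2.0, -1.0] give a weighted
# sum of 1.5, which the sigmoid maps to about 0.82.
print(round(neuron([1.0, 0.5], [2.0, -1.0], 0.0), 2))  # prints: 0.82
```

A network is just many of these wired together, with the weights adjusted by trial and error during training.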
Markov chains, like Recurrent Neural Networks (RNNs), look at what happened in the past and predict what’s most likely to happen next. Markov chains are used for the autocomplete function in smartphones. Google’s Android keyboard app, GBoard, would suggest “funeral” when you typed “I’m going to my grandma’s.”
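The predict-the-next-word idea behind autocomplete can be sketched as a toy word-level Markov chain. This is an invented miniature, not GBoard's actual code; the corpus and function names are made up for illustration.

```python
from collections import defaultdict

# Tiny training text; a real autocomplete model trains on vastly more.
corpus = ("i am going to my grandma's house "
          "i am going to the store "
          "i am happy to be here").split()

# Count which word follows which (the Markov chain's transition table).
transitions = defaultdict(lambda: defaultdict(int))
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current][nxt] += 1

def predict(word):
    """Return the most frequent follower of `word` in the training text."""
    followers = transitions[word]
    return max(followers, key=followers.get) if followers else None

print(predict("am"))  # prints: going  ('going' follows 'am' twice, 'happy' once)
```

The chain only knows counts of what it has seen, which is exactly why its suggestions can be both plausible and absurd.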
The random forest algorithm is a type of machine learning algorithm frequently used for prediction and classification. It’s made of individual decision trees - flowcharts that lead to an outcome based on the information we have - and it uses trial and error to configure itself. “If all the tiny trees in the forest pool their decisions and vote on the final outcome, they will be much more accurate than any individual tree.”
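The pool-and-vote idea can be sketched with a toy "forest" of one-rule stumps in plain Python. This is a deliberately simplified stand-in for real decision trees; the dataset, training loop, and names are invented for illustration.

```python
import random

random.seed(1)

# Toy data: label a point 1 if x + y > 1, else 0.
points = [(random.random(), random.random()) for _ in range(200)]
train = [(p, 1 if p[0] + p[1] > 1 else 0) for p in points]

def train_stump(sample):
    """One 'tiny tree': a single threshold on one randomly chosen axis,
    tuned by trial and error on a resampled slice of the data."""
    axis = random.randrange(2)
    best = None
    for _ in range(20):
        t = random.random()
        correct = sum((p[axis] > t) == (y == 1) for p, y in sample)
        if best is None or correct > best[0]:
            best = (correct, t)
    t = best[1]
    return lambda p: 1 if p[axis] > t else 0

# The "forest": 25 stumps, each trained on a bootstrap sample.
forest = [train_stump(random.choices(train, k=100)) for _ in range(25)]

def forest_predict(p):
    """Majority vote across all stumps."""
    votes = sum(tree(p) for tree in forest)
    return 1 if votes > len(forest) / 2 else 0

print(forest_predict((0.9, 0.9)), forest_predict((0.1, 0.1)))
```

Each stump alone is mediocre, but the pooled vote is reliably right on clear-cut points, which is the quoted intuition in miniature.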
Companies use AI-powered resume scanners to decide which candidates to interview; AI also decides who should be approved for a loan, recognizes voice commands, applies video filters, auto-tags faces in photos, and powers self-driving cars. Volvo, while testing AI in Australia, discovered it was confused by kangaroos, as it had never before encountered anything that hopped.
AI is making decisions about who should get parole and powering surveillance.
AI’s consistency does not mean it’s unbiased. An algorithm can be consistently unfair, especially if it learned by copying humans, as many of them do.
Deepfakes allow people to swap one person’s head and/or body for another, even in video. They have the potential for creating fake but damaging videos - like realistic yet faked videos of a politician saying something inflammatory.
AI is pointing people to more polarizing content on YouTube.
Microsoft’s image recognition product tags sheep in pictures that do not contain sheep. It tended to see sheep in landscapes that had lush green fields - whether or not the sheep were actually there. The AI had been looking at the wrong thing.
At Stanford, the team trained AI to tell the difference between pictures of healthy skin and skin cancer. They discovered they had inadvertently trained a ruler detector instead. AI found it easier to look for the presence of a ruler in the picture.
AI is analyzing medical images, counting platelets or examining tissue samples for abnormal cells - each of these tasks are simple, consistent, and self-contained.
The Turing test (as Alan Turing proposed in the 1950s) has been a famous benchmark for the intelligence level of a computer program.

Chatbots will struggle if the topic is too broad. In August 2015, Facebook launched an AI-powered chatbot called M that was meant to make hotel reservations, book theater tickets, and recommend restaurants. Years later, Facebook found that M still needed too much human help and shut down the service in January 2018.
ANI = Artificial Narrow Intelligence
AGI = Artificial General Intelligence
GANs = Generative Adversarial Networks = a sub-variety of neural networks (introduced by Ian Goodfellow in 2014). A GAN is 2 algorithms in one - 2 adversaries that learn by testing each other (one the generator, the other the discriminator). The generator produces images and the discriminator classifies them; through trial and error, both get better. Researchers have designed a GAN to produce abstract art, managing to straddle the line between conformity and innovation. Microsoft’s Seeing AI app is designed for people with vision impairments. Artist Gregory Chatonsky used 3 ML algorithms to generate paintings for a project called It’s Not Really You.
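The generator-versus-discriminator loop can be caricatured with a one-parameter toy: no real neural networks, just a "generator" that learns the mean of a target distribution by repeatedly trying to fool a one-threshold "discriminator". All numbers and names are invented for illustration.

```python
import random

random.seed(0)

REAL_MEAN = 5.0  # the "real data" the generator must learn to imitate

gen_mean = 0.0   # generator's single parameter
threshold = 0.0  # discriminator's single parameter

for step in range(2000):
    real = [random.gauss(REAL_MEAN, 1.0) for _ in range(32)]
    fake = [random.gauss(gen_mean, 1.0) for _ in range(32)]

    # Discriminator "update": put its decision threshold halfway between
    # the average real sample and the average fake sample it just saw.
    threshold = (sum(real) / len(real) + sum(fake) / len(fake)) / 2

    # Generator "update": nudge its mean toward the threshold region,
    # where its fakes are hardest to tell apart from the real thing.
    gen_mean += 0.01 if gen_mean < threshold else -0.01

print(round(gen_mean, 1))  # converges close to REAL_MEAN
```

As in a real GAN, each side's improvement forces the other to improve, until the fakes statistically resemble the real data.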

If you don’t have all the data you need on hand, you can crowdsource it - Amazon Mechanical Turk, for example, pays people to generate and label data.

ML algorithms don’t have context for the problems we’re trying to solve; they don’t know what’s important and what to ignore. Google trained an algorithm called BigGAN that had no way of distinguishing an object’s surroundings from the object itself.
Security expert Melissa Elliott suggested the term giraffing for the phenomenon of AI overreporting relatively rare sights.
Bias in the dataset can skew the AI’s responses. Humans asking questions about an image tend to ask questions to which the answer is yes. An algorithm trained on a biased dataset found that answering yes to any question beginning with “Do you see a...” would yield 87% accuracy.
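The yes-bias shortcut is easy to simulate (the question text below is a made-up stand-in): if 87% of "Do you see a..." questions in the training data have answer "yes", a model that never looks at any image and always answers "yes" still scores about 87%.

```python
import random

random.seed(0)

# Simulated visual-question-answering data: 87% of answers are "yes".
dataset = [("Do you see a dog?", "yes" if random.random() < 0.87 else "no")
           for _ in range(10_000)]

# A "model" that ignores the image and the question entirely.
always_yes = lambda question: "yes"

accuracy = sum(always_yes(q) == a for q, a in dataset) / len(dataset)
print(f"{accuracy:.0%}")  # about 87%, without ever examining an image
```

This is why raw accuracy on a biased benchmark can badly overstate what a model has actually learned.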
To maximize profit from betting on horse racing, a neural network determined the best strategy was to place zero bets.
Trying to evolve a robot to not run into walls, the AI algorithm evolved to not move, and thus didn’t hit walls.
It’s really tricky to come up with a goal that the AI isn’t going to accidentally misinterpret. The programmer still has to make sure that AI has actually solved the correct problem.
Why are AIs so prone to solving the wrong problems?
1) They develop their own ways of solving problems, and
2) They lack contextual knowledge.
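The horse-racing result above can be reproduced in a few lines (the odds here are made up for illustration): if every available bet has negative expected value, an optimizer told only to maximize profit "solves" the task by never betting at all.

```python
import random

random.seed(0)

def simulate(bet_size, races=100_000):
    """Total profit from betting `bet_size` on every race."""
    profit = 0.0
    for _ in range(races):
        # 3:1 payout but only a 20% chance of winning: a losing proposition.
        profit += bet_size * (3 if random.random() < 0.20 else -1)
    return profit

# Evaluate a few candidate strategies, including "don't play".
strategies = {bet: simulate(bet) for bet in (0, 1, 5, 10)}
best = max(strategies, key=strategies.get)
print("best bet size:", best)  # 0 - the "optimal" strategy is not to play
```

The objective was satisfied to the letter; it just wasn't the objective the designers had in mind.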

“It’s surprisingly common to develop a sophisticated ML algorithm that does absolutely nothing.”

Dolphin trainers learned that they could get dolphins to help keep their tanks clean by training them to fetch trash and bring it to their keepers in exchange for fish. Some dolphins learned to game the exchange rate - tearing trash into small pieces and trading each piece for a fish.

During the 2017 CA wildfires, navigation apps directed cars toward neighborhoods that were on fire, since there was less traffic there.

Google Flu Trends made headlines in the early 2010s for its ability to anticipate flu outbreaks by tracking how often people searched for information on flu symptoms. It ended up vastly overestimating the number of flu cases (overfitting).
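A toy sketch, loosely in the spirit of the Flu Trends story (all data here is random, none of it Google's): given enough candidate "search terms", some will correlate with flu counts by pure chance during the training window - and that chance correlation evaporates on new data.

```python
import random

random.seed(2)

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

weeks = 30
flu_train = [random.random() for _ in range(weeks)]   # "flu cases", weeks 1-30
flu_test = [random.random() for _ in range(weeks)]    # weeks 31-60

# 200 candidate "search terms", none actually related to flu at all.
terms = [[random.random() for _ in range(2 * weeks)] for _ in range(200)]

# Pick whichever term correlated best during the training window.
best = max(terms, key=lambda t: corr(t[:weeks], flu_train))
print(round(corr(best[:weeks], flu_train), 2))  # looks impressive in training
print(round(corr(best[weeks:], flu_test), 2))   # evaporates on new data
```

Selecting the best of many noisy predictors is itself a form of overfitting: the winner won partly by luck, and luck doesn't generalize.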

The algorithm COMPAS (sold by Northpointe) was widely used across the US to decide whether to recommend prisoners for parole, predicting whether released prisoners were likely to be arrested again. Unfortunately, the data the COMPAS algorithm learned from is the result of hundreds of years of systematic racial bias in the US justice system. In the US, black people are much more likely to be arrested for crimes than white people, even though they commit crimes at a similar rate.

Amazon discontinued use of an AI tool for screening job candidates after discovering it was discriminating against women. If an algorithm is trained on the way human hiring managers have selected or ranked resumes in the past, it’s very likely to pick up their bias. Since humans tend to be biased, the algorithms that learn from them will also tend to be biased.

Predictive policing looks at police records and tries to predict where and when crimes will be recorded in the future. Police departments then send more officers to those neighborhoods - and more crime gets detected there than in a lightly policed but equally crime-ridden neighborhood, simply because there are more police around. This feedback loop can lead to overpolicing.
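The feedback loop can be simulated with a deliberately simplistic model (all numbers invented): two neighborhoods with identical true crime, a tiny initial imbalance in patrols, recorded crime proportional to police presence, and a "predictive" step that shifts a patrol toward wherever records are higher.

```python
true_crime = [100, 100]   # the neighborhoods are actually identical
patrols = [10, 11]        # one extra patrol car, purely by chance

for period in range(9):
    # Crimes get recorded only when police are around to record them
    # (a crude assumption: detection scales linearly with patrols).
    recorded = [true_crime[i] * patrols[i] / 100 for i in range(2)]
    # "Predictive" reallocation: move a patrol to the "hotter" neighborhood.
    hot = recorded.index(max(recorded))
    patrols[hot] += 1
    patrols[1 - hot] -= 1

print(recorded)  # the recorded-crime gap keeps widening
```

By the end, nearly all patrols sit in neighborhood 1 and its records dwarf neighborhood 0's - even though the underlying crime never differed.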

Treating a decision as impartial just because it came from an AI is known as math-washing or bias laundering. The bias is still there - the AI copied it from its training data - but now it’s wrapped in a layer of hard-to-interpret AI behavior. Some companies have begun to offer bias screening as a service; one bias-checking program is Themis. One way of removing bias from an algorithm is preprocessing: editing the training data until it no longer shows the bias, or selectively leaving some applications out of the training data altogether.
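Here's a crude sketch of what preprocessing can mean in practice (the hiring setup and numbers are invented): in the historical data, equally skilled women were held to a higher bar. A model fit to the raw labels copies that double standard; editing the labels before training removes it.

```python
import random

random.seed(3)

def applicant():
    gender = random.choice(["m", "f"])
    skill = random.random()
    # Biased historical decision: women needed a higher skill bar.
    hired = skill > (0.5 if gender == "m" else 0.7)
    return gender, skill, hired

history = [applicant() for _ in range(20_000)]

def fit_threshold(data, gender):
    # Learn, per gender, the skill cutoff implied by the hiring labels.
    return min(s for g, s, h in data if g == gender and h)

raw_gap = fit_threshold(history, "f") - fit_threshold(history, "m")

# Preprocess: relabel the data so both genders face the same (lower) bar.
fixed = [(g, s, s > 0.5) for g, s, h in history]
fixed_gap = fit_threshold(fixed, "f") - fit_threshold(fixed, "m")

print(round(raw_gap, 2), round(fixed_gap, 2))  # large gap, then roughly zero
```

The model trained on raw history faithfully reproduces the ~0.2 skill-bar gap; trained on the edited data, the gap vanishes. Real preprocessing is subtler, but the principle is the same: fix the data the model learns from, not just the model.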

If you don’t go to the time and expense of creating your own proprietary dataset, hackers may design adversarial attacks that fool your AI, and people may poison publicly available datasets - for instance, by contributing tainted samples of malware used to train anti-malware AI. In a similar spirit, some advertisers have put fake specks of “dust” on their banner ads, hoping people will accidentally click the ads while trying to brush the specks off their touch screens.
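A toy data-poisoning sketch (the detector, features, and numbers are all invented for illustration): a nearest-centroid "malware" classifier trained on a public pool of benign samples. An attacker who can contribute to that pool drags the benign centroid toward malicious-looking files, so real malware slips past.

```python
def centroid(points):
    return [sum(c) / len(points) for c in zip(*points)]

def classify(x, benign, malicious):
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
    return "benign" if dist(centroid(benign)) < dist(centroid(malicious)) else "malicious"

# Two made-up features per file, e.g. (entropy, suspicious-API calls), scaled 0..1.
benign = [[0.2, 0.1], [0.3, 0.2], [0.1, 0.15]]
malicious = [[0.9, 0.8], [0.8, 0.9], [0.85, 0.95]]
sample = [0.7, 0.7]  # a genuinely malicious file

print(classify(sample, benign, malicious))       # "malicious"

# The attacker uploads files that look benign enough to be accepted into
# the public training pool, but sit close to real malware in feature space.
poisoned = benign + [[0.75, 0.75]] * 10
print(classify(sample, poisoned, malicious))     # now "benign"
```

Ten poisoned contributions are enough to move the benign centroid so far that the malicious sample lands on the wrong side - which is why curating who can contribute to a training set matters.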

The infamous Microsoft Tay chatbot - an ML-based Twitter bot designed to learn from the users who tweeted at it - learned to spew hate speech in no time.

“In 2019, 40% of European start-ups classified in the AI category didn’t use any AI at all.”

A lot of human engineering goes into the data set. A human has to choose the subalgorithms and set them up so they can learn together.

“Practical ML ends up being a bit of a hybrid between rules-based programming, in which a human tells a computer step-by-step how to solve a problem, and open-ended ML, in which an algorithm has to figure everything out.” Sometimes the programmer researches the problem and discovers that they now understand it so well that they no longer need to use machine learning at all. We just sometimes don’t know what the best approach to a problem is. ML also needs humans for maintenance and oversight.
Profile Image for Kam Yung Soh.
676 reviews31 followers
December 26, 2019
An excellent and hilarious book about the state of actual AI technology in the world (as opposed to the AIs you may see in popular media) and why they can do weird things. As it turns out, the weirdness can be due to the data used to train the AI, in how the AI processes the data and in how we tell the AI to solve a problem for us. You will get a good understanding of how AIs actually work and what they can (and can't) do and also how AIs can actually help humans do their jobs (or entertain us with hilarious failures).

Chapter one looks at what kinds of AI are featured here. While the general public may have some ideas about AI from the popular media, the kinds of AI looked at here are actual ones in use: machine-based systems that accept data, apply machine learning algorithms to it, and produce an output. The chapter briefly shows how such AIs are trained on data and what happens as they gradually learn what kind of output is 'acceptable'. While humans may initially specify what to produce based on the provided input, such AIs may learn and process the data in unexpected ways, leading to weird and unexpected output.

Chapter two looks at what AI systems are now doing. From running a cockroach farm and providing personalised product recommendations to writing news reports and searching scientific datasets, AI has its uses. The flip side is AI being used to produce things like deepfakes (swapping people's heads or making people appear to do things they didn't). In general, AIs are currently better at very specific tasks (like handling initial customer-support requests) and not very good at more general tasks (like creating cooking recipes or riding a bike). One reason is that such general tasks usually require some kind of long-term memory to remember what has been done (as when creating long-form essays), but current AI systems lack this memory capacity. Also, some general tasks create situations that AIs may never encounter in their training data (like driving safely upon seeing unexpected obstacles).

Chapter three looks at how AIs actually learn by looking at the various types of AI systems: Neural Networks, Markov Chains, Random Forests, Evolutionary Algorithms and Generative Adversarial Networks. Examples of such AIs are given, like the autocorrect system used in smartphones, and their advantages and disadvantages in handling input data, processing it and producing the expected (or unexpected) output.

Chapter four looks at why AIs don't appear to work despite their trying to produce acceptable output. There may be several reasons for this: the problem the AI is being asked to solve may be too broad (like making cat pictures after being trained on pictures of people). Or the amount of data provided to the AI may be too little for the task required. The input data may also be too 'messy', full of information not actually required by the task or containing mistakes that confuse the AI.

Chapter five shows how AIs complete their tasks; only it's not the task the designers expected. There are several reasons for this. One is that the AI does the task, only not in the expected way - for example, moving a robot backwards because it was told not to activate its bumper sensors (which are located at the front). AIs also usually learn in a simulated environment (to speed up learning), which may lead them to exploit 'glitches' in the simulation to solve a problem (like 'gaining' energy to jump high by making multiple tiny steps first). Other times, AIs solve problems in unexpected ways because the expected learning behaviour is too hard - like growing a long leg so as to fall from point A to point B instead of learning to walk (the expected behaviour), since walking is hard.

Chapter six covers more examples of AIs completing tasks in unexpected ways. The main reason for this is that AIs work in simulated environments, and the solutions they come up with may only work there. Examples include making optical lenses that are very thick, exploiting mathematical rounding errors or even bugs in the simulation. Such solutions will, of course, not work in the real world.

Chapter seven looks at 'shortcuts' that AIs may take to get to a solution. This is usually due to unexpected features in the training data that the AIs fixate on. For example, an AI trained to recognise a certain type of fish from images was found to be focusing on fingers in the images instead, because in the training data fingers were always present holding the fish to be recognised. Biases (sometimes hidden) in the input data can also cause the AI to provide biased solutions; for example, recommending hiring only men because in the input data, men were the ones usually hired. Since how the AI comes to a decision is not usually examined, such biased decisions may instead become the norm based on the premise that 'the machine made the recommendation and the machine cannot be biased', not recognising that bias in the input data may be the cause of the problem.

Chapter eight considers whether AIs work like the human brain and, in general, they do not. AIs have problems remembering things in the long term. For example, passages written by AI tend to meander from topic to topic, generating output that, taken as a whole, is inconsistent. AIs are also prone to 'adversarial attacks' due to their tendency to put too much weight on certain inputs. Examples include modifying an image to mislead an AI recognition program into thinking a submarine is a bonnet. Or gradually modifying an image of a dog into that of skiers, yet leaving the AI to think it is still looking at a picture of a dog.

Chapter nine looks at the problem of distinguishing between an AI and a person doing a job. This is partially due to hyped-up articles that proclaim that AI will be doing certain jobs instead of humans (for example, driving). The author provides several ways to probe whether certain output has been produced by an AI or, possibly, a human pretending to be an AI.

Chapter ten looks at the future and shows that the current way forward is a world where both AIs and humans have decision-making jobs to do. AIs can be trained on data, but it is up to humans to determine whether the results are valid and to modify or update the input data so as to let the AI do a better job. For now, the future is one where both AIs and humans coexist.
Profile Image for Rhys.
54 reviews5 followers
February 15, 2020
This goes right onto my Favorites shelf.

AIs and robots are a huge favorite topic of mine (both fictional ones and real life ones), so it probably isn't a huge surprise that I enjoyed this book so much (and especially when considering I follow Janelle Shane's blog already and have notifications turned on for updates). But You Look Like a Thing and I Love You is just such a hilarious and yet understandable exploration of modern day artificial intelligence, that I feel like you don't have to be weirdly enamored with artificial life like me to enjoy it (though it doesn't hurt to be).

The explanations are written clearly and Shane uses delightful hypothetical or real examples to clarify any unfamiliar concepts. The author also includes adorably wonderful little doodles (like the one on the cover) to accompany explanations, relevant real life stories, and just to be very fun and cute.

After reading this, I feel like I understand AIs and neural networks better than before, and I also had a great time laughing at neural network generated Buzzfeed article titles (for example, 27 christmas ornaments every college twentysomething knows and 24 times australia was the absolute worst). It was also helpful to learn more about the limitations of AIs in modern times (especially when many companies like to oversell what they can do), and ways to spot AIs from humans and vice versa. A huge portion of this book really does cover the limitations of our current technology, which I often think gets overlooked; Shane takes the time to point out that AIs can pick up on our intentional and unintentional biases as well, which can affect what output we get back from what many consider an impartial source.

It may get a bit repetitive at times for such a slim book as some other reviewers have mentioned, but I really didn't mind too much.

Overall, I just thoroughly enjoyed this book. If you have any interest in artificial intelligence, I would recommend this. You can get a preview of some of what you'll see in the book on her twitter or on her blog, AI Weirdness.

(As a side note, it was great to have a book unrelated to gender be so great about acknowledging and respecting all genders :). )
Profile Image for Samantha.
4 reviews
November 25, 2019
AI explained through a set of logical and entertaining examples. Sometimes the examples even stray towards the absurd, in the best way. Janelle Shane puts together a comprehensive look at what AI is, how it works, what it's capable of doing, and most importantly - what it's NOT capable of doing. A must read for anyone who is interested (or concerned) about how AI affects our world.
Profile Image for Emily.
36 reviews3 followers
December 7, 2019
So fun, with the same irreverent tone as her blog, but while also covering an insane amount of information about AI and its capabilities (and failures). I listened to the audiobook, which was really nicely narrated. Hearing the AI-generated knock-knock jokes read aloud was a huge highlight.
Profile Image for Kara Babcock.
1,892 reviews1,209 followers
June 9, 2022
Machine learning is a hot topic. You have probably seen those social media posts that start with, “I made an AI watch …” and then proceed to share a script “written” by the AI? Those are almost entirely fake, of course—as Janelle Shane explains in You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It's Making the World a Weirder Place, artificial intelligence is just not there yet. Not only are we nowhere close to Skynet … our AIs tend to have about as much brainpower as a worm.

Shane’s book is a short but comprehensive dive into what machine learning actually is, how it works, and why it so often goes wrong—by which she means, it doesn’t actually solve the problem it was meant to solve, or it solves the problem but in a really stupid way. Indeed, this is probably the main takeaway of the book: AIs are not smart. Not at all. And on the rare occasion when we do manage to make a smart AI, like Google’s AlphaGo or IBM’s Watson, it is incredibly specialized (and probably more expensive than it is worth).

Each chapter examines machine learning from a different angle. Some chapters cover problems—such as how difficult it is to assemble a good data set, how bias can creep into the training data, etc. Other chapters spend more time exploring the different approaches researchers and programmers take to machine learning, such as evolutionary algorithms. While there was plenty in this book I was aware of (particularly along the bias dimension), there was still a lot I either didn’t know or learned more about from the book.

Also, Shane is funny. Her writing is laced with a graceful wit, and it pairs perfectly with the examples of text she has had various AIs produce, from ersatz and confused recipes to names for ice cream flavours and verses of songs. I literally laughed out loud reading this book—did not expect that from a book on this subject, but I couldn’t resist how truly hilarious some of Shane’s results sound. This drives home the fact that AI is just not there yet, for most of our purposes, more so than any dry and technical explanation ever might.

Also also, shout out to Shane’s wonderful sketches throughout the book.

Despite all of the above, I would hesitate to recommend this to just anyone. While reading this book, I thought about my bestie, who has pivoted into freelance copywriting. She does a lot of copywriting for tech or tech-adjacent companies, so it behoves her to read more about the field, including AI. Nevertheless, I stopped short at recommending You Look Like a Thing and I Love You to her. Why? Simply put, the book lacks a human-interest through-line.

To be fair to Shane, it doesn’t need one. It is structured fine as it is (though I found she repeats some of her explanations of concepts at times). But my bestie prefers non-fiction that tells its story around human characters and experiences. While Shane sprinkles her explanations of concepts with anecdotes, the book lacks a true central protagonist or story.

Consequently, while the book is not too technical for laypeople, I recommend it more towards people who are interested in learning about AI for AI’s sake.

Still, don’t let me damn the book with faint praise: I enjoyed it. Muchly. If anything, it gives me plenty of ammunition to remain skeptical and challenge the next person or company that claims their AI-powered product is going to change my life.

Pair this with Hello World, by Hannah Fry; Weapons of Math Destruction, by Cathy O’Neil; and Algorithms of Oppression, by Safiya Noble.

Originally posted on Kara.Reviews, where you can easily browse all my reviews and subscribe to my newsletter.

Creative Commons BY-NC License
Profile Image for Nilesh Jasani.
955 reviews119 followers
March 29, 2022
The book is a witty, breezy read on a host of technical topics, all credit to the author. Its true value, however, is in what it covers. And, its flaws, consequently, are in what it overemphasizes, if not leaves out.

Most books on the impact of AI are about what it will do to us as a society in the long term. Some thinkers expect artificial intelligence to take over most work in a few decades, leaving most humans more like housepets playing video games. Their opponents do not see any overarching artificial general intelligence (AGI) developing so quickly and expect centuries of human-machine co-existence. And there are thinkers who expect human bodies to turn more machine-like, others who expect minds to be uploaded, etc. etc.

Ms. Shane stays far away from all such ponderous, unanswerable topics. The book focuses on nuts-and-bolts issues with artificial intelligence programs today. In many ways, today's AIs are like human toddlers: in the earliest learning phases, clumsy while performing the simplest of tasks, and so accident-prone that it is difficult to imagine them doing anything meaningful for a long time. The author lists a great set of examples where machine learning or neural-network-driven programs abjectly fail. If nothing else, readers should walk away with one conclusion from the book: do not trust anyone who claims their machines are doing something genuinely amazing based on AI and AI alone today. We are not remotely at a point where anything critical can be left entirely to the machines.

That said, almost every example used in the book - from the question on how many giraffes to the scales at the side of scans being used as a disease marker to the biases in interviews/articles/posts - is an unsolved, known problem that will suddenly be solved one day, and then forever. One may worry about a three-month-old child not walking, but once she learns and is able to, the issue vanishes forever.

There is a lot of value in knowing the transitional, transient, and technical issues facing the AI world. The author could have spent some time providing the context that none of what she discusses is insurmountable for long.
Profile Image for Kirsti.
2,438 reviews96 followers
August 3, 2020
What do you think would happen if you created an AI system and gave it the instruction to move from point A to point B in its virtual world? Do you think the AI would grow legs, feet, wings, wheels, tank treads, something else? No. The AI system would build itself into a high tower and then fall over. And that's because it will try to do what you tell it, but it will always use the easiest method.

This is an informative and often very funny book about what artificial intelligence can do now and what it may be able to do in the future. ("You look like a thing and I love you" is a pickup line that a neural network generated. I guess it would work if you're good-looking enough!)

I was surprised to learn that the neural network Shane created on her laptop has about as many neurons as a worm. Larger, more sophisticated neural networks have about as many neurons as a honeybee. Will AI outpace humans intellectually and take over the world? Elon Musk says yes, but Shane says no, not within our lifetimes.

AI systems crunch numbers quickly, but they often have trouble identifying people and animals unless the dataset they are trained on is very small and specific. Real-world datasets often overwhelm AIs. I laughed when I read that a neural network identified green grass as sheep. The humans had to tell it that just because sheep are often photographed standing on green grass doesn't mean that green grass is a sheep. What did that AI say when it saw a picture of a sheep in a car? "Dog!" What did it say when it saw a picture of a sheep in someone's living room? "Cat!"

All joking aside, algorithms often end up showing racial, ethnic, or gender bias because the data that scientists train them on reflects a long history of those biases. You don't have to tell the algorithm a prisoner's race. Instead, the algorithm can look at the person's ZIP Code of origin and decide that because there are a high number of arrests or convictions in that ZIP Code, that prisoner does not "deserve" parole. In short, AIs can help with repetitive tasks and are worth training, but we cannot and should not abdicate moral or ethical responsibilities to them.
92 reviews
February 23, 2022
Hilarious and accessible intro to Machine Learning and great breakdown of the many pitfalls and limits to ML. Short, briskly paced. I particularly appreciated how Shane highlights that ML can serve to perpetuate cultural biases and stereotypes - because an algorithm is only as good as the data it is based on. If you start with biased data, don't be surprised if your algorithm is biased.

Only held back from five stars because I feel like Shane could have introduced basic concepts like false positives / false negatives and the precision vs recall tradeoff (would ROC curves have been too much?), but that's just a nerd's quibble.
Profile Image for Becky.
549 reviews37 followers
September 3, 2022
This is an interesting overview of some of the hows and whys of where we see AIs—specific, not general matrix level— and the many challenges inherent to their learning. The results are frequently hilarious. Between the chuckle-inducing examples of AI-generated output and J. Shane’s wry narration, I was frequently laughing out loud. Sometimes I laughed so hard I cried. A great pick-me-up, even when you may feel like skimming some technical details.
Profile Image for rosalind.
490 reviews66 followers
May 22, 2020
210520: this book is great. it’s funny, informative, accessible to the layperson, and has very cute drawings. i don’t know a single thing more about janelle shane’s personal life than i did when i originally picked it up. men should have to read this before they speak to me.