Based on exclusive information from whistleblowers, internal documents, and real-world test results, Emmy Award-winning Wall Street Journal contributor Hilke Schellmann delivers a shocking and illuminating exposé on the next civil rights issue of our time: how AI has already taken over the workplace and shapes our future.
Hilke Schellmann is an Emmy Award-winning investigative reporter, Wall Street Journal and Guardian contributor, and journalism professor at NYU. In The Algorithm, she investigates the rise of artificial intelligence (AI) in the world of work. AI is now being used to decide who has access to an education, who gets hired, who gets fired, and who receives a promotion. Drawing on exclusive information from whistleblowers, internal documents, and real-world tests, Schellmann discovers that many of the algorithms making high-stakes decisions are biased, racist, and do more harm than good. Algorithms are on the brink of dominating our lives and threaten our human future—if we don't fight back.
Schellmann takes readers on a journalistic detective story, testing algorithms that have secretly analyzed job candidates' facial expressions and tone of voice. She investigates algorithms that scan our online activity, including Twitter and LinkedIn, to construct personality profiles à la Cambridge Analytica. Her reporting reveals how employers track their employees' locations and keystrokes, access everything on their screens, and, during meetings, analyze group discussions to diagnose problems in a team. Even universities are now using predictive analytics for admission offers and financial aid.
When journalist Hilke Schellmann set out to study the impact of AI tools on the workplace, she was excited by the possibilities. What she found was much darker.
This book gives scary examples of the ways AI is being used to determine who companies will hire, promote and even fire. It shares examples like a makeup artist laid off by a beauty brand after “failing” an automated video interview (in what was almost surely a technical error) and an algorithm that downranked resumes with the word “women’s” (as in “women’s soccer team”) on them. What’s especially scary is that the discrimination baked into these decisions appears to be the result not of nefarious intentions, but of entrusting major decisions to half-formed, opaque algorithms.
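To make that second example concrete, here is a toy sketch of how such a penalty can arise. Everything here is invented for illustration (the weights, tokens, and resumes are not from the book or any real system): a screener trained on historically male-dominated "good hire" data can absorb a negative weight for a token like "women's" as a spurious signal.

```python
# Toy sketch only: invented weights, not the real system described in the book.
# A model trained on biased historical hires can end up with a negative weight
# on a token that merely correlates with gender.
learned_weights = {
    "engineer": 2.0,
    "python": 1.5,
    "captain": 1.0,
    "women's": -1.5,  # spurious penalty absorbed from biased training data
}

def score_resume(text: str) -> float:
    """Sum the learned weight of every known token in the resume."""
    tokens = [t.strip(".,()").lower() for t in text.split()]
    return sum(learned_weights.get(t, 0.0) for t in tokens)

resume_a = "Software engineer, Python, captain of the chess team"
resume_b = "Software engineer, Python, captain of the women's chess team"

# The resumes differ only in the word "women's", yet B is ranked lower.
print(score_resume(resume_a))  # 4.5
print(score_resume(resume_b))  # 3.0
```

No individual weight was set maliciously; the downranking falls out of the training data, which is exactly the "half-formed, opaque" failure mode the review describes.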
I appreciated the research that went into this book. In addition to interviews with business leaders, AI developers and people harmed by these tools, the author was able to test out many of the tools for herself. I also appreciated the thoughtful way in which the author examined how these tools specifically harm disabled people and those with intersectional identities.
This book felt like a Black Mirror episode. The troubling part is that it’s all true.
Thanks to Netgalley and the publisher for this eARC.
This book is scarily accurate in its account of Human Resources departments' new dependence on fallible AI as a recruiting tool that 'identifies and targets' the perfect fit for a job. Unfortunately, rather than that pronouncement being true, this tool (because it is being used without ANY human oversight) is instead exposing companies to potentially crippling lawsuits, as it unfairly creates biases based on race and gender and introduces countless other prejudices into the hiring, firing, and promoting of human beings.
The research in this book is shockingly sound, and I hope that people read it and pay attention to its dire warnings before this almost Orwellian nightmare spirals even further out of control, as companies continue to "pile onto" the AI train without worrying about where that journey might eventually take them.
This is a great book on AI and the fundamental changes it is already making to society at large.
As someone who uses AI and has written a few AI algorithms myself, I appreciate both sides of the arguments, and this book had a lot of really great points.
Absolutely recommend checking this out if you are looking to get a better handle on the wild west that is AI. I am not usually a big fan of regulations, so this was more of a read out of interest. Any time "fight back" is in the title of a book, I am in.
Schellmann says she is not against using candidate-selection software in the hiring process, "per se," but the book is one argument after another against exactly that. There was useful information for the average job seeker and also for the employed: a chapter covers employee-tracking software.
Several chapters visit identity politics, and I couldn't help but feel that the author wants every minority applicant guaranteed an in-person interview, regardless of qualifications, for due diligence. The problem employers face is wading through the mounds of applicants to find them.
Schellmann makes strong arguments showing how some software has fallen well short of the claims made by its peddlers. As a programmer, I'm still hopeful that the algorithms will improve and eventually reach the goal of saving time while reducing bias--with the right algorithm.
I must confess I’ve only read the first half of the book thoroughly but I skimmed some of the latter chapters and largely found it to be more of the same.
I don’t disagree with the author’s underlying hypothesis/argument (not that that would be a reason to mark down a book), but I feel that they didn’t do a great job of backing it up. Truthfully, the use of empirical evidence is limited and any data provided is highly, highly anecdotal.
Much of the book also feels like a generic moan about the difficulties of getting a job. Yes, it's tough. But the existence of challenges in a hiring process doesn't automatically imply that those challenges are a result of the use of AI. Much of this book's criticism of AI hiring tools rests on arguments that would apply just as much to hiring driven by actual human interaction. The key question should be: what are the issues with AI-based hiring versus human-based hiring? It is crucial to consider the AI tools on a relative basis; otherwise, as is the case with this book, one risks mistaking issues which exist in hiring processes generally for issues which can actually be attributed to the use of AI hiring tools.
I also feel that the book severely underestimates the importance of soft skills/traits in making hiring decisions. This is a separate discussion of course, but the author seems to take it as a given (though this is never explicitly stated) that prioritising soft skills over hard skills makes for a poor hiring process. Many, many jobs in today's day and age (particularly white-collar jobs) are not rocket science; they involve hard skills that can easily be learned from scratch in a few months, but, in order for the employee to truly thrive, they require soft skills that are often a product of the candidate's fundamental personality traits. Such soft skills would take years to hone, if that is possible at all. Once again, this dilutes the strength of the author's arguments, because they rest on so many questionable premises.
If you are looking for a book to help you brainstorm potential issues with AI hiring tools then you might get some high level ideas from this one but if you are looking for persuasive arguments, I’m afraid this probably isn’t the one.
The Algorithm was the most enlightening and eye-opening book that I have read in 2024. If you haven't been in a job search in the past two years, the game has changed dramatically, and unfortunately job searchers have no idea of the extent of the playbook.
Ms. Schellmann extensively researched various aspects of work and personal life, and brings forth many of the questions that we all should be asking as companies adopt AI services for hiring, for evaluating new hires, and for monitoring their current employees. The book outlines two key problems: first, the accuracy of many of these AI models; second, the lack of transparency from the companies using them about the criteria they are employing. That lack of transparency is astonishing considering how critical careers are to everyone's livelihood. Unfortunately, no one knows the extent to which immoral or illegal criteria are being used right now in the hiring process.
One additional problem I reflected on is that AI sees us as a bundle of skills and personality traits, which is a fixed view of us as humans. However, we are growing, and have aspirations and potential for growth, which isn't measured in AI-based employment screenings. Basically, AI has a fixed mindset about us, but we all have growth mindsets and potential as humans.
Perhaps the best chapter in the book is the Epilogue, which provides a chilling perspective on our future but also leaves us with hope, as long as we take action soon. She writes, "The world needs to be open to us as humans, with our thoughts and creativity. We need to be given the chance to flourish and surprise others with our ingenuity, in our jobs and in our lives....."
Highly recommended: if you are in a job search, read this now.
This book paints the bleak reality faced by workers in a working world of increasing AI surveillance. Not only does the growing adoption of invasive predictive algorithms fail to fulfil its promise of boosted productivity; it has exacerbated discrimination at work and turned workplaces into a performance theatre where workers check boxes to look more productive. Reflecting on these examples is a reminder of how little public and policymaker attention is given to the use of AI at work in Malaysia, and to its intrusive implications for workers.
A very wonderful book, not only because it tackles something very important, employment, but because it explains important concepts in the way algorithms and artificial intelligence work. I hope it will be translated into Arabic as soon as possible.
It seems that everywhere one turns today artificial intelligence (AI) is being added to every aspect of daily life. Whether it be the arts, education, entertainment, search, or the workplace – AI is everywhere.
Those of us who are distinctly dubious about the claims being made for the current generation of AI, more appropriately labeled machine learning, can often feel like Cassandra of myth, fated never to be believed. At worst we are labeled luddites, rather than people who believe that technologies should earn their places in our lives and societies, instead of being instantly adopted because people hoping to get rich tell us they work great and everything will be fine.
Ms. Schellmann’s exhaustive exploration of AI in the workplace is pretty damning.
It catalogs how Human Resource (HR) departments have been adopting technologies that are little understood by their users, who often labor under misapprehensions as to the scientific backing of the ideas behind these tools. The fundamental problem is often one of garbage in, garbage out; a phrase that has been with us since the dawn of the computer age. For more on this I recommend the excellent “Weapons of Math Destruction” by Cathy O'Neil. The majority of AI tools are black boxes that we can't look inside to see how they work. The manufacturers consider the algorithms inside these black boxes proprietary intellectual property. Without being able to look inside the magic black box, it is often impossible to know whether an algorithm is inherently biased, whether it is being trained on biased data, or whether it is just plain wrong.
One of the things that comes up again and again in “The Algorithm” is the inability of AI, or of the people who program it, to know the difference between correlation and causation. Just because a company's best managers all played baseball does not mean that baseball should be a prerequisite for being a manager, particularly if it means that an AI would overlook someone who played softball, which is essentially the same sport. When one considers that men tend to play baseball and women tend to play softball, it is easy to see just how problematic these correlations can be.
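The baseball/softball trap is easy to show with a toy calculation. The records below are entirely invented for this sketch (they are not data from the book): in them, sport correlates with past promotions only because past managers were mostly men, yet a naive ranking model happily treats the sport as a signal.

```python
# Toy sketch: invented records where "played baseball" correlates with past
# manager status purely because of who was historically promoted. The sport
# itself causes nothing, but a naive model can't tell the difference.
records = [
    # (sport, promoted_to_manager)
    ("baseball", True), ("baseball", True), ("baseball", True), ("baseball", False),
    ("softball", False), ("softball", False), ("softball", False), ("softball", True),
]

def promotion_rate(sport: str) -> float:
    """Fraction of people with this sport who were promoted in the past."""
    outcomes = [promoted for s, promoted in records if s == sport]
    return sum(outcomes) / len(outcomes)

# A model that ranks candidates by this historical rate prefers baseball
# players, penalizing the (mostly female) softball players for an
# irrelevant proxy: correlation, not causation.
print(promotion_rate("baseball"))  # 0.75
print(promotion_rate("softball"))  # 0.25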
The problems with correlation and causation are of course magnified when junk science is involved. Tone of voice, language usage, and facial expressions are being used in virtual one-way interviews for hiring, and have little to no science behind them. In one highly memorable section of the book, Ms. Schellmann speaks German to an AI tool that is assessing her customer service skills and quality of English, reading from a Wikipedia entry. The tool rates her highly in customer service and English even though she is speaking a different language and does not even try to answer the questions being asked.
Where the book falls down a little, though this probably says more about the sad state of business thinking, is on personality testing. The author seems to accept as scientifically valid the idea that employees can be categorized into one of a few simple types. You can read my review of “The Personality Brokers” by Merve Emre for more on this nonsensical and dangerous business tool. As Ms. Schellmann rightly states in her takedown of how AI handles personality testing, though it could just as well apply to all personality testing: “we’d be better off categorizing by star sign.”
It is disturbing just how much AI has already invaded the hiring space in the HR offices of large companies, and it gives one pause as these tools become more mainstream. It is true that it is often not the AI software itself that is the problem, but how the humans who wield such technologies choose to use them. There is also the problem of how hard it is for a human employee to challenge a decision made by an algorithm, which by its very nature is a secret. The developers will often say that these tools should not be the final word in hiring or firing; but the knowing wink and smile behind these statements tells us everything we need to know.
Ms. Schellmann’s work is laser-focused on human resources, an area where bias has been and often is a significant problem. The idea of a tool that can be used to eliminate bias, and the fact that companies want to use such tools, is not inherently bad; in fact it is admirable. The problem is that bias in hiring is often unconscious, and tools wielded by those who are not aware of their own biases are most likely fated to carry those biases forward and affect the process. In addition, it is often difficult or impossible for candidates or employees to challenge decisions by managers which they feel have been affected by bias. How much more difficult is it when it is not a human making the decision or recommendation? A tool of which we cannot ask the most basic of questions: what were you thinking?
This is an important work for our time – hopefully one not fated to be a Cassandra.
Like every recent graduate, I have found the post-uni job hunt tough, and according to Hilke Schellmann, thanks to AI, it has gotten tougher.
“The Algorithm” is a shocking exposé of how AI has begun to infiltrate the world of work, deciding who gets hired, fired and promoted.
Journalist Hilke Schellmann reveals this through whistleblower exclusives, leaked internal documents, and by testing the tools on herself.
From software analysing interviewees' facial expressions and tone of voice, to video games assessing their performance, to 'personality profiles' built from candidates' social media, almost all major employers use AI in recruitment. Programmes track their staff's activity, group dynamics and physical health, identifying who is productive, a bully, worth long-term investment, or likely to quit. But can we trust them?
Schellmann argues that no, we cannot. In fact, she argues that many of the algorithms making these high-stakes calculations do more harm than good, and traces their origins to troubling pseudoscientific ideas about people's ‘true' essence.
Other experts I’ve spoken to about this topic, however, argue that her view is perhaps a little too pessimistic. In many ways, AI presents many opportunities and possibilities. Equally, in the case of recruitment, AI is somewhat inevitable, as it has become very difficult for a human to go through the thousands of CVs that recruiters receive. Schellmann doesn’t really present a solution to this problem.
In other areas, however, she presents a compelling case that has done little to ease my post-graduate fears. The way that AI continues to infiltrate every aspect of our lives remains a fascinating and terrifying topic.
In The Algorithm, Hilke Schellmann discusses how artificial intelligence software plays a role in the Human Resources hiring process. Providing extensive research, statistics, and facts throughout the book, Schellmann uncovers the serious implications AI hiring software has for jobseekers, employees, and the employers using these screening and hiring tools.
Schellmann emphasizes that many companies are now using AI software to screen job applicants because HR departments are overwhelmed by the number of applications they receive for open positions. Schellmann states in the Prologue that “99 percent of Fortune 500 companies use algorithms and artificial intelligence for hiring”. While AI is meant to simplify the hiring process while picking out the “best” candidate for a job, the darker side of this technology is slowly revealed throughout the ten chapters of The Algorithm.
Based on Schellmann’s findings, AI hiring platforms and their software provide insights into a job applicant’s personality traits, potential for success (and failure), communication skills, problem-solving skills, and even an applicant’s real-time emotions. The list goes on, and all these evaluations are made without a jobseeker speaking to a live person from the company they are applying to. Basically, AI is doing all the work when it comes to screening job applications, using its algorithms to provide an HR department with a list of candidates it deems the best fit for an open position. However, without the human element of phone screenings and in-person interviews, Schellmann finds that these algorithms are rampant with biases. And the catch? The CEOs of the companies behind these AI hiring tools can’t provide consistent answers as to how their algorithms work, and the companies who use these tools rarely provide feedback to a job applicant who is immediately rejected after an AI screening (even when they are well qualified for the role).
According to Schellmann, AI hiring tools typically involve phone screenings and interviews with pre-recorded questions. Some tools even have job applicants play games to evaluate personality traits, including orderliness, enthusiasm, assertiveness, compassion, mannerliness, and impulsive or cautious tendencies. Schellmann takes some of these tests herself, finding that the interviews feel more like “tests”, which gave her feelings of unease; she repeatedly asks, “Does it [AI] know more about me than I do?”. Some tools go as far as facial and tone analysis of recorded video interviews, scoring candidates on their facial expressions, eye movements, and tone of voice as they answer interview questions. Meanwhile, the candidate being evaluated receives no real-time feedback on their answers: once the answers are submitted to the platform, it is either a waiting game to hear back from the company or, in some cases described in the book, an immediate e-mail stating the candidate has not been selected to move forward in the hiring process. Schellmann provides many real-life examples, through interviews and stories, of the individuals these tools have affected in this way.
The Algorithm is a must-read for all working individuals. The use of AI algorithms is rarely disclosed to candidates when they apply for roles at companies that use these tools. Knowing when to ask questions, and to “fight back” as Schellmann’s title urges, is necessary in order to advocate for yourself in the new world of artificial intelligence. Knowing that I may be turned down for a lucrative job that I am, on paper, fully qualified for, solely based on an algorithm, is upsetting; and because the use of AI algorithms for hiring is not yet well regulated, it’s important for job candidates and HR departments to understand the biases, complexities, and “unknowns” that these tools carry. Is it fair to judge an individual’s interview performance based on their facial expressions and tone of voice when talking to a computer? What data does an algorithm use to decide if someone is likely to be successful at a job, especially when so many variables are involved? How does playing games on a computer determine if someone is a good problem solver at their actual job? Most importantly, why is an algorithm making the final decision on an individual’s personality, work abilities, and potential? These are all questions I found myself asking throughout this book.
This was fantastically well-researched and totally fascinating. Things that especially caught my interest:
1. Chapter 2 - the experiment Georgia Tech designed to test people's skepticism of technology: they created a fire alarm with fake smoke, and most participants followed a robot into a brick-wall dead end instead of using the closest emergency exits, which they knew about. "Even when the robot pointed to a dark room with no discernible exit, the majority of people did not choose to safely exit the way they entered." I would like to say I'm not one of those people, but I'm also not great in emergency situations. This is horrifying.
2. Chapter 4 - "That's the problem with using high performance, because you really have to ask, 'Did a company in the past give an equal opportunity to everyone to be a high performer?' And unless you can answer that question 'yes,' then your sample of high performers is already biased," Ajunwa said. I have never worked for an organization that I would be comfortable stating that they have historically granted equal opportunity for associates to achieve high performance. This statement really hit home for me that AI built on the current workforce is simply perpetuating racism, sexism, and ableism.
3. Chapter 7 - "But maybe that is the beauty of data science. It helps us find solid evidence for things we suspected or knew through qualitative analysis all along." Um, couldn't this also lead to confirmation bias? This seems like a slippery slope and evidence should always be used with caution.
4. Epilogue - this statement summed up the whole body of research for me. "So, neither in computer labs nor in applied settings did AI outperform traditional statistical models. And neither method achieved much statistical relevance."
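Ajunwa's point in the second quote above (the biased sample of high performers) can be sketched in a few lines of code. The numbers below are invented for illustration, not from the book: two groups have identical ability but unequal access to the projects that earn a "high performer" label, so any model trained on those labels learns the opportunity gap, not talent.

```python
# Toy simulation (invented data): equal ability, unequal opportunity.
# The historical "high performer" label ends up encoding group membership.
import random

random.seed(0)

def historical_label(group: str) -> bool:
    """Past 'high performer' label for one employee."""
    opportunity = 0.8 if group == "A" else 0.2  # biased access to big projects
    ability = 0.5                               # identical across groups
    return random.random() < opportunity * ability

counts = {"A": 0, "B": 0}
for i in range(10_000):
    group = "A" if i % 2 == 0 else "B"
    counts[group] += historical_label(group)

# The training labels suggest group A "performs" roughly 4x better,
# which is pure opportunity bias that a model would faithfully reproduce.
print(counts["A"], counts["B"])
```

Unless you can answer "yes" to Ajunwa's question about equal opportunity, this is the kind of data the AI is learning from.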
I've already recommended this to my head of HR and I will trumpet its amazingness to anyone who will listen even a little. If you want to talk about it, let's! So many things to discuss in here.
The press gushes over AI “advancements” and the evangelists say it’ll change the nature of work. In fact, AI has already inserted itself into the process of hiring, monitoring, and firing people. None of it works as shown in the slideware. Instead it makes bad recommendations which the humans then implement without question. The book highlights the ills of the current crop of companies selling snake oil.
At the very root of it all is the desire to predict who will be the best fit for any given role. Humans are not all that great at figuring this out, so they want to use more automated solutions. HR departments are overwhelmed by the number of job applications & turn to startups that promise to make their lives better. In each category, the author is able to show that the tools are no better than chance (flipping a coin) and sometimes worse.
Beyond hiring, with its games and one-way interviews, there is the monitoring side, which can falsely mark an employee as bad and recommend they be fired. The humans are forced to work like machines as they game the algorithms in order to remain employed. Once again the vendors promise increased productivity, yet they force the humans to do the opposite. Then people quit.
The more people are treated as cogs in the gears of a company, the more they will check out. Getting the best out of people takes a lot of work from managers, mainly to guide & remove obstacles. Like many things, the philosophy of Yoda makes it easy to understand: the tools are the dark side of the force, a quicker, more seductive path to predicting what humans will do & managing them via unfeeling machines. The light side takes longer, but can create a more cohesive workforce where people feel good about belonging and helping.
This editorial is decent and sometimes contains very good analysis of AI, but at other times it conflates AI with simple data processing. I think Schellmann is smart and knows the difference, but she substitutes AI to add a little bit of mystery and scariness to some of the more mundane processes that employers use to find new hires. Many people are currently going through growing pains over how best, or whether, to use AI for different processes and applications. This book does a fine job of pointing out some ways in which future employers should beware of relying on specific flawed AI usage.
Schellmann may live and work in the US, but she brings her European values and world view. She argues for works councils, which European companies (and international companies with a certain number of employees) must set up, populated by non-executive employees, and whose regulations the companies must comply with. My limited experience with them has been frustration.
Another European-centric view is that head hunters are not "authorized" to view prospects' social media. I say, it's out there. It's public. Everyone including head hunters is going to see this stuff.
While I agree that minorities, including those with disabilities, should be given a fair shot at gaining employment, some of the proposed remedies would require a lot of resources, especially for smaller companies. Perhaps this is more of a negotiating tactic, a starting point at the far left?
The final chapter on finding a job in the age of machine-read resumes and cover letters, robot interviews, etc. provides very good advice.
The creepiest thing is AI bots conducting video interviews where the candidate has nobody to talk back to. No, not a Zoom or Microsoft Teams call, where you're talking to people on the other end, remotely. You're talking, it seems, to a blank computer screen; from the descriptions, you're not even talking to a robotic face, though that might make it more creepy.
Second most creepy is the level of in/accuracy of these bots.
Third most creepy is how companies use them.
But, HOW do we fight back? With neoliberal capitalism in the ascent in the US more than anywhere else, getting the US Congress to take this seriously is kind of a laugh. And, if a state lege tried to do something, Company X would either try to do a workaround or else buy it off. Federal regulatory agencies are behind the curve enough as is. Looking at you, National Labor Relations Board. The Equal Employment Opportunity Commission, about the same, and we've seen the attacks from wingnuts in Congress.
That's the main reason the book doesn't get a fifth star.
As far as shelves? It's filed under "economics" because labor and employment relate to that. "Technology" should be obvious. "Salvific technologism" is my term, but even more stark, for what Yevgeny Morozov calls "solutionism." That, too, should be obvious.
I tend to avoid books on the topic of AI, since the information is literally outdated the moment the author finishes writing, but this book specifically caught my eye precisely because of the connection between AI and the ability to earn a living.
The author is a journalist, and the individual chapters could easily stand alone as articles on different aspects of AI in the world of work, but together they come together nicely under a common denominator.
I definitely liked the book. Extremely interesting information about how AI is used in the world of hiring and monitoring workers. The author took pains to gather as much information as possible from different sources, and she herself tried out various products in order to share first-hand experience. What I liked most was her analysis of individual elements, and of the HR industry as a whole, from the standpoint of the use of AI products.
I think I will be mulling over this book for quite a while, and I will definitely change my mind about reading on the topic of AI, since this book confirmed for me that in their madness and pursuit of money, people are ready for anything, even self-destruction. A rather depressing reality and a sad future for all of us; it would be good to prepare myself psychologically for that reality by educating myself on the topic as much as I can. :)
Hilke does a good journalistic job, reviewing the potential problems and issues inherent in the use of AI within the Human Resources scope. The book is built on deep research, including hands-on testing of some hiring tools and extensive interviews with analysts, researchers, and company owners in the AI and technology world. She delivers her content herself, with a barely-there German accent, and manages to be deeply informative.
Particularly interesting for me were the parts on fairness and bias in the area of disabilities; the connection between hiring and personality testing; the discussion of voice markers for brain functioning; and the evaluation of performance. Given the black-box quality of these algorithms, it is very hard to hold corporations and/or technology vendors accountable for the results. We're seeing the start of legal follow-up in some cases, but I think there will be a lot more.
Yet, the book felt more anecdotal than deep. To be completely fair, it is a difficult topic: if it goes too deep, people will complain that it is too technical and it will not reach a wide public; if it stays light, others will be disappointed.
This book is a documentary account of how Human Resources groups in large companies are pushing unscientific and unvalidated methods to hire, evaluate, supervise, and in some cases fire employees. The Alight information is particularly problematic, as it aggregates sensitive employee health data for employers. The work is well researched but falls short on recommendations for validating these systems, or on solutions for employers who, desperate to sort large data sets and predict employee behavior, turn to unscientific and unvalidated algorithms for predicting employee outcomes. Perhaps this book will help class-action attorneys develop cases to remedy the use of unvalidated systems for making critical decisions in large corporations. The book is another illustration that correlation does not imply causation, and it cautions against using big-data analytics to drive decisions without appropriate validation.
This book offers an eye-opening look into how AI—particularly hiring algorithms—is quietly reshaping the job market. Schellmann digs deep into the hidden systems many companies use to screen and sort candidates, raising important ethical and legal questions.
While I initially expected a broader discussion about AI's impact on society, the focus here is very much on the hiring process. That said, the insights are sharp and timely, especially for HR professionals.
It took me a little while to get through, but I'm glad I stuck with it. Schellmann does a good job of weaving investigative reporting with real-world examples. For those of us working in digital spaces, it’s a reminder that the invisible tools shaping modern life deserve far more scrutiny than they often get.
This is a niche topic book, I saw the AI part missed its about hiring people. This book is about how AI is error prone, biased and employer follow it dumb.
This is a biased book, biased towards AI doesn’t work; author does not care how technology works and the role of the cohort who use AI for hiring.
I liked Chapter 10, "Fired by Algorithm," and the epilogue, "Living in the Predictive Society." They are the soul of the book, and I would like to reread them.
Everything else was not surprising to me. I found the book boring and a bit alarmist. I would not normally pick up such books, even if I had nothing else to read.
Well researched and very dogged in its task, but I found it a bit too narrow in focus to merit a higher rating. True, AI in the workplace is a very dark, dystopian turn in employment, but it is also part of a much wider story about life as we know it for middle- and lower-class people. A somewhat wider scope would have been very welcome, as the office-space jargon and talk of metrics becomes a bit tedious for those with no experience in these fields. Worth a read for those worried about AI working against them in the workplace, but there are probably better books for those seeking the totality of the issue.
Amazing book. Incredibly scary, like most books on tech. This book is not about whether AI is good or bad (we can all have our own opinion); it's about the way we may use it in the workplace, or the way some companies are already using it. Companies use AI for HR tasks that used to be done by regular humans, claiming that AI is more neutral and effective. But is it? (Spoiler: it's not.) This book is very dense, so I couldn't cover everything in a review, but whether you work in HR or not, I highly recommend reading it.
Let me try to sum it up: AI and other methods can capture signals. However, it is unlikely that the skills employers are looking for are actually captured in those signals. The signals are usually irrelevant, and the "science-backed" claims sit at the edge of pseudoscience; most are a farce, perpetuating the biases already inherent in a workplace…
Ok, so what can you do to protect yourself? Don't work for an employer who doesn't invest time in properly evaluating candidates. And keep your social media clean, because "Cambridge Analytica"-style profiling, whose results are questionable, is being widely used.
The book offers a critical exploration of AI bias, highlighting how human biases carry over into AI systems. Backed by thorough research, it emphasizes the importance of contextual understanding, arguing that AI needs more than just datasets: it requires context. The book also advocates for legislative measures and transparency to address the societal implications of unchecked AI. It's a must-read for policymakers, technologists, and anyone concerned about the ethical dimensions of AI and work.
Super interesting book! I had no idea this stuff was happening before I read it, so I highly recommend it if you are looking for a job, thinking of switching career fields, or work in HR. It is very well researched, and I liked the personal accounts sprinkled through the chapters. However, I think if you read the epilogue you get the main idea of the whole book; the main themes were hammered home again and again without much new information being provided. I also found it strange that some chapters were very long while others were super short. Overall, a good read for the information it provides!
This book shows how AI is used on the workforce, specifically in the hiring process, and how companies large and small use it to sift through potentially thousands of applications. It shows that many of these commercialized AIs are deeply flawed and discriminatory because we still don't understand how to properly set their parameters. As the book demonstrates, that understanding can only come after years of trial and error, and it is the people caught in the meantime who suffer the consequences.
The use of AI algorithms to screen job applications has been around for over a decade. This book reveals how the over-reliance on algorithms to screen resumes, interview candidates, monitor employees, and fire workers is getting out of control with no government oversight. It's a wonder the U.S. has low unemployment and people are finding jobs, getting hired, and staying employed. I give props to the author for her thorough review of the software tools and dogged reporting to uncover fresh insights on this topic.
As Stevo’s Novel Ideas, I am a long-time book reviewer, member of the media, an Influencer, and a content provider. I received this book as a review copy from either the author, the publisher or a publicist. I have not been compensated for this recommendation. I have selected it as Stevo's Business Book of the Week for the week of 3/17, as it stands heads above other recently published books on this topic.
Holy Moses! What a thought-provoking book. For IT professionals and HR managers, this is a must-read. It's a cautionary tale about the risks of blindly trusting AI technology. The book would also be good for lawyers practicing employment law. This was a library copy for me, but I'm considering purchasing the book for future reference. It potentially deserves 5 stars, but it is so niche, and dry in parts.
I wanted a blistering exposé on AI. I got a few pages of exposé, some speculation about how various AI tools might be biased if we knew more about them, and a few interviews with people who had hiring decisions made about them by AI and felt weird about it, but who aren't sure whether bias was actually involved. The pages of exposé were great; I just wish there had been more interviewing of insiders and more of the author testing AI tools herself with an eye toward finding bias, and less of the rest.