Over the course of a generation, algorithms have gone from mathematical abstractions to powerful mediators of daily life. Algorithms have made our lives more efficient, more entertaining, and, sometimes, better informed. At the same time, complex algorithms are increasingly violating the basic rights of individual citizens. Allegedly anonymized datasets routinely leak our most sensitive personal information; statistical models for everything from mortgages to college admissions reflect racial and gender bias. Meanwhile, users manipulate algorithms to "game" search engines, spam filters, online reviewing services, and navigation apps.
Understanding and improving the science behind the algorithms that run our lives is rapidly becoming one of the most pressing issues of this century. Traditional fixes, such as laws, regulations, and watchdog groups, have proven woefully inadequate. Reporting from the cutting edge of scientific research, The Ethical Algorithm offers a new approach: a set of principled solutions based on the emerging and exciting science of socially aware algorithm design. Michael Kearns and Aaron Roth explain how we can better embed human principles into machine code - without halting the advance of data-driven scientific exploration. Weaving together innovative research with stories of citizens, scientists, and activists on the front lines, The Ethical Algorithm offers a compelling vision for a future in which we can better protect humans from the unintended impacts of algorithms while continuing to inspire wondrous advances in technology.
My major takeaways from this book: 1) if we design algorithms to optimize for predictive accuracy, that is what they are going to do; if we want algorithms also to take into account privacy, fairness, and other social values, we must define and specify those goals and explicitly design algorithms to achieve them; and 2) ethics must be incorporated into the study of computer science — both theoretical and practical. Honestly, this book was an ambitious undertaking for me. Kearns and Roth seem to have intended this book for a diverse audience, not a small group of in-the-weeds comp sci folks. However, if you have absolutely no formal education in the areas of calculus, game theory, statistics, probability, or computer science (as I do not), be prepared for an uphill climb. (In other words, if words and phrases like the Pareto frontier, backpropagation for neural networks, the Bonferroni correction, and basically any equation involving numerous variables make your eyes glaze over, this will be a little difficult.) But all those complications aside, the authors make a clear, persuasive, and relatively understandable high-level argument for the crucial role that ethics should play in our use of algorithms for a vast array of purposes. The discussion of differential privacy and its benefits was accessible and helpful. And the authors do provide a number of real-world examples of the shortcomings of algorithms designed only to optimize for predictive accuracy. In short, there was plenty in this book I cannot claim to understand, but I did grasp a good amount and am glad I read it. Also, I think the work of computer scientists like Kearns and Roth is absolutely crucial to our ability as a society to continue making decisions (automated and otherwise) informed by values we care about, like privacy, fairness, and morality.
I hope that their approach will become the norm in computer science programs at universities around the world and that the kind of deliberative thinking about values they discuss will become standard practice for computer scientists across the board.
This is a good overview of some of the ways in which we can address privacy and fairness constraints in machine learning algorithm design. It covers a lot of the tension between prioritizing accuracy versus prioritizing ethics when designing models, as well as some of the limitations and unexpected consequences of not specifically integrating ethical constraints into models. It introduces some promising methods for improving algorithms, such as differential privacy and correlated equilibrium, which were fascinating to read about. The level of this book was a little confusing, though. I think it's too technical for a general audience, but also a little too general for a technical audience. I'm not entirely sure who it's aimed at, although I suspect it's for policy makers. It can also work as an introduction for technical readers who are unfamiliar with the topic.
very well-written! perhaps marginally more difficult to grasp the technical references for someone not familiar with general algorithm theory or comfortable with statistics & discrete math; even though those explanations were redundant with my knowledge, the discussion & applications were pretty fresh and interesting. i think this was a pretty accessible yet rigorous explanation of the delicate interplay of fairness, privacy, and accuracy optimization, without opinionated social discourse or arbitrary speculation
Grateful to the authors and the editor for writing an expansive summary of ethical approaches to algorithm design and the trade-offs involved. The scope was tight, and the tone consistent and eminently readable despite the technical discussions within. A book that delivers exactly what it aims to.
Machine learning algorithms are practically everywhere. But their failure cases have led to skepticism about their usage and concerns about privacy and fairness among other issues. This book gives a high-level peek into how algorithm design is being made "socially aware".
Kearns and Roth (K&R) argue that unless ethical goals are incorporated directly into algorithm design, an algorithm can't be expected to pursue them on its own, since it's merely optimizing some objective function. Tackling privacy first, the book outlines desiderata for privacy definitions that ultimately build up to the concept of differential privacy. It then moves to incorporating fairness in algorithm design and shows how different reasonable fairness definitions can often conflict with each other. Finally, the book ends by looking at how, in game-theoretic scenarios such as news filtering and dating apps, algorithms can nudge us away from a bad equilibrium.
Having had a glimpse of the research before, I found myself appreciating how well the authors managed to keep the technical part understandable and well-motivated for a general audience. This is one of the few books, intended for a general audience, that puts emphasis on trying to precisely define ethical goals. That's the only way these can be incorporated in algorithm design and more generally, be achieved at scale.
We're accomplishing more and more tasks via algorithms trained on data. To do it right, we must be clear about our objectives and the tradeoffs between them (and let the algorithm know about them too), and practice good data privacy. This crisp and thin volume manages to cover a lot of ground: from differential privacy, to algorithmic fairness, all the way to the singularity. It starts pretty elementary but discusses enough specifics to give a concrete and clear understanding of how algorithms are trained. It is also nice that it is self-contained, in that it explains all the concepts it uses. I appreciated its calm and practical style, and its use of carefully chosen examples to illustrate problems, approaches, and tradeoffs very clearly.
4.5 stars. This is an overview of different ethical challenges that machine learning brings. From privacy to justice, there are several problems with the adoption of the many algorithms and models used in everyday situations.
The only two issues for me were that I felt it lacked a bit of detail, though that may be just me. Maybe for policy makers it makes more sense to have a more general view of the problems described. Also, chapter 4 was a bit erratic in its writing.
Despite those points, I think it is an excellent reading and highly recommended.
It’s a level-headed, science-based approach to an issue that should concern all professionals in the AI industry: how to reap the benefits of progress in data science without crushing the unfortunate individuals in society under its wheels. Unlike Weapons of Math Destruction by Cathy O'Neil, it takes a problem-solving approach rather than just listing a series of injustices.
Such a good book! A grounded, pragmatic, and realistic take on fairness, privacy, and AI from a computer science perspective. A welcome change from the typically alarmist and negative narrative more commonly seen around this particular combination of topics.
AI is not something that happens to us; it's something we're in control of (even if both the tools and our understanding of how to make algorithms fairer and more private are in their infancy).
For so long, machine learning has been taught independently of ethical and privacy matters. The word "privacy" is rarely used outside of cybersecurity courses. I have taken machine learning courses where I studied collaborative filtering and the Netflix Prize, but little did I know it was cancelled because of privacy concerns. As a computer science grad student focusing on machine learning, the topic of this book was very relevant to me.
In this book, Prof. Kearns and Prof. Roth provide a nice layman's overview of some of the most active emerging research topics in computer science, such as privacy, fairness, interpretability, and reproducibility in machine learning. The book brings forward events that had a negative impact on society and talks about how the field moved forward to avoid such events in the future: the release of (pseudo-)anonymized employee health records, (pseudo-)anonymized Netflix recommendations, Baidu's ImageNet fiasco, the Wonder Woman pose, Waze recommending driving through a fire, Cornell Prof. Brian Wansink's p-hacking, etc. It is a good read for someone outside computer science, but it felt oversimplified to me.
I personally enjoyed chapter 4 of this book, "Lost in the Garden." It goes over some of the unethical research practices (deliberate and unconscious): p-hacking, torturing the data, etc. It also sheds light on how the current academic system incentivizes researchers to practice such hacks. It is encouraging to know that academics everywhere recognize this problem. I hope a solution is proposed soon.
This is a really good introduction to the problems of fairness and privacy, especially in the context of machine learning. It also covers related subjects such as algorithmic game theory. The authors provide great real-world examples of how things can go wrong when both designing and applying algorithms.
Summary: 3.5; I'm going to round up to 4 because I think book writing in this field is really hard, and it's going to really appeal to comp sci people who are slightly outside of AI and have not read Gödel, Escher, Bach (GEB) or been exposed to it.
The issue for this guy is mainly that, not having read GEB or been surrounded by it, he has to explain his way backward into it. Also, he seems young, or focused on his comp sci field without knowledge outside of it, because many professionals have both the technical skills and far deeper knowledge of what they are programming. So if an algo makes a mistake, the actual job of such an individual is to stop it. That said, a lot of young companies (particularly on the HR side) did not have the right person in the seat to understand what this analysis job actually is. So that happened, and it sucks. Truly. I mean, I am a victim of it for sure!!
Excellent summary of some of the topics that currently permeate the burgeoning ethics-of-AI field. As the authors point out in the first few pages, they realize the topic choices made here may not be timeless, and that in just a few years the field will have moved on to tackle other topics, either because the current crop of areas has been mastered (or shown to be intractable), and/or because society has focused its attention on some other topic that has become more timely (e.g., the ethics of automated warfare). The purpose of this book is not to be a "textbook" for the field, but to summarize current trends and to quickly educate other practitioners in the broader ML/AI/DS space (what I'll call the "machine intelligence fields") on these topics.
Although the topic of "ethics" in AI is not new, and in some ways almost all practitioners in the broader machine intelligence fields will have been exposed to a facet of it when they grapple with the precision/recall conundrum of classification in their work or education, this book is possibly the first text in the non-specialist market that directly speaks to the designers of the algorithms or machine learning pipeline. To be sure, other books have been published in this domain, especially after 2016. Two that come to mind that I've read recently are Automating Inequality and Weapons of Math Destruction, yet these were mostly aimed at either a general reader or a policy specialist/thinker. Not that The Ethical Algorithm is overly technical or opaque to non-specialists; it just really feels like it's speaking to a more technically oriented crowd.
Five topics are covered in the book: algorithmic privacy (more commonly known in the field as differential privacy), the notion of fairness in machine intelligence, biases that arise from machine learning algorithms in a variety of domains (travel route recommendation/optimization, dating, etc.), a chapter on "p-hacking" and "hill-climbing" in experimentation and model-space exploration, and a sort of generic "didn't have time to write a full chapter on" last chapter covering interpretation in ML and "future" topics like the singularity.
A few months ago I attended a brown-bag lunch where the census's chief scientist discussed their application of differential privacy, and I found the topic fascinating. In my opinion, this may be the most "low-key" impactful use case of ethical AI or ML, because it seeks to preserve a tunable level of privacy by "masking" samples that go out into the ether. These samples can then be used as input for ML algorithms or as experimentation fodder, but the key here is that there is a provable way to ensure that reconstruction, or a mapping, of the critical ground truth is close to impossible. Given how so many of the "first wave" of ethical dilemmas in "big tech" have centered on how personal data could be used to the detriment of individuals (e.g., healthcare providers increasing premiums because of health or behavioral data), the implementation of this layer in all machine intelligence processes in the enterprise will be critical over the next few years.
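The tunable "masking" described above can be illustrated with randomized response, the classic coin-flip technique often used to explain differential privacy. This is a simplified sketch, not the Census Bureau's actual mechanism, and the `p_truth` knob is my own illustrative stand-in for the privacy parameter:

```python
import random

def randomized_response(true_answer: bool, p_truth: float = 0.75) -> bool:
    """Report the true answer with probability p_truth; otherwise report
    a fair coin flip, giving each respondent plausible deniability."""
    if random.random() < p_truth:
        return true_answer
    return random.random() < 0.5

def estimate_true_rate(reports, p_truth: float = 0.75) -> float:
    """Invert the noise: E[reported rate] = p_truth * true_rate + (1 - p_truth) * 0.5."""
    reported_rate = sum(reports) / len(reports)
    return (reported_rate - (1 - p_truth) * 0.5) / p_truth

# Simulate 100,000 respondents, 30% of whom truly answer "yes".
random.seed(0)
truths = [random.random() < 0.3 for _ in range(100_000)]
reports = [randomized_response(t) for t in truths]
print(round(estimate_true_rate(reports), 2))  # close to 0.30
```

No individual report reveals anyone's true answer, yet the population statistic is recoverable, which is the trade-off the chapter is about: lowering `p_truth` buys more privacy at the cost of a noisier estimate.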
The two most practical chapters in the book are probably the 2nd and 4th, however, which deal with the algorithmic notions of "fairness" and hill-climbing/p-hacking, respectively. Parts of chapter 2 may be viewed as remedial for anyone but the most junior data scientist or ML practitioner, but it's a good review. I was a bit disappointed in the chapter, as there was a real opportunity here to discuss the latest and greatest fairness-preserving algorithmic procedures being worked on currently, which range from re-weighting by stratifying protected classes to something a bit more exotic like adversarial de-biasing. Many of those topics would be covered in a standard graduate school course on AI ethics. I think this chapter would be a great "prelude" chapter in such a course before discussing the nitty-gritty of those technical procedures.
Likewise, the chapter on hill-climbing and p-hacking was also instructive and well written. This chapter uses the guise of cheating in machine learning competitions to discuss the perils of naive model exploration, often referred to by practitioners as hill-climbing. The problem is that one wishes to build an algorithm that is generalizable, and there are well-known validation pipelines one can set up to measure and observe how generalizable a model may be as it's being trained. However, in the context of a Kaggle competition, where the purpose is not to build the best-performing algorithm to solve a business or other objective but to increase your score, this process can be perverted (since it is known that there is a "max score" one can achieve).
When this occurs, naive feature-engineering techniques can be leveraged to "hill-climb" (iteratively increase the out-of-sample performance score). Yet, because the methodology is in some ways "backwards", the resultant model can end up being a sort of "Frankenstein" algorithm that is poorly generalizable but efficacious in the narrow case of the competition. The chapter goes through topics relevant to both repeated experimentation and multiple linear regression, like the Bonferroni correction of the critical value for hypothesis tests, and applies them to exploration in the model space (or "the Garden of the Forking Paths"). Pretty good read here.
Chapter 3 was interesting, but I suspect it will be the one that is expanded in the future, as it deals mostly with the unforeseen consequences of algorithms for human activity, like driving recommendations, online dating, and things that can generally be characterized as matching problems. Alvin Roth is mentioned and discussed in this chapter, with his famous algorithm given some spotlight as well, but I felt it was really mostly a chapter on matching as opposed to a chapter on general ethical issues emanating from algorithmic processes. As algorithms penetrate other parts of human activity (like warfare), those topics will likely be appended to this chapter. This chapter also felt the closest to traditional social science to me, as many of the adversarial/gaming processes described within the matching domain leverage the traditional notion of utility as well as the traditional characterization of multi-agent behavior as a non-cooperative game.
Overall, I am very satisfied with this book: it both educates and informs, and the narrative provokes thoughts on this topic that could lead to fruitful research or ideas in the field. I believe both current practitioners and adjacent researchers in the AI/ML domains could benefit from reading it. Students of the subject could also benefit from the text, not only as ancillary reading to supplement a main text or notes, but also to give them advance notice of issues that will soon become standard in all design in the machine intelligence fields. Highly recommended.
The BEST book I have read on data/AI ethics thus far. Written clearly and well thought out, with each proposed theory or methodology paired with examples and use cases. Highly recommended for all data practitioners, and even for non-practitioners it is mostly accessible. Data ethics falls at the intersection of technology, society, ethics, and algorithm design. The book dedicates a chapter each to the algorithmic problems, and proposed ways to solve them, in privacy, fairness, game theory, and p-hacking, and finally a combined touch on the more difficult-to-resolve issues of interpretability, morality, and the risks of a singularity. Many incredibly relevant notions are covered, including but not limited to: differential privacy (centralized vs. local), Pareto curves balancing predictive accuracy and fairness, fairness gerrymandering, Maxwell equations, the echo chamber equilibrium, the Bonferroni correction, and the reproducibility crisis. On a side note, I love how the authors relate the famous data science phrase "garbage in, garbage out" to their own "bias in, bias out." I know I'll be referencing this book for years to come. Great work!
Rather than joining the soothsayers claiming that some kind of future AI may be bad, this book illustrates how existing algorithms are doing harm, right now, and the authors elucidate scientific approaches to fixing these algorithms.
The issues surveyed are: privacy, fairness, inadequate equilibria, and (to a lesser extent) interpretability. All of these I was already familiar(ish) with, but I enjoyed the authors' take on the topics.
I especially liked their thought process and approach to problem solving, the computer-scientific method: abstract and simplify, find efficient algorithms, characterise trade-offs, provide guarantees, ...
This seems like an important book for anyone looking to apply machine learning (or algorithms in general) within society.
Algorithms do what we tell them, and often not what we actually want. This book expands on that thesis in several relevant and highly interesting directions (privacy, fairness) and a few that are worth discussing but feel like tangents in this context (p-hacking, the role of game theory). I'm not sure who it's written for: particularly in the second group of chapters, it's not technical enough for a statistician, but struck me as too technical for general interest. I would, however, happily recommend chapters 1, 2, and 5, so it gets 3/5 stars :)
Good primer on the ethics problems AI faces with interesting real-world examples and ML concepts explained in layman’s terms. The key takeaway is that algorithms are designed to do exactly as instructed and ML scientists need to be conscious about internalizing precise definitions of fairness and privacy.
A rigorous and concise examination of privacy, discrimination, gaming (in the game theory sense), and the overstating of results as they pertain to the use of algorithms. Highly recommended for those wanting a change from the plethora of "popular science" books filled with buzzwords and misconceptions. The Ethical Algorithm is a must-read for any casual bystander, practitioner, or policy maker.
This book is a great, informative review of the nascent field of ethics in AI algorithms. The authors ask deep questions that are usually ignored by engineers, and propose solutions from the literature and from industry cases. The style is accessible, despite the complexity of the topic. I would suggest every data scientist read this book.
This book is fantastic, and a vital introduction to several extremely important topics in modern machine learning and algorithm design, including privacy, fairness, the gaming of complex systems, and feedback loops / spurious scientific results. (With a tiny detour into the sexier sci-fi stuff like the AI apocalypse.) None of these concepts are new, and plenty of ink has been spilled in books and newspaper articles and on Twitter over how algorithms can perpetuate societal inequalities or undermine user safety. Kearns and Roth very much acknowledge the need for regulation and for keeping a human in the loop, and that there are areas where human judgement might still reign supreme (e.g., should a less accurate human or a more accurate algorithm choose whether to murder an enemy combatant). Instead, what is proposed here is an actionable, scalable means of incorporating those concepts directly into ML and systems design. Tying together the many examples in the book is a single theme: that algorithms do what we tell them to do, not what we want them to do, and that being precise in our instructions and intentions should be paramount. I'm not sure I'd recommend this book to a generalist audience (or at least not as an "easy" read) - while the descriptions are very clear and straightforward, there are a lot of abstract mathematical concepts being quickly thrown around. But for engineers and statistics/ML practitioners, or people just getting into related fields, this is a fantastic and thought-provoking book which will make you think harder about the unintended side effects of the algorithms you're building.
An excellent book on the importance and necessity of including ethical considerations in the design of the algorithms that increasingly guide, and even define, our individual and social behavior. The central argument is that such considerations should not only be made at the regulatory level or at the moment algorithms are used in decision making, but must be encoded into the very design of the algorithms. Since this is such a recent area of development, many of the examples are incipient (though no less fascinating) and are centered fundamentally on protecting privacy and fairness, as well as on the tension that can exist between individual and collective optimization (a theme well explored in game theory) and on how easily we can be fooled by spurious correlations when analyzing enormous quantities of data. Although the book's prose is extraordinarily agile and engaging and it is aimed at a non-specialist audience, the complexity of the subject means it is not an easy book. In any case, required reading on a fundamental topic.
A short and straightforward overview of the emerging subfield of "ethical" machine learning, covering, at a detailed but nontechnical level, privacy, fairness, algorithmic game theory, and generalization, with a little epilogue on other topics. These are all treated in terms of the formal mathematical definitions of the concepts currently used in the ML community: "privacy" means "epsilon-delta differential privacy", "fairness" means classifiers with outcomes constrained across groups in some way, etc. The authors are up front about this and recognize the limitations of formal definitions, but the book is stronger for it, as covering the approaches actually used by machine learning researchers when they say these words is helpful for anyone looking to understand how the community is addressing such social critiques, and foundational ethical theory or detailed social science on the actual impact of algorithms in society are not (yet?) well enough understood to be covered by more than speculation, particularly by computer scientists. At the same time, there is at least some appeal to the virtue of formalizing concepts in a way concrete enough to implement in code, albeit with heavy caveats about what may be lost by doing so. Much of the popular literature on algorithms in society consists of stories which offend our sensibilities for reasons which are not quite clear, such that it may be hard to articulate a principle which should be contingent on the algorithm designer or user as opposed, perhaps, to other social roles. (That's in the best case; the worst of it, consisting of the majority of the genre, is just outright science fiction.) Outlining choices which are currently available in the limited context at least provides a piece which can be fit into a broader social discussion.
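As a concrete instance of "classifiers with outcomes constrained across groups," one common formalization is statistical (demographic) parity: roughly equal positive-prediction rates across groups. A toy check of that gap, with entirely hypothetical classifier outputs, might look like:

```python
def positive_rate(predictions):
    """Fraction of positive (1) predictions in a group."""
    return sum(predictions) / len(predictions)

def parity_gap(preds_group_a, preds_group_b):
    """Statistical-parity gap: absolute difference in positive-prediction
    rates between two groups. A fairness-constrained learner would be
    required to keep this below some chosen tolerance."""
    return abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))

# Hypothetical classifier outputs (1 = approve) for two groups:
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # 5/8 approved
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # 2/8 approved
print(parity_gap(group_a, group_b))  # prints 0.375
```

This is only one of the competing definitions the book surveys; constraining this gap says nothing about, for example, equalizing false-negative rates across the same groups, which is part of why the definitions can conflict.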
In terms of coverage and depth, the chapters on privacy and generalization, literatures to which the authors have made substantial contributions, are the strongest. The others are clear and succinct, but the examples used are by now mostly commonplace and widely covered elsewhere. The chapter on generalization, centered around the "reproducibility crisis" in experimental science, offers a somewhat unique perspective, building on Gelman's "garden of forking paths" metaphor quite literally to discuss the issues of adaptive data analysis in terms of the combinatorial complexity of trees, with connections back to the privacy chapter as a way of managing this complexity. This is a particularly clear, if informal, treatment of a topic which is largely buried in technical literature and so not widely known. More formal analyses in recent years have made it even more precise. It turns out that classifiers which are learnable in a way which preserves privacy of the data subjects, in the sense described in the first chapter, are precisely those which do not permit an unrestricted path through the "garden of forking paths" leading to the unreliable results described in the generalization chapter (formally, they have "finite Littlestone dimension"), and vice versa. While eschewing technical detail, the examples in this book make the concepts involved more intuitive than other coverage I've seen of these issues.
Overall, the focus is a bit narrow for a general audience introduction to algorithms in society, but I would recommend this book as a supplementary reading in a machine learning class for those looking to build and use the emerging tools in machine learning that attempt to mitigate certain possible harms.
I was given a free copy of this book at a book-signing session by Aaron Roth during the TEC2019 conference of NABE (check out my blog post about that conference here). The book discusses the latest developments in the emerging field of “Fairness in Machine Learning” from an algorithmic perspective (for more information about this field see my resource – Artificial Intelligence for Economists). I finally got a chance to read it and I thoroughly enjoyed doing so. Why might anyone want to give this book a shot?
First, the topic is of great social importance, as algorithms are increasingly implemented in both policy and day-to-day decision making. I have conducted research on fairness in machine learning myself during my CS MS degree (see Research), and I can attest that there is growing interest from both policymakers and academic researchers in designing more "ethical" algorithms that can then be more safely utilized in decisions like school admissions, lending, employee evaluation, etc. (check out these societies and conferences). Second, the materials and examples that the authors draw from are fascinating and very relevant — they may even hit close to home! In this book you will read about dating and navigation apps, how your Netflix profile might not be so private, why scientific research may not be so scientific, how you may be contributing to algorithmic discrimination, and whether you should worry about superintelligence spelling the end of the human race, among many other things. Third, unlike other similar books (check out this other favorite book of mine, Weapons of Math Destruction), this book is written by two theoretical computer scientists, and as such it focuses on algorithmic solutions to the ethical issues in machine learning instead of engaging in fruitless discussions about what the morals of our society should be (although it touches on this too). By focusing on algorithms, the authors find a clever niche within this nascent field and provide great value to readers, whether advanced experts in the field or laypersons. As the authors clearly state, this book does not focus on the social, economic, or moral impacts of biased algorithms, nor is it about regulation like limits on data collection and shifting power from algorithms to humans. Rather, this book is about the ways (what the authors perceive to be the right approach) in which we can make algorithms behave more like how we want our society to operate.
Instead of restricting the use of potentially biased machine learning, the authors urge us to instead focus our efforts on doing a better job at explaining our societal goals to the models that increasingly govern important decisions like employment, college admissions, loan approvals, criminal sentencing and so much more.
So... yeah, you might need to be a little bit of a nerd to want to read this, but as a nerd myself, I encourage you to do so. Or at the very least, read my chapter-by-chapter summary of the book here:
The Ethical Algorithm sets out to discuss concerns and considerations in implementing algorithms and how designers can potentially make them better. This book feels like it's caught between two worlds, as it moves between basic statistical and computer science questions and issues with technologies. It purports to be about the latter but spends most of its page count on the former. The book also explicitly says it's not going to discuss political or legal recommendations, which to me vastly reduces the audience in question. Either you're a professional in the field, in which case you hopefully already know about Bonferroni corrections, or you're a curious member of the public who gets alerted to some interesting statistical concepts. Many of the concepts stay as curiosities, and I feel they would be improved with more real-world examples. Each concept gets around one, but I think to establish how widespread poor algorithm design is, more examples would be useful. The book also discusses AGI in a very high-level and cursory way that I feel was too shallow and should have been skipped.
The information is finely written and reasonably explained. The examples are not so contrived as to be unfollowable, but seemingly in the interest of not talking about legislative recourses, we're often talking about discrimination between squares and circles, without discussion of how statistical tests would be used in the real world beyond periodic notes of "hey, this could happen to people." The discussion of tradeoffs was light: the tradeoffs are just kind of raised, without any value-assessment guidelines or even pointers to where to look for them.
In a world where Weapons of Math Destruction exists, I'm not sure where this book should go, except to say that if you read that or something like it and want more background, this book will give you a start. I feel there's room for an expanded version, with one part that focuses on simple pitfalls and another section that focuses on the ethical questions.
4 stars - a good overview of the many layers and challenges that go into designing ethically sound algorithms. It honestly feels like a professor took an introductory seminar and tried to convert it to something the general public could understand.
It’s generally very accessible, although there are times where the authors slip into more technical language, and often the change is very abrupt. So while I could generally recommend it to people, there are parts that would be very confusing to them.
The one thing I really didn’t like about this book was that in the introduction the authors claim they are going to present some best practices and technical solutions, but if you actually read each chapter, the authors spend a lot of time discussing the various challenges and historical roadblocks and don’t actually discuss many forward-looking solutions. My peers during book club pointed out this could be because the authors don’t want the book to become outdated quickly, but I think it is rather unfortunate that the book sticks to merely demonstrating complexity without actually demonstrating any solutions, as promised in the introduction.
I’ve often been a poor student of mathematics, but this text does a great job of outlining both the trajectories and the general pitfalls of machine learning and algorithmic learning that we will need to legislate for, teach around, and plan for in order to have true artificial intelligence in our community. To be clear, this is neither a Cassandra warning nor a fawning utopian vision, but it outlines how we’ve gotten to where we are in a way that could help the layperson make informed local, school-district, state-government, or even federal legislative choices.
A quick read with dense bits, I highly recommend it.
I learned a lot about ethics and privacy in machine learning. I knew that something was wrong with algorithms and machine learning: algorithms put us in boxes that are hard to escape from. This book explains why this is happening, why your social network feeds keep showing you the same type of news, and why that contributes to polarizing our societies. As if we needed that! The chapter on privacy goes deep into how a simple piece of data about something you have done in the past can be used to identify you. Very scary! The book gives options for tuning your algorithms to remove bias and bring fairness to their outputs, something that data scientists can leverage in their daily work.
A relevant and timely discussion of how we can improve our decision-making algorithms with appropriate technology to pursue fairness in the overall results while minimising the loss of accuracy. It brings some real illustrative examples, like differential privacy, currently used by Google and Apple in some of their algorithms, which use randomness to add noise to their computations so that no one's personal data can be reverse-engineered from the results. Although most of the other cases mentioned are still in their infancy, the book is a great starting point for tech professionals pursuing this necessary utilitarian journey.
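The noise-adding idea described above can be illustrated with "randomized response," the classic technique that motivates differential privacy: each person's answer is randomly flipped often enough that an individual report reveals little, yet the aggregate statistic can still be recovered. A minimal sketch in Python (the function names, coin probabilities, and population numbers are my own illustration, not taken from the book):

```python
import random

def randomized_response(truth: bool) -> bool:
    """With probability 1/2 report the true answer; otherwise report a fair coin flip.
    Any single "yes" is deniable: it may just be the coin."""
    if random.random() < 0.5:
        return truth
    return random.random() < 0.5

def estimate_true_rate(responses) -> float:
    """Invert the noise: P(report yes) = 0.5 * p + 0.25, so p = 2 * (observed - 0.25)."""
    observed = sum(responses) / len(responses)
    return 2 * (observed - 0.25)

# Simulate 100,000 respondents, about 30% of whom would truly answer "yes".
random.seed(0)
reports = [randomized_response(random.random() < 0.30) for _ in range(100_000)]
print(estimate_true_rate(reports))  # close to 0.30, though no single report is trustworthy
```

The design point is the tradeoff the reviewers keep circling: the more noise each respondent injects, the stronger the individual privacy guarantee but the larger the sample needed for the same aggregate accuracy.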