The Tyranny of the Algorithm

message 1: by Alan, Founding Moderator and Author (Mar 22, 2019 12:20PM)

The topic would cover the side effects (mostly negative) of the widespread use of algorithmic decision-making on democracy and public opinion.
Does transferring responsibility for our choices to machines really represent progress, as tech pundits claim, given that algorithms are supposed to be more "objective and unbiased" than humans?
Modern wrote: "The topic would cover the side effects (mostly negative) of the widespread use of algorithmic decision-making on democracy and public opinion.
Does transferring responsibility for our..."
I'm not sure whether this is relevant to what you have in mind for this topic, but I became aware of the following online article by the Future of Life Institute a couple of days ago: "Benefits & Risks of Artificial Intelligence".

"Automate This: How Algorithms Came to Rule Our World" by Christopher Steiner
"Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy" by Cathy O'Neil

The biggest danger the book points out, and one that others have noted as well, is twofold:
(1) The lack of transparency: many algorithms are built by, and lie in the hands of, private corporations. As a result they aren't auditable by external researchers who could verify that they are not biased.
(2) Biased data leads to biased outcomes. Most algorithms are trained on data gathered from society, and, as I'm sure many posters here are aware, society is anything but unbiased. When algorithms are trained on data we generate, they can produce outcomes that further perpetuate certain adverse behaviours, because a computer doesn't know better. This heightens the need for what I outlined in point (1): we need openness so there are feedback systems that let people verify we aren't perpetuating bias in the systems we are creating (a rough sketch of such a check follows below).
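To make that feedback-system point concrete, here is a minimal sketch, in Python, of the kind of outcome audit an outside reviewer might run. Everything in it is hypothetical: the data are toy numbers, and the 0.8 threshold is just the informal "four-fifths rule" sometimes used in fairness discussions, not a claim about any particular system.

from collections import defaultdict

def selection_rates(decisions, groups):
    # Rate of favorable decisions (1 = approved) for each group.
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    # Lowest selection rate divided by the highest; below roughly 0.8 is a red flag.
    return min(rates.values()) / max(rates.values())

# Toy example: eight decisions, each with a sensitive attribute attached.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

rates = selection_rates(decisions, groups)
print(rates)                          # {'a': 0.75, 'b': 0.25}
print(disparate_impact_ratio(rates))  # ~0.33, well under the 0.8 guideline

The point is not this particular metric; it is that no such check is possible unless the decisions and the data are open to inspection in the first place.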
I'm not sure how technical we should get on this forum, but Bolukbasi et al. had a great paper examining gender bias in word embeddings (an algorithm for representing relationships between words) and proposed a way to correct it (a toy version of their "neutralize" step is sketched below). This is exactly the type of diligence we need, but unfortunately we don't get it in most cases, since most things aren't done out in the open.
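For anyone curious what "correcting" an embedding can look like in practice, here is a toy sketch of the neutralize idea from that line of work: remove the component of a word's vector that lies along an estimated gender direction. The vectors and the word "programmer" below are made up for illustration; the real method estimates the direction from many definitional word pairs and also includes an equalize step.

import numpy as np

def neutralize(word_vec, bias_direction):
    # Subtract the projection of the word vector onto the (unit-length) bias direction.
    b = bias_direction / np.linalg.norm(bias_direction)
    return word_vec - np.dot(word_vec, b) * b

# Toy vectors; in practice these come from trained embeddings such as word2vec or GloVe.
he = np.array([1.0, 0.20, 0.0])
she = np.array([-1.0, 0.25, 0.0])
gender_direction = he - she

programmer = np.array([0.4, 0.9, 0.3])           # hypothetical "biased" vector
debiased = neutralize(programmer, gender_direction)
print(np.dot(debiased, gender_direction))         # ~0.0: no gender component remains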
But getting back to Modern's initial question: algorithms are only as objective and unbiased as we make them, given the right data and proper guidance from civil society to ensure fairness. I genuinely believe that with the right political framework they can be hugely beneficial. Whether that framework comes to be, and whether companies with huge stakes in this will let it come to be, is very unclear to me.

I think it was published in 2017.

Discussing the menace of data companies harvesting voter data and creating "data profiles" for millions of Americans, all for the sake of swaying elections.
https://tinyurl.com/y6mkghck
https://tinyurl.com/yajgsopt
https://tinyurl.com/y8j9j6z8
https://tinyurl.com/yaoo5a9t
https://tinyurl.com/yca6caod
https://tinyurl.com/yb6jojx8
https://tinyurl.com/yc7lkysu
https://tinyurl.com/yxb8v7uq
https://tinyurl.com/y7rjb5zc
https://tinyurl.com/y3b9xjep
https://tinyurl.com/yygmagks
https://tinyurl.com/y3fhymh6
https://tinyurl.com/y5yyba6f
https://tinyurl.com/yccxeewf
https://tinyurl.com/y89zxnnz
The following quote from a book on consciousness and free will by a neuroscientist may or may not be on point:
It's hard for biologists to argue with physicists. Often physicists listen with detached bemusement because biologists can't explain life with mathematics. Physics could not exist without math. Sometimes I think physicists get too enamored with math. I get the impression that they think that describing and predicting phenomena with equations is the same as explaining why and how such phenomena occur. Take the most famous equation of all, E = mc². Just what does that equal sign mean? It implies that the variables on each side are the same. But is mass really identical to energy? True, mass can be converted to energy, as atom bombs prove, and energy can even be turned into mass. Still, they are not the same things. Not only are the units of measurement different, but the equation is only descriptive and predictive. It does not explain how mass converts to energy or vice versa.
W. R. Klemm, Mental Biology: The New Science of How the Brain and Mind Relate (Amherst, NY: Prometheus, 2014), 242-43.

Feliks wrote: "Alan --on your profile page--is the 'currently reading' list kept accurate with what books you are presently making your way through? I believe you also maintain separate bibliographical lists as w..."
These are books that I am trying to get through at present. Often I start one book, then go to another, and then to another, and so on. Eventually I'll finish one book and go back to another on that list. And then I'll add some others. It's crazy, I know, but eventually I'll get through all of these books, though some of them only in part. All, or almost all, of them relate to my present book project on ethics, including but not limited to free will.

Part 5 of Prediction Machines: The Simple Economics of Artificial Intelligence by Agrawal et al. explores the impacts of algorithms and artificial intelligence on society at large.


Fresh article here [from govtech.com] on use of algorithms in New York public agencies.
https://www.govtech.com/biz/data/nycs...
In sum: they are ever-increasing. The highest usage is found in health & human services.
In my own branch they are kept under rigid control and used in highly limited fashion. They are not used in Operations except under the strictest scrutiny. They are more of a tool "for the back end," for when rote mathematical calculations that would take inordinate amounts of time make manpower alone untenable.
Feliks wrote: "Fresh article here [from govtech.com] on use of algorithms in New York public agencies.
https://www.govtech.com/biz/data/nycs...
In sum: they are ever-..."
For a person who presents himself as a Luddite (HaHa), Feliks, you certainly are up to date on this stuff.
Algorithms are a mystery to me. Fortunately, I will pass before they take over the world. Someone should write a science fiction movie about algorithms controlling people. But perhaps they already have. All my interactions with online providers these days are being conducted, on the other end, by algorithms. They're not even allowing me to communicate with a human being on the other side of the world. If the algorithm (which I am, perhaps incorrectly, conflating with AI) can't answer the question (which has happened to me), I'm left all on my own. Warning: don't change from classic Outlook to new Outlook. AI disabled my classic Outlook and wouldn't let me go back to it. And the new Outlook can't do all the things that the classic Outlook (which I had used since the 1990s) can.
I'm going to be incommunicado for the next couple of hours or more, starting right now.

Ahaha. My Luddism is firmly entrenched and --if anything --increasing lately. I still don't own a smartphone in the face of my bosses' insistence.
But I'm not completely alone. Longtime technology vets are usually the last to adopt fads. It's precisely because we see how 'wobbly' all these latest trends are.
Look no further than the recent 'global meltdown' of Microsoft Windows. Can you believe what happened this past weekend? Truly, the dizzy limit.
Anyway --with regard to technology and government --I've seen some frightening things during my tenure so far in Gotham.
In my day-to-day workflow, it is not algorithms that concern me as much as good old-fashioned human recklessness. 'Groupthink' prevails.
The human factor is still the scariest monster in the room. There is absolutely no 'critical thinking' in information technology.
And besides that, there's also a wide array of just plain "bad actors": miscreants who would better suit the reign of the Emperor Justinian than today's New York City.
Feliks wrote: "Ahaha. My Luddism is firmly entrenched and --if anything --increasing lately. I still don't own a smartphone in the face of my bosses' insistence.
But, I'm not completely alone. Longtime technolog..."
Thank you for your "inside" info. It sounds scary, though perhaps similar to what I observed during my decades in law practice.

Update: all 'AI' tools are being completely banned from New York City government.
For now, only AI tools which come embedded by the manufacturers of browsers themselves (e.g., 'Copilot' in Microsoft) are still permitted.
Otherwise, they must not touch any citizen data. Similar to HIPAA policy.
The reason, as usual: security issues. Any new-fangled computing like this is always riddled with leaks and exploits.
So much for AI saving civilization. This suite of tools can be considered dead on arrival. Failure to make the grade at the municipal level is a death-knell for this kind of algorithm.
Feliks wrote: "Update: all 'AI' tools are being completely banned from New York City government.
For now, only AI tools which come embedded by the manufacturers of browsers themselves (e.g. 'Copilot' in Microso..."
Seems logical.
Books mentioned in this topic
Consent of the Networked: The Worldwide Struggle for Internet Freedom
Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy
Automate This: How Algorithms Came to Rule Our World