Kindle Notes & Highlights
Read between October 11 – December 27, 2022
Since the dawn of the digital age, decision-making in finance, employment, politics, health, and human services has undergone revolutionary change. Forty years ago, nearly all of the major decisions that shape our lives—whether or not we are offered employment, a mortgage, insurance, credit, or a government service—were made by human beings. They often used actuarial processes that made them think more like computers than people, but human discretion still ruled the day. Today, we have ceded much of that decision-making power to sophisticated machines. Automated eligibility systems, ranking ...
Digital security guards collect information about us, make inferences about our behavior, and control access to resources. Some are obvious and visible: closed-circuit cameras bristle on our street corners, our cell phones’ global positioning devices record our movements, police drones fly over political protests. But many of the devices that collect our information and monitor our actions are inscrutable, invisible pieces of code.
Technologies of poverty management are not neutral. They are shaped by our nation’s fear of economic insecurity and hatred of the poor; they in turn shape the politics and experience of poverty.
Like earlier technological innovations in poverty management, digital tracking and automated decision-making hide poverty from the professional middle-class public and give the nation the ethical distance it needs to make inhuman choices: who gets food and who starves, who has housing and who remains homeless, and which families are broken up by the state. The digital poorhouse is part of a long American tradition.
The advocates of automated and algorithmic approaches to public services often describe the new generation of digital tools as “disruptive.” They tell us that big data shakes up hidebound bureaucracies, stimulates innovative solutions, and increases transparency. But when we focus on programs specifically targeted at poor and working-class people, the new regime of data analytics is more evolution than revolution. It is simply an expansion and continuation of moralistic and punitive poverty management strategies that have been with us since the 1820s.
I’m shocked that Krzysztof received a score nearly three times as high as Stephen’s. Krzysztof is in his teens, while Stephen is only 6. The hotline report shows no harm beyond the crowded conditions and poor housing stock common to poor families. Why was Krzysztof rated so highly? Pat tries to explain: his family’s record with public services stretches back to when his mother was a child.
We all tend to defer to machines, which can seem more neutral, more objective. But it is troubling that managers believe that if the intake screener and the computer’s assessments conflict, the human should learn from the model. The AFST, like all risk models, offers only probabilities, not perfect prediction. Though it might be able to identify patterns and trends, it is routinely wrong about individual cases.
To sum up: the AFST has inherent design flaws that limit its accuracy. It predicts referrals to the child abuse and neglect hotline and removal of children from their families—hypothetical proxies for child harm—not actual child maltreatment. The data set it utilizes contains only information about families who access public services, so it may be missing key factors that influence abuse and neglect. Finally, its accuracy is only average. It is guaranteed to produce thousands of false negatives and positives annually.
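To see why "only average" accuracy guarantees errors at this scale, here is a minimal sketch. It is not the AFST itself; the referral volume, base rate, sensitivity, and specificity below are hypothetical figures chosen purely for illustration.

```python
# Hypothetical illustration: a risk model with "only average" accuracy still
# produces thousands of misclassifications when run at county scale.
# Every figure below is an assumption chosen for illustration, not AFST data.

referrals_per_year = 15_000   # assumed hotline referrals screened annually
base_rate = 0.10              # assumed share of referrals involving actual maltreatment
sensitivity = 0.70            # assumed true-positive rate of the model
specificity = 0.70            # assumed true-negative rate of the model

actually_harmful = referrals_per_year * base_rate
actually_safe = referrals_per_year - actually_harmful

# Children in danger whom the model scores as low risk (missed cases).
false_negatives = actually_harmful * (1 - sensitivity)

# Families who are not maltreating children but are flagged as high risk.
false_positives = actually_safe * (1 - specificity)

print(f"False negatives per year: {false_negatives:,.0f}")   # 450
print(f"False positives per year: {false_positives:,.0f}")   # 4,050
```

Under these assumed figures the model mislabels roughly 4,500 families a year, and the false positives vastly outnumber the missed cases, because most screened families are not maltreating their children.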
The parallels between the county poorhouse and the digital poorhouse are striking. Both divert the poor from public benefits, contain their mobility, enforce work, split up families, lead to a loss of political rights, use the poor as experimental subjects, criminalize survival, construct suspect moral classifications, create ethical distance for the middle class, and reproduce racist and classist hierarchies of human value and worth.
While they are close kin, the differences between the poorhouse of yesterday and the digital poorhouse today are significant. Containment in the physical institution of a county poorhouse had the unintentional result of creating class solidarity across race, gender, and national origin. When we sit at a common table, we might see similarities in our experiences, even if we are forced to eat gruel. Surveillance and digital social sorting drive us apart as smaller and smaller microgroups are targeted for different kinds of aggression and control. When we inhabit an invisible poorhouse, we become ...
Being followed for life by a mental health diagnosis, an accusation of child neglect, or a criminal record diminishes life chances, limits autonomy, and damages self-determination. Additionally, retaining public service data ad infinitum intensifies the risk of inappropriate disclosure and data breaches.
Think of the digital poorhouse as an invisible spider web woven of fiber optic strands. Each strand functions as a microphone, a camera, a fingerprint scanner, a GPS tracker, an alarm trip wire, and a crystal ball. Some of the strands are sticky. They are interconnected, creating a network that moves petabytes of data. Our movements vibrate the web, disclosing our location and direction.
Once we fall into the stickier levels of the digital poorhouse, its web of threads will make it difficult for us to recover from the bad luck or poor choices that put us there. Or the system may come to us: the strands at the top of the web are merely more widely spaced, and they are switched off only for now.
Because the digital poorhouse is networked, whole areas of professional middle-class life might suddenly be “switched on” for scrutiny. Because the digital poorhouse persists, a behavior that is perfectly legal today but becomes criminal in the future can be used to persecute retroactively.
Consider Oscar Gandy’s concept of “rational discrimination.” Rational discrimination does not require class or racial hatred, or even unconscious bias, to operate. It only requires ignoring bias that already exists. When automated decision-making tools are not built to explicitly dismantle structural inequities, their speed and scale intensify them.
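To make that mechanism concrete, here is a toy sketch; the features, weights, and family records are entirely hypothetical, and this is not any deployed system. The scorer contains no explicit bias, yet it penalizes poverty, because its inputs exist only for families who rely on public services.

```python
# Toy sketch of "rational discrimination": the scorer never looks at race or
# class, yet it penalizes poverty, because its inputs are administrative
# records that only families who rely on public services ever generate.
# The features, weights, and family records below are entirely hypothetical.

def risk_score(record: dict) -> int:
    """Naive additive score over counts of prior contact with public systems."""
    return (2 * record["welfare_spells"]
            + 3 * record["prior_hotline_referrals"]
            + 1 * record["public_housing_years"])

# Two families with identical parenting. One relies on public services, so the
# database is full of records about it; the other buys childcare, therapy, and
# housing privately, so it is invisible to the system.
family_using_public_services = {
    "welfare_spells": 3,
    "prior_hotline_referrals": 2,
    "public_housing_years": 5,
}
family_using_private_services = {
    "welfare_spells": 0,
    "prior_hotline_referrals": 0,
    "public_housing_years": 0,
}

print("Score, family using public services: ", risk_score(family_using_public_services))   # 17
print("Score, family using private services:", risk_score(family_using_private_services))  # 0
```

The same asymmetry appears in the earlier passage on the AFST: when the data set covers only families who access public services, reliance on those services quietly doubles as the model's measure of risk.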
The digital poorhouse replaces the sometimes-biased decision-making of frontline social workers with the rational discrimination of high-tech tools. Administrators and data scientists focus public attention on the bias that enters decision-making systems through caseworkers, property managers, service providers, and intake center workers. They obliquely accuse their subordinates, often working-class people, of being the primary source of racist and classist outcomes in their organizations. Then, managers and technocrats hire economists and engineers to build more “objective” systems to root ...
Classifying and targeting marginalized groups for “special attention” might offer helpful personalization. But it also leads to persecution.
Despite our unparalleled communications capabilities, we are in the midst of a violent retrenchment of equity and pluralism. Rather than achieving a basic standard of “jobs and income now” for all, we face economic inequity of history-shattering proportions.
The most important step in dismantling the digital poorhouse is changing how we think, talk, and feel about poverty. As counterintuitive as it may sound, the best cure for the misuse of big data is telling better stories.
But poverty is not an island; it is a borderland. There’s quite a lot of movement in the economic fringes, especially across the fuzzy boundary between the poor and the working class. Those who live in the economic borderlands are pitted against one another by policies that squeeze every possible dime from the wallets of the working class at the same time that they cut social programs for the poor and absolve the professional middle class and wealthy of their social obligations.

