Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again
Life can only be understood backwards; but it must be lived forwards. —SØREN KIERKEGAARD
These EHRs were designed for billing, not for ease of use by physicians and nurses. They have affected physician well-being and are responsible for burnout and attrition; moreover, they have forced an inattentiveness to the patient by virtue of an intruder in the room: the screen that detracts from the person before us.
What’s remarkable about this story is that a computer algorithm would have missed it. For all the hype about the use of AI to improve healthcare, had it been applied to this patient’s data and the complete corpus of medical literature, it would have concluded not to do the procedure because there’s no evidence that indicates the opening of a right coronary artery will alleviate symptoms of fatigue—and AI is capable of learning what to do only by examining existing evidence.
We’re well into the era of Big Data now: the world produces zettabytes (sextillion [10^21] bytes, or enough data to fill roughly a trillion smartphones) of data each year.
The number of new deep learning AI algorithms and publications has exploded (Figure 1.1), with exponential growth of machine recognition of patterns from enormous datasets. The 300,000-fold increase in petaflop/s-days of computing used in AI training (one petaflop/s-day is one thousand million million [10^15] floating-point operations per second sustained for a full day, roughly 8.6 × 10^19 operations) further reflects the change since 2012.
We’re early in the AI medicine era; it’s not routine medical practice, and some call it “Silicon Valley–dation.” Such dismissive attitudes are common in medicine, making change in the field glacial.
“The secret of the care of the patient is caring for the patient.”5 The greatest opportunity offered by AI is not reducing errors or workloads, or even curing cancer: it is the opportunity to restore the precious and time-honored connection and trust—the human touch—between patients and doctors.
We’ll assess how machine pattern recognition will affect the practice of radiologists, pathologists, and dermatologists—the doctors with patterns.
It will also extract insights from complex datasets, such as millions of whole genome sequences, the intricacies of the human brain, or the integrated streaming of real-time analytics from multiple biosensor outputs.
All this potential, however, could be spoiled by misuse of your data.
…patient care, as has been the case all too often in the past. The rise of machines has to be accompanied by heightened humaneness—with more time together, compassion, and tenderness—to make the “care” in healthcare real.
A review of three very large studies concluded that there are about 12 million significant misdiagnoses a year.
David Epstein of ProPublica wrote a masterful 2017 essay, “When Evidence Says No, But Doctors Say Yes.”
“The electronic medical record has turned physicians into data entry technicians.”9 Attending to the keyboard, instead of the patient, is cited as a principal reason for the medical profession’s high rates of depression and burnout.
The use of electronic healthcare records leads to other problems. The information that they contain is often remarkably incomplete and inaccurate.
This is where we are today: patients exist in a world of insufficient data, insufficient time, insufficient context, and insufficient presence. Or, as I say, a world of shallow medicine.
In the United States, mammography is recommended annually for women in their fifties. The total cost of the screening alone is more than $10 billion per year. Worse, if we consider 10,000 women in their fifties who have mammography each year for ten years, only five (0.05 percent) avoid a breast cancer death, while more than 6,000 (60 percent) will have at least one false positive result.
“You need to be so careful when there is one simple diagnosis that instantly pops into your mind that beautifully explains everything all at once. That’s when you need to stop and check your thinking.”
Redelmeier called this error an example of the representativeness heuristic, which is a shortcut in decision making based on past experiences (first described by Tversky and Kahneman). Patterns of thinking such as the representativeness heuristic are an example of the widespread problem of cognitive bias among physicians. Humans in general are beset by many biases.
By 2015 IBM claimed that Watson had ingested 15 million pages of medical content, more than two hundred medical textbooks, and three hundred medical journals.
I’ve talked a lot about human biases. But those same biases, as part of human culture, can become embedded into AI tools.
“Don’t filter the data too early.… Machine learning tends to work best if you give it enough data and the rawest data you can. Because if you have enough of it, then it should be able to filter out the noise by itself.”
For that, there’s an exceptional textbook called Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville.
…and be conscious of its existence.” Just a bit later, in 1959, Arthur Samuel used the term “machine learning” for the first time.
1936—Turing paper (Alan Turing)
1943—Artificial neural network (Warren McCulloch, Walter Pitts)
1955—Term “artificial intelligence” coined (John McCarthy)
1957—Predicted ten years for AI to beat human at chess (Herbert Simon)
1958—Perceptron (single-layer neural network) (Frank Rosenblatt)
1959—Machine learning described (Arthur Samuel)
1964—ELIZA, the first chatbot
1964—We know more than we can tell (Michael Polanyi’s paradox)
1969—Question AI viability (Marvin Minsky)
1986—Multilayer neural network (NN) (Geoffrey Hinton)
1989—Convolutional NN (Yann LeCun)
1991—Natural-language processing NN …
Deep learning got turbocharged in 2012 with the publication of research by Hinton and his University of Toronto colleagues that showed remarkable progress in image recognition at scale.
Much of AI’s momentum today—a change as dramatic as evolution’s Cambrian explosion 500 million years ago—is tied to the success of deep neural networks (DNNs).
The DNN era, in many ways, would not have come about without a perfect storm of four components. First are the enormous (a.k.a. “big”) datasets for training, such as ImageNet’s 15 million labeled images;
Second are the dedicated graphics processing units (GPUs) to run computationally intensive functions with massive parallel architecture.
Third are cloud computing and its ability to store massive data economically. And fourth are the open-source algorithmic development modules like Google’s TensorFlow, Microsoft’s Cognitive Toolkit (CNTK), UC Berkeley’s Caffe, Facebook’s PyTorch, and Baidu’s Paddle that make working with AI accessible.
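As a concrete illustration of what those open-source modules make accessible, here is a minimal sketch in PyTorch (one of the frameworks named above), touching the GPU and big-data ingredients as well. The tiny network, the random stand-in data, and all parameter choices are illustrative assumptions, not an example from the book.

```python
# Minimal sketch of modern deep learning tooling, using PyTorch.
# The tiny network and random stand-in data are illustrative only.
import torch
import torch.nn as nn

# Use a GPU if available (the "second component" above).
device = "cuda" if torch.cuda.is_available() else "cpu"

# A small image classifier: 3-channel 32x32 inputs, 10 classes.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
).to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on random stand-in data.
images = torch.randn(8, 3, 32, 32, device=device)
labels = torch.randint(0, 10, (8,), device=device)
loss = loss_fn(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.3f}")
```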
…apart. It’s one thing to have a machine beat humans in a game; it’s another to put one’s health on the line with machine medicine.
Image segmentation refers to breaking down a digital image into multiple segments, or sets of pixels, a task that has traditionally relied on hand-crafted algorithms and human expert oversight. Deep learning is now having a significant impact on automating this process, improving both its accuracy and clinical workflow.
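To make that shift concrete, here is a minimal sketch contrasting a traditional segmentation rule (a fixed intensity threshold) with a small fully convolutional network that predicts a label for every pixel. The toy shapes and the untrained network are assumptions for illustration only.

```python
# Minimal sketch of two approaches to segmentation: a classic
# threshold rule versus a small fully convolutional network that
# predicts a per-pixel label. Shapes are illustrative only.
import torch
import torch.nn as nn

scan = torch.rand(1, 1, 64, 64)  # stand-in for a grayscale scan

# Traditional algorithm: a fixed intensity threshold per pixel.
threshold_mask = (scan > 0.5).float()

# Deep learning: a (toy, untrained) fully convolutional net maps the
# image to a per-pixel probability of belonging to the structure.
fcn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 1, kernel_size=1),
    nn.Sigmoid(),
)
learned_mask = fcn(scan)  # shape (1, 1, 64, 64), values in [0, 1]
print(threshold_mask.shape, learned_mask.shape)
```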
But, for medicine as a whole, we will never tolerate lack of oversight by human doctors and clinicians across all conditions, all the time.
Most AI work to date has been with structured data (such as images, speech, and games) that are highly organized, in a defined format, readily searchable, simple to handle, store, and query, and fully analyzable. Unfortunately, much data is not labeled or annotated, “clean,” or structured.
AI, to date, has used supervised learning, which requires establishing “ground truths” for training. Inaccurate labeling or ground truths can render the network’s output nonsensical.
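A minimal sketch of why ground truth matters: the same classifier is trained once on correct labels and once on partially flipped labels. The synthetic dataset and the 30 percent flip rate are illustrative assumptions.

```python
# Train the same model on clean versus partially flipped labels to
# show how inaccurate "truths" degrade the output. Synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Corrupt 30% of the training labels to simulate bad ground truth.
rng = np.random.default_rng(0)
noisy = y_tr.copy()
flip = rng.random(len(noisy)) < 0.30
noisy[flip] = 1 - noisy[flip]

clean_acc = LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)
noisy_acc = LogisticRegression().fit(X_tr, noisy).score(X_te, y_te)
print(f"clean labels: {clean_acc:.2f}, 30% flipped: {noisy_acc:.2f}")
```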
There’s the dimension of time to consider: data can drift, with model performance dropping as the data change over time.
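One hedged illustration of how such drift can be monitored (not a method from the book): compare the distribution of an input feature at training time against the same feature in newly arriving data, here with a two-sample Kolmogorov–Smirnov test. The simulated shift and the significance threshold are assumptions.

```python
# Compare a feature's training-time distribution against newly
# arriving data to flag drift. The shifted data are simulated.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)
incoming_feature = rng.normal(loc=0.4, scale=1.0, size=5000)  # drifted

stat, p_value = ks_2samp(training_feature, incoming_feature)
if p_value < 0.01:
    print(f"drift detected (KS statistic {stat:.3f}); consider retraining")
```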
Overall, there has to be enough data to overcome signal-to-noise issues, to make accurate predictions, and to avoid overfitting, which is essentially when a neural network memorizes a limited dataset rather than learning patterns that generalize beyond it.
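Overfitting in miniature, as a minimal sketch: a high-degree polynomial fitted to a small noisy sample drives training error down while error on held-out data climbs. The synthetic data and the polynomial degrees are illustrative assumptions.

```python
# A high-degree polynomial "mirrors" a small noisy dataset:
# training error falls while held-out error rises. Synthetic data.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)
x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)

for degree in (3, 15):
    coeffs = np.polyfit(x, y, degree)
    train_err = np.mean((np.polyval(coeffs, x) - y) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train {train_err:.3f}, test {test_err:.3f}")
```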
“There is no evidence that the brain implements anything like the learning mechanisms in use in modern deep learning models.”
“Computers today can perform specific tasks very well, but when it comes to general tasks, AI cannot compete with a human child.”
The same phenomenon comes up in medical AI. One example is the capacity of deep learning to match the diagnostic accuracy of a team of twenty-one board-certified dermatologists in classifying skin lesions as cancerous or benign. The Stanford computer science creators of that algorithm still don’t know exactly what features account for its success.
In 2018, the European Union General Data Protection Regulation went into effect, requiring companies to give users an explanation for decisions that automated systems make.19
There’s even an initiative called explainable artificial intelligence that seeks to understand why an algorithm reaches the conclusions that it does.
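One common technique from that line of work, sketched minimally here: a saliency map, the gradient of a model’s output with respect to its input, which highlights the pixels that most influenced a prediction. The untrained toy model, the random image, and the two-class labels are assumptions for illustration only.

```python
# A saliency map: the gradient of the model's output with respect to
# the input, showing which pixels most influenced a prediction.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 64),
    nn.ReLU(),
    nn.Linear(64, 2),  # e.g., benign vs. malignant (hypothetical)
)

image = torch.rand(1, 1, 28, 28, requires_grad=True)
score = model(image)[0, 1]  # score for the second class
score.backward()

saliency = image.grad.abs().squeeze()  # 28x28 map of pixel influence
print("most influential pixel:", divmod(int(saliency.argmax()), 28))
```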
Even though we are used to accepting trade-offs in medicine for net benefit, weighing therapeutic efficacy against risks, a machine black box is not a trade-off that most will accept yet, even as AI becomes an integral part of medicine.
Our tolerance for machines with black boxes will undoubtedly be put to the test.
Bias is embedded in our algorithmic world; it pervasively affects perceptions of gender, race, ethnicity, socioeconomic class, and sexual orientation.
Worse is the problem that image recognition trained on this basis amplifies the bias. A method for reducing such bias in training has been introduced, but it requires the code writer to look for the bias and to specify what needs to be corrected.
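A hedged sketch of the kind of correction described, not the specific published method: the code writer must name the attribute suspected of bias explicitly, then reweight the training examples so each group contributes equally. The synthetic data and the inverse-frequency weights are illustrative assumptions.

```python
# Reweight training examples so an explicitly named, imbalanced
# attribute's groups contribute equally. Synthetic data throughout.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
group = rng.choice([0, 1], size=1000, p=[0.9, 0.1])  # imbalanced attribute
y = (X[:, 0] + 0.5 * group + rng.normal(size=1000) > 0).astype(int)

# Weight each example inversely to its group's frequency.
counts = np.bincount(group)
weights = 1.0 / counts[group]

model = LogisticRegression().fit(X, y, sample_weight=weights)
print("coefficients:", np.round(model.coef_, 2))
```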
The AI Now Institute has addressed bias, recommending that “rigorous pre-release trials” are necessary for AI systems “to ensure that they will not amplify biases and errors due to any issues with the training data, algorithms, or other elements of system design.”
Facial reading DNN algorithms like Google’s FaceNet, Apple’s Face ID, and Facebook’s DeepFace can readily recognize one face from a million.
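A minimal sketch of the embedding-matching idea behind such systems: each face is mapped to a vector, and two faces match when their vectors are close. The random vectors standing in for real embeddings, the 128-dimensional size, and the match threshold are all illustrative assumptions.

```python
# Compare face embeddings by cosine similarity. Random vectors stand
# in for the embeddings a real face-recognition model would produce.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
known_face = rng.normal(size=128)                      # enrolled embedding
same_person = known_face + rng.normal(scale=0.1, size=128)
stranger = rng.normal(size=128)

MATCH_THRESHOLD = 0.8  # illustrative cutoff
for name, emb in [("same person", same_person), ("stranger", stranger)]:
    sim = cosine_similarity(known_face, emb)
    print(f"{name}: similarity {sim:.2f}, match={sim > MATCH_THRESHOLD}")
```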