Kindle Notes & Highlights
Read between June 8 and June 19, 2020
Life can only be understood backwards; but it must be lived forwards. —SØREN KIERKEGAARD
These EHRs were designed for billing, not for ease of use by physicians and nurses. They have affected physician well-being and are responsible for burnout and attrition; moreover, they have forced an inattentiveness to the patient by virtue of an intruder in the room: the screen that detracts from the person before us.
What’s remarkable about this story is that a computer algorithm would have missed it. For all the hype about the use of AI to improve healthcare, had it been applied to this patient’s data and the complete corpus of medical literature, it would have concluded not to do the procedure because there’s no evidence that indicates the opening of a right coronary artery will alleviate symptoms of fatigue—and AI is capable of learning what to do only by examining existing evidence.
We’re well into the era of Big Data now: the world produces zettabytes of data each year (a zettabyte is a sextillion bytes, enough to fill roughly a trillion smartphones).
The number of new deep learning AI algorithms and publications has exploded (Figure 1.1), with exponential growth of machine recognition of patterns from enormous datasets. The 300,000-fold increase in petaflops (computing speed equal to one thousand million million, or 10^15, floating-point operations per second) per day of computing used in AI training further reflects the change since 2012.
We’re early in the AI medicine era; it’s not routine medical practice, and some call it “Silicon Valley–dation.” Such dismissive attitudes are common in medicine, making change in the field glacial.
“The secret of the care of the patient is caring for the patient.”5 The greatest opportunity offered by AI is not reducing errors or workloads, or even curing cancer: it is the opportunity to restore the precious and time-honored connection and trust—the human touch—between patients and doctors.
We’ll assess how machine pattern recognition will affect the practice of radiologists, pathologists, and dermatologists—the doctors with patterns.
It will also extract insights from complex datasets, such as millions of whole genome sequences, the intricacies of the human brain, or the integrated streaming of real-time analytics from multiple biosensor outputs.
All this potential, however, could be spoiled by misuse of your data.
…patient care, as has been the case all too often in the past. The rise of machines has to be accompanied by heightened humaneness—with more time together, compassion, and tenderness—to make the “care” in healthcare real.
A review of three very large studies concluded that there are about 12 million significant misdiagnoses a year.
David Epstein of ProPublica wrote a masterful 2017 essay, “When Evidence Says No, But Doctors Say Yes.”
“The electronic medical record has turned physicians into data entry technicians.”9 Attending to the keyboard, instead of the patient, is cited as a principal reason for the medical profession’s high rates of depression and burnout.
The use of electronic healthcare records leads to other problems. The information that they contain is often remarkably incomplete and inaccurate.
This is where we are today: patients exist in a world of insufficient data, insufficient time, insufficient context, and insufficient presence. Or, as I say, a world of shallow medicine.
In the United States, mammography is recommended annually for women in their fifties. The total cost of the screening alone is more than $10 billion per year. Worse, if we consider 10,000 women in their fifties who have mammography each year for ten years, only five (0.05 percent) avoid a breast cancer death, while more than 6,000 (60 percent) will have at least one false positive result.
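The arithmetic behind those figures is easy to check. A minimal sketch, assuming only the cohort size and counts quoted above:

```python
# Back-of-the-envelope check of the screening figures quoted above.
cohort = 10_000                # women in their fifties screened annually for ten years
deaths_averted = 5             # breast cancer deaths avoided over that decade
false_positive_women = 6_000   # women with at least one false-positive result

print(f"deaths averted: {deaths_averted / cohort:.2%}")                      # 0.05%
print(f"at least one false positive: {false_positive_women / cohort:.0%}")   # 60%
```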
“You need to be so careful when there is one simple diagnosis that instantly pops into your mind that beautifully explains everything all at once. That’s when you need to stop and check your thinking.”
Redelmeier called this error an example of the representativeness heuristic, a shortcut in decision making based on past experiences (first described by Tversky and Kahneman). Patterns of thinking such as the representativeness heuristic exemplify the widespread problem of cognitive bias among physicians. Humans in general are beset by many biases.
By 2015 IBM claimed that Watson had ingested 15 million pages of medical content, more than two hundred medical textbooks, and three hundred medical journals.
I’ve talked a lot about human biases. But those same biases, as part of human culture, can become embedded into AI tools.
“Don’t filter the data too early.… Machine learning tends to work best if you give it enough data and the rawest data you can. Because if you have enough of it, then it should be able to filter out the noise by itself.”
For that, there’s an exceptional textbook called Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville.
“…and be conscious of its existence.” Just a bit later, in 1959, Arthur Samuel used the term “machine learning” for the first time.
1936—Turing paper (Alan Turing)
1943—Artificial neural network (Warren McCulloch, Walter Pitts)
1955—Term “artificial intelligence” coined (John McCarthy)
1957—Prediction that AI would beat a human at chess within ten years (Herbert Simon)
1958—Perceptron, a single-layer neural network (Frank Rosenblatt)
1959—Machine learning described (Arthur Samuel)
1964—ELIZA, the first chatbot
1964—“We know more than we can tell” (Michael Polanyi’s paradox)
1969—AI viability questioned (Marvin Minsky)
1986—Multilayer neural network (NN) (Geoffrey Hinton)
1989—Convolutional NN (Yann LeCun)
1991—Natural-language processing NN
Deep learning got turbocharged in 2012 with the publication of research by Hinton and his University of Toronto colleagues that showed remarkable progress in image recognition at scale.
Much of AI’s momentum today—a change as dramatic as evolution’s Cambrian explosion 500 million years ago—is tied to the success of deep neural networks (DNNs).
The DNN era, in many ways, would not have come about without a perfect storm of four components. First are the enormous (a.k.a. “big”) datasets for training, such as ImageNet’s 15 million labeled images;
Second are the dedicated graphics processing units (GPUs) that run computationally intensive functions on a massively parallel architecture,
Third are cloud computing and its ability to store massive data economically. And fourth are the open-source algorithmic development modules like Google’s TensorFlow, Microsoft’s Cognitive Toolkit, UC Berkeley’s Caffe, Facebook’s PyTorch, and Baidu’s Paddle that make working with AI accessible.
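To give a sense of how accessible these frameworks make deep learning, here is a minimal sketch of a tiny image classifier in PyTorch, one of the open-source modules named above. The layer sizes, the 28x28 grayscale input, and the ten output classes are illustrative assumptions, not anything specific from the book.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """A deliberately small convolutional network for 28x28 grayscale images."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyCNN()
fake_batch = torch.randn(8, 1, 28, 28)   # eight fake grayscale images
logits = model(fake_batch)               # shape: (8, 10)
```

On a GPU, these same few lines run orders of magnitude faster, which ties back to the second ingredient above.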
It’s one thing to have a machine beat humans in a game; it’s another to put one’s health on the line with machine medicine.
Image segmentation refers to breaking a digital image down into multiple segments, or sets of pixels; traditionally it has relied on rule-based algorithms and human expert oversight. Deep learning is now automating much of this process, improving both its accuracy and the clinical workflow.
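As a rough illustration (not a clinical tool), the sketch below contrasts a traditional threshold rule with a tiny fully convolutional network that predicts a per-pixel mask; the image size, threshold, and layer sizes are all illustrative assumptions.

```python
import torch
import torch.nn as nn

image = torch.rand(1, 1, 64, 64)   # one fake grayscale image

# Traditional rule-based segmentation: label every pixel above a fixed intensity.
rule_based_mask = (image > 0.5).float()

# Learned segmentation: a small conv net outputs a probability per pixel.
# (Untrained here; a real model would be trained on expert-labeled masks.)
seg_net = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, kernel_size=3, padding=1), nn.Sigmoid(),
)
learned_mask = (seg_net(image) > 0.5).float()   # same 64x64 shape as the input
```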
But, for medicine as a whole, we will never tolerate lack of oversight by human doctors and clinicians across all conditions, all the time.
Most AI work to date has been with structured data (such as images, speech, and games) that are highly organized, in a defined format, readily searchable, simple to deal with, store, and query, and fully analyzable. Unfortunately, much data is not labeled or annotated, “clean,” or structured.
AI, to date, has relied on supervised learning, which requires establishing “ground truths” for training. Inaccurate labels or ground truths can render the network’s output nonsensical.
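A minimal sketch of why accurate labels matter, using a synthetic dataset and a flexible classifier (1-nearest-neighbor, chosen only for illustration); the 30 percent noise rate is an assumption:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Corrupt the "ground truth": flip 30% of the training labels.
rng = np.random.default_rng(0)
noisy = y_tr.copy()
flip = rng.random(len(noisy)) < 0.30
noisy[flip] = 1 - noisy[flip]

clean_acc = KNeighborsClassifier(1).fit(X_tr, y_tr).score(X_te, y_te)
noisy_acc = KNeighborsClassifier(1).fit(X_tr, noisy).score(X_te, y_te)
print(f"clean labels: {clean_acc:.2f}  noisy labels: {noisy_acc:.2f}")  # noisy is typically much lower
```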
There’s also the dimension of time to consider: data can drift, and a model’s performance can degrade as the data change over time.
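One simple way to watch for drift is to compare the distribution of an input feature today against its distribution at training time; the sketch below uses a two-sample Kolmogorov–Smirnov test on synthetic values, with the feature, window, and 0.05 threshold as assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_values = rng.normal(loc=0.0, scale=1.0, size=5000)  # what the model saw in training
recent_values = rng.normal(loc=0.6, scale=1.2, size=500)     # what is arriving now

stat, p_value = ks_2samp(training_values, recent_values)
if p_value < 0.05:
    print(f"possible drift (KS statistic {stat:.2f}): re-validate or retrain the model")
```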
Overall, there has to be enough data to overcome signal-to-noise problems, to make accurate predictions, and to avoid overfitting, which occurs when a neural network essentially memorizes a limited dataset rather than learning patterns that generalize.
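Overfitting is easy to reproduce: the sketch below fits an unconstrained decision tree to a small, noisy synthetic dataset and shows the telltale gap between training and test accuracy (dataset size, noise level, and model choice are illustrative).

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=120, n_features=30, n_informative=5,
                           flip_y=0.2, random_state=0)       # small and noisy
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)  # no depth limit
print(f"train accuracy: {tree.score(X_tr, y_tr):.2f}")  # typically ~1.00: the tree mirrors the data
print(f"test accuracy:  {tree.score(X_te, y_te):.2f}")  # noticeably lower: it doesn't generalize
```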
“There is no evidence that the brain implements anything like the learning mechanisms in use in modern deep learning models.”
“Computers today can perform specific tasks very well, but when it comes to general tasks, AI cannot compete with a human child.”
The same phenomenon comes up in medical AI. One example is the ability of deep learning to match the diagnostic accuracy of a team of twenty-one board-certified dermatologists in classifying skin lesions as cancerous or benign. The Stanford computer science creators of that algorithm still don’t know exactly what features account for its success.
In 2018, the European Union General Data Protection Regulation went into effect, requiring companies to give users an explanation for decisions that automated systems make.19
There’s even an initiative called explainable artificial intelligence that seeks to understand why an algorithm reaches the conclusions that it does.
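One common family of techniques in that initiative is saliency analysis: asking which input pixels most influence a model's output. The sketch below computes a simple gradient saliency map for a stand-in classifier; the model and input are illustrative, not any published system.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 2))  # stand-in two-class model
image = torch.rand(1, 1, 28, 28, requires_grad=True)        # fake input image

score = model(image)[0, 1]              # score for the hypothetical "disease" class
score.backward()                        # backpropagate to the input pixels
saliency = image.grad.abs().squeeze()   # large values = pixels that moved the score most
print(saliency.shape)                   # torch.Size([28, 28])
```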
Even though we are used to accepting trade-offs in medicine for net benefit, weighing therapeutic efficacy against risk, a machine black box is not a trade-off that most people will yet accept as AI becomes an integral part of medicine.
Our tolerance for machines with black boxes will undoubtedly be put to the test.
Bias is embedded in our algorithmic world; it pervasively affects perceptions of gender, race, ethnicity, socioeconomic class, and sexual orientation.
Worse, image recognition trained on such biased data amplifies the bias. A method for reducing this bias during training has been introduced, but it requires the code writer to look for the bias and to specify what needs to be corrected.
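The kind of explicit check the text describes might look like the sketch below: the developer must name the sensitive attribute and inspect per-group error rates rather than a single aggregate score. The data and the simulated model here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
group = rng.choice(["A", "B"], size=1000, p=[0.9, 0.1])  # group B is under-represented
y_true = rng.integers(0, 2, size=1000)
# A toy "model" that is systematically worse on the under-represented group.
y_pred = np.where((group == "B") & (rng.random(1000) < 0.3), 1 - y_true, y_true)

for g in ("A", "B"):
    mask = group == g
    accuracy = (y_true[mask] == y_pred[mask]).mean()
    print(f"group {g}: n={mask.sum():4d}  accuracy={accuracy:.2f}")
```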
The AI Now Institute has addressed bias, recommending that “rigorous pre-release trials” are necessary for AI systems “to ensure that they will not amplify biases and errors due to any issues with the training data, algorithms, or other elements of system design.”
Facial reading DNN algorithms like Google’s FaceNet, Apple’s Face ID, and Facebook’s DeepFace can readily recognize one face out of a million.
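At a high level, such systems map each face to an embedding vector and recognize a person by finding the closest stored vector. The sketch below illustrates that nearest-neighbor idea with random stand-in embeddings (scaled down to 100,000 faces for memory); it is not FaceNet's or any vendor's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
gallery = rng.normal(size=(100_000, 128))                  # one 128-d embedding per enrolled face
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)  # unit-normalize for cosine similarity

query = gallery[12_345] + 0.05 * rng.normal(size=128)      # a slightly noisy view of one face
query /= np.linalg.norm(query)

best_match = int(np.argmax(gallery @ query))               # index with highest cosine similarity
print(best_match)                                          # 12345
```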