Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again
23%
that half of US adults have their facial images stored in at least one database that is searchable by police. And AI facial data is only one way to identify a person,
24%
Regulatory AI issues are especially pertinent in medicine. We are in the early days of regulatory oversight of medical algorithms, and only a limited number have been approved.
24%
These tools are and will be constantly evolving with larger datasets and autodidactic potential. This will require developing new ground rules for review and approval, conducting post-market surveillance, and bringing on new personnel with AI expertise at regulatory agencies.
28%
“there literally have to be thousands of algorithms to even come close to replicating what a radiologist can do on a given day. It’s not going to be all solved tomorrow.”35
28%
Radiologists can provide a more holistic assessment than machines can. Each scan is supposed to have a reason for being ordered, such as “rule out lung cancer” for a chest X-ray. A narrow AI algorithm could prove to be exceptionally accurate for ruling out or pointing toward the diagnosis of lung cancer. But, in contrast, the radiologist not only scours the film for evidence of a lung nodule or lymph node enlargement but also looks for other abnormalities such as rib fractures, calcium deposits, heart enlargement, and fluid collections.
28%
“We believe that machine learning and AI will enhance both the value and the professional satisfaction of radiologists by allowing us to spend more time performing functions that add value and influence patient care and less time doing rote tasks that we neither enjoy nor perform as well as machines.”
29%
official. Ideally, this will include data mining of the comprehensive medical information for each patient and its integration with the scan interpretation.
29%
deep learning can markedly improve the quality of microscope images, sidestepping the problems of out-of-focus or lower-quality slides.53 And as is the case with medical imaging, algorithms can enhance, rather than replace, the human pathologist.
30%
Further, their neural network was trained to recognize the pattern of ten common genomic mutations and predicted these from the slides with reasonable accuracy (0.73–0.86), especially for one of the early attempts to do so.56 This finding is noteworthy because it exemplifies the ability of machine algorithms to see patterns not easily discernible by humans.
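A minimal sketch of the shape of that task, assuming nothing about the paper's actual architecture: ten independent binary classifiers over shared slide-derived features, each scored by AUC. The features, labels, and model below are synthetic stand-ins, not the study's pipeline.

```python
# Hypothetical sketch: predicting ten mutations from slide-derived features.
# All data here are synthetic; printed AUCs won't match the paper's 0.73-0.86.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))                      # stand-in slide features
true_w = rng.normal(size=(64, 10))                   # ten mutations
Y = (X @ true_w + rng.normal(size=(1000, 10))) > 0   # binary mutation labels

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
for m in range(10):                                  # one binary head per mutation
    clf = LogisticRegression(max_iter=1000).fit(X_tr, Y_tr[:, m])
    auc = roc_auc_score(Y_te[:, m], clf.predict_proba(X_te)[:, 1])
    print(f"mutation {m}: AUC = {auc:.2f}")
```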
31%
it isn’t just a look at a lesion that is telling. The history of the lesion, the individual’s risk factors, a more extensive assessment of the whole skin of the patient are all in the domain of information the dermatologists acquire during a visit. Furthermore,
31%
Accordingly, we can consider the algorithm as a contrived, narrow way to make a diagnosis and biopsy plan compared to the real, clinical world.
31%
Even then, these data are unstructured, so there’s no way that simple ingestion of all the text automatically translates into an augmented knowledge base.
32%
Even if clinicians could work well with a patient’s EHR, it still provides a very narrow, incomplete view.
32%
Pecking away at a keyboard distracts the doctor and disengages the patient. Face-to-face contact, the opportunity to take in body language, and the essence of interpersonal communication are all lost.
32%
compared with our current state may well be beneficial. We’ll see: a digital scribe pilot combining natural-language processing (to transcribe the speech from the visit) and machine learning (to synthesize the
32%
Beyond the technical challenge of transcribing unstructured language into a succinct but complete note, there are the missing pieces. All the nonverbal communication would be lost, for example.
33%
We have trapped ourselves in a binary world of data interpretation—normal or abnormal—and are ignoring rich, granular, and continuous data that we could be taking advantage of. That’s where deep learning about an individual’s comprehensive, seamlessly updated information could play an important role in telling doctors what they want to know.
35%
Similarly, instead of the classic Framingham clinical risk factors that have been used to predict heart disease for several decades, a group at Boston University used machine algorithmic processing of EHR data to achieve over 80 percent accuracy, as compared with Framingham’s 56 percent accuracy—scarcely better than a coin flip.33
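A hedged sketch of what that comparison looks like in code: a linear model restricted to a handful of classic risk factors versus a flexible learner given a wide EHR-style feature matrix. The data are synthetic, so the printed numbers will not reproduce the study's 80-versus-56 contrast.

```python
# Narrow classic-factor model vs. wide EHR model; synthetic data only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 2000
ehr = rng.normal(size=(n, 200))                   # pretend EHR feature matrix
signal = ehr[:, :40] @ rng.normal(size=40)        # outcome driven by many fields
y = (signal + rng.normal(size=n)) > 0             # heart-disease label
classic = ehr[:, :5]                              # stand-in for Framingham inputs

lr = cross_val_score(LogisticRegression(max_iter=1000), classic, y, scoring="roc_auc")
gb = cross_val_score(GradientBoostingClassifier(), ehr, y, scoring="roc_auc")
print(f"classic factors AUC ~ {lr.mean():.2f}, EHR-wide model AUC ~ {gb.mean():.2f}")
```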
37%
machine learning to determine best surgical practice.58 Calling it “Surgery 4.0,” Verb’s concept of cloud-connected surgeons sharing experiences and access to data is akin to democratizing surgical practice. In particular, machine learning that draws upon intraoperative imaging as well as all the relevant data from each patient could help redefine past practice and improve outcomes.
37%
While deep learning AI is still taking on only narrow tasks, these two reports show widening beyond a single clinical diagnosis to suggesting an urgent referral for tens of potential diagnoses.
37%
the essence of human-to-human support. But AI could ultimately reduce the need for nurses, both in hospitals and in outpatient clinics and medical offices. Using AI algorithms to process data from the remote monitoring of patients at home will mean that there is a dramatically reduced role for hospitals to simply observe patients, either to collect data or to see whether symptoms get worse or reappear.
37%
By every measure, participants were willing to disclose much more when they thought they were communicating with a virtual human rather than a real one. A couple of the participants who interacted with the virtual human conveyed this well: “This is way better than talking to a person. I don’t really feel comfortable talking about personal stuff to other people.” And “A human being would be judgmental. I shared a lot of personal things, and it was because of that.”4
38%
In 2017, 8 million people in the United States talked to Cleverbot just to have something to chat with, and researchers project that by 2025 more than 1 billion people will be having regular encounters.
38%
The term “digital phenotyping” conveys the point that each feature can be digitized and produce a variety of metrics.
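One concrete digital-phenotype metric, as a sketch: inter-keystroke latency from smartphone typing timestamps, a proxy that has been explored for mood and psychomotor state. The timestamps below are invented.

```python
# Turn raw tap timestamps into summary metrics (a toy digital phenotype).
import numpy as np

taps = np.array([0.00, 0.21, 0.45, 0.62, 1.40, 1.58, 1.77])  # seconds
latencies = np.diff(taps)                                     # gaps between taps
print(f"median latency: {np.median(latencies) * 1000:.0f} ms,"
      f" variability: {latencies.std() * 1000:.0f} ms")
```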
38%
With the addition of connected sensors, many of the physiological parameters can be unobtrusively gathered, often on a continuous basis. This would mean a large body of data for each individual that AI could process. As Tom Insel, former head of the National Institute of Mental Health, said,
39%
There is considerable recent interest in using AI to predict and prevent suicide. The suicide rate has been increasing in the United States over the past 30 years, accounting for more than 44,000 deaths in 2017,32 or over 120 suicides per day.33 That’s more than homicide, AIDS, car accidents, and
40%
researchers at Vanderbilt and Florida State Universities did just that. After reviewing 2 million de-identified electronic medical records from hospitalized patients in Tennessee, the researchers found more than 3,000 patients with suicide attempts. Applying a machine learning algorithm to the data, they predicted suicide attempts accurately nearly 80 percent of the time (up to a six-month window), which compares quite favorably to the 60 percent accuracy from logistic regression of traditional risk factors.
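A toy version of the contrast the study reports, with an important caveat: the actual algorithm and features are not reproduced here, so this is a generic machine-learning-versus-logistic-regression sketch on synthetic, deliberately nonlinear data.

```python
# Flexible learner on the full record vs. logistic regression on a few
# traditional risk factors. Synthetic data; not the study's method or numbers.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(3000, 100))          # stand-in EHR feature matrix
y = ((X[:, :30] ** 2).sum(axis=1) + rng.normal(size=3000) > 32).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
trad = LogisticRegression(max_iter=1000).fit(X_tr[:, :5], y_tr)  # few factors
ml = RandomForestClassifier(n_estimators=300).fit(X_tr, y_tr)    # full record
print("traditional factors:", trad.score(X_te[:, :5], y_te))
print("machine learning:", ml.score(X_te, y_te))
```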
40%
Researchers at Carnegie Mellon did a very small but provocative study with functional MRI brain images of seventeen suicidal ideators and seventeen controls.40 Machine learning algorithms could accurately detect “neurosemantic” signatures associated with suicide attempts. Each individual, while undergoing the MRI, was presented with three sets of ten words (like “death” or “gloom”). Six words and five brain locations determined a differentiating pattern. Machine learning classified the brain image response correctly in fifteen of the seventeen patients in the suicide group and sixteen of the …
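The classification setup can be sketched as leave-one-out cross-validation over thirty-four per-subject feature vectors (six words by five brain regions gives thirty features). The classifier choice and the synthetic responses below are assumptions, not the study's pipeline.

```python
# Leave-one-out classification of per-subject brain-response features.
# Feature values are synthetic placeholders, not fMRI data.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(3)
# 34 participants x (6 words x 5 brain regions) = 30 response features
X = np.vstack([rng.normal(0.3, 1, size=(17, 30)),    # suicidal-ideator group
               rng.normal(-0.3, 1, size=(17, 30))])  # control group
y = np.array([1] * 17 + [0] * 17)

pred = cross_val_predict(GaussianNB(), X, y, cv=LeaveOneOut())
print("correct in ideator group:", int((pred[y == 1] == 1).sum()), "of 17")
print("correct in control group:", int((pred[y == 0] == 0).sum()), "of 17")
```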
40%
demonstrate improved outcomes at scale, the CBT and chatbot offerings may do well in the mental health arena, for which there is a woeful shortage of health professionals.
41%
The seamless capturing and processing of such data may prove to be helpful in understanding the relationship of stress to common medical conditions like high blood pressure and diabetes.
41%
When data from medical records, genomic screens, and sensors are integrated and processed by AI, pharmacists will be able to offer
41%
As we think about each and every type of clinician, it becomes increasingly clear that AI has a potentially transformative impact. But it’s not just at the level of individual clinicians; it’s also at the level of the sum of the parts—the people—that make up a health system.
42%
new tools are in development using the data in electronic health records to predict time to death with unprecedented accuracy while providing the doctor with a report that details the factors that led to the prediction.
42%
Less than half of the patients admitted to hospitals needing palliative care actually receive it.2 Meanwhile, of the Americans facing end-of-life care, 80 percent would prefer to die at home, but only a small fraction get to do so—60 percent die in the hospital.
42%
An eighteen-layer DNN learning from the electronic health records of almost 160,000 patients was able to predict the time until death on a test population of 40,000 patient records, with remarkable accuracy. The algorithm picked up predictive features that doctors wouldn’t, including the number of scans, particularly of the spine or the urinary system, which turned out to be as statistically powerful, in terms of probability, as the person’s age.
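A toy rendering of the two ideas in that passage: predict a mortality label from record-level counts, then ask which inputs carried the signal. The network below is far shallower than eighteen layers, and the features (spine and urinary scan counts alongside age) are invented to echo the finding, not derived from it.

```python
# Train a small neural net, then use permutation importance to see which
# inputs matter. Data and effect sizes are synthetic illustrations.
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)
n = 5000
age = rng.uniform(20, 95, n)
spine_scans = rng.poisson(1.5, n)
urinary_scans = rng.poisson(1.0, n)
# Mortality label built so scan counts matter about as much as age does.
y = (0.04 * age + 0.8 * spine_scans + 0.8 * urinary_scans
     + rng.normal(size=n) > 5).astype(int)
X = np.column_stack([age, spine_scans, urinary_scans, rng.normal(size=(n, 5))])

net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, y)
imp = permutation_importance(net, X, y, n_repeats=5, random_state=0)
for name, score in zip(["age", "spine_scans", "urinary_scans"],
                       imp.importances_mean):
    print(f"{name}: {score:.3f}")
```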
44%
The typical ER patient has about sixty documents in his or her medical history, which takes considerable time for clinicians to review and ingest. MedStar developed a machine learning system that rapidly scans the complete patient record and provides recommendations regarding the patient’s presenting symptoms, freeing doctors and nurses to render care for their patients.
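A toy sketch of the triage idea: rank a patient's prior documents by overlap with the presenting complaint so the most relevant history surfaces first. The real MedStar system is far richer; the documents and scoring rule here are invented.

```python
# Rank prior documents by keyword overlap with the chief complaint.
docs = {
    "2019 discharge summary": "chest pain troponin negative anxiety",
    "2021 cardiology note": "exertional chest pain stress test abnormal",
    "2020 dermatology note": "eczema topical steroid follow-up",
}
complaint = set("chest pain on exertion".split())
ranked = sorted(docs, key=lambda d: -len(complaint & set(docs[d].split())))
for name in ranked:          # most relevant history first
    print(name)
```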
44%
The FDA-approved Arterys algorithm called Deep Ventricle enables rapid analysis of the heart’s blood flow, reducing a task that can take an hour, as blood flow is measured by hand, to a fifteen-second scan.
44%
Albert Haque and colleagues at Stanford University used deep learning and machine vision to unobtrusively track the hand hygiene of clinicians and surgeons at Stanford University Hospital with video footage and depth sensors. The technology was able to quantify how clean their hands were with accuracy levels exceeding 95 percent (Figure 9.1).
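As a sketch of the sensing idea only: classify short depth-sensor clips as dispenser-use versus walk-by. The real system used deep vision models on hospital footage; the synthetic "clips" and linear classifier below are placeholders.

```python
# Classify synthetic depth clips: did the person use the hand-gel dispenser?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

def fake_clip(dispenser_used: bool) -> np.ndarray:
    clip = rng.normal(size=(8, 16, 16))       # 8 depth frames, 16x16 each
    if dispenser_used:
        clip[:, 4:8, 4:8] += 1.0              # hand near the dispenser region
    return clip.ravel()

X = np.array([fake_clip(i % 2 == 0) for i in range(600)])
y = np.array([i % 2 == 0 for i in range(600)], dtype=int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```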
45%
Although we clearly need ICUs, operating rooms, and emergency rooms, the regular hospital room, which makes up the bulk of hospitals today, is highly vulnerable to replacement. Mercy Hospital’s Virtual Care Center in St. Louis gives a glimpse of the future.
45%
The patients may be in intensive care units or in their own bedroom, under simple, careful observation or intense scrutiny, but they’re all monitored remotely. Even if a patient isn’t having any symptoms, the AI surveillance algorithms can pick up a warning and alert the clinician.
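One simple form such surveillance can take is a rolling-baseline rule: flag a vital sign that drifts away from the patient's own recent history before symptoms appear. The window, threshold, and simulated heart-rate stream below are illustrative choices, not Mercy's algorithm.

```python
# Flag readings that deviate from the patient's own rolling baseline.
import numpy as np

def check_vital(stream: np.ndarray, window: int = 60, z_limit: float = 3.0):
    """Yield indices where the latest reading strays from the rolling baseline."""
    for t in range(window, len(stream)):
        baseline = stream[t - window:t]
        z = (stream[t] - baseline.mean()) / (baseline.std() + 1e-9)
        if abs(z) > z_limit:
            yield t, stream[t], z

rng = np.random.default_rng(6)
heart_rate = rng.normal(72, 3, 300)   # one reading per minute
heart_rate[250:] += 25                # simulated, still-asymptomatic drift
for t, value, z in check_vital(heart_rate):
    print(f"alert at minute {t}: HR {value:.0f} (z = {z:.1f})")
    break                             # alert the clinician once
```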
45%
Until we have FDA-approved devices for home use that are automatic, accurate, inexpensive, and integrated with remote monitoring facilities, we’ve got an obstacle.
45%
What is particularly worrisome is the potential use of AI analytics to partition populations of patients according to the health risk of each individual and raise individual rates for coverage. In the era of improved prediction of health, there will need to be regulation to avoid discrimination against individuals based on risk.
46%
readily outside the United States, and countries like India and China are particularly likely to be prominent first movers. India has a doctor-to-patient ratio of only 0.7 per 1,000 people, which is less than half that of China (1.5) and substantially less than that of the United States (at 2.5).
47%
recognition of the possibilities will help make those odds better. As soon as patient outcomes are shown to be unequivocally improved by having digital twins inform best treatment, it is likely there will be substantial commitments across health systems to develop and prioritize such infrastructure.
47%
humanly possible. The data-rich field of genomics is well suited for machine help. Every one of us is a treasure trove of genetic data, as we all have 6 billion letters—A, C, G, and T—in our diploid (maternal and paternal copies) genome, 98.5 percent of which doesn’t code for proteins.
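The scale those figures imply is easy to make concrete with a line of arithmetic:

```python
# Back-of-envelope: how much of a diploid genome codes for protein.
diploid_bases = 6_000_000_000
noncoding_fraction = 0.985
coding_bases = diploid_bases * (1 - noncoding_fraction)
print(f"protein-coding letters: ~{coding_bases / 1e6:.0f} million")  # ~90 million
```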
47%
Another early proof of this concept was an investigation of the genomics of autism spectrum disorder. Before the work was undertaken, only sixty-five genes had been linked to autism with strong evidence. The algorithms identified 2,500 genes that were likely contributory to or even causative of the symptoms of the autism spectrum.
48%
Perhaps unsurprisingly, given that it is one of machine learning’s core strengths, image recognition is playing a critical role in cell analysis: to sort shape, classify type, determine lineage, identify rare cells in the blood, or distinguish whether cells are alive or dead.
49%
“The concept of automated drug discovery could help to considerably reduce the number of compounds to be tested in a medicinal chemistry project and, at the same time, establish a rational unbiased foundation of adaptive molecular design.”42
49%
The goal of ATOM is to reduce how long it takes to go from identifying a potential drug target to developing a drug candidate that hits the target.44 That’s normally a four-year bottleneck. ATOM seeks to turn it into a one-year lag.
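The "fewer compounds to test" logic can be sketched as model-guided triage of a virtual library: train on assayed compounds, score everything else, and send only the top slice to the lab. Descriptors, labels, and cutoffs below are invented; ATOM's actual pipeline is far more elaborate.

```python
# Score an untested virtual library with a model trained on assayed compounds,
# then shortlist the top fraction for lab testing. Synthetic descriptors.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
assayed = rng.normal(size=(500, 32))                  # measured compounds
activity = assayed[:, :8].sum(axis=1) + rng.normal(scale=0.5, size=500)
library = rng.normal(size=(100_000, 32))              # untested virtual library

model = RandomForestRegressor(n_estimators=200).fit(assayed, activity)
scores = model.predict(library)
shortlist = np.argsort(scores)[::-1][:500]            # test 0.5% instead of all
print("compounds sent to assay:", len(shortlist), "of", len(library))
```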
50%
Another important difference between computers and humans is that machines don’t generally know how to update their memories and overwrite information that isn’t useful. The approach our brains take is called Hebbian learning, following Donald Hebb’s maxim that “cells that fire together wire together.”
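Hebb's maxim translates directly into an update rule: a weight grows when pre- and postsynaptic activity coincide, and a decay term lets stale associations be overwritten, which is exactly the updating the passage says machines typically lack. The rates below are arbitrary.

```python
# Hebbian learning with decay: "cells that fire together wire together."
import numpy as np

rng = np.random.default_rng(8)
w = np.zeros(10)                      # synaptic weights from 10 inputs
eta, decay = 0.1, 0.01                # learning and forgetting rates

for _ in range(1000):
    x = (rng.random(10) < 0.2).astype(float)   # presynaptic firing
    y = float(x[:3].sum() >= 2)                # postsynaptic cell fires
    w += eta * x * y - decay * w               # Hebbian growth plus decay

print(np.round(w, 2))  # inputs 0-2, which co-fire with the output, dominate
```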