Kindle Notes & Highlights
by Kai-Fu Lee
Read between July 26 and August 8, 2020
Top researchers in the United States like Andrew Ng and Sebastian Thrun have demonstrated excellent algorithms that are on par with doctors at diagnosing specific illnesses based on images—pneumonia through chest x-rays and skin cancer through photos. But a broader business AI application for medicine will look to handle the entire diagnosis process for a wide variety of illnesses.
Instead of replacing doctors with algorithms, RXThinking’s AI diagnosis app empowers them. It acts like a “navigation app” for the diagnosis process, drawing on all available knowledge to recommend the best route but still letting the doctors steer the car.
data from past cases to advise judges on both evidence and sentencing. An evidence cross-reference system uses speech recognition and natural-language processing to compare all evidence presented—testimony, documents, and background material—and seek out contradictory fact patterns. It then alerts the judge to these disputes, allowing for further investigation and clarification by court officers. Once a ruling is handed down, the judge can turn to yet another AI tool for advice on sentencing. The sentencing assistant starts with the fact pattern—defendant’s criminal record, age, damages
tools that aid a real human in making informed decisions.
There’s no question that China will lag in the corporate world, but it may lead in public services and industries with the potential to leapfrog outdated systems.
Algorithms can now group the pixels from a photo or video into meaningful clusters and recognize objects in much the same way our brain does:
The same goes for audio data. Instead of merely storing audio files as collections of digital bits, algorithms can now both pick out words and often parse the meaning of full sentences.
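A minimal sketch of the image side of this "perception" idea: a pretrained convolutional network assigning an object label to a photo. The model choice, input file name, and label handling are illustrative assumptions, not anything specific to the systems the book describes.

```python
# Sketch: object recognition with a pretrained classifier (assumed setup,
# using torchvision's published ImageNet weights).
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet18_Weights.DEFAULT      # pretrained ImageNet weights
model = models.resnet18(weights=weights)
model.eval()

preprocess = weights.transforms()              # preprocessing matched to the weights

image = Image.open("photo.jpg").convert("RGB") # hypothetical input image
batch = preprocess(image).unsqueeze(0)         # shape: (1, 3, H, W)

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)

top_prob, top_class = probs.max(dim=1)
print(weights.meta["categories"][top_class.item()], float(top_prob))
```

The same pattern, swap the model and the input pipeline, underlies the audio case: raw waveforms go in, words and (increasingly) sentence-level meaning come out.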
Third-wave AI is all about extending and expanding this power throughout our lived environment, digitizing the world around us through the proliferation of sensors and smart devices. These devices are turning our physical world into digital data th...
As perception AI gets better at recognizing our faces, understanding our voices, and seeing the world around us, it will add millions of seamless points of contact between the online and offline worlds. Those nodes will be so pervasive that it no longer makes sense to think of oneself as “going online.”
When you order a full meal just by speaking a sentence from your couch, are you online or offline? When your refrigerator at home tells your shopping cart at the store that you’re out of milk, are you moving through a physical world or a digital one?
I call these new blended environments OMO: online-merge-offline. OMO is the next step in an evolution that already took us from pure e-commerce deliv...
quick trip just a few years into the future to see what a supermarket fully outfitted with perception AI devices might look like.
As I pull the cart back from the rack, visual sensors embedded in the handlebar have already completed a scan of my face and matched it to a rich, AI-driven profile of my habits, as a foodie, a shopper, and a husband to a fantastic cook of Chinese food.
“Based on what’s in your cart and your fridge at home, it looks like your diet will be short on fiber this week. Shall I add a bag of almonds or ingredients for a split-pea soup to correct that?”
add the ingredients for beef noodles that I don’t already have at home.”
The cart is speaking in Mandarin, but in the synthesized voice of my favorite actress, Jennifer Lawrence.
“Hi, Mr. Lee, how’ve you been?” he says. “We’ve just got in a shipment of some fantastic Napa wines. I understand that your wife’s birthday is coming up, and we wanted to offer you a 10 percent discount on your first purchase of the 2014 Opus One. Your wife normally goes for Overture, and this is the premium offering from that same winery.
All the concierges are knowledgeable, friendly, and trained in the art of the upsell. It’s far more socially engaged work than traditional supermarket jobs, with all employees ready to discuss recipes, farm-to-table sourcing, and how each product compares with what I’ve tried in the past.
Perception AI–powered shopping trips like this will capture one of the fundamental contradictions of the AI age before us: it will feel both completely ordinary and totally revolutionary. Much of our daily activity will still follow our everyday established patterns, but the digitization of the world will eliminate common points of friction and tailor services to each individual.
by understanding and predicting the habits of each shopper, these stores will make major improvements in their supply chains, reducing food waste and increasing profitability.
The AI-powered education experience takes place across four scenarios: in-class teaching, homework and drills, tests and grading, and customized tutoring.
the student profile. That profile contains a detailed accounting of everything that affects a student’s learning process, such as what concepts they already grasp well, what they struggle with, how they react to different teaching methods, how attentive they are during class, how quickly they answer questions, and what incentives drive them.
When students head home, the student profile combines with question-generating algorithms to create homework assignments precisely tailored to the students’ abilities.
students’ time and performance on different problems feed into their student profiles,
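A hypothetical sketch of how such a student profile might drive homework selection: keep a per-concept mastery estimate, update it after each answer, and pick questions sitting just above the student's current level. The data structures, update rule, and thresholds are illustrative assumptions, not the book's systems.

```python
# Sketch: a student profile feeding adaptive homework selection (assumed design).
from dataclasses import dataclass, field

@dataclass
class StudentProfile:
    # Mastery estimate per concept, 0.0 (struggling) to 1.0 (mastered).
    mastery: dict[str, float] = field(default_factory=dict)

    def update(self, concept: str, correct: bool, rate: float = 0.2) -> None:
        """Nudge the mastery estimate toward 1 or 0 after each answered problem."""
        current = self.mastery.get(concept, 0.5)
        target = 1.0 if correct else 0.0
        self.mastery[concept] = current + rate * (target - current)

@dataclass
class Question:
    concept: str
    difficulty: float  # 0.0 easy .. 1.0 hard

def tailor_homework(profile: StudentProfile, bank: list[Question], n: int = 5) -> list[Question]:
    """Pick questions whose difficulty sits slightly above current mastery."""
    def gap(q: Question) -> float:
        mastery = profile.mastery.get(q.concept, 0.5)
        return abs(q.difficulty - (mastery + 0.1))
    return sorted(bank, key=gap)[:n]

# Usage: record drill results, then generate the next tailored assignment.
profile = StudentProfile()
profile.update("fractions", correct=False)
profile.update("decimals", correct=True)
bank = [Question("fractions", 0.3), Question("fractions", 0.7), Question("decimals", 0.6)]
print([(q.concept, q.difficulty) for q in tailor_homework(profile, bank, n=2)])
```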
AI-powered speech recognition can bring top-flight English instruction to the most remote regions. High-performance speech recognition algorithms can be trained to assess students’ English pronunciation, helping them improve intonation and accent without the need for a native English speaker on site.
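One rough way such pronunciation feedback could be scored: run the student's audio through a speech recognizer and compare the transcript against the target sentence. The `recognize` stub below is a hypothetical placeholder for any ASR backend; real systems score at the phoneme level, and word-level edit similarity is used here only to keep the illustration short.

```python
# Sketch: word-level pronunciation scoring against a target sentence (assumed approach).
from difflib import SequenceMatcher

def recognize(audio_path: str) -> str:
    """Placeholder for a speech-to-text call; plug in any ASR engine here."""
    raise NotImplementedError("hypothetical ASR backend")

def pronunciation_score(target: str, transcript: str) -> float:
    """Similarity between target words and recognized words, 0.0 to 1.0."""
    return SequenceMatcher(None, target.lower().split(), transcript.lower().split()).ratio()

# Usage (with a hand-written transcript standing in for a live recognizer):
print(pronunciation_score("she sells sea shells", "she sell sea shell"))  # ~0.5
```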
This AI-powered technology will save teachers’ time in correcting the basics, letting them shift that time to communicating with students about higher-level writing concepts.
Finally, for students who are falling behind, the AI-powered student profile will notify parents of their child’s situation, giving a clear and detailed explanation of what concepts the student is struggling with.
data on student engagement through expression and sentiment analysis. That data continually feeds into a student’s profile, helping the platforms filter for the kinds of teachers that keep students engaged.
accessing the power of the internet via voice commands requires technology that listens to our every word. That type of data collection may rub many Americans the wrong way. They don’t want Big Brother or corporate America to know too much about what they’re up to. But people in China are more accepting of having their faces, voices, and shopping choices captured and digitized. This is another example of the broader Chinese willingness to trade some degree of privacy for convenience.
There’s no right answer to questions about what level of social surveillance is a worthwhile price for greater convenience and safety, or what level of anonymity we should be guaranteed at airports or subway stations. But in terms of immediate impact, China’s relative openness with data collection in public places is giving it a massive head start on implementation of perception AI.
As we turn hospitals, cars, and kitchens into OMO environments, we will need a diverse array of sensor-enabled hardware devices to sync up the physical and digital worlds.
FOURTH WAVE: AUTONOMOUS AI

Once machines can see and hear the world around them, they’ll be ready to move through it safely and work in it productively. Autonomous AI represents the integration and culmination of the three preceding waves, fusing machines’ ability to optimize from extremely complex data sets with their newfound sensory powers.
Early autonomous robotics applications will work only in highly structured environments where they can create immediate economic value.
Hasn’t heavy machinery already taken over many blue-collar line jobs? Yes, the developed world has largely replaced raw human muscle with high-powered machines. But while these machines are automated, they are not autonomous. While they can repeat an action, they can’t make decisions or improvise according to changing conditions.
They can perform repetitive tasks, but they can’t deal with any deviations or irregularities in the objects they manipulate. But by giving machines the power of sight, the sense of touch, and the ability to optimize from data, we can dramatically expand the number of tasks they can tackle.
autonomous AI will surface first in commercial settings because these robots create a tangible return on investment by doing the jobs of workers who are growing either more expensive or harder to find.
human-like robots for the home remain out of reach. Seemingly simple tasks like cleaning a room or babysitting a child are far beyond AI’s current capabilities, and our cluttered living environments constitute obstacle courses for clumsy robots.
Swarms of autonomous drones will work together to paint the exterior of your house in just a few hours. Heat-resistant drone swarms will fight forest fires with hundreds of times the current efficiency of traditional fire crews. Other drones will perform search-and-rescue operations in the aftermath of hurricanes and earthquakes, bringing food and water to the stranded and teaming up with nearby drones to airlift people out.
Shenzhen is home to DJI, the world’s premier drone maker and what renowned tech journalist Chris Anderson called “the best company I have ever encountered.”
Self-driving cars must be trained on millions, maybe billions, of miles of driving data so they can learn to identify objects and predict the movements of cars and pedestrians. That data draws from thousands of different vehicles on the road, and it all feeds into one central “brain,” the core collection of algorithms that powers decision-making across the fleet. It means that when any autonomous car encounters a new situation, all the cars running on those algorithms learn from it.
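A toy sketch of the "one central brain" idea described above: every vehicle uploads the situations it encounters, a single shared model is retrained on the pooled data, and the update is pushed back to the whole fleet. The classes and the retraining step are illustrative stand-ins, not any real fleet-learning protocol.

```python
# Sketch: pooled fleet learning with a shared central model (assumed design).
from dataclasses import dataclass, field

@dataclass
class DrivingEvent:
    features: list[float]   # sensor-derived description of the scene
    label: str              # e.g. "pedestrian_crossing", "merging_truck"

@dataclass
class FleetBrain:
    version: int = 0
    dataset: list[DrivingEvent] = field(default_factory=list)

    def ingest(self, events: list[DrivingEvent]) -> None:
        """Pool events uploaded by any vehicle in the fleet."""
        self.dataset.extend(events)

    def retrain(self) -> int:
        """Stand-in for retraining on the pooled data; bumps the model version."""
        self.version += 1
        return self.version

@dataclass
class Vehicle:
    model_version: int = 0

    def sync(self, brain: FleetBrain) -> None:
        """Download the latest shared model, so one car's lesson reaches every car."""
        self.model_version = brain.version

brain = FleetBrain()
fleet = [Vehicle() for _ in range(3)]
brain.ingest([DrivingEvent([0.2, 0.9], "pedestrian_crossing")])  # one car's rare encounter
brain.retrain()
for car in fleet:
    car.sync(brain)
print([car.model_version for car in fleet])  # the whole fleet now runs the updated model
```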
Google has taken a slow-and-steady approach to gathering that data, driving around its own small fleet of vehicles equipped with very expensive sensing technologies. Tesla instead began installing cheaper equipment on its commercial vehicles, letting Tesla owners gather the data for them when they use certain autonomous features.
Google is aiming for impeccable safety, but in the process it has delayed deployment of systems that could likely already save lives. Tesla takes a more techno-utilitarian approach, pushing its cars to market once they are an improvement over human drivers, hoping that faster rates of data accumulation will train the systems earlier and save lives overall.
the Chinese mentality is that you can’t let the perfect be the enemy of the good.
Highway regulators in the Chinese province of Zhejiang have already announced plans to build the country’s first intelligent superhighway, infrastructure outfitted from the start for autonomous and electric vehicles.
The superhighway will have photovoltaic solar panels built into the road surface, energy that feeds into charging stations for electric vehicles. In the long term, the goal is to be able to continuously charge electric vehicles while they drive.
America’s current infrastructure means that autonomous AI must adapt to and conquer the cities around it. In China, the government’s proactive approach is to transform that conquest into coevolution.
safety issues and sheer complexity make autonomous vehicles a much tougher engineering nut to crack. It’s a problem that requires a core team of world-class engineers rather than just a broad base of good ones. This tilts the playing field back toward the United States, where the best engineers from around the globe still cluster at companies like Google.
despite the fact that the United States and China are the two largest economies in the world, the vast majority of AI’s future users still live in other countries, many of them in the developing world. Any company that wants to be the Facebook or Google of the AI age needs a strategy for reaching those users and winning those markets.
empower homegrown startups by marrying worldwide AI expertise to local data. It’s a model built more on cooperation than conquest, and it may prove better suited to globalizing a technology that requires both top-quality engineers and ground-up data collection.
AI has a much higher localization quotient than earlier internet services. Self-driving cars in India need to learn the way pedestrians navigate the streets of Bangalore, and micro-lending apps in Brazil need to absorb the spending habits of millennials in Rio de Janeiro.