Kindle Notes & Highlights
by Fei-Fei Li
Read between May 11, 2024 and February 4, 2025
“Precisely. Awareness is what it’s all about. It’s the single most precious resource in all of health care. And it’s the one thing we can’t scale.”
Seeing my field through her eyes instantly expanded my motivation beyond the curiosity that had propelled me for all these years, potent as it was. For the first time, I saw it as a tool for doing good, and maybe even lessening the hardships families like mine faced every day. I experienced my first run-in with the ethics of AI: a nascent idea for so many of us, but one that was quickly becoming inescapably real.
“Guys, I’m begging you—please don’t just download the latest preprints off arXiv every day. Read Russell and Norvig’s book. Read Minsky and McCarthy and Winograd. Read Hartley and Zisserman. Read Palmer. Read them because of their age, not in spite of it. This is timeless stuff. It’s important.”
But despite a faculty offer from Princeton straight out of the gate—a career fast track any one of our peers would have killed for—he was choosing to leave academia altogether to join a private research lab that no one had ever heard of. OpenAI was the brainchild of Silicon Valley tycoons Sam Altman, Elon Musk, and LinkedIn cofounder Reid Hoffman, built with an astonishing initial investment of a billion dollars. It was a testament to how seriously Silicon Valley took the sudden rise of AI, and how eager its luminaries were to establish a foothold within it. Andrej would be joining its core team of…
Eight hundred GPUs. It was a dizzying increase, considering that AlexNet had required just two to change the world in 2012.
AI was becoming a privilege. An exceptionally exclusive one.
More and more, we found ourselves observing AI, empirically, as if it were emerging on its own. As if AI were something to be identified first and understood later, rather than engineered from first principles.
Innovation unfolded gradually in the hypercautious world of health care, and while that was an occasionally frustrating fact, it was also a comforting one.
Under the hood, a new type of machine learning model known as a “transformer”—easily the biggest evolutionary leap in the neural network’s design since AlexNet in 2012—makes LLMs possible by embodying all of the necessary qualities: mammoth scale, the ability to accelerate training time by processing the data in large, parallel swaths, and an attention mechanism of incredible sophistication.
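The attention mechanism the passage describes can be shown in a few lines. What follows is a minimal sketch of scaled dot-product attention, the core operation inside a transformer, not the book's code or any particular model's implementation; the function name, shapes, and toy inputs are assumptions for illustration.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: arrays of shape (seq_len, d). Returns (seq_len, d)."""
    d = Q.shape[-1]
    # Every token's query is compared against every token's key at once;
    # this all-at-once matrix product is what lets transformers process
    # a sequence in large, parallel swaths rather than step by step.
    scores = Q @ K.T / np.sqrt(d)                      # (seq_len, seq_len)
    # Softmax over the keys turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output is a weighted mix of the value vectors.
    return weights @ V

# Toy usage: self-attention over 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8)
```

Because the score matrix relates all positions simultaneously, training parallelizes across the whole sequence, which is the acceleration the passage credits for making LLM-scale training feasible.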
In a span of only around ten years, algorithms have evolved from struggling to recognize the contents of photographs, to doing so at superhuman levels, to now, amazingly, creating entirely new images on their own, every bit as photographic, but entirely synthetic, and with an often unsettling level of realism and detail. Already, it seems, the era of deep learning is giving way to a new revolution, as the era of generative AI dawns.
For comparison, AlexNet debuted with a network of sixty million parameters—just enough to make reasonable sense of the ImageNet data set, at least in part—while transformers big enough to be trained on a world of text, photos, video, and more are growing well into hundreds of billions of parameters.
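To make that jump concrete, a back-of-the-envelope ratio: the sixty-million figure is the text's; the 175-billion count used below is only an assumed example of a model in the "hundreds of billions" range.

```python
# Rough scale comparison between AlexNet and a large transformer.
alexnet_params = 60_000_000        # from the text
llm_params = 175_000_000_000       # assumed example of "hundreds of billions"
print(f"{llm_params / alexnet_params:,.0f}x more parameters")  # ~2,917x
```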
AI that generates as fluently as it recognizes.
In the real world, there’s one North Star—Polaris, the brightest in the Ursa Minor constellation. But in the mind, such navigational guides are limitless. Each new pursuit—each new obsession—hangs in the dark over its horizon, another gleaming trace of iridescence, beckoning. That’s why my greatest joy comes from knowing that this journey will never be complete. Neither will I. There will always be something new to chase. To a scientist, the imagination is a sky full of North Stars.

