Why Machines Learn Quotes

Why Machines Learn: The Elegant Math Behind Modern AI by Anil Ananthaswamy
1,070 ratings, 4.39 average rating, 155 reviews
Why Machines Learn Quotes Showing 1-7 of 7
“We cannot leave decisions about how AI will be built and deployed solely to its practitioners. If we are to effectively regulate this extremely useful, but disruptive and potentially threatening, technology, another layer of society—educators, politicians, policymakers, science communicators, or even interested consumers of AI—must come to grips with the basics of the mathematics of machine learning.”
Anil Ananthaswamy, Why Machines Learn: The Elegant Math Behind Modern AI
“The most practical thing in the world is a good theory,” Hart told me. “If you know the theoretical properties of a procedure, you can have confidence employing that without having the bother of conducting endless experiments to figure out what it does or when it works and when it doesn’t work.”
Anil Ananthaswamy, Why Machines Learn: The Elegant Math Behind Modern AI
“As exciting as these advances are, we should take all these correspondences between deep neural networks and biological brains with a huge dose of salt. These are early days. The convergences in structure and performance between deep nets and brains do not necessarily mean the two work in the same way; there are ways in which they demonstrably do not. For example, biological neurons “spike,” meaning the signals travel along axons as voltage spikes.”
Anil Ananthaswamy, Why Machines Learn: The Elegant Math Behind Modern AI
“Once trained, the LLM is ready for inference. Now given some sequence of, say, 100 words, it predicts the most likely 101st word. (Note that the LLM doesn’t know or care about the meaning of those 100 words: To the LLM, they are just a sequence of text.) The predicted word is appended to the input, forming 101 input words, and the LLM then predicts the 102nd word. And so it goes, until the LLM outputs an end-of-text token, stopping the inference. That’s it!

An LLM is an example of generative AI. It has learned an extremely complex, ultra-high-dimensional probability distribution over words, and it is capable of sampling from this distribution, conditioned on the input sequence of words. There are other types of generative AI, but the basic idea behind them is the same: They learn the probability distribution over data and then sample from the distribution, either randomly or conditioned on some input, and produce an output that looks like the training data.”
Anil Ananthaswamy, Why Machines Learn: The Elegant Math Behind Modern AI
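
The predict-append-repeat loop described in that passage is simple enough to sketch directly. The toy Python below is only an illustration, not code from the book: toy_next_token_distribution is an invented stand-in for a trained network, and the tiny vocabulary and stopping rule are made up, but the control flow is the autoregressive loop the passage describes, running until an end-of-text token appears.

import random

END_OF_TEXT = "<|endoftext|>"

def toy_next_token_distribution(context):
    # Stand-in for a trained LLM: returns P(next token | context).
    # A real model would run a transformer over token IDs; this toy
    # simply forces an end-of-text token once the sequence gets long.
    if len(context) >= 10:
        return {END_OF_TEXT: 1.0}
    return {"the": 0.4, "cat": 0.3, "sat": 0.2, END_OF_TEXT: 0.1}

def generate(prompt_tokens, max_new_tokens=50):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        dist = toy_next_token_distribution(tokens)
        # Sample the next token conditioned on everything so far.
        # (Taking max(dist, key=dist.get) instead would be the greedy
        # "most likely next word" variant mentioned in the passage.)
        next_token = random.choices(list(dist), weights=list(dist.values()))[0]
        if next_token == END_OF_TEXT:
            break                      # end-of-text token stops inference
        tokens.append(next_token)      # append the prediction and go again
    return tokens

print(generate(["once", "upon", "a", "time"]))

Swapping the toy distribution for a real model's softmax output over its vocabulary would turn this sketch into the inference loop the quote describes.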
“Ponder this for a moment. Newborn ducklings, with the briefest of exposure to sensory stimuli, detect patterns in what they see, form abstract notions of similarity/dissimilarity, and then will recognize those abstractions in stimuli they see later and act upon them.”
Anil Ananthaswamy, Why Machines Learn: The Elegant Math Behind Modern AI
“Decades later, Widrow, recalling Wiener’s personality in a book, painted a particularly evocative picture of a man whose head was often, literally and metaphorically, “in the clouds” as he walked the corridors of MIT buildings: “We’d see him there every day, and he always had a cigar. He’d be walking down the hallway, puffing on the cigar, and the cigar was at angle theta—45 degrees above the ground. And he never looked where he was walking…But he’d be puffing away, his head encompassed in a cloud of smoke, and he was just in oblivion. Of course, he was deriving equations.”
Anil Ananthaswamy, Why Machines Learn: The Elegant Math Behind Modern AI
“What you do is you take the single value of the error, square it, swallow hard, because you are going to tell a lie, [and] you say that’s the mean squared error,”
Anil Ananthaswamy, Why Machines Learn: The Elegant Math Behind Modern AI
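
For readers puzzling over the "lie" in that last quote: the textbook mean squared error averages squared errors over many samples, whereas the trick being described takes a single sample's squared error as a stand-in for that average, the approximation behind sample-by-sample (stochastic) updates such as the least-mean-squares rule. A worked comparison, with d_k the desired output and y_k the model's output on sample k:

\[
\text{MSE} = \frac{1}{N}\sum_{k=1}^{N} \bigl(d_k - y_k\bigr)^2
\qquad\text{vs.}\qquad
e_k^2 = \bigl(d_k - y_k\bigr)^2 .
\]

Calling a single $e_k^2$ the "mean" squared error is the lie; the trick works because, averaged over randomly drawn samples, the single-sample estimate equals the true mean.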