
“Once trained, the LLM is ready for inference. Now given some sequence of, say, 100 words, it predicts the most likely 101st word. (Note that the LLM doesn’t know or care about the meaning of those 100 words: To the LLM, they are just a sequence of text.) The predicted word is appended to the input, forming 101 input words, and the LLM then predicts the 102nd word. And so it goes, until the LLM outputs an end-of-text token, stopping the inference. That’s it!

An LLM is an example of generative AI. It has learned an extremely complex, ultra-high-dimensional probability distribution over words, and it is capable of sampling from this distribution, conditioned on the input sequence of words. There are other types of generative AI, but the basic idea behind them is the same: They learn the probability distribution over data and then sample from the distribution, either randomly or conditioned on some input, and produce an output that looks like the training data.”

Anil Ananthaswamy, Why Machines Learn: The Elegant Math Behind Modern AI
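The autoregressive inference loop the quote describes — predict the next word, append it to the input, repeat until an end-of-text token — can be sketched in a few lines of Python. This is a toy illustration, not the book's code: a hypothetical hard-coded bigram table stands in for the trained LLM's learned distribution, and the model here conditions only on the most recent word rather than the full sequence.

```python
import random

# Toy stand-in for a trained model: a table mapping the current word to a
# probability distribution over possible next words. A real LLM would
# condition on the entire input sequence, not just the last word.
BIGRAM = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the":     {"cat": 0.5, "dog": 0.3, "<eot>": 0.2},
    "a":       {"cat": 0.4, "dog": 0.4, "<eot>": 0.2},
    "cat":     {"sat": 0.7, "<eot>": 0.3},
    "dog":     {"sat": 0.6, "<eot>": 0.4},
    "sat":     {"<eot>": 1.0},
}

def generate(max_steps=20, seed=0):
    """Autoregressive loop: sample the next word from the distribution
    conditioned on the context, append it, and repeat until the
    end-of-text token stops inference."""
    rng = random.Random(seed)
    sequence = ["<start>"]
    for _ in range(max_steps):
        dist = BIGRAM[sequence[-1]]                # condition on context
        words = list(dist)
        weights = [dist[w] for w in words]
        next_word = rng.choices(words, weights=weights)[0]  # sample
        if next_word == "<eot>":                   # end-of-text: stop
            break
        sequence.append(next_word)
    return sequence[1:]                            # drop the start token

print(" ".join(generate()))
```

Each pass through the loop is one "predict the 101st word" step from the quote: the model neither knows nor cares what the words mean, it only samples from a distribution conditioned on the sequence so far.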