In a transformer, such parameters are stored as weights between nodes in the neural net. In practice, while they sometimes correspond to human-understandable concepts like “hairy body” or “trunk,” they often represent highly abstract statistical relationships that the model has discovered in its training data. Using these relationships, transformer-based large language models (LLMs) can predict which tokens are most likely to follow a given human prompt. They then convert those tokens back into text (or images, audio, or video) that humans can understand.
The Singularity is Nearer: When We Merge with AI
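A minimal sketch of the prediction step the passage describes, assuming Python with the Hugging Face transformers library and the small gpt2 checkpoint (neither is named in the passage; they stand in for any transformer-based LLM): the model's learned weights score every vocabulary token as a possible continuation of a prompt, and the tokenizer converts the most likely token IDs back into readable text.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # gpt2 is an assumption: a small, publicly available transformer LM.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "The elephant reached out with its long"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (batch, sequence, vocab_size)

    # The final position's logits score every vocabulary token as the
    # next token; these scores are computed from the trained weights.
    next_token_logits = logits[0, -1]
    top = torch.topk(next_token_logits, k=5)

    # Convert the highest-scoring token IDs back into human-readable text.
    for token_id, score in zip(top.indices, top.values):
        print(f"{tokenizer.decode(int(token_id))!r}  score={float(score):.2f}")

Generating longer output works the same way in a loop: append the chosen token to the prompt, re-run the model, and repeat, decoding the accumulated tokens back into text at the end.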