Over the last decade the amount of computation used to train the largest models has increased exponentially. Google’s PaLM uses so much that were you to have a drop of water for every floating-point operation (FLOP) it used during training, it would fill the Pacific.
From The Coming Wave: AI, Power, and Our Future