Still, the algorithmic techniques honed for the standard version of the multi-armed bandit problem are useful even in a restless world. Strategies like the Gittins index and Upper Confidence Bound provide reasonably good approximate solutions and rules of thumb, particularly if payoffs don’t change very much over time.
Algorithms to Live By: The Computer Science of Human Decisions, by Brian Christian and Tom Griffiths
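
To make the passage concrete, here is a minimal sketch of the Upper Confidence Bound rule (UCB1) it alludes to. This is not code from the book: the bandit arms, their payoff probabilities, and the function names are illustrative assumptions. The rule plays whichever arm maximizes its observed average reward plus an exploration bonus that shrinks as the arm is tried more often; for a restless bandit with slowly drifting payoffs, one would typically discount or window these estimates.

# Sketch of UCB1 for a stationary multi-armed bandit (illustrative only).
import math
import random

def ucb1(pull, n_arms, n_rounds):
    """Play n_rounds with UCB1; pull(arm) returns that arm's reward."""
    counts = [0] * n_arms      # times each arm has been played
    totals = [0.0] * n_arms    # cumulative reward per arm
    for t in range(1, n_rounds + 1):
        if t <= n_arms:
            arm = t - 1        # play every arm once to initialize estimates
        else:
            # mean reward estimate plus confidence-bound exploration bonus
            arm = max(range(n_arms),
                      key=lambda a: totals[a] / counts[a]
                                    + math.sqrt(2 * math.log(t) / counts[a]))
        reward = pull(arm)
        counts[arm] += 1
        totals[arm] += reward
    return counts, totals

# Hypothetical example: three Bernoulli arms with fixed success probabilities.
probs = [0.3, 0.5, 0.7]
counts, totals = ucb1(lambda a: 1.0 if random.random() < probs[a] else 0.0,
                      n_arms=3, n_rounds=10_000)
print(counts)  # the best arm (index 2) should dominate the play counts

Because payoffs here never change, UCB1's play concentrates on the best arm over time; the quote's point is that the same rule of thumb remains a reasonable approximation even when payoffs drift, so long as they drift slowly.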