Now that we know how to (more or less) solve the inference problem, we’re ready to learn Bayesian networks from data, because for Bayesians learning is just another kind of probabilistic inference. All you have to do is apply Bayes’ theorem with the hypotheses as the possible causes and the data as the observed effect:

P(hypothesis | data) = P(hypothesis) × P(data | hypothesis) / P(data)

The hypothesis can be as complex as a whole Bayesian network, or as simple as the probability that a coin will come up heads. In the latter case, the data is just the outcome of a series of coin flips. If, …
The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World
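
As a rough illustration of the coin-flip case in the passage above, here is a minimal Python sketch (not from the book): it treats each candidate heads-probability as a hypothesis on a small grid, starts from a uniform prior, and applies Bayes’ theorem flip by flip. The grid, the prior, and the observed flips are all illustrative assumptions.

# Candidate values of P(heads), each one a hypothesis.
hypotheses = [i / 10 for i in range(11)]
# Uniform prior over the hypotheses (an assumption of this sketch).
posterior = [1 / len(hypotheses)] * len(hypotheses)

# Made-up sequence of coin flips standing in for the observed data.
flips = ["H", "H", "T", "H"]

for flip in flips:
    # P(data | hypothesis) for a single flip.
    likelihoods = [h if flip == "H" else 1 - h for h in hypotheses]
    # Unnormalized posterior: P(hypothesis) × P(data | hypothesis).
    posterior = [p * l for p, l in zip(posterior, likelihoods)]
    # Divide by P(data), the sum over all hypotheses, to normalize.
    total = sum(posterior)
    posterior = [p / total for p in posterior]

for h, p in zip(hypotheses, posterior):
    print(f"P(heads) = {h:.1f}  posterior = {p:.3f}")

After three heads and one tail, the posterior shifts toward hypotheses with P(heads) above one half, which is the kind of update the quoted passage describes.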