Bayes’ rule combines priors and likelihoods to yield a posterior probability for each hypothesis. The rule itself is simple: the posterior is just the prior multiplied by the likelihood, divided by the prior probability of the data (sometimes called the evidence or marginal likelihood)—in this case, the prior probability of a wet lawn. We don’t need to worry about this denominator here: since it is the same for every hypothesis, it acts only as a normalizing constant and drops out when comparing hypotheses.
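This computation can be sketched numerically. The priors and likelihoods below are illustrative assumptions, not values from the text; the point is that each posterior is the prior times the likelihood, and the denominator is just whatever makes the posteriors sum to one.

```python
# Hypothetical numbers for the wet-lawn example (assumed for illustration).
hypotheses = {
    "rain":      {"prior": 0.3, "likelihood": 0.9},  # P(H), P(wet lawn | H)
    "sprinkler": {"prior": 0.2, "likelihood": 0.8},
    "neither":   {"prior": 0.5, "likelihood": 0.1},
}

# Numerator of Bayes' rule: prior x likelihood for each hypothesis.
unnormalized = {h: v["prior"] * v["likelihood"] for h, v in hypotheses.items()}

# The denominator -- the prior probability of the data, P(wet lawn) -- is the
# same for every hypothesis, so it serves only to normalize the numerators.
evidence = sum(unnormalized.values())
posterior = {h: u / evidence for h, u in unnormalized.items()}

for h, p in posterior.items():
    print(f"P({h} | wet lawn) = {p:.3f}")
```

Note that the ranking of hypotheses is already fixed by the numerators; dividing by the evidence only rescales them into probabilities.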