A more charitable way of putting it is that scientists adopt positions that are insufficiently extreme, compared to the ideal positions they would adopt if they were Bayesian AIs and could trust themselves to reason clearly.