My colleagues Eric-Jan Wagenmakers and Jeff Rouder and I have a new manuscript in which we respond to Hoijtink, van Kooten, and Hulsker's in-press manuscript Why Bayesian Psychologists Should Change the Way They Use the Bayes Factor. They suggest a method for "calibrating" Bayes factors using error rates. We show that this method is fatally flawed, but along the way we also describe how we think about the subjective properties of the priors we use in our Bayes factors:
"...a particular researcher's subjective prior is of limited use in the context of a public scientific discussion. Statistical analysis is often used as part of an argument. Wielding a fully personal, subjective prior and concluding 'If you were me, you would believe this' might be useful in some contexts, but in others it is less useful. In the context of a scientific argument, it is much more useful to have priors that approximate what a reasonable, but somewhat-removed researcher would have in the situation. One could call this a 'consensus prior' approach. The need for broadly applicable arguments is not a unique property of statistics; it applies to all scientific arguments. We do not argue to convince ourselves; we should therefore make use of statistical arguments that are not pegged to our own beliefs...
It should now be obvious how we make our 'Bayesian omelet'; we break the eggs and cook the omelet for others in the hopes that it is something like what they would choose for themselves. With the right choice of ingredients, we think our Bayesian omelet can satisfy most people; others are free to make their own, and we would be happy to help them if we can."
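For readers who want to see what a default, consensus-style prior looks like in practice, here is a minimal sketch using the BayesFactor R package; the one-sample design and the simulated data are assumptions chosen purely for illustration, not an example from the manuscript.

```r
## Minimal sketch: a Bayes factor for a one-sample t test under a
## default Cauchy prior on standardized effect size. The data are
## simulated for illustration only.
library(BayesFactor)

set.seed(123)
x <- rnorm(30, mean = 0.3, sd = 1)  # 30 observations, true effect ~0.3 SD

## rscale sets the scale of the default Cauchy prior on effect size;
## "medium" corresponds to r = sqrt(2)/2.
bf <- ttestBF(x = x, rscale = "medium")
bf                # Bayes factor for the alternative over the null
extractBF(bf)$bf  # the Bayes factor as a plain number
```

Because the prior here is a conventional default rather than any one person's personal belief, two researchers running this same analysis will report the same Bayes factor, which is the point of the 'consensus prior' approach described above.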
Our completely open, reproducible manuscript, “Calibrated” Bayes factors should not be used: a reply to Hoijtink, van Kooten, and Hulsker, is available on GitHub (with a DOI!), along with a supplement and R code.