BayesFactor: Software for Bayesian inference. A blog devoted to the BayesFactor software for Bayesian data analysis, by Richard Morey.<br /><br /><h2>Stop saying confidence intervals are "better" than p values (2016-07-29)</h2>One of the common tropes one hears from advocates of confidence intervals is that they are superior, or should be preferred, to p values. In our paper<a href="https://learnbayes.org/papers/confidenceIntervalsFallacy/" target="_blank"> "The Fallacy of Placing Confidence in Confidence Intervals"</a>, we outlined a number of interpretation problems in confidence interval theory. We did this from a mostly Bayesian perspective, but the second section contained an example showing why, from a frequentist perspective, confidence intervals can fail. However, many people missed this because they assumed that the paper was all Bayesian advocacy. The purpose of this blog post is to expand on the frequentist example that many people missed; one doesn't have to be a Bayesian to see that confidence intervals can be less interpretable than the p values they are supposed to replace. <a href="http://andrewgelman.com/2011/08/25/why_it_doesnt_m/" target="_blank">Andrew Gelman briefly made this point previously</a>, but I want to expand on it so that people (hopefully) more clearly understand the point.<br /><br /><a name='more'></a>Understanding the argument I'm going to lay out here is critical to understanding <i>both</i> p values and confidence intervals. As we'll see, fallacies about one or the other are what lead advocates of confidence intervals to falsely believe that CIs are "better".<br /><br /><h3>p values and "surprise"</h3>First, we must define a p value properly and understand its role in frequentist inference. 
The p value is the <i>probability of obtaining a result at least as extreme as the one we observed, under some assumption about the true distribution of the data. </i>A low p value is taken as indicating that the result observed was very extreme under the assumptions, and hence calls the assumptions into doubt. One might say that a low p value is "surprising" under the assumptions. I will not question this mode of inference here.<br /><br />It is critical to keep in mind that a low p value can call an assumption into doubt, but a high p value does not "confirm" anything. This is consistent with falsificationist logic. We often see p values used in the context of null hypothesis significance testing (NHST), where a single p value is computed that indicates how extreme the data are under the assumption of a null hypothesis; however, we can compute p values for any hypothesis we like. As an example, suppose we are interested in whether reading comprehension scores are affected by caffeine. We apply three different doses to N=10 people in each group in a between-subjects design, and test their reading comprehension. For the sake of the example, we assume normality, homogeneity of variance, etc. We apply a one-way ANOVA to the reading comprehension scores and obtain an F statistic of F(2,27)=8.<br /><br />If we were to assume that there was no relationship between the reading scores and caffeine dose, then the resulting p value for this F statistic is p=0.002. 
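This p value is easy to verify with the central F distribution function in R (an illustrative check, not the output of any particular package):<br /><pre class="brush: r"># Right-tailed p value for F(2,27) = 8 under the null hypothesis
pf(8, 2, 27, lower.tail = FALSE)  # approximately 0.0019
</pre>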
This indicates that we would only expect F statistics as extreme as this one 0.2% of the time, if there were no true relationship.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-sTeu9_WetwY/V5tlEoh6Y9I/AAAAAAAAKPU/RLf04NPEo5kZ4J3U_J8VZ7g_IYWWgmSjQCLcB/s1600/Screen%2BShot%2B2016-07-29%2Bat%2B15.15.21.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="195" src="https://3.bp.blogspot.com/-sTeu9_WetwY/V5tlEoh6Y9I/AAAAAAAAKPU/RLf04NPEo5kZ4J3U_J8VZ7g_IYWWgmSjQCLcB/s400/Screen%2BShot%2B2016-07-29%2Bat%2B15.15.21.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">The curve shows the distribution of F(2,27) statistics when the null hypothesis is true. The area under the curve to the right of the observed F statistic is the p value.</td></tr></tbody></table>This low p value would typically be regarded as strong evidence against the null hypothesis, because -- as the graph above shows -- an F statistic as extreme as the observed one would be quite rare, if indeed there were no relationship between reading scores and caffeine.<br /><br />So far, this is all first-year statistics (though it is often misunderstood). Although we typically see p values computed for a single hypothesis, there is nothing stopping us from computing them for multiple hypotheses. Suppose we are interested in the true size of the effect between reading scores and caffeine dosage. 
One statistic that quantifies this relationship is <span style="background-color: white;"><span style="font-family: inherit;">ω<sup>2</sup></span></span>, the proportion of the total variance in the reading scores that is "accounted for" by caffeine (<a href="http://ww.w.statpower.net/Steiger%20Biblio/Steiger04.pdf" target="_blank">see Steiger, 2004 for details</a>). We won't get into the details of how this is computed; we need only know that:<br /><br /><ul><li>When <span style="background-color: white; font-family: inherit;">ω</span><sup style="font-family: inherit;">2</sup>=0, there is no relationship between caffeine and reading scores. All variance is error; that is, knowing someone's reading score does not give any information about which dose group they were in.</li><li>When <span style="background-color: white; font-family: inherit;">ω</span><sup style="font-family: inherit;">2</sup>=1, there is the strongest possible relationship between caffeine and reading scores. No variance is error; that is, by knowing someone's reading score one can know with certainty which dose group they were in.</li><li>As <span style="background-color: white; font-family: inherit;">ω</span><sup style="font-family: inherit;">2 </sup>gets larger, larger and larger F statistics are predicted. </li></ul><div>We have computed the p value under the assumption that <span style="background-color: white; font-family: inherit;">ω</span><sup style="font-family: inherit;">2</sup>=0, but what about all other <span style="background-color: white; font-family: inherit;">ω</span><sup style="font-family: inherit;">2 </sup>values? Try this <a href="https://richarddmorey.shinyapps.io/etasq/" target="_blank">shiny app</a> to find the predicted distribution of F statistics, and hence p values, for other values of <span style="background-color: white; font-family: inherit;">ω</span><sup style="font-family: inherit;">2</sup>. 
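A sketch of what such an app computes: under a true ω<sup>2</sup>, the F statistic follows a noncentral F distribution. Assuming the noncentrality parameter is λ = Nω<sup>2</sup>/(1 - ω<sup>2</sup>), with N = 30 the total sample size (this mapping is an assumption of this sketch), the right-tailed p value is one line of R:<br /><pre class="brush: r"># Right-tailed p value for F = 8 under a true omega^2, assuming
# noncentrality lambda = N * omega2 / (1 - omega2) with N = 30
pRight = function(omega2, N = 30) {
  pf(8, 2, 27, ncp = N * omega2 / (1 - omega2), lower.tail = FALSE)
}
pRight(0)      # about 0.0019, the null-hypothesis p value
pRight(0.108)  # about 0.05
</pre>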
Try to find the value of <span style="background-color: white; font-family: inherit;">ω</span><sup style="font-family: inherit;">2 </sup>that would yield a p value of exactly 0.05; it should be about <span style="background-color: white; font-family: inherit;">ω</span><sup style="font-family: inherit;">2</sup>=0.108. </div><div><br /></div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-BY7DS7m6xm4/V5tr_Bj8mXI/AAAAAAAAKPk/7_a88lb6AJMc_8DUFi_IIipvToCFVZ_pQCLcB/s1600/Screen%2BShot%2B2016-07-29%2Bat%2B15.44.45.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="191" src="https://3.bp.blogspot.com/-BY7DS7m6xm4/V5tr_Bj8mXI/AAAAAAAAKPk/7_a88lb6AJMc_8DUFi_IIipvToCFVZ_pQCLcB/s400/Screen%2BShot%2B2016-07-29%2Bat%2B15.44.45.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">A Shiny app for finding p values in a one-way ANOVA with three groups.</td></tr></tbody></table><div><br /></div><div>All values of <span style="background-color: white; font-family: inherit;">ω</span><sup style="font-family: inherit;">2 </sup>less than 0.108 yield p values of less than 0.05. 
If we designate <span style="background-color: white;">p<0.05 as "surprising", then F=8 would be surprising under the </span><span style="background-color: white;">assumption of any value of </span><span style="background-color: white; font-family: inherit;">ω</span><sup style="font-family: inherit;">2 </sup><span style="background-color: white;">between 0 and 0.108.</span></div><div><span style="background-color: white;"><br /></span></div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-W2B-ZbPJMjU/V5uBFWe9MGI/AAAAAAAAKP4/x4KrOssZDkYSwd6jXVeUkesXPbXF0LlzgCLcB/s1600/Screen%2BShot%2B2016-07-29%2Bat%2B17.14.09.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="161" src="https://4.bp.blogspot.com/-W2B-ZbPJMjU/V5uBFWe9MGI/AAAAAAAAKP4/x4KrOssZDkYSwd6jXVeUkesXPbXF0LlzgCLcB/s400/Screen%2BShot%2B2016-07-29%2Bat%2B17.14.09.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Using the Shiny app, we can see that F=8 yields a right-tailed p value of about 0.05 when <span style="background-color: white; font-family: inherit;">ω</span><sup style="font-family: inherit; text-align: start;">2</sup><span style="font-size: small; text-align: start;"> </span>is approximately 0.108<span style="font-size: small; text-align: start;">.</span> </td></tr></tbody></table><div><span style="background-color: white;"><br /></span></div><div><span style="background-color: white;"><br /></span></div><div><span style="background-color: white;">Notice that the p values we've computed thus far are "right-tailed" p values; that is, "extreme" is defined as "too big". We can also ask about whether the F statistic we've found is extreme in the other direction: that is, is it "too small"? 
A p value used to indicate whether the F value is too small is called a "left-tailed" p value. Using the Shiny app, one can work out the value of </span><span style="background-color: white; font-family: inherit;">ω</span><sup style="font-family: inherit;">2</sup> such that F=8 would be "surprisingly" small at the p=0.05 level; that value is <span style="background-color: white; font-family: inherit;">ω</span><sup style="font-family: inherit;">2</sup>=0.523. Under any true value of <span style="background-color: white; font-family: inherit;">ω</span><sup style="font-family: inherit;">2 </sup>greater than 0.523, F=8 would be surprisingly small.</div><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-KovogZp0WZk/V5uC31ws5UI/AAAAAAAAKQE/FRI5bhhE3aQZnEay-axNVHyUk6vG0cXdgCLcB/s1600/Screen%2BShot%2B2016-07-29%2Bat%2B17.22.08.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="162" src="https://3.bp.blogspot.com/-KovogZp0WZk/V5uC31ws5UI/AAAAAAAAKQE/FRI5bhhE3aQZnEay-axNVHyUk6vG0cXdgCLcB/s400/Screen%2BShot%2B2016-07-29%2Bat%2B17.22.08.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Using the Shiny app, we can see that F=8 yields a left-tailed p value of about 0.05 when <span style="background-color: white; font-family: inherit;">ω</span><sup style="font-family: inherit; text-align: start;">2</sup><span style="font-size: small; text-align: start;"> </span>is approximately 0.523<span style="font-size: small; text-align: start;">.</span></td></tr></tbody></table><br /><ul><li>If 0 <span style="background-color: white; font-family: inherit;"><span style="color: #252525; font-family: sans-serif; font-size: 14px;">≤ </span>ω</span><sup style="font-family: inherit;">2 </sup><span style="background-color: white; color: 
#252525; font-family: sans-serif; font-size: 14px;">≤ </span>0.108, the observed F statistic would be surprisingly large (that is, the right-tailed p <span style="background-color: white; color: #252525; font-family: sans-serif; font-size: 14px;">≤ </span>0.05)</li><li>If 0.523 <span style="background-color: white; font-family: inherit;"><span style="color: #252525; font-family: sans-serif; font-size: 14px;">≤ </span>ω</span><sup style="font-family: inherit;">2 </sup><span style="background-color: white; color: #252525; font-family: sans-serif; font-size: 14px;">≤ </span>1, the observed F statistic would be surprisingly small (that is, the left-tailed p <span style="background-color: white; color: #252525; font-family: sans-serif; font-size: 14px;">≤ </span>0.05)</li><li>If 0.108 <span style="background-color: white; font-family: inherit;"><span style="color: #252525; font-family: sans-serif; font-size: 14px;">≤ </span>ω</span><sup style="font-family: inherit;">2 </sup><span style="background-color: white; color: #252525; font-family: sans-serif; font-size: 14px;">≤ </span>0.523, the observed F statistic would not be surprisingly large or small. </li></ul><br />Critically, we've used p values to make all of these statements. The p values tell us whether values would be "surprisingly extreme", under particular assumptions;<b> p values allow us, under frequentist logic, to <i>rule out </i>true values of <span style="background-color: white; font-family: inherit;">ω</span><sup style="font-family: inherit;">2</sup>, <i>but not to rule them in.</i></b><br /><br /><h3>p values and confidence intervals</h3>Many people are aware of the relationship between p values and confidence intervals. A typical X% (two-tailed) confidence interval contains all parameter values such that neither one-sided p values are less than (1-X/100)/2. 
That sounds complicated, but it isn't; for a 90% confidence interval, we just need all the values for which the observed data would not be "too surprising" (where "too surprising" means p<0.05 for either of the two one-sided tests).<br /><br />We've already computed the 90% confidence interval for <span style="background-color: white; font-family: inherit;">ω</span><sup style="font-family: inherit;">2 </sup>in our example; for all values in [0.108,<span style="background-color: white; color: #252525; font-family: sans-serif; font-size: 14px;"> </span>0.523], the p value for both one-sided tests is p>0.05. From each of the two one-sided tests we get an error rate of 0.05, and hence the confidence coefficient is 100 times 1 - (0.05 + 0.05) = 90%.<br /><br />How can we interpret the confidence interval? Confidence interval advocates would have us believe that the interval [0.108,<span style="background-color: white; color: #252525; font-family: sans-serif; font-size: 14px;"> </span>0.523] gives "plausible" or "likely" values for the parameters, and that the width of this interval tells us the precision of our estimate. But remember how the CI was computed: using p values. We know that nonsignificant high p values do not <i>rule in </i>parameter values as plausible; rather, the values outside the interval have been <i>ruled out</i>, due to the fact that if those were the true values, the observed data would be surprising.<br /><br />So rather than thinking of the CI as values that are "ruled in" as "plausible" or "likely" by the data, we should (from a frequentist perspective, at least) <b><i>think of the confidence interval as values that have not yet been ruled out by a significance test.</i></b><br /><br /><h3>Does this matter?</h3><div>This distinction matters a great deal for understanding both p values and confidence intervals. 
In order to use p values in any way that approaches reasonability, we need to understand the "surprise" interpretation, and we need to realise that we can compute p values for many hypotheses, not just the null hypothesis. In order to interpret confidence intervals well, we need to understand the <a href="https://errorstatistics.com/2012/06/02/anything-tests-can-do-cis-do-better-cis-do-anything-better-than-tests/" target="_blank">"fallacy of acceptance"</a>: Just because a value is in the CI doesn't mean it is plausible; it only means that it has not yet been ruled out.</div><div><br /></div><div>To see the real consequences of this fallacy, consider what we would infer if F(2,27)=0.001 (p=0.999). Any competent data analyst would notice that there is something wrong; the means are surprisingly similar. Under the null hypothesis, when all variability is due to error within the groups, we still expect the observed means to vary. This F statistic indicates that the means are so similar that even under the null hypothesis -- where the true means are exactly the same -- we would expect more similar observed means only one time in a thousand.</div><br />In fact, the F statistic is so small that under all values of <span style="background-color: white; font-family: inherit;">ω</span><sup style="font-family: inherit;">2 </sup>, the left-tailed p value is at most 0.001. Why? Because <span style="background-color: white; font-family: inherit;">ω</span><sup style="font-family: inherit;">2 </sup>can't be any lower than 0, and this represents the null hypothesis. If we built a 90% confidence interval, it would be empty because there are no values of <span style="background-color: white; font-family: inherit;">ω</span><sup style="font-family: inherit;">2 </sup>that yield p>0.05. For all true values of <span style="background-color: white; font-family: inherit;">ω</span><sup style="font-family: inherit;">2 </sup>, the observed data are "surprising". 
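This can be checked numerically (again assuming the noncentrality mapping λ = Nω<sup>2</sup>/(1 - ω<sup>2</sup>) with N = 30, an assumption of this sketch): the left-tailed p value for F=0.001 is tiny at ω<sup>2</sup>=0, and increasing ω<sup>2</sup> only makes it smaller.<br /><pre class="brush: r"># Left-tailed p value for F = 0.001 as a function of the true omega^2
pLeft = function(omega2, N = 30) {
  pf(0.001, 2, 27, ncp = N * omega2 / (1 - omega2))  # lower.tail = TRUE
}
pLeft(0)    # about 0.001: surprising even under the null
pLeft(0.2)  # smaller still
</pre>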
Now this presents no particular problem for an interpretation of confidence intervals that rests solely on their relationship with p values. But note that the very high p value tells us more than the confidence interval; the CI depends on the confidence coefficient, and is simply empty. The p value and the F statistic have the information we want; they tell us that the means are much more similar than we would typically expect under any hypothesis. A competent data analyst would, at this point, check the procedure or data for problems. The entire model is suspect.<br /><br />But what does this mean for a confidence interval advocate who is invested in the (incorrect) interpretation of the CI in terms of "plausible values" or "precision"? Consider Steiger (2004), who suggests replacing a missing bound with "0" in the CI for <span style="background-color: white; font-family: inherit;">ω</span><sup style="font-family: inherit;">2</sup>. This is an awful suggestion. In the example above with F=0.001, this would imply that the confidence interval includes a single value, 0. But the observed data F=0.001 would be very surprising if <span style="background-color: white; font-family: inherit;">ω</span><sup style="font-family: inherit;">2 </sup><span style="background-color: white; color: #252525; font-family: sans-serif; font-size: 14px;">= </span>0. Under frequentist logic, the value -- and all other values -- should be ruled out. Moreover, a CI of (0) is infinitesimally thin. Steiger admits that this obviously does not imply infinite precision, but neither Steiger nor any other CI advocate gives a formal reason why CIs must, in general, have an interpretation in terms of precision. When the interpretation obviously fails, this should make us doubt whether the interpretation was correct in the first place. 
The p value tells the story much better than the CI, without encouraging us to fall into fallacies of acceptance or precision.<br /><br /><h3>Where to go from here?</h3>It is often claimed that confidence intervals are more informative than p values. This assertion is based on a flawed interpretation of confidence intervals, which we call the "likelihood" or "plausibility" fallacy, and is related to Mayo's "fallacy of acceptance". A proper interpretation of confidence intervals, in terms of the underlying significance tests, avoids this fallacy and prevents bad interpretations of CIs, in particular when the model is suspect. The entire concept of the "confidence interval" encourages the fallacy of acceptance, and it is probably best if CIs were abandoned altogether. If one does not want to be Bayesian, one option that is more useful than confidence intervals -- where all values are either rejected or not at a fixed level of significance -- is viewing curves of p values (for similar use of p value curves, <a href="http://www.phil.vt.edu/dmayo/pubs/Mayo_Spanos_2006_Severe_testing_Basic_Concept_NP_indcution.pdf" target="_blank">see Mayo's work on "severity"</a>).<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-E7ciPd-nN58/V5umkfXX2oI/AAAAAAAAKQU/W-v9stwlYK4CevqKr6LREthCuvObzUG0ACLcB/s1600/Rplot01.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="200" src="https://4.bp.blogspot.com/-E7ciPd-nN58/V5umkfXX2oI/AAAAAAAAKQU/W-v9stwlYK4CevqKr6LREthCuvObzUG0ACLcB/s400/Rplot01.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Curves of right- and left-tailed p values for the two F statistics mentioned in this post.</td></tr></tbody></table>Consider the plot on the left above, which shows all right- and left-tailed p 
values for F=8. The horizontal line at p=0.05 allows us to find the 90% confidence interval. For any value of <span style="background-color: white; font-family: inherit;">ω</span><sup style="font-family: inherit;">2 </sup>such that either the blue or red line is lower than the horizontal line, the observed data would be "surprising". It is easy to see that for p=0.05, the values that remain form the interval [0.108,<span style="background-color: white; color: #252525; font-family: sans-serif; font-size: 14px;"> </span>0.523]. The plot easily shows the necessary information without encouraging the fallacy of acceptance.<br /><br />Now, consider the plot on the right. For F=0.001, however, all values of <span style="background-color: white; font-family: inherit;">ω</span><sup style="font-family: inherit;">2 </sup>yield a left-tailed p value of less than 0.05, and hence F=0.001 would be "surprising". There are no values for which both the red and blue lines are above p=0.05. The plot does not encourage us to believe that <span style="background-color: white; font-family: inherit;">ω</span><sup style="font-family: inherit;">2 </sup>is small or 0, nor does it encourage any interpretation in terms of precision; instead, it shows that <i>all</i> values are suspect.<br /><br />The answer to fallacious interpretations of p values is not to move to confidence intervals; confidence intervals only encourage related fallacies, which one can find in any confidence interval advocacy paper. If we wish to rid people of fallacies involving p values, more p values are needed, not fewer. <b><i>Confidence intervals are not "better" than p values</i></b>. 
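Curves like these are easy to compute oneself. A minimal sketch in R, under the same assumed noncentrality mapping λ = Nω<sup>2</sup>/(1 - ω<sup>2</sup>) used above; the 90% confidence interval is just the set of ω<sup>2</sup> values for which both one-sided p values exceed 0.05:<br /><pre class="brush: r">omega2 = seq(0, 0.9, by = 0.001)
lambda = 30 * omega2 / (1 - omega2)  # assumed mapping, N = 30
pRight = pf(8, 2, 27, ncp = lambda, lower.tail = FALSE)
pLeft  = pf(8, 2, 27, ncp = lambda)
range(omega2[pRight > 0.05 & pLeft > 0.05])  # approximately 0.108 to 0.523
</pre>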
The only way to interpret CIs reasonably is in terms of p values, and considering entire p value curves enables us to jettison the reliance on an arbitrary confidence coefficient, and helps us avoid fallacies.<br /><br /><br />Richard Morey<br /><br /><h2>Numerical pitfalls in computing variance (2016-05-03)</h2>One of the most common tasks in statistical computing is computation of sample variance. This would seem to be straightforward; there are a number of algebraically equivalent ways of representing the sum of squares \(S\), such as \[ S = \sum_{k=1}^n ( x_k - \bar{x})^2 \] or \[ S = \sum_{k=1}^n x_k^2 - n\bar{x}^2 \] and the sample variance is simply \(S/(n-1)\).<br /><br />What is straightforward algebraically, however, is sometimes not so straightforward in the floating-point arithmetic used by computers. Computers cannot represent numbers to infinite precision, and arithmetic operations can affect the precision of floating-point numbers in unexpected ways.<br /><br /><br /><a name='more'></a><br /><br />Consider the numbers .1 and .3/3. These two numbers are equal. However,<br /><pre class="brush: r">.1 - .3/3<br /></pre><pre><code>## [1] 0.00000000000000001388<br /></code></pre>is not exactly 0, as one would expect it to be (for more, see "Falling into the Floating Point Trap", Chapter 4 in the <a href="http://www.burns-stat.com/pages/Tutor/R_inferno.pdf" target="_blank">"R Inferno" by Patrick Burns</a>). Multiple ways of computing the variance that are algebraically equivalent do not necessarily yield equal answers in software such as R, and some ways are better than others.<br /><br />In a series of posts, <a href="https://twitter.com/JohnDCook" target="_blank">John D. 
Cook</a> shows that the seemingly reasonable, commonly used second method above, which he calls the "sum of squares" method, <a href="http://www.johndcook.com/blog/2008/09/26/comparing-three-methods-of-computing-standard-deviation/" target="_blank">can be extremely unstable in certain circumstances</a>, even giving impossible negative values. He also discusses <a href="http://www.johndcook.com/blog/standard_deviation/" target="_blank">how to compute the sample variance in a numerically stable way using Welford's method</a>. Both of these posts are well worth reading.<br /><br />When I read them, I thought two things. First, I was reminded that I use [used; this was written some time ago] the "sum of squares" method in the <a href="https://cran.r-project.org/web/packages/BayesFactor/index.html" target="_blank"><span style="font-family: "courier new" , "courier" , monospace;">BayesFactor</span> package</a>. Secondly, I thought I would not be affected by the problem, because I represent numbers as logarithms internally for numerical stability and ease of division. Logarithms make many things easier: very large and very small numbers become easier to work with; exponentiation becomes multiplication; and multiplication and division become addition and subtraction. The tricky part of working with logarithms is addition and subtraction. 
If we have two numbers, \(\exp(a)\) and \(\exp(b)\) represented by their logarithms \(a\) and \(b\), and we want to know the logarithm of their sum, we can make use of the identities<br />\[<br />\begin{eqnarray*}<br />\log(\exp(a) + \exp(b)) = a + \log(1 + \exp(b - a))\\<br />\log(\exp(a) - \exp(b)) = a + \log(1 - \exp(b - a))<br />\end{eqnarray*}<br />\]<br />Now arithmetic with \(a\) and \(b\) is addition and subtraction, and we can use <a href="http://cran.r-project.org/web/packages/Rmpfr/vignettes/log1mexp-note.pdf" target="_blank">accurate floating point approximations</a> of \(\log(1+\exp(x))\) and \(\log(1-\exp(x))\).<br /><h3>When logarithms don't help</h3><br /><br />But I wasn't really sure whether I would be affected by the instability of the "sum of squares" method, so I decided to check. It turns out, representing numbers logarithmically doesn't necessarily help. In order to demonstrate this easily, I created an R S4 class that eases arithmetic on logarithmically-represented values. First, we load necessary libraries/files: <br /><pre class="brush: r"># Install the BayesFactor and devtools packages, if you don't already have them<br /><br /># Load my S4 class for representing real numbers with logarithms<br /># and performing arithmetic on them<br /># See the code at https://gist.github.com/richarddmorey/3c77d0065983e31241bff3807482443e<br />devtools::source_gist('3c77d0065983e31241bff3807482443e')<br /><br /># set random seed so results are reproducible<br />set.seed(2)<br /></pre><br /><a href="https://gist.github.com/richarddmorey/3c77d0065983e31241bff3807482443e" target="_blank">[Click here to view the R file you'll be sourcing above]</a><br /><br /><div class="p1"><span class="s1">To see the S4 class in action, we need to generate some numbers that are logarithmically-represented. 
The variables </span><span class="s2">x</span><span class="s1"> and </span><span class="s2">y</span> below are equal to \(\exp(1)=2.718\) and \(\exp(2)=7.389\)<span class="s1">, respectively. The </span><span class="s2"><span style="font-family: "courier new" , "courier" , monospace;">modulo</span></span><span class="s1"> argument gives the log-represented magnitude of the number, and the </span><span class="s2"><span style="font-family: "courier new" , "courier" , monospace;">sign</span></span><span class="s1"> argument gives the sign (with <span style="font-family: "courier new" , "courier" , monospace;">1L</span> meaning the integer representation of 1):</span></div><pre class="brush: r">x = logRepresentedReal(modulo = 1, sign = 1L)<br />y = logRepresentedReal(modulo = 2, sign = 1L)<br /></pre>We can add the two numbers together, for instance: <br /><pre class="brush: r">x + y<br /></pre><pre><code>## 10.10734<br /></code></pre>Although the result does not look logarithmically-represented, we can verify that it is using the <span style="font-family: "courier new" , "courier" , monospace;">str</span> function: <br /><pre class="brush: r">str( x + y )<br /></pre><pre><code>## Formal class 'logRepresentedReal' [package ".GlobalEnv"] with 2 slots<br />## ..@ modulo: num 2.31<br />## ..@ sign : int 1<br /></code></pre>The result is of class <span style="font-family: "courier new" , "courier" , monospace;">logRepresentedReal</span>, and the <span style="font-family: "courier new" , "courier" , monospace;">modulo</span> slot tells us \(\log(x+y)\). With the arithmetic on the logarithmically-represented numbers defined using the <span style="font-family: "courier new" , "courier" , monospace;">logRepresentedReal</span> class, we can test whether our logarithms help stabilize the estimate of the variance. 
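Before running that test, the addition identity from earlier can be illustrated in plain R, independently of the S4 class (a minimal log-sum-exp using the built-in <span style="font-family: "courier new" , "courier" , monospace;">log1p</span>; this is not the class's internal code):<br /><pre class="brush: r"># log(exp(a) + exp(b)) computed stably from a and b
logadd = function(a, b) {
  m = pmax(a, b)
  m + log1p(exp(-abs(a - b)))
}
logadd(1000, 999)  # finite, even though exp(1000) overflows
exp(logadd(1, 2)) - (exp(1) + exp(2))  # essentially zero
</pre>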
Following Cook, we will sample values from a uniform distribution, making use of the fact that if \(z\) has a uniform distribution, then \(−\log(z)\) has a standard exponential distribution: <br /><pre class="brush: r">runif2 = function(n){<br /> # Sample log values from exponential distribution<br /> x = -rexp(n)<br /> # represent all values logarithmically in a list<br /> lapply(x, logRepresentedReal, sign = 1L)<br />}<br /><br />n = 100<br />z = runif2(n)<br /></pre>We sampled \(n=100\) values from a uniform distribution. We can now compute the variance in several ways. The first way is to use the “sum of squares” method on the exponentiated values: <br /><pre class="brush: r"># Sum of squares method<br />var.sumsq.exp = function(z)<br />{<br /> n = length(z)<br /> z = sapply(z, as.numeric)<br /> (sum(z^2) - n*mean(z)^2)/(n-1)<br />}<br /><br />var.sumsq.exp(z)<br /></pre><pre><code>## [1] 0.07419988<br /></code></pre>This presents no problem, since our uniformly-distributed values are rather moderate. 
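As an aside, the instability Cook describes is easy to reproduce with ordinary doubles, before any logarithms enter the picture (the shift of one billion follows his example; the exact numbers depend on the random seed):<br /><pre class="brush: r">set.seed(1)
x = runif(100) + 1e9  # moderate variance, huge mean
n = length(x)
(sum(x^2) - n * mean(x)^2) / (n - 1)  # sum of squares method: garbage
var(x)  # R's var() centers first and stays accurate, about 1/12
</pre>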
We now use Welford’s method on the logarithmically-represented values to compute the variance: <br /><pre class="brush: r">var.welford <- function(z){<br /> n = length(z)<br /> M = list()<br /> S = list()<br /> M[[1]] = z[[1]]<br /> S[[1]] = 0<br /><br /> for(k in 2:n){<br /> M[[k]] = M[[k-1]] + ( z[[k]] - M[[k-1]] ) / k<br /> S[[k]] = S[[k-1]] + ( z[[k]] - M[[k-1]] ) * ( z[[k]] - M[[k]] )<br /> }<br /> return(S[[n]] / (n - 1))<br />}<br /><br />var.welford(z)<br /></pre><pre><code>## 0.07419988<br /></code></pre>And finally, we can use the “sum of squares” method on the logarithmically-represented values: <br /><pre class="brush: r">var.sumsq = function(z){<br /> n = length(z)<br /> zsqr = sapply(z, function(x) x^2)<br /><br /> sumz = 0<br /> sumz2 = 0<br /> for(k in 1:n){<br /> sumz = sumz + z[[k]]<br /> sumz2 = sumz2 + zsqr[[k]] <br /> }<br /> mnz = sumz/n<br /> ssz = sumz2 - n * mnz^2 <br /> return(ssz / (n-1)) <br />}<br /><br />var.sumsq(z)<br /></pre><pre><code>## 0.07419988<br /></code></pre>Again, this presents no problem, since our uniformly-distributed values are moderate. So far, we see no signs of numerical instability, but none are expected. As Cook did in his example, we can add a very large number — in this case, one billion — to all sampled values. This makes the variance quite small compared to the mean, and would be expected to make the sum of squares estimates unstable. <br /><pre class="brush: r">const = 1e9<br />z = sapply(z, function(x) x + const)<br /><br />var.sumsq.exp(z)<br /></pre><pre><code>## [1] -165.4949<br /></code></pre><pre class="brush: r">var.welford(z)<br /></pre><pre><code>## 0.07419973<br /></code></pre><pre class="brush: r">var.sumsq(z)<br /></pre><pre><code>## 35886<br /></code></pre>Notice that both sum of squares estimates fail; the logarithmic representation used by <span style="font-family: Courier New, Courier, monospace;">var.sumsq</span> does not help the numeric stability. 
The Welford method, on the other hand, yields an accurate value. <br /><h3><br />Conclusion </h3><br />There are many ways of combating numerical instability, and representing numbers as logarithms is an important one. But although logarithmic representation is effective against some sources of instability, it does not help in all cases. The superiority of the Welford method of computing variance, even when numbers are represented logarithmically, shows this clearly. <br /><div></div>Richard Moreyhttp://www.blogger.com/profile/11319149283079163004noreply@blogger.com34tag:blogger.com,1999:blog-50311760553201901.post-16902415594378718002016-04-03T05:33:00.001-07:002016-04-04T00:42:47.178-07:00How to train undergraduate psychologists to be post hoc BS generatorsTeaching undergraduate psychology is difficult for a variety of reasons. Students come in with preconceived notions about what psychological research is and are sometimes disappointed with the mismatch between their preconceptions and reality. Much of what psychologists do is highly specialized and requires skills that are difficult to teach, and psychologists-in-training can't offer much research-wise until they have years of experience. The assignments we ask undergraduates to complete are meant to train their critical thinking skills to prepare them for a more substantive contribution to research. Sometimes, however, they do exactly the opposite: assignments can reward post hoc BS generation rather than actual critical thinking.<br /><br /><div class="separator" style="clear: both; text-align: center;"></div><br /><br /><a name='more'></a>If the recent so-called crisis in psychology has highlighted anything, it is the prevalence and danger of post hoc narratives. Although statistical practices (e.g., use of significance testing) have gotten much of the blame -- at least in my corner of the research world -- the main problems are actually a level or two above that. 
Combining ill- or flexibly-defined theoretical concepts, post hoc reasoning, and publication bias yields a potent mixture that I would argue is responsible for the crisis.<br /><br />I have been thinking about this in the context of the assignments we ask our undergraduates to do, and how we actually train post hoc reasoning early on. Here I'll offer examples of two undergraduate assignments that I think <i>reward</i> scientists-in-training for their BS generation skills. I'll also elaborate on what I think we can do about this.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-HDDeMzAbnxQ/VwEVVcbHKhI/AAAAAAAAKNU/YqRKn4WXtR0RqiJLZgvtmjUpDwHMrNDww/s1600/Texas%2BSharpshooter%2Bcartoon.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="200" src="https://2.bp.blogspot.com/-HDDeMzAbnxQ/VwEVVcbHKhI/AAAAAAAAKNU/YqRKn4WXtR0RqiJLZgvtmjUpDwHMrNDww/s200/Texas%2BSharpshooter%2Bcartoon.png" width="157" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">The "Texas Sharpshooter" paints the target around his bullet holes.</td></tr></tbody></table><br /><h3>Example assignment: Critique a peer-reviewed article</h3>The assignment: Students are assigned an article from a peer-reviewed psychological journal and asked to critique it. Ideally, they choose a few critiques and argue for them in their essay.<br /><br />The basic problem with this assignment is that students are not particularly well-versed in any particular psychological topic, nor in psychological research methods. Peer-reviewed articles, on the other hand, have been reviewed by people who are, which means that whatever problems remain with the research have evaded skilled reviewers. 
This is not to say that peer-reviewed research does not have major problems, but it does mean that students who have had only a few basic courses and do not have much experience reading peer-reviewed research are unlikely to be able to find good-quality critiques spontaneously.<br /><br />Upon reading such an article and having difficulty finding a critique, a student is in an awkward position: they must write an essay. So what do they do? They come up with whatever critiques come to mind, which are likely to be low-quality critiques. I suspect readers of this blog have experienced these sorts of critiques in student assignments: maybe there are cultural differences? The sample <i>seems </i>small. Are these really the best stimuli to use? Students must choose a number of these arguments, and argue for them, in spite of the fact that they don't have sufficient knowledge on which to base such a critique. We're training them in the fine art of <i>bullshit</i>.<br /><br />This is not to say that these problems don't occur in some studies. But forming a good argument for <i>why</i> takes specialized knowledge that they don't yet have, so we get back noise from the students. And who gets the best marks for such an assignment? <i>Students who can write clearly about things of which they have little actual understanding.</i><br /><br />We have to ask ourselves: is it any wonder that we have a replication crisis?<br /><br /><h3>Example assignment: Do an experiment and interpret the results</h3>The assignment: Students are asked to perform a simple experiment (often in groups), analyze the data, and report the results. They must interpret the results in light of the research they've read (often primarily the textbook).<br /><br />Experience doing simple experiments and analyzing the results is critical for a psychologist-in-training. But how the assignment is framed and marked is critical to whether we are training the skills we want. 
Students in chemistry, biology, and physics all perform easy experiments and report the results; this is as it should be.<br /><br />What is different about interpreting the results of a typical psychology experiment, as opposed to a chemistry experiment, is that there are very strong reasons to expect something specific to happen in the chemistry experiment. If the psychology experiment doesn't come out as the textbook predicts, though, students must describe why that might be. There are, of course, a hundred possible reasons why this might be the case, including the possibility that the original study was wrong, statistical noise, and sloppiness in their experimental procedure.<br /><br />But these explanations will not be the ones they explore. We require students to show creativity and independent reading/thought. In an assignment like this, students know that the best way to get a good mark is to find a paper whose logic might predict the results obtained, and to include a cogent argument for why this might have caused the differences. The students turn in the paper and will not test their hypothesis, of course; the argument is simply thrown in to get a better mark. <i>The students who do the most independent reading and form the best-sounding argument will get the best mark.</i><br /><br />This should all sound eerily familiar: we are training them in the time-honored tradition of post hoc arguments for "<a href="http://rolfzwaan.blogspot.co.uk/2015/01/the-dripping-stone-fallacy-confirmation.html" target="_blank">hidden moderators</a>".<br /><br /><h3>Fixing the problems</h3>If we want to train good psychologists, we must be very sensitive to the skills we're actually teaching, as opposed to those we <i>think</i> we are teaching. The practices in the field will be a reflection of what students are taught. 
How might we use assignments to train critical thinking, without teaching bad practice?<br /><br /><h4>Critiquing pop science</h4>The problem with the critique of the peer-reviewed article is that students are unlikely to be able to spot the real problems with the article. This is somewhat like asking first-year sports therapists to critique a professional sports player's technique; the imperfections are simply too fine, because the professionals have been honing their craft with help for years. It would be better to ask them to critique amateur sports players' techniques, which will have more glaring problems.<br /><br />Unfortunately, there is no "amateur" peer-reviewed research. There is, however, a lot of very bad non-peer-reviewed pop science. Psychologists-in-training would benefit from assessing bad popular science (not just popular psychology): for instance, spurious claims of causation (vs correlation), overblown effect sizes, and mismatches between what is claimed about a piece of research in a pop article and what was actually done. Critiquing popular science develops much the same skills as critiquing a peer-reviewed article, without the unfortunate side effect of asking students to BS their way to a good mark.<br /><br /><br /><h4>Separating critiques of method from critiques of results</h4>Critiquing methods alongside the results leads to an unfortunate asymmetry: if an experiment yields the expected result, the methods are not critiqued, whereas if it doesn't, the students are encouraged to generate BS reasons why it might not have worked, with no expectation of testing those reasons. If students were asked to critique methods by themselves, they would not be rewarded for such post hoc reasoning. Moreover, in an essay of typical length, this leaves more room to discuss why the methods are problematic; for instance, if the sample size is problematic, a methods-only critique would allow space for a power analysis. 
In a methods+results critique, I often see critiques of sample sizes with no corresponding argument for <i>why </i>the sample size is a problem.<br /><br /><br /><h4>Being specific about potential critiques</h4>In whatever assignments we give to undergraduates, we should be specific about what sorts of critiques we are expecting, preferably giving a short list of possible critiques. The students will still have to read the target article, but instead of taking a shot in the dark and being forced to argue for it, students will be forced to ask, for instance, "Does this research suffer from a confound with X?", or "Is this experiment sufficiently powered to detect an effect size of Z?", or "Does this DV represent a good operationalization of W?"<br /><br />Perhaps, for instance, power is not a problem; they would then be in a position to argue that, <i>yes, the experiment is sufficiently powered, </i>instead of always (vaguely) attacking an article. Always asking for critique teaches students that critical thinking is about dreaming up as many ways to attack an article as possible, and then forming a plausible-seeming argument around them. In contrast, being very specific about <i>possible</i> critiques -- which may not, in fact, turn out to be problems -- will develop critical thinking and argumentation skills better.<br /><br /><h3>Wrap up</h3><div>If we believe psychology is in crisis, we should look at the way we train undergraduates to see if part of the problem lies there. I think the crisis in psychology is reflected in some ways in our training. Doing better is not just about better statistical training or better open science training; it is also about ensuring a match between what we <i>think</i> we are teaching and what we <i>actually </i>teach. 
</div><br />Richard Moreyhttp://www.blogger.com/profile/11319149283079163004noreply@blogger.com43tag:blogger.com,1999:blog-50311760553201901.post-31706011693834130682016-03-30T02:42:00.000-07:002016-04-03T02:49:37.980-07:00How to check Likert scale summaries for plausibilitySuppose you are reading a paper that uses Likert scale responses. The paper reports the mean, standard deviation, and number of responses. If we are -- for some reason -- suspicious of the paper, we might ask, "Are these summary statistics possible for this number of responses, for this Likert scale?" Someone asked me this recently, so I wrote some simple code to help check. In this blog post, I outline how the code works.<br /><br /><a name='more'></a>Suppose we are reading a paper that uses a 5-category Likert scale response (0-4) and reports that, for 100 responses, the mean response was .5 and the standard deviation of the responses was 5. I have intentionally chosen these numbers to be impossible: for the mean to be .5, the responses have to be on average near the bottom of the scale. But if the responses are near the bottom of the scale, the standard deviation also has to be very low, because there is a bound at 0. A standard deviation of 5 is much too large for the mean.<br /><br />Another possible inconsistency arises from the discreteness of Likert scales. For 10 Likert responses on a 3-point scale (0-2), the mean must be a multiple of .1. This, in turn, imposes a complicated constraint on the standard deviation.<br /><br />Checking whether a response pattern is possible may be simple for low N, but it gets complex as N increases. Suppose we are wondering, for a 6-category Likert scale (0-5), whether N=94, M=.83, and SD=1.21 are possible summary statistics. 
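Before enumerating anything, a cheap necessary-condition check can already rule out grossly impossible summaries like the M=.5, SD=5 example above. The sketch below is my own addition (in Python, not part of the R code discussed in this post); it uses the Bhatia-Davis inequality -- the population variance of values in \([0, k]\) with mean \(m\) is at most \(m(k-m)\) -- plus the fact that the sum of integer responses must be an integer:

```python
def summaries_feasible(mean, sd, n, n_levels, tol=0.005):
    """Necessary (not sufficient) check that reported summaries could arise
    from n integer responses on a 0..(n_levels - 1) Likert scale.
    tol is the assumed rounding error of the reported mean and sd."""
    hi = n_levels - 1
    # The mean must lie on the scale
    if not (0.0 <= mean <= hi):
        return False
    # The sum of integer responses is an integer, so n * mean must be
    # within n * tol of one
    if abs(n * mean - round(n * mean)) > n * tol:
        return False
    # Bhatia-Davis inequality: for values in [0, hi] with mean m, the
    # population variance is at most m * (hi - m); rescale for the n - 1
    # denominator used by the sample standard deviation
    max_var = mean * (hi - mean) * n / (n - 1)
    return (sd - tol) ** 2 <= max_var

print(summaries_feasible(0.5, 5.0, 100, 5))   # the impossible example: False
print(summaries_feasible(0.83, 1.21, 94, 6))  # the example considered next: True
```

Passing this check does not mean the summaries are exactly attainable -- discreteness can still rule them out -- which is what the exhaustive methods in the rest of the post address.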
There are several ways we could go about this using R.<br /><br />The code for both methods is available here: <a href="https://gist.github.com/richarddmorey/787d7b7547a736a49c3f" target="_blank">https://gist.github.com/richarddmorey/787d7b7547a736a49c3f</a><br /><h3>Option 1: Brute force</h3><div>In the brute force method, we create all possible response patterns (a response pattern is defined by the number of responses in each response category) and then check them all, finding the ones whose summary statistics are closest to the target summaries. The code I linked above has two functions. The first, <span style="font-family: Courier New, Courier, monospace;">count.ns</span>, counts the total number of response patterns for a given Likert scale and N. For the summaries above, this will be:</div><pre class="brush: r">> count.ns(N, nlev)<br />[1] 71523144<br /></pre><div>or about 72 million response patterns -- every way of distributing the 94 responses among the 6 categories. Brute force isn't pretty, but this job is possible in a minute or two on a modern PC. A note: on some older Windows computers this may exhaust your memory.</div><div><br /></div><div>The second function, <span style="font-family: Courier New, Courier, monospace;">get.ns</span>, computes all possible response patterns and puts them in a matrix. This may take a bit of time (on my Macbook Pro it takes about a minute and a half):</div><pre class="brush: r">> x = get.ns( 94, nlev=6 )<br />> x[1:10,]<br />      [,1] [,2] [,3] [,4] [,5] [,6]<br /> [1,]    0    0    0    0    0   94<br /> [2,]    0    0    0    0    1   93<br /> [3,]    0    0    0    0    2   92<br /> [4,]    0    0    0    0    3   91<br /> [5,]    0    0    0    0    4   90<br /> [6,]    0    0    0    0    5   89<br /> [7,]    0    0    0    0    6   88<br /> [8,]    0    0    0    0    7   87<br /> [9,]    0    0    0    0    8   86<br />[10,]    0    0    0    0    9   85<br /></pre><div>As you can see, <span style="font-family: Courier New, Courier, monospace;">x</span> now contains the possible response patterns for N=94 responses with K=6 Likert response categories; above I show the first 10 patterns.</div><div><br /></div><div>All that remains is to compute the mean and standard deviation of each response pattern and compare them to the targets. We can use the sum of the squared deviations from the target mean and target standard deviation to sort the possible solutions. Note that if we wanted both deviations to be less than .005 (to account for rounding to the nearest .01), then we would want solutions whose summed squared error is no greater than 2 × .005^2 = .00005.</div><div><br /></div><div>The code linked above does all the sorting and places the solutions in an object called <span style="font-family: Courier New, Courier, monospace;">res</span>. After running the code, the first 10 rows of <span style="font-family: Courier New, Courier, monospace;">res</span> are:</div><pre class="brush: r">> res[1:10,]<br />      resp0 resp1 resp2 resp3 resp4 resp5      mean sum x^2 std. dev.    sum error<br /> [1,]    50    30     0     9     4     1 0.8297872     200  1.206063 1.554821e-05<br /> [2,]    54    22     0    17     0     1 0.8297872     200  1.206063 1.554821e-05<br /> [3,]    56    18     1    18     1     0 0.8297872     200  1.206063 1.554821e-05<br /> [4,]    57    15     4    17     1     0 0.8297872     200  1.206063 1.554821e-05<br /> [5,]    59     3    27     0     4     1 0.8297872     200  1.206063 1.554821e-05<br /> [6,]    59    10     7    18     0     0 0.8297872     200  1.206063 1.554821e-05<br /> [7,]    60     1    27     2     3     1 0.8297872     200  1.206063 1.554821e-05<br /> [8,]    60     7    10    17     0     0 0.8297872     200  1.206063 1.554821e-05<br /> [9,]    41    46     1     0     0     6 0.8297872     200  1.206063 1.554821e-05<br />[10,]    43    42     2     1     1     5 0.8297872     200  1.206063 1.554821e-05<br /></pre><div>The first six columns contain the possible response patterns; the next three columns contain the summary statistics; the final column contains the summed error. We wished to match M=.83 and SD=1.21, and there are many solutions that yield these summary statistics. 
There is no reason to be suspicious of these summaries.</div><div><br /></div><h3>Option 2: Linear Inverse Models</h3><div>The brute force solution above does the job, but it is slow and memory-intensive. If we had a 7-category Likert scale, the number of possible response patterns would be about 1.2 billion; add more, and you can see that the time and memory required for the brute force method become prohibitive. We can instead use an approximate method -- linear inverse models -- to get close.</div><div><br /></div><div>The idea of linear inverse models is that we minimize a quadratic function subject to some constraints. In our case, we would like to find proportions of responses in each category that yield summary statistics as close as possible to our given summary statistics. We have some reasonable constraints:</div><div><ol><li>Proportions must sum to 1 (equality constraint).</li><li>Our summary statistics should be as close as possible to our target summaries (approximate equality constraint).</li><li>Proportions must be between 0 and 1 (inequality constraint).</li></ol></div><div>The <a href="https://cran.r-project.org/web/packages/limSolve/limSolve.pdf" target="_blank"><span style="font-family: Courier New, Courier, monospace;">limSolve</span></a> package in R allows us to sample approximate solutions for our response proportions according to these constraints. I will not go into detail here about how to define these constraints for the <span style="font-family: Courier New, Courier, monospace;">limSolve</span> package; see my code linked above and the <span style="font-family: Courier New, Courier, monospace;">limSolve</span> manual. I will, however, show you the output of <span style="font-family: Courier New, Courier, monospace;">limSolve</span>'s <span style="font-family: Courier New, Courier, monospace;">xsample</span> function, which we use to sample possible solutions:</div><pre class="brush: r">> xs <- limSolve::xsample(A = A, B = B, E = E, F = F, G = G, H = H, sdB = 1)<br />> xs$X[1:10,]<br />           [,1]      [,2]       [,3]       [,4]       [,5]       [,6]<br /> [1,] 0.5231726 0.2862201 0.12002470 0.02459116 0.00000000 0.04599141<br /> [2,] 0.5231726 0.2862201 0.12002470 0.02459116 0.00000000 0.04599141<br /> [3,] 0.5231726 0.2862201 0.12002470 0.02459116 0.00000000 0.04599141<br /> [4,] 0.5231726 0.2862201 0.12002470 0.02459116 0.00000000 0.04599141<br /> [5,] 0.5231726 0.2862201 0.12002470 0.02459116 0.00000000 0.04599141<br /> [6,] 0.5085929 0.3226566 0.08151242 0.01938088 0.03202715 0.03582995<br /> [7,] 0.5085929 0.3226566 0.08151242 0.01938088 0.03202715 0.03582995<br /> [8,] 0.5085929 0.3226566 0.08151242 0.01938088 0.03202715 0.03582995<br /> [9,] 0.5860701 0.2541897 0.04073821 0.03497458 0.01822376 0.06580361<br />[10,] 0.5860701 0.2541897 0.04073821 0.03497458 0.01822376 0.06580361<br /></pre><div>The possible solutions for the proportions of responses in each response category are in the rows of the <span style="font-family: Courier New, Courier, monospace;">xs$X</span> element of the output. We need to multiply these by N=94 to see the response patterns themselves:</div><pre class="brush: r">> xs$X[1:10,]*94<br />          [,1]     [,2]      [,3]     [,4]     [,5]     [,6]<br /> [1,] 49.17823 26.90469 11.282322 2.311569 0.000000 4.323192<br /> [2,] 49.17823 26.90469 11.282322 2.311569 0.000000 4.323192<br /> [3,] 49.17823 26.90469 11.282322 2.311569 0.000000 4.323192<br /> [4,] 49.17823 26.90469 11.282322 2.311569 0.000000 4.323192<br /> [5,] 49.17823 26.90469 11.282322 2.311569 0.000000 4.323192<br /> [6,] 47.80774 30.32972  7.662167 1.821802 3.010553 3.368016<br /> [7,] 47.80774 30.32972  7.662167 1.821802 3.010553 3.368016<br /> [8,] 47.80774 30.32972  7.662167 1.821802 3.010553 3.368016<br /> [9,] 55.09059 23.89383  3.829392 3.287611 1.713034 6.185539<br />[10,] 55.09059 23.89383  3.829392 3.287611 1.713034 6.185539<br /></pre><div>There are several things to notice here. First, there are some duplicates, which is to be expected from the sampling method. Second, these solutions are not integers. Since response patterns must be integers, we would have to round these and make sure they sum to N=94 before testing them to see if their summaries are acceptably close. 
</div><div><br /></div><div>Take the first solution: we might round this to obtain</div><div><span style="font-family: "courier new" , "courier" , monospace;">49 27 11 2 0 4</span></div><div><br /></div><div>However, this only sums to N=93, so we need to add 1 somewhere. Suppose we add it to the fourth category, to obtain</div><div><span style="font-family: "courier new" , "courier" , monospace;">49 27 11 3 0 4</span></div><div><span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div><div><span style="font-family: inherit;">The mean response for this response pattern is almost exactly .83, our target mean. The standard deviation is 1.20, which is only .01 away from our target standard deviation. </span></div><div><br /></div><div>Finally, note the similarity of these solutions to the ones we obtained by brute force. The linear inverse model method has gotten us in the neighborhood of the good solutions without the inelegance, time, and memory hogging of the brute force method. However, unlike the brute force method, it is not constrained to integer solutions, so we need to do some post processing. </div><h3></h3><h3></h3><h3></h3><h3>Conclusion</h3><div>Both the brute force and linear inverse model solution yield the same answer: for a 6-item Likert scale (0-5), N=94, M=.83, and SD=1.21 are not problematic summary statistics. Which of these methods one uses depends largely on the situation, but one could even combine them for a more efficient search. 
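As a final check, the accepted pattern can be verified directly. A quick sketch (assuming, as in the post, response options coded 0-5):

```r
# Check the rounded pattern 49 27 11 3 0 4 against the target summaries
# (assumes response options are coded 0-5; N = 94, M = .83, SD = 1.21)
counts <- c(49, 27, 11, 3, 0, 4)
x <- rep(0:5, times = counts)
length(x)          # 94: the pattern sums to N
round(mean(x), 2)  # ~0.83, the target mean
round(sd(x), 2)    # ~1.20, within .01 of the target SD
```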
As a reminder, the complete code can be found here: <a href="https://gist.github.com/richarddmorey/787d7b7547a736a49c3f" target="_blank">https://gist.github.com/richarddmorey/787d7b7547a736a49c3f</a>.</div>Richard Moreyhttp://www.blogger.com/profile/11319149283079163004noreply@blogger.com196tag:blogger.com,1999:blog-50311760553201901.post-13834374133300187322016-01-19T14:00:00.000-08:002016-01-19T14:03:39.321-08:00New digital feature editorship at the Psychonomic SocietyAs of January 1, 2016, I am the new methods editor for the <a href="http://www.psychonomic.org/default.aspx" target="_blank">Psychonomic Society</a> digital features. Steve Lewandowsky has written an introductory post with a bit of background, and my first post -- about <a href="http://link.springer.com/article/10.3758%2Fs13423-015-0988-z" target="_blank">Arsenault and Buchsbaum's recent article in Psychonomic Bulletin & Review on neuroimaging evidence for motor theories of speech</a> -- is up as well.<br /><br /><br /><ul><li>"<a href="http://www.psychonomic.org/featured-content-detail/from-bach-to-bayes-wales-richard-morey-challenge" target="_blank">From Bach to Bayes and Wales: the Richard Morey challenge</a>" (by Steve Lewandowsky)</li><li>"<a href="http://www.psychonomic.org/featured-content-detail/in-mind-s-ear-no-connection-between-hearing-speaki" target="_blank">In the mind's ear: No connection between hearing and speaking in motor cortex</a>" (my first post)</li></ul>Richard Moreyhttp://www.blogger.com/profile/11319149283079163004noreply@blogger.com52tag:blogger.com,1999:blog-50311760553201901.post-35496810287665126662016-01-09T02:29:00.000-08:002016-01-09T04:20:12.264-08:00Asymmetric funnel plots without publication biasIn my <a href="http://bayesfactor.blogspot.co.uk/2016/01/averaging-can-produce-misleading.html" target="_blank">last post about standardized effect sizes</a>, I showed how averaging across trials before computing standardized effect sizes such as partial \(\eta^2\) and 
Cohen's <i>d</i> can produce arbitrary estimates of those quantities. This has drastic implications for meta-analysis, but also for the interpretations of these effect sizes. In this post, I use the same facts to show how one can obtain asymmetric funnel plots — commonly taken to indicate publication bias — without any publication bias at all. You should <a href="http://bayesfactor.blogspot.co.uk/2016/01/averaging-can-produce-misleading.html" target="_blank">read the previous post if you haven't already</a>.<br /><br /><a name='more'></a>A <a href="https://en.wikipedia.org/wiki/Funnel_plot" target="_blank">funnel plot</a> is a commonly-used meta-analytic technique for the detection of bias in a subset of the scientific literature. The basic thinking is that if a literature is unbiased, the <i>average</i> estimates of an effect should not depend on the sample size (or some other measure of the "precision" of a study). For a given sample size, estimates of the effect size should be spread around the true effect size, with this spread decreasing as sample size gets larger.<br /><br />Publication bias, which is often assumed to manifest itself as 1) a tendency for statistically significant results to be published, and 2) a tendency for researchers to publish effects consistent with their theoretical outlook, will result in asymmetric funnel plots. Read this <a href="http://blogs.discovermagazine.com/neuroskeptic/2015/11/10/reproducibility-crisis-the-plot-thickens/" target="_blank">Neuroskeptic post</a> about a <a href="http://psycnet.apa.org/index.cfm?fa=buy.optionToBuy&id=2015-48744-001" target="_blank">paper by Shanks and colleagues</a> for an example of how asymmetric funnel plots are used to argue for publication bias. Notice that the plots use a standardized effect size on the <i>x</i> axis.<br /><br /><h4>A (not so) hypothetical paradigm</h4><div><br /></div>Since many priming effects have been called into question recently, I will use a priming example.
Suppose we are interested in emotional face priming: we ask participants to perform a lexical decision task, but prior to every trial we "subliminally" (ie, very quickly) present either an excited face or a sad face, thinking that the excited face will speed performance on the task. Participants perform a number of trials in both priming conditions, which are averaged to obtain two "observations" per participant: an average RT in each condition. This is very common in the psychological literature. A paired <i>t</i> test is used to assess the effect of the prime.<br /><br />Now suppose this same paradigm is used across many labs, with only variation in sample sizes. Each lab reports the standard statistics: the mean difference in RTs across participants, its standard error, and the <i>t</i> statistic. A skeptic comes along, collects all the statistics across all the papers, and computes Hedges' <i>g</i> standardized effect size (a variation on the standardized difference score) from the <i>t</i> statistic. They produce the funnel plot shown below by plotting the sample size<sup><a href="#footnote1">1</a> </sup>(number of participants) against the standardized effect size:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-0oaqunJrtVE/VpDRd9wdHPI/AAAAAAAAKKo/kpv5zDKxmJQ/s1600/unnamed-chunk-1-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="250" src="http://3.bp.blogspot.com/-0oaqunJrtVE/VpDRd9wdHPI/AAAAAAAAKKo/kpv5zDKxmJQ/s400/unnamed-chunk-1-1.png" width="400" /></a></div>This is a <i>massively</i> asymmetric funnel plot, and would likely be taken as strong evidence of publication bias. However, because I simulated the data, I know that there is no publication bias at all. This is merely an artifact of averaging and standardized effect sizes.
You can obtain my simulation code here: <a href="https://gist.github.com/richarddmorey/beb49a28dd130f851ae4" target="_blank">github gist</a><br /><br />Why is the funnel plot asymmetric? In all studies, the total number of trials performed was approximately the same: 2000 trials. The way these broke down across participants was different. Some studies had 100 trials per condition and 10 participants; others, 10 trials per condition and 100 participants. The standard deviation of the difference scores around their mean is a function of the number of trials performed per participant. When the number of trials is high, the standardized effect size is high, just as discussed in the <a href="http://bayesfactor.blogspot.co.uk/2016/01/averaging-can-produce-misleading.html" target="_blank">previous blog post</a>. But here, because the total amount of "effort" per study is conserved (that is, all studies have the same number of total trials), the studies with larger numbers of trials per participant have a <i>smaller </i>number of participants. 
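The artifact is easy to reproduce in miniature. The sketch below is a simplification of the simulation in the linked gist (the parameter values here are illustrative, not necessarily the ones I used): each participant's difference score is an average over trials, so running more trials per participant shrinks the standard deviation of the difference scores and inflates the standardized effect, even though the raw effect is fixed.

```r
# Simplified sketch of the artifact (illustrative parameters, not the gist's).
# True raw effect: 30 ms in every study; only the split of ~2000 total
# trials between participants and trials-per-condition differs.
set.seed(123)
sim_d <- function(n_subj, trials, delta = 30, sd_subj = 20, sd_trial = 150) {
  # each participant's difference score averages over `trials` trials,
  # so its trial-level noise has sd sqrt(2 * sd_trial^2 / trials)
  diffs <- rnorm(n_subj, delta, sd_subj) +
    rnorm(n_subj, 0, sqrt(2 * sd_trial^2 / trials))
  mean(diffs) / sd(diffs)  # standardized effect from the averaged data
}
d_few_trials  <- mean(replicate(500, sim_d(n_subj = 100, trials = 10)))
d_many_trials <- mean(replicate(500, sim_d(n_subj = 10,  trials = 100)))
d_many_trials > d_few_trials  # TRUE: same raw effect, larger standardized one
```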
The funnel plot therefore looks problematic, but it is an artifact.<br /><br />One wonders if <a href="http://stats.stackexchange.com/questions/151941/the-use-of-cohens-d-effect-size-in-funnel-plots-to-assess-small-study-bias" target="_blank">this Cross Validated query</a> was related to this artifact.<br /><br />Creating a funnel plot from the raw effect sizes removes the asymmetry; a funnel plot with the standard error on the <i>y</i> axis also does so.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-X-pVTYdE1uM/VpDWkSB_b_I/AAAAAAAAKK8/8rkGZlNf7xY/s1600/unnamed-chunk-2-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="250" src="http://4.bp.blogspot.com/-X-pVTYdE1uM/VpDWkSB_b_I/AAAAAAAAKK8/8rkGZlNf7xY/s400/unnamed-chunk-2-1.png" width="400" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div>This does not mean that using the standard error on the <i>y</i> axis fixes the problem. Consider another way the number of trials and number of participants can be divided: positively correlated, rather than negatively as before. That is, studies that run more participants also run more trials per condition. The funnel plots end up looking very strange, with an asymmetry that is the <i>reverse</i> of the one we expect. Larger effect sizes are obtained with larger numbers of participants.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-Rh9M3Q9aATY/VpDWkU9k7MI/AAAAAAAAKK4/XLEgYXNsz5o/s1600/unnamed-chunk-3-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="250" src="http://1.bp.blogspot.com/-Rh9M3Q9aATY/VpDWkU9k7MI/AAAAAAAAKK4/XLEgYXNsz5o/s400/unnamed-chunk-3-1.png" width="400" /></a></div>Without reflection, this pattern might be offered as evidence that there was something very strange happening in a literature.
But there's nothing strange here, except with the analysis. If there <i>were </i>publication bias, though, this artifact might actually mask it.<br /><br /><h4>Wrap up</h4><br />I suspect there are other artifacts one could generate using standardized effect sizes in a meta-analysis<sup><a href="#footnote2">2</a></sup>. How can we keep from getting fooled? In some cases, perhaps the correction I mentioned in the <a href="http://bayesfactor.blogspot.co.uk/2016/01/averaging-can-produce-misleading.html" target="_blank">previous post</a> might be of use. Since a funnel plot is often used for detecting problematic bias in a literature rather than estimating the effect size, the fact that there is no "true" effect size is not problematic.<br /><br />For future research, <a href="http://opennessinitiative.org/" target="_blank">data sharing</a> and reporting of different effect size measures will help. Modifications of Cohen's <i>d</i> and Hedges' <i>g</i> exist which will reduce this problem (see <a href="https://www.meta-analysis.com/downloads/Meta-analysis%20Effect%20sizes%20based%20on%20means.pdf" target="_blank">"Computing <i>d</i> and <i>g</i> from studies that use pre-post scores or matched groups"</a>, for instance), but these modified statistics cannot be computed from typically-reported statistics. The fact that we need statistics that are not typically reported in order to perform reasonable meta-analyses raises the question of whether current reporting practices really allow a cumulative science.<br /><br /><h4>Footnotes</h4><div><br /></div><sup>1</sup><a href="https://www.blogger.com/blogger.g?blogID=50311760553201901#" name="footnote1"></a>Funnel plots can be created with a variety of statistics on the <i>y</i> axis.
Different researchers make different recommendations for both axes (see, for instance, <a href="http://jama.jamanetwork.com/article.aspx?articleid=202337#METHODS" target="_blank">Peters et al 2006</a>), and as we will see, this can have a dramatic effect on the conclusions.<br /><br /><sup>2</sup><a href="https://www.blogger.com/blogger.g?blogID=50311760553201901#" name="footnote2"></a><a href="http://www.bmj.com/content/343/bmj.d4002" target="_blank">Sterne et al (2011)</a> note minor asymmetries caused by a correlation between an effect and a standard error, as can be caused in estimation of extreme proportions or similar parameters, but nothing as dramatic or fundamental as shown here. Their asymmetries are mostly problematic for asymmetry tests, which can pick up minor asymmetries with larger samples.Richard Moreyhttp://www.blogger.com/profile/11319149283079163004noreply@blogger.com25tag:blogger.com,1999:blog-50311760553201901.post-71461051748774410962016-01-07T14:21:00.001-08:002016-01-09T02:27:17.433-08:00Averaging can produce misleading standardized effect sizesRecently, there have been many calls for a focus on effect sizes in psychological research. In this post, I discuss how naively using standardized effect sizes with averaged data can be misleading. This is particularly problematic for meta-analysis, where differences in number of trials across studies could lead to very misleading results.<br /><br /><br /><a name='more'></a><br /><br />There are two main types of effect sizes in typical use: raw effect sizes and standardized effect sizes. <em>Raw effect sizes</em> are what you typically see in a plot: for instance, the effect of a priming manipulation might be 30ms. The advantage of raw effect sizes are that they are closer to the process of interest and more interpretable. We all know what it means for something to take 30ms to happen.<br /><br />Another kind of effect size is the <em>standardized</em> effect size. 
With a standardized effect size, the raw effect is compared to some measure of variability in the population. For instance, if the standard deviation of children’s heights at age 10 were 3 inches, and a “good” diet had an effect of 1.5 inches on average, we could say that the effect of the diet was .5 standard deviations. This is the logic of Cohen’s <span class="math inline">\(d\)</span>, for instance. The disadvantage of this is that it is more difficult to understand what an effect of “half a standard deviation” means (and the variance-accounted-for statistics such as <span class="math inline">\(\eta^2\)</span> and <span class="math inline">\(\omega^2\)</span> are even more difficult to interpret); but standardized effect sizes have many good properties, including a close relationship to the concept of statistical power, comparability across paradigms, and the fact that they can often be computed from reported statistics such as <span class="math inline">\(t\)</span> and <span class="math inline">\(F\)</span>.<br /><br />For these reasons, standardized effect sizes are very common in meta-analysis. However, the common practice of averaging over trials in cognitive psychology makes them difficult to compare or even interpret.<br /><br />Consider a typical cognitive experiment with 30 participants, each performing 10 response time trials in two conditions. Typically each participant’s data will be averaged to form a single, average response time in each condition; these average RTs are then submitted to a repeated measures ANOVA (in fact, some R packages, such as <a href="https://cran.r-project.org/web/packages/afex/index.html">afex</a>, do this automatically).<br /><br />Hypothetical data is shown in the plot below.
These data represent two hypothetical experiments, one with 10 samples per participant and one with 50.<br /><div class="separator" style="clear: both; text-align: center;"><br /></div><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-AdsJ-gqfpMA/Vo7fA15ANtI/AAAAAAAAKKQ/dNVBa3XqrvM/s1600/unnamed-chunk-2-1.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="250" src="http://1.bp.blogspot.com/-AdsJ-gqfpMA/Vo7fA15ANtI/AAAAAAAAKKQ/dNVBa3XqrvM/s400/unnamed-chunk-2-1.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Error bars represent <a href="http://www.tqmp.org/RegularArticles/vol04-2/p061/index.html" target="_blank">Morey (2008)</a> adjusted within-subject error bars, computed using <a href="http://www.cookbook-r.com/Graphs/Plotting_means_and_error_bars_(ggplot2)/" target="_blank">this code</a>.</td></tr></tbody></table><br />I have generated the data so that everything is the same across the two experiments <em>except</em> the scale of the “error”: the scale of the error in experiment 2 is <span class="math inline">\(1/\sqrt{5}\)</span> times that of experiment 1, because each averaged “observation” represents five times more data. The raw effect size is precisely the same, but our certainty about the effect size is greater in experiment 2. This is exactly as it should be.<br /><br />We can now perform the typical repeated measures ANOVAs on these two data sets, using the <a href="https://cran.r-project.org/web/packages/afex/index.html">afex</a> package.
The package will, if requested, compute the common partial <span class="math inline">\(\eta^2\)</span> standardized effect size statistic.<br /><br />The results of the ANOVA for the “condition” in experiment 1 are:<br /><style>table { color: #333; /* Lighten up font color */ font-family: Helvetica, Arial, sans-serif; /* Nicer font */ width: 480px; border-collapse: collapse; border-spacing: 0; } td, th { border: 1px solid #CCC; height: 30px; } /* Make cells a bit taller */ th { background: #F3F3F3; /* Light grey background */ font-weight: bold; /* Make sure they're bold */ } td { background: #FAFAFA; /* Lighter grey background */ text-align: center; /* Center our text */ } </style> <br /><table><thead><tr class="header"><th align="left">Effect</th><th align="left">df</th><th align="left">MSE</th><th align="left">F</th><th align="left">pes</th><th align="left">p.value</th></tr></thead><tbody><tr class="odd"><td align="left">condition</td><td align="left">1, 29</td><td align="left">0.08</td><td align="left">7.72</td><td align="left">.21</td><td align="left">.009</td></tr></tbody></table><br />And the results of the ANOVA for the “condition” in experiment 2 are:<br /><table><thead><tr class="header"><th align="left">Effect</th><th align="left">df</th><th align="left">MSE</th><th align="left">F</th><th align="left">pes</th><th align="left">p.value</th></tr></thead><tbody><tr class="odd"><td align="left">condition</td><td align="left">1, 29</td><td align="left">0.02</td><td align="left">38.58</td><td align="left">.57</td><td align="left"><.0001</td></tr></tbody></table><br />This would not be unexpected by anyone who routinely uses repeated measures ANOVA. Typically, the whole point of running more trials is to get more power. We performed more trials, and we obtained a higher <span class="math inline">\(F\)</span> value for the comparison of interest.<br /><br />Notice that the sum of squares for the effect is precisely the same for both experiments.
That’s because the raw effect is precisely the same. What is driving the higher <span class="math inline">\(F\)</span> value is the lower residual mean square (MSE) for the comparison, which is about 5 times smaller in Experiment 2. Again, this is what we expect. More trials, less “noise”.<br /><br />But notice what happens to partial <span class="math inline">\(\eta^2\)</span>. Because the MSE is smaller, the proportion of variance accounted for by the condition effect is larger. This drives the partial <span class="math inline">\(\eta^2\)</span> from .21 in experiment 1 to .57 in experiment 2. Researchers have previously warned about using partial <span class="math inline">\(\eta^2\)</span> for comparisons across designs (see for instance, <a href="http://psych.colorado.edu/~willcutt/pdfs/Olejnik_2003.pdf">Olejnik & Algina, 2003</a>), but these two experiments appear to have the <em>same</em> design; at least, from the perspective of someone used to only analysing averaged data, they do.<br /><br />This has the potential to wreak havoc on meta-analyses. Suppose someone combs the literature looking for <span class="math inline">\(F\)</span> values and computing partial <span class="math inline">\(\eta^2\)</span> values from the <span class="math inline">\(F\)</span> values (or, alternatively, Cohen’s <span class="math inline">\(d\)</span> from <span class="math inline">\(t\)</span> values). Assume experiment 1 represents a patient group; due to time constraints, the patients only had time for 10 trials per condition. Suppose experiment 2, on the other hand, represents a group of college students, who had time for more trials. 
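As an aside, these partial <span class="math inline">\(\eta^2\)</span> values can be recovered from the reported <span class="math inline">\(F\)</span> statistics alone, which is exactly how a meta-analyst would obtain them. A quick check (the helper <span style="font-family: "courier new" , "courier" , monospace;">pes_from_F</span> is defined here for illustration; it is not an afex function):

```r
# partial eta^2 recovered from a reported F statistic:
# pes = F * df1 / (F * df1 + df2); a helper for illustration, not from afex
pes_from_F <- function(F, df1, df2) (F * df1) / (F * df1 + df2)
round(pes_from_F(7.72, 1, 29), 2)   # ~0.21, experiment 1
round(pes_from_F(38.58, 1, 29), 2)  # ~0.57, experiment 2
```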
The figure below shows the standardized effects in the two experiments.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-RC_Chv6N_Rk/Vo7fA_NjwkI/AAAAAAAAKKc/sCzcyEcclG8/s1600/unnamed-chunk-5-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="250" src="http://4.bp.blogspot.com/-RC_Chv6N_Rk/Vo7fA_NjwkI/AAAAAAAAKKc/sCzcyEcclG8/s400/unnamed-chunk-5-1.png" width="400" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><br />Although the raw effect size is <em>precisely</em> the same across the two experiments, the standardized effect size is radically different, possibly leading to erroneous conclusions. Even if there are no systematic differences in number of trials across experiments with different kinds of groups, this introduces a new source of variability into estimates, as well as making it nearly impossible to interpret the effect size. What is the “true” standardized effect size? It seems difficult to say. How can we solve this problem?<br /><br /><h3>Solution 1: Generalized \(\omega^2\)</h3><div><br /></div>One option is generalized <span class="math inline">\(\omega^2\)</span> (see for instance, <a href="http://psych.colorado.edu/~willcutt/pdfs/Olejnik_2003.pdf">Olejnik & Algina, 2003</a>). Instead of using the residual variance to standardize against, generalized <span class="math inline">\(\omega^2\)</span> standardizes against all <em>measured</em> (as opposed to manipulated) factors. For instance, the variability in participants is a measured source of variability. These sources of variability are assumed to be stable properties of populations and not affected by mere design choices.
We can compute generalized <span class="math inline">\(\omega^2\)</span> again using the <a href="https://cran.r-project.org/web/packages/afex/index.html">afex</a> package, which yields <span class="math inline">\(\omega^2_g=0.011\)</span> for experiment 1 and <span class="math inline">\(\omega^2_g=0.012\)</span> for experiment 2; notice that these are very similar. The effect “looks” smaller, because participants vary quite a bit relative to the size of the effect.<br /><br />One problem with this approach is that the statistics necessary to compute generalized <span class="math inline">\(\omega^2\)</span> are not typically reported, meaning that this solution is useless for meta-analysis of existing literature. Ideally, we’d like a way to use reported statistics to at least compare across studies, when designs are similar enough.<br /><br /><h3>Solution 2: Adjust the MSE</h3><br />If we know the relative numbers of trials across two studies that have the same basic design, we should be able to “adjust” the MSE in the formula for the effect size (whichever effect size it happens to be) for the number of trials. In our example, experiment 2 has five times as many trials as experiment 1; we therefore would expect the MSE of experiment 2 to be one-fifth as large as that for experiment 1. To make the effect size computed from experiment 2 comparable to that from experiment 1, we can multiply its MSE by 5 before applying the formula for the effect size of interest.
For partial <span class="math inline">\(\eta^2\)</span>, this leads to an easy adjustment:<br />\[ \eta^2_2 = \frac{1}{c/\eta^2_1 - c + 1} \] where <span class="math inline">\(c\)</span> is the adjustment factor, <span class="math inline">\(\eta^2_1\)</span> is the original partial <span class="math inline">\(\eta^2\)</span>, and <span class="math inline">\(\eta^2_2\)</span> is the adjusted partial <span class="math inline">\(\eta^2\)</span>.<br /><br />As an example, take our experiment 2, which had a partial <span class="math inline">\(\eta^2\)</span> of 0.571 and a sample size 5 times larger than experiment 1. Applying the formula above yields<br />\[ \begin{eqnarray*} \eta^2_2 &=& \frac{1}{5/0.571 - 5 + 1}\\ &\approx&0.21 \end{eqnarray*} \] which matches the partial <span class="math inline">\(\eta^2\)</span> from experiment 1 very well.<br /><br />The problem with this approach is that it uses experiment 1 as a “reference” experiment. It is therefore not clear what the standardized effect size means in this case, except as a way to compare across experiments with similar designs. This may be enough to someone performing a meta-analysis — particularly if they can’t obtain the statistics to compute generalized <span class="math inline">\(\omega^2\)</span> — but as a general reporting solution, it is unsatisfactory. <br /><br /><h3>Wrap up</h3><div><br /></div>Although standardized effect sizes have been advocated as a general tool for science and are increasingly reported, they are difficult to interpret because they are affected by trivial, common design decisions. The issues I raise here should be of interest to anyone working with standardized effect sizes, particularly those performing meta-analysis. They affect repeated measures designs with averaged data most acutely; however, between-subjects designs are also affected if each participant contributes an “average” score to the analysis. 
In the between-subjects case the adjustment would have to be different, but for a large number of trials per participant it might be acceptably ignored, if the error in each participant’s score is small enough.<br /><br /><h4>Added postscript</h4><div><br /></div><div>After a re-read, I want to ensure that I make clear that I'm <i>not implying that the only problem here is with meta-analyses</i>; that's just what drove me to write this post, and how I decided to frame it. But consider this: if an arbitrary decision (driven merely by the resources at hand, such as time or money, or even whim) such as "how many trials will we perform per cell in this experiment?" can cause the standardized effect size to increase almost 200%, that standardized effect size should not be taken to reveal any psychological "truth" and is useless for drawing substantive conclusions.</div>Richard Moreyhttp://www.blogger.com/profile/11319149283079163004noreply@blogger.com27tag:blogger.com,1999:blog-50311760553201901.post-17684879460425445792015-12-03T15:19:00.000-08:002015-12-03T15:44:12.215-08:00Confidence intervals: What they are and are notOver at the <a href="http://www.psychonomic.org/Featured-Content" target="_blank">Psychonomic Society Featured Content</a> blog, there are several new articles outlining some of our work on confidence intervals published previously in <a href="http://www.springer.com/psychology/cognitive+psychology/journal/13423" target="_blank">Psychonomic Bulletin & Review</a>. In a three-part series, <a href="http://www.psychonomic.org/lewandowsky" target="_blank">Steve Lewandowsky</a> and <a href="http://www.psychonomic.org/etz" target="_blank">Alexander Etz</a> lay out our case for why confidence intervals are <i>not</i> what people think they are.
I've written enough about confidence intervals lately, so I'll just link you to their articles.<br /><div><br /><a name='more'></a></div><div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-KYpa0bT0ssM/VmDToKHLuOI/AAAAAAAAKJc/yzUDiAbuNUw/s1600/Untitled%2B2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="180" src="http://4.bp.blogspot.com/-KYpa0bT0ssM/VmDToKHLuOI/AAAAAAAAKJc/yzUDiAbuNUw/s320/Untitled%2B2.png" width="320" /></a></div><ul><li>Part 1 (Steve Lewandowsky): <a href="http://www.psychonomic.org/featured-content-detail/95-stepford-interval-confidently-not-what-it-appea" target="_blank">The 95% Stepford Interval: Confidently not what it appears to be</a>. Lewandowsky lays out the basic idea of a confidence interval, what people expect it to be, and how that is wrong.</li><li>Part 2 (Steve Lewandowsky): <a href="http://www.psychonomic.org/featured-content-detail/when-you-could-be-sure-that-submarine-is-yellow-it" target="_blank">When you could be sure that the submarine is yellow, it’ll frequentistly appear red, blue, or green</a>. Lewandowsky digs deeper into some strangeness in confidence intervals, caused by the fact that they guarantee certain behavior in the long run, but do not guarantee any reasonable post-data inferences.</li><li>Part 3 (Alexander Etz): <a href="http://www.psychonomic.org/featured-content-detail/confidence-intervals-more-like-confusion-intervals" target="_blank">Confidence intervals? More like confusion intervals</a>. 
Etz summarizes the problems with the confidence interval advocacy literature, including incorrect (even contradictory) interpretations of confidence intervals.</li></ul><div>Check it out, and don't forget that Morey et al (2015, 2015a) are open access!</div></div><div><br /></div><div><div style="background-color: white; box-sizing: border-box; color: #333333; font-family: 'PT Serif', serif; font-size: 14px; line-height: 20px; margin-bottom: 10px;">Morey, R. D., Hoekstra, R., Rouder, J. N., Lee, M. D., & Wagenmakers, E.-J. (2015). <a href="https://learnbayes.org/papers/confidenceIntervalsFallacy/" target="_blank">The Fallacy of Placing Confidence in Confidence Intervals</a>. <em style="box-sizing: border-box;">Psychonomic Bulletin & Review</em>, doi: 10.3758/s13423-015-0947-8.</div><div style="background-color: white; box-sizing: border-box; color: #333333; font-family: 'PT Serif', serif; font-size: 14px; line-height: 20px; margin-bottom: 10px;">Morey, R. D., Hoekstra, R., Rouder, J. N., & Wagenmakers, E.-J. (2015). <a href="http://link.springer.com/article/10.3758/s13423-015-0955-8" target="_blank">Continued misinterpretation of confidence intervals: response to Miller and Ulrich</a><em style="box-sizing: border-box;">. Psychonomic Bulletin & Review</em>, doi: 10.3758/s13423-015-0955-8.</div></div>Richard Moreyhttp://www.blogger.com/profile/11319149283079163004noreply@blogger.com8tag:blogger.com,1999:blog-50311760553201901.post-24294446659945585442015-12-02T05:00:00.001-08:002015-12-02T05:10:12.826-08:00Reviewers and open science: why PRO?As of yesterday, <a href="http://opennessinitiative.org/PRO_Initiative_RSOS.pdf" target="_blank">our paper outlining the PRO Initiative</a> for open science was accepted for publication in the journal <a href="http://rsos.royalsocietypublishing.org/" target="_blank">Royal Society Open Science</a>. 
It marks the end of many tweaks to the basic idea, and hopefully the beginning of a new era in peer reviewing: the empowered peer reviewer. The basic idea behind the PRO Initiative is that the peer relationship is fundamental in science, and it is this relationship that should drive cultural change. Open science is necessary, possible, and overdue. As reviewers, we can make it happen.<br /><br /><a name='more'></a><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-XVFpZyjuHwk/Vl7rwgO7ESI/AAAAAAAAKI8/R5m1Qim4r5o/s1600/pro_lock_wide2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="128" src="http://4.bp.blogspot.com/-XVFpZyjuHwk/Vl7rwgO7ESI/AAAAAAAAKI8/R5m1Qim4r5o/s320/pro_lock_wide2.png" width="320" /></a></div>What is the PRO Initiative? <a href="https://opennessinitiative.org/signatories/" target="_blank">Signatories</a> on the Initiative agree that after 1 January, 2017, they will not offer comprehensive review for any manuscript that does not have open data and open materials (ie, they are placed for all to access on a reliable host, <a href="https://opennessinitiative.org/making-your-data-public/" target="_blank">such as those listed here</a>) <i>or </i>at least justifies the lack of openness with the manuscript itself. If a reviewer receives such a manuscript to review, they can contact the editor to ask the authors to fix the problem so that the review process can continue; if the authors refuse, then the reviewer should offer a simple review that focuses only on the lack of data, and does not recommend acceptance. The manuscript was incomplete, after all; it should not become part of the scientific record in that form.<br /><br />Science is, of course, complicated, and so the Initiative is a bit more complicated too; <a href="http://opennessinitiative.org/PRO_Initiative_RSOS.pdf" target="_blank">read the full paper</a> for more details. 
But the basic idea is that <i>reviewers</i> are the ones who define how to review a paper to ensure that good science is rewarded. There's nothing sacred about the current non-interactive, comprehensive review system; this made sense in a past world where physical manuscripts were being sent through the mail. Clearly, in that world you didn't want a back-and-forth between reviewers, because that would have taken ages. Now, however, reviewers can ask for a fix instantaneously. I have, for instance, noted that a manuscript was missing a figure, and gotten a new manuscript in a day. If a manuscript does not meet certain minimal standards, there is no reason for a reviewer to continue the review. Ask for a fix; it's your right as a reviewer.<br /><br />Is open science important enough that reviewers should change the way they review? I say <i>yes, absolutely</i>. Ultimately, though, that is up to you as a reviewer to decide. I hope that you'll decide to <a href="https://opennessinitiative.org/join-the-initiative/" target="_blank">stand with the other reviewers</a> who have signed the PRO Initiative. In the coming year before the Initiative takes effect, we'll be organizing events at conferences to talk about open science.
Let's change science together!<br /><br />If you do support the Initiative, head over to the website to get <a href="https://opennessinitiative.org/pro-initiative-media/" target="_blank">social media avatars to show your support</a>!<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-EHK0zik5f2I/Vl7tjVw8dXI/AAAAAAAAKJI/pjAEpz_Pp0c/s1600/pro_lock.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="200" src="http://3.bp.blogspot.com/-EHK0zik5f2I/Vl7tjVw8dXI/AAAAAAAAKJI/pjAEpz_Pp0c/s200/pro_lock.png" width="200" /></a></div><br /><br /><br />[<a href="http://opennessinitiative.org/PRO_Initiative_RSOS.pdf" target="_blank">Read the paper</a> -- <a href="https://opennessinitiative.org/join-the-initiative/" target="_blank">Sign the Initiative</a> -- <a href="https://opennessinitiative.org/guidelines-for-authors/" target="_blank">More resources for open science</a>]<br /><br /><br />Richard Moreyhttp://www.blogger.com/profile/11319149283079163004noreply@blogger.com44tag:blogger.com,1999:blog-50311760553201901.post-43333453582235453322015-11-19T07:35:00.001-08:002015-11-19T07:40:30.988-08:00Habits and open data: Helping students develop a theory of scientific mindThis post is related to my <a href="http://www.learnbayes.org/talks/Psychonomics2015/" target="_blank">open science talk</a> with <a href="http://www.research.ed.ac.uk/portal/en/persons/candice-morey(c0ff598a-2f60-40b4-87d8-96ac58a1bc2a).html" target="_blank">Candice Morey</a> at P<a href="http://www.psychonomic.org/annual-meeting" target="_blank">sychonomics 2015 in Chicago</a>; also read <a href="https://candicemorey.org/?p=45" target="_blank">Candice's new post on the pragmatics: "A visit from the Ghost of Research Past"</a>. In this post, we suggest three ideas that can be implemented in a lab setting to improve scientific practices, and encourage habits that make openness easier. 
These ideas are designed to be minimally effortful for the adviser, but to have a big impact on practice:<br /><br />* <b>Data partners:</b> young scientists have a partner in another lab, with whom they swap data. The goal is to see if their data documentation is good enough that their partner can reproduce their main analysis with minimal interaction.<br />* <b>Five-year plan:</b> When a project is part-way through, students must give a brief report that details what they have done to ensure that the data and analyses will be comprehensible to members of the lab in five years' time, after they have left.<br />* <b>Submission check:</b> At first submission of an article based on the project, advisors should discuss with their advisees the pros and cons of opening their data, and how the data will be promoted online, if it will be open.<br /><br /><a name='more'></a><br /><br /><h3>Betrayed by our habits</h3><div><br /></div>Science, like a lot of other things, is based largely on habit. We learn habits early on in our careers, and most of them serve us well. Habits like checking for problems in our data, such as bad coding or outliers, can keep us from getting fooled. Other habits, like doing a final, full read through of a paper before submission, save us work in the long run.<br /><br />Other habits, however, can keep us from doing better science. Scientists value openness, at least in the abstract. Many scientists have had the frustrating experience of *closed* science: for instance, colleagues that do not share their data. Yet, most science is not open, in spite of the fact that many tools to facilitate open science are freely available.<br /><br />To us, the reasons seem obvious. Open science does not bring great immediate reward, and open practices are not part of most scientists' habits. This is natural; many scientists were trained before openness was easy and expected.
Our habits were formed without an expectation, for instance, that our data would be open to everyone. Analyses are messy, badly documented, and full of ad hoc solutions to problems that we decided to improve later. If you weren't expecting data to be open, then making it so requires work.<br /><br />When we are faced with opening our data at paper submission or publication, then, our habits betray us. Our values may say "we should be open", but our real choice is *not* between open science and closed science; it is between "hours of work now with uncertain payoff" versus "no work now, and maybe no one will ask for it." The result is not a free choice about open science. Our habits have encumbered our choice with irrelevant issues, such as "I don't feel like doing this work right now. I'll do something else more fun," and everything else is more fun.<br /><br />If we had habits that were more attuned to the expectation of scientific openness, we might be able to do better. Forming such habits later in a career takes work, but forming them early in one's career is much easier. We suggest here a number of things that senior researchers who run labs can do to help their advisees build better habits. None of these things require much work, but we believe that they can help ensure the next generation of scientists has better habits than the current one.
Will others understand what I've done?<br />* They help young scientists *truly* have a choice about whether to be open. By the time the choice must be made, no extra work is necessary. The decision can be driven by the arguments in favor of, or against, open science, instead of mere momentary pragmatic concerns.<br /><br />These are roughly ordered by where they would appear in an advisee's training. We should emphasize that none of these require an advisor promoting them. Young scientists can do these things without their advisor's support, to help build good habits.<br /><br /><h4>Data partners</h4><div><br /></div>In the "data partner" scheme, young scientists in one lab partner with young scientists in another lab working on related topics. The goal of the data partner scheme is to help build an understanding of what information is necessary when sharing data, and to help catch analysis errors early on.<br /><br />When collecting and analyzing their data, students should plan to share their data with their data partner with a short report containing an initial methods section, and a primary analysis (but without numbers). The data partner will be expected to reproduce the primary analysis *without* interaction with the student. The data will need to be well-documented, and the analysis sufficiently detailed, for the primary analysis to be reproducible. Details such as how the data are to be cleaned will be critical.<br /><br />Once the data partner has attempted to reproduce the primary analysis, the two can discuss what was lacking. What could have been more clear? If the results could not be reproduced, why? This will build the students' understanding of data analysis, develop their theory of scientific mind, and catch many mistakes early in a project.
As a side benefit, the student now has created substantial documentation of their data set: precisely the information necessary for releasing data to others.<br /><br /><h4>The five-year plan</h4><div><br /></div>One of the issues that often comes up when training students is turnover. A student often has "ownership" of a project, and an adviser is less involved, guiding the student along but not having complete knowledge of the entire project. This can be problematic. When a student leaves, what if the adviser wants to send the data to someone? What if another student wants to re-analyze it to check a hunch? What if the lab wants to perform a meta-analysis?<br /><br />A lab runs on data; old materials (including stimuli), data, and analyses should be archived in a clear way so that if someone from the lab, years later, wants to use the materials or data, or wants to reproduce the analysis, they should be able to do so. This is part of being a good lab citizen.<br /><br />When a project is mature, advisers should give time to the student in a lab meeting to answer the question "What have you done to ensure that this project -- including the materials, data, and statistical analysis -- will be useable in five years?" This encourages students to think of the long-term usefulness of their data to others. Over the years, a formal meeting may become unnecessary as lab standards become more geared toward openness.<br /><br /><h4>The submission check</h4><div><br /></div>As the name implies, the "submission check" is meant to occur before a project is submitted for publication. If all has gone well, the project should be well-documented and ready to release. The work has been done; all that needs to be decided is whether the project will be open. If the answer is not pre-determined by a journal or granting agency requiring open data and materials, then the adviser should have a conversation with the young scientist: should we open these data and materials?
What are the arguments for opening it? Are there arguments against it?<br /><br />If it is decided to open the data, then the next question should be, "How will you promote these data and materials from the project?" For a young scientist thinking of the next stage of their career, promotion is critical. One of the advantages of open data is that it yields another product of the research that can be promoted; open materials and open code provide others. The student should be encouraged to think about how these can be leveraged to their advantage, and to follow through with their promotion ideas.<br /><br /><h3>Promoting good habits and open science</h3><div><br /></div>For many scientists, open science is a difficult choice because it is encumbered by a number of unnecessary pragmatic concerns flowing from habits formed over many years. Openness is not truly a free choice, driven by the merits of open science. This need not be the case for the next generation of researchers. Senior researchers have an important role to play in helping their advisees form good habits and develop a theory of scientific mind. The data partner scheme, the five-year plan, and the submission check can help establish good lab practices, with the benefit that students will be prepared for a more open science.<br /><br /><br /><br /><br />Richard Moreyhttp://www.blogger.com/profile/11319149283079163004noreply@blogger.com5tag:blogger.com,1999:blog-50311760553201901.post-61700198297113968752015-11-12T10:21:00.002-08:002015-11-12T10:21:23.207-08:00Neyman does science, part 2In <a href="http://bayesfactor.blogspot.co.uk/2015/11/neyman-does-science-part-1.html" target="_blank">part one of this series</a>, we discussed the different philosophical viewpoints of Neyman and Fisher on the purposes of statistics. Neyman had a behavioral, decision-based view: the purpose of statistical inference is to select one of several possible decisions, enumerated before the data have been collected.
To Fisher, and to Bayesians, the purpose of statistical inference is related to the quantification of evidence and rational belief. I agree with Fisher on this issue, and I was curious how Neyman -- with his pre-data inferential philosophy -- would actually tackle a problem with real data. In this second part of the series, we examine Neyman's team's analysis of the data from the Whitetop weather modification experiment in the 1960s.<br /><br /><br /><a name='more'></a><h4>First: Get the data!</h4><div><br /></div>I have saved the data in Table 2 in Neyman et al. (1969) <a href="https://gist.github.com/richarddmorey/862ca2681afd3cd85b3b" target="_blank">online in a text format</a>. It can be loaded into R using the following code (<a href="https://gist.github.com/richarddmorey/5f22fc742535d078c25f" target="_blank">available as a gist</a>):<br /><br /><pre class="brush: r">## R code to get data and make plots<br />## You may have to install the devtools and RCurl packages first<br />library('devtools')<br />source_gist("https://gist.github.com/richarddmorey/5f22fc742535d078c25f")<br /></pre><br />The code will also regenerate the panels from Neyman et al.'s Figure 2.<br /><br /><h4>Project Whitetop</h4><div><br /></div><div>Project Whitetop, a weather modification experiment performed during the summers in 1960-1964, was one of the first meticulously randomized, large-scale experiments of its kind. Before the experiment began, every day was designated as a "seed" day or a "non-seed" day (control). The designation was kept secret until the last moment. Every day in the morning, the experimenters would determine whether the conditions were good for seeding. If there were westerly winds and "high precipitable water" in Little Rock, Arkansas and Columbia, Missouri, then the day was designated as an "experimental" day, and the envelope containing the seed instructions was opened. 
On seeded days, an airplane dumped <a href="https://en.wikipedia.org/wiki/Silver_iodide" target="_blank">silver iodide</a> into the clouds around <a href="https://en.wikipedia.org/wiki/West_Plains,_Missouri" target="_blank">West Plains, Missouri</a>. The area at the center of the concentric circles in the figure below shows the experimental area.</div><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-_9aNnlLftoo/VkS3mXO1OgI/AAAAAAAAKH0/i0l2GUprxhQ/s1600/whitetop.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="385" src="http://4.bp.blogspot.com/-_9aNnlLftoo/VkS3mXO1OgI/AAAAAAAAKH0/i0l2GUprxhQ/s400/whitetop.jpg" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: justify;">Figure 1 from Neyman et al (1969) overlaid on a modern map. Original caption reads "Approximate map of the region around the Project Whitetop target. Solid circles mark the location of rain gages used for the evaluation. The radii of the concentric circles are multiples of 30 miles; the letters A, B, C, D, E, and F designate the region within the inner circle and the regions within the successive rings, respectively. For example, region B is the area bounded by the 30 mile (inner) circle and the 60 mile (second) circle. Additionally, the area within the outermost circle is designated as 'entire' (Tables 1 and 2)."</td></tr></tbody></table><br />The original analysis as reported by Neyman et al (1969) was of the change in precipitation in the hour when the seeded plume was overhead. 
There appeared to be an unexpected <i>decrease</i> in the precipitation due to the seeding ("some" <i>p</i><0.01, as it was reported by Neyman et al).<br /><br />Neyman's team was interested in assessing the effect of seeding at longer time scales (24 hours) and at greater distances (up to 180 miles from the experimental area). In their minds, these longer-term, larger-range effects were much more interesting from a policy perspective.<br /><br /><h4>What did Neyman think makes a good analysis?</h4><div><br /></div><div>In the same year, Neyman, Scott, and Wells wrote a paper outlining statistical inference for weather modification experiments (<a href="http://www.jstor.org/stable/1402287" target="_blank">"Statistics in meteorology"</a>, 1969). The paper is important because it lays out what we might expect from Neyman's Whitetop analysis. They describe power as related to the notion of an "informative" experiment, and briefly mention the "optimal" class of tests that will be used to analyse the Whitetop data. The critical role of power is emphasized: </div><blockquote class="tr_bq">[The] rational planning of a rain stimulation experiment must emphasize the question whether, with this design, with this proposed duration and with this particular statistical test, the probability of detecting the effect of treatment that one wishes to detect is 0.2, or 0.5, or 0.8, etc. In other words, in experimentation with weather control, it is of paramount importance to estimate the power of the statistical test to be used on the data that may be provided by the contemplated design of the experiment. (p. 123)</blockquote>And again on page 124: "[T]he power of the test to be used in the evaluation of a rain stimulation experiment is of prime importance." This makes it explicit: the optimality of the test is interesting, but the <i>power of the test </i>with reference to the experimental design is critical. 
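Today, the kind of pre-data power calculation Neyman demanded is routine. The following is a minimal R sketch in the spirit of that advice; every number in it (group sizes, hypothesized effect, noise level) is invented for illustration, and none comes from the Whitetop data:

```r
## Hypothetical pre-data power calculation in the spirit of
## Neyman, Scott, and Wells (1969). All numbers below are invented
## for illustration; none come from the Whitetop data.
n_per_group <- 96    # days per condition in a seeding-style design
delta       <- 0.5   # hypothesized difference in mean daily rainfall (mm)
sigma       <- 2.5   # assumed standard deviation of daily rainfall (mm)

pw <- power.t.test(n = n_per_group, delta = delta, sd = sigma,
                   sig.level = 0.05, type = "two.sample",
                   alternative = "two.sided")
pw$power  # probability of detecting the hypothesized effect
```

With these made-up numbers, the power comes out well below 0.5; by Neyman's lights, this would be an uninformative experiment that should be redesigned before any data are collected.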
An optimal test can give an uninformative experiment, if the design is bad (e.g., low sample size). It should not be a surprise that Neyman emphasizes the importance of ensuring the test is worthwhile before one undertakes it. This is especially critical if one faces interpreting null results ($p>\alpha$), as Neyman points out elsewhere.<br /><br /><h4>Neyman et al's Whitetop analysis</h4><div><br /></div><div>Here we discuss Neyman et al. (1969)'s analysis (<a href="http://www.sciencemag.org/content/163/3874/1445.abstract" target="_blank">"Areal Spread of the Effect of Cloud Seeding at the Whitetop Experiment"</a>). In order to examine the effect of distance, they decided to use data from the 174 rain gages within 180 miles of the experimental area. The goal, according to Neyman et al, was two-fold:</div><blockquote class="tr_bq">Specifically, an effort was made to determine (i) the differences in the 24-hour precipitation amounts at different distances from the center of the Whitetop target, averaged over the 102 days with seeding and over the 96 experimental days without seeding, and (ii) the probability (<i>P</i>) of obtaining such differences, or larger, purely through unavoidable chance variation. (pp. 1445-1446)</blockquote>Of critical interest to us is how <i>P</i> (the <i>p</i> value) is interpreted later on. The figure above (overlaid on the map) shows how Neyman et al. divided the area into 6 concentric rings. For each ring A-F, the percent change in precipitation was computed, along with a two-tailed <i>p</i> value. Note that a positive change was expected, and only the discovery of the negative effect in this <i>same data set</i> would lead one to look for a negative effect.<br /><div><br /></div><div>Before we look at the analysis results themselves, let me emphasize that I am not concerned with whether Neyman et al are correct; what I'm interested in is how they use statistics to support their case.
Of special interest are some ideas that never appear in this paper. These include:</div><div><ul><li>An $\alpha$ level</li><li>Power</li><li>Error rates</li><li>A significant or nonsignificant result (other than the already-mentioned, previously-published $p<.01$ result with the same data, which is described as "significant")</li><li>Pre-determined decisions</li></ul></div>With that in mind, we can look at the results, which I have combined from their Table 2 and Figure 2. The first panel shows the effect of the seeding on days that are "wet" (that is, given rain occurred, how much did it rain?). The second panel shows the effect of seeding on all days.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-NvuBcpcE_nI/VkS37l7TzkI/AAAAAAAAKIE/Fhtw8h-hmxE/s1600/Newman_wet_days.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="240" src="http://2.bp.blogspot.com/-NvuBcpcE_nI/VkS37l7TzkI/AAAAAAAAKIE/Fhtw8h-hmxE/s400/Newman_wet_days.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: justify;">Recreation of top panel from Neyman et al's (1969) Figure 2. 
See bottom panel for original caption.</td></tr></tbody></table><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-wWj5HyUFd4I/VkS37on5J-I/AAAAAAAAKII/xfQtkywXuIE/s1600/Newman_all_days.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="240" src="http://1.bp.blogspot.com/-wWj5HyUFd4I/VkS37on5J-I/AAAAAAAAKII/xfQtkywXuIE/s400/Newman_all_days.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: justify;">Recreation of bottom panel from Neyman et al's (1969) Figure 2. Note that these points are not independent. Original caption reads: "Average daily precipitation versus average distance from the target center. (Top) Precipitation averaged per wet day; (bottom) precipitation averaged per day, wet or dry. In each case the upper curve represents experimental days not seeded, the middle curve represents experimental days seeded, and the lower curve represents the 267 days of June, July, and August 1960-64, which were not classified as experimental." I have not added the lower line, since these numbers are not included in the tables (and they are irrelevant).</td></tr></tbody></table><br />Neyman et al refer primarily to the results for all days. This is how Neyman et al describe the results:<br /><blockquote class="tr_bq">The estimate of the average seeding effect in the entire region is a 21-percent loss of rain. In the absence of a real effect, chance alone could produce such an estimate of loss, or a larger one, about once in 15 independent trials. (p 1447)</blockquote>Note the switch from a two-sided <i>p</i> value (<i>p</i>=0.13, lower panel) to a one-sided <i>p</i> value! 
This occurs without any <i>a priori</i> mention of this hypothesis, aside from the fact that it was found in these same data previously; no mention of any pre-determined decision criterion. In fact, they note that this magnitude of negative effect is supported by no "intelligible theory" (p. 1447). This is a purely evidential use of the one-sided <i>p</i> value. They continue:<br /><blockquote class="tr_bq">From the point of view of the question as to whether the current state of weather modification technology justifies its use for alleviating water shortages, [the data] <b><i>appear decisive</i></b>. As already mentioned, the Whitetop experiment was conducted in a locality where summer precipitation is critical. In fact, the possibilities of increases due to seeding as modest as 5 to 10 percent have been mentioned as something to be hoped for. When instead of such gains the experimental results show losses averaging 20 percent over an area of some 100,000 square miles, then even the <b><i>slightest possibility that these losses were caused by seeding must be considered as disqualifying the underlying technology</i></b>. Actually, <b><i>the evidence in support of the causal relation between seeding and loss of rain appears quite strong. </i></b>(p 1447; emphasis mine)</blockquote>The only mention of a decision is related to the evidence -- that is, <i>it is decisive</i> -- not any pre-planned decision.
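The arithmetic behind the numbers they report is worth making explicit. The R sketch below reconstructs their reasoning (without endorsing it): halving the two-sided <i>p</i> of about 0.13 gives the one-sided value behind "once in 15 independent trials", and the same number, restated as odds, gives the "14 to 1" that appears in their later reply to Battan:

```r
## Reconstructing Neyman et al.'s reported numbers (a sketch of
## their arithmetic, not an endorsement of it).
p_two_sided <- 0.13             # two-sided p, "entire" region, all days
p_one_sided <- p_two_sided / 2  # post hoc switch to a one-sided test
1 / p_one_sided                 # about 15.4: "once in 15 independent trials"

## The same number restated as odds against the null: the source of
## "14 to 1". It is still a one-sided p value in disguise, not a
## posterior odds that seeding caused the loss.
(1 - p_one_sided) / p_one_sided # about 14.4
```

Nothing in this arithmetic licenses an odds statement about the hypothesis; it is the same tail probability in two different notations.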
Moreover, the interpretation is concerned with the possibility, on a graded scale, that the effect is real and negative, and the evidence is "quite strong" -- again, a graded way of referring to the strength of the evidence.<br /><br />Almost immediately, <a href="http://www.sciencemag.org/content/165/3893/618.short" target="_blank">Battan (1969)</a> responded to Neyman et al saying what probably most of the readers of this blog post were thinking, on seeing the mediocre <i>p</i> values Neyman et al used as evidence:<br /><blockquote class="tr_bq">The two-tailed significance levels in the two tables are not so small as to make it self-evident that the rainfall differences were caused by seeding. Several hypotheses might be offered to explain effects of seeding downwind of the seeding area, but no plausible hypothesis has been offered to explain effects upwind and to the side to distances of 180 miles. (p. 618)</blockquote>In the clearest sign that their use of statistical inference was evidential and post-data, rather than decision-based and pre-data, <a href="http://www.sciencemag.org/content/165/3893/618.short" target="_blank">Neyman et al responded</a>:<br /><blockquote class="tr_bq">Battan is certainly entitled to his opinion that "significance levels . . . are not so small as to make it self-evident that the rainfall differences were caused by seeding." In fact, <i><b>we agree about the lack of self-evidence</b></i>. But, if there is anything in the contention that a gain in the rainfall of 5 to 10 percent is worth talking about, then a 20 percent loss, experienced over a vast area of some 100,000 square miles, must be a disaster. In these conditions, <b><i>the odds of 14 to 1 that this loss was caused by seeding</i></b> do not appear negligible to us. We feel that it is imperative that the general public and the government be informed of the situation. (p.
618, emphasis mine)</blockquote>In a stunning misuse of statistics, Neyman et al have confused a <i>p</i> value with a posterior odds. Not just any <i>p</i> value; this was a <i>p</i> value from a post hoc one-tailed test. The need for a post-data mode of statistical inference is so great that Neyman -- who was famed for his pre-data theory of statistical inference -- is forced into a basic fallacy when responding to a critic. To me, this is quite remarkable.<br /><h4><br /></h4><h4>Wrapping up</h4><div><br /></div><div>Fisher and Neyman disagreed about the philosophy of statistical inference. To Fisher (and indeed to almost all scientists), statistical inference was post-data and evidential. Neyman, however, had a pre-data, behavioural view of statistical inference. Neyman's viewpoint (which, unfortunately, has stuck around a long time in the training of scientists using Type I and Type II errors) is not one that is conducive to science. Neyman himself, when doing science, appears to have had a post-data mind-set.</div><div><br /></div><div>In part 3 of the series, I will look at the aftermath of Neyman's Whitetop analysis, and how Neyman eventually abandoned the conclusions.</div><br />Richard Moreyhttp://www.blogger.com/profile/11319149283079163004noreply@blogger.com3tag:blogger.com,1999:blog-50311760553201901.post-92142206806751136312015-11-10T07:34:00.002-08:002015-11-12T10:22:45.269-08:00Neyman does science, part 1On reading Neyman's statistical and scientific philosophy (e.g., <a href="http://www.jstor.org/stable/1401671" target="_blank">Neyman, 1957</a>), one of the things that strikes a scientist is its extreme rejection of post-data reasoning. Neyman adopts the view that once data is obtained statistical inference is not about reasoning, but is rather about the automatic adoption of one of several decisions.
Given the importance of post-data reasoning to scientists -- which can be confirmed by reading any scientific manuscript -- I wondered how Neyman would think and write about an actual, applied problem. This series of blog posts explores Neyman's work on the analysis of weather modification experiments. The (perhaps unsurprising) take-home message from this series of posts is this: <i>not even Neyman applied Neyman's philosophy, when he was confronted with real data.</i><br /><div><a name='more'></a><div><br />Consider the view of statistical inference put forward by Fisher in contrast to Neyman's perspective:</div><div><blockquote class="tr_bq">Decision [as opposed to reasoning] itself must properly be referred to a set of motives, the strength or weakness of which should have had no influence whatever on any estimate of probability. We aim, in fact, at methods of inference which should be equally convincing to all rational minds, irrespective of any intentions they may have in utilizing the knowledge inferred. (Fisher, 1955, p. 77)</blockquote></div><div>The concept of <i>evidence</i> -- that is, information which warrants changes in belief -- is central to science. Under an evidential view, evidence can be strong or weak, or, in other words, convincing or unconvincing. This occurs as a matter of degrees, independent of any particular decisions one might have in mind. Evidence is a <i>post-data</i> concept, applying to the interpretation of data after it has been collected.</div><div><br /></div><div>Neyman, on the other hand, appears to reject epistemology altogether. 
Post-data ideas like beliefs -- justified or otherwise -- are not a target of statistical analysis:<br /><div><blockquote class="tr_bq">The beliefs of particular scientists are a very personal matter and it is useless to attempt to norm them by any dogmatic formula...The content of the concept of inductive behavior is the recognition that the purpose of every piece of serious research is to provide grounds for the selection of one of several contemplated courses of action. (Neyman, 1957, p. 16)</blockquote>Neyman offers a <i>pre-data</i> philosophy. Decision criteria are set before the experiment, on the basis of considerations of long-run Type I and Type II errors. The outcome of a study is the selection of a decision, not a reasonable change in belief. For those of us in the sciences, Neyman's is a very strange outlook on science and statistics. Fisher, of course, was primarily a scientist; Neyman, a statistician. This difference shows in their respective ideas of how statistical inference is to be undertaken.<br /><br />Neyman, however, goes to great lengths to show that Fisher, in fact, acted consistently with Neyman's own philosophy, and not with Fisher's stated one. In highlighting a case where Fisher has interpreted a low <i>p</i> value as indicating that a particular null hypothesis is not true, Neyman says:<br /><blockquote class="tr_bq"><div class="p1">The trouble is that the premise "<i>P</i> is less than .01" does not imply that "the departures are not fortuitous" [that is, did not arise by chance, under the null hypothesis]. In fact, even if the inheritance of the characteristics considered conformed exactly with the assumed model, the probability of observing $\chi^2$ corresponding to the value of <i>P</i> less than 0.01 is positive and approximately equal to 0.01. Thus, the assertion "the departures are not fortuitous" cannot be deduced from "<i>P</i> is less than .01". 
Yet, this assertion is made, and is made in very definite terms...[O]ne may presume that the assertion "the departures are not fortuitous" is interpreted by Fisher as equivalent to the adoption of the hypothesis of differential viability. (Neyman, 1957, p. 12) </div></blockquote>And on this point, Neyman is right. A low <i>p</i> value does not imply that the null hypothesis is false (at least, not by itself). Neyman takes this to mean that Fisher was making a <i>decision </i>to reject the null hypothesis, rather than engaging in any sort of post-data "reasoning".<br /><br />The main problem is that it is difficult to see how Neyman's philosophy is applicable to science, which, as Fisher pointed out, is primarily designed to incrementally increase knowledge, and concerned with graded evidence and beliefs. I wanted to understand how Neyman would interpret the results of an already-performed experiment. How could he avoid post-data evaluations of evidence?<br /><br /><h3>Neyman's weather modification work</h3><div><br /></div><div>In a series of articles starting in the 1960s, Neyman was involved with the analysis of meteorological data from the Whitetop project, which was designed to evaluate the efficacy of <a href="https://en.wikipedia.org/wiki/Cloud_seeding" target="_blank">cloud seeding</a> to increase rainfall. In cloud seeding, particles (such as <a href="https://en.wikipedia.org/wiki/Silver_iodide" target="_blank">silver iodide</a>) are scattered into clouds in the hope that the water or ice will condense around them. In theory, this should increase precipitation, because it is meant to mimic the natural processes underlying precipitation.<br /><br />The Whitetop project was one of the first large-scale, randomized experiments in cloud seeding. It was designed to test whether silver iodide dropped from a plane in summer months could increase the probability of rain, or, given that rain occurred, the amount of rain that fell. 
Silver iodide was dropped on random days within an area about 60 miles in radius around <a href="https://www.google.co.uk/maps/place/West+Plains,+MO,+USA/@36.7490824,-91.9002479,13z/data=!3m1!4b1!4m2!3m1!1s0x87d057992433d777:0xdaf1a6f25c1d098a!6m1!1e1" target="_blank">West Plains, Missouri</a> in five summer seasons. This area at the time was suffering from low rainfall that threatened agriculture in the area.<br /><br />There are a few relevant facts that we can lay out before we start:</div></div></div></div><div><ul><li>The field of weather modification at the time (and perhaps still) was susceptible to widespread confirmation bias and what we would call today questionable research practices, including reliance on <i>p</i> hacking to explain away negative results in a primary outcome (<a href="http://www.sciencemag.org/content/195/4274/139.extract" target="_blank">Atals, 1977</a>).</li><li>In light of the previous point, it is perhaps not surprising that the efficacy of cloud seeding is still disputed. According to the <a href="http://www.nap.edu/catalog/10829/critical-issues-in-weather-modification-research" target="_blank">National Research Council's </a><a href="http://www.nap.edu/catalog/10829/critical-issues-in-weather-modification-research" target="_blank">Committee on the Status of and Future Directions in U.S. Weather Modification Research and Operations (2003)</a>: "The Committee concludes that there still is no convincing scientific proof of the efficacy of intentional weather modification efforts. In some instances there are strong indications of induced changes, but this evidence has not been subjected to tests of significance and reproducibility."</li><li>There was (is?) no known mechanism by which seeding could affect rainfall upwind at distances on the order of 100 miles (<a href="http://www.jstor.org/stable/2286722" target="_blank">Braham, 1979</a>). 
</li></ul><div>Neyman's team at the University of California, Berkeley was not part of the original Whitetop project team. In a series of papers they analysed the Whitetop data with an interest in determining the long-distance, medium-term (1 day) effects of cloud seeding. In the next post, I will examine how Neyman presents data analyses in several papers, with a focus on how he and his team write about data analysis and statistical inference. The resulting analysis will be similar to how Neyman (1957) assessed Fisher's language, but in reverse: we will see that Neyman strongly favored evidential language and did not use decisions, error rates, or power to interpret the data.<br /><br /><i><a href="http://bayesfactor.blogspot.co.uk/2015/11/neyman-does-science-part-2.html" target="_blank">Go to part 2 >>></a></i><br /><br /><hr /><h4>Bibliography for these posts</h4></div><div><div class="csl-bib-body" style="line-height: 2; padding-left: 2em; text-indent: -2em;"><div class="csl-entry"><div class="csl-bib-body" style="line-height: 2; padding-left: 2em; text-indent: -2em;"><div class="csl-entry"><div class="csl-bib-body" style="line-height: 2; padding-left: 2em; text-indent: -2em;"><div class="csl-entry"><span style="font-size: xx-small;">Atals, D. (1977). <a href="http://www.sciencemag.org/content/195/4274/139.short" target="_blank">The Paradox of Hail Suppression</a>. 
<i>Science</i>, <i>195</i>(4274), 139–145.</span></div><span style="font-size: xx-small;"><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rfr_id=info%3Asid%2Fzotero.org%3A2&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=The%20Paradox%20of%20Hail%20Suppression&rft.jtitle=Science&rft.volume=195&rft.issue=4274&rft.aufirst=David&rft.aulast=Atals&rft.au=David%20Atals&rft.date=1977&rft.pages=139-145&rft.spage=139&rft.epage=145"></span> </span><br /><div class="csl-entry"><span style="font-size: xx-small;">Braham, R. R. (1979). <a href="http://www.jstor.org/stable/2286722" target="_blank">Field Experimentation in Weather Modification</a>. <i>Journal of the American Statistical Association</i>, <i>74</i>(365), 57–68.</span></div><span style="font-size: xx-small;"><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rfr_id=info%3Asid%2Fzotero.org%3A2&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=Field%20Experimentation%20in%20Weather%20Modification&rft.jtitle=Journal%20of%20the%20American%20Statistical%20Association&rft.volume=74&rft.issue=365&rft.aufirst=Roscoe%20R.&rft.aulast=Braham&rft.au=Roscoe%20R.%20Braham&rft.date=1979&rft.pages=57-68&rft.spage=57&rft.epage=68"></span> </span><br /><div class="csl-entry"><span style="font-size: xx-small;">Committee on the Status and Future Directions in U.S Weather Modification Research and Operations, National Research Council. (2003).<a href="http://www.nap.edu/download.php?record_id=10829" target="_blank"> <i>Critical issues in weather modification research</i></a>. National Academies Press. 
</span></div><span style="font-size: xx-small;"><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rfr_id=info%3Asid%2Fzotero.org%3A2&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Abook&rft.genre=book&rft.btitle=Critical%20issues%20in%20weather%20modification%20research&rft.publisher=National%20Academies%20Press&rft.aulast=Committee%20on%20the%20Status%20and%20Future%20Directions%20in%20U.S%20Weather%20Modification%20Research%20and%20Operations%2C%20National%20Research%20Council&rft.au=Committee%20on%20the%20Status%20and%20Future%20Directions%20in%20U.S%20Weather%20Modification%20Research%20and%20Operations%2C%20National%20Research%20Council&rft.date=2003"></span> </span><br /><div class="csl-entry"><span style="font-size: xx-small;">Fisher, R. A. (1955). <a href="http://www.jstor.org/stable/2983785" target="_blank">Statistical Methods and Scientific Induction</a>. <i>Journal of the Royal Statistical Society. Series B (Methodological)</i>, <i>17</i>, 69–78.</span></div><span style="font-size: xx-small;"><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rfr_id=info%3Asid%2Fzotero.org%3A2&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=Statistical%20Methods%20and%20Scientific%20Induction&rft.jtitle=Journal%20of%20the%20Royal%20Statistical%20Society.%20Series%20B%20(Methodological)&rft.volume=17&rft.aufirst=R.%20A.&rft.aulast=Fisher&rft.au=R.%20A.%20Fisher&rft.date=1955&rft.pages=69-78&rft.spage=69&rft.epage=78"></span> </span><br /><div class="csl-entry"><span style="font-size: xx-small;">J. L. Lovasich, M. A. W., J. Neyman, E. L. Scott. (1971). <a href="http://www.jstor.org/stable/61194" target="_blank">Hypothetical Explanations of the Negative Apparent Effects of Cloud Seeding in the Whitetop Experiment</a>. 
<i>Proceedings of the National Academy of Sciences of the United States of America</i>, <i>68</i>(11), 2643–2646.</span></div><span style="font-size: xx-small;"><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rfr_id=info%3Asid%2Fzotero.org%3A2&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=Hypothetical%20Explanations%20of%20the%20Negative%20Apparent%20Effects%20of%20Cloud%20Seeding%20in%20the%20Whitetop%20Experiment&rft.jtitle=Proceedings%20of%20the%20National%20Academy%20of%20Sciences%20of%20the%20United%20States%20of%20America&rft.volume=68&rft.issue=11&rft.aufirst=M.%20A.%20Wells%2C%20J.%20Neyman%2C%20E.%20L.%20Scott&rft.aulast=J.%20L.%20Lovasich&rft.au=M.%20A.%20Wells%2C%20J.%20Neyman%2C%20E.%20L.%20Scott%20J.%20L.%20Lovasich&rft.date=1971&rft.pages=2643-2646&rft.spage=2643&rft.epage=2646&rft.issn=00278424"></span> </span><br /><div class="csl-entry"><span style="font-size: xx-small;">Lovasich, J. L., Neyman, J., Scott, E. L., & Smith, J. A. (1969). <a href="http://www.pnas.org/content/64/3/810.abstract" target="_blank">Wind directions aloft and effects of seeding on precipitation in the Whitetop experiment</a>. 
<i>Proceedings of the National Academy of Sciences</i>, <i>64</i>(3), 810–817.</span></div><span style="font-size: xx-small;"><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rfr_id=info%3Asid%2Fzotero.org%3A2&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=Wind%20directions%20aloft%20and%20effects%20of%20seeding%20on%20precipitation%20in%20the%20Whitetop%20experiment&rft.jtitle=Proceedings%20of%20the%20National%20Academy%20of%20Sciences&rft.volume=64&rft.issue=3&rft.aufirst=Jeanne%20L.&rft.aulast=Lovasich&rft.au=Jeanne%20L.%20Lovasich&rft.au=Jerzy%20Neyman&rft.au=Elizabeth%20L.%20Scott&rft.au=Jerome%20A.%20Smith&rft.date=1969&rft.pages=810-817&rft.spage=810&rft.epage=817"></span> </span><br /><div class="csl-entry"><span style="font-size: xx-small;">Lovasich, J. L., Neyman, J., Scott, E. L., & Wells, M. A. (1971). <a href="http://www.pnas.org/content/68/1/147.abstract" target="_blank">Further Studies of the Whitetop Cloud-Seeding Experiment</a>. <i>Proceedings of the National Academy of Sciences</i>, <i>68</i>(1), 147–151.</span></div><span style="font-size: xx-small;"><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rfr_id=info%3Asid%2Fzotero.org%3A2&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=Further%20Studies%20of%20the%20Whitetop%20Cloud-Seeding%20Experiment&rft.jtitle=Proceedings%20of%20the%20National%20Academy%20of%20Sciences&rft.volume=68&rft.issue=1&rft.aufirst=Jeanne%20L.&rft.aulast=Lovasich&rft.au=Jeanne%20L.%20Lovasich&rft.au=Jerzy%20Neyman&rft.au=Elizabeth%20L.%20Scott&rft.au=Marcella%20A.%20Wells&rft.date=1971&rft.pages=147-151&rft.spage=147&rft.epage=151"></span> </span><br /><div class="csl-entry"><span style="font-size: xx-small;">Neyman, J. (1957). <a href="http://dx.doi.org/10.2307/1401671" target="_blank">“Inductive Behavior” as a Basic Concept of Philosophy of Science</a>. 
<i>Review of the International Statistical Institute</i>, <i>25</i>, 7–22.</span></div><span style="font-size: xx-small;"><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rfr_id=info%3Asid%2Fzotero.org%3A2&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=%E2%80%9CInductive%20Behavior%E2%80%9D%20as%20a%20Basic%20Concept%20of%20Philosophy%20of%20Science&rft.jtitle=Review%20of%20the%20International%20Statistical%20Institute&rft.volume=25&rft.aufirst=Jerzy&rft.aulast=Neyman&rft.au=Jerzy%20Neyman&rft.date=1957&rft.pages=7-22&rft.spage=7&rft.epage=22"></span> </span><br /><div class="csl-entry"><span style="font-size: xx-small;">Neyman, J. (1977). <a href="http://www.pnas.org/content/74/11/4714" target="_blank">A statistician’s view of weather modification technology (A Review)</a>. <i>Proceedings of the National Academy of Sciences of the United States of America</i>, <i>74</i>(11), 4714–4721.</span></div><span style="font-size: xx-small;"><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rfr_id=info%3Asid%2Fzotero.org%3A2&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=A%20statistician's%20view%20of%20weather%20modification%20technology%20(A%20Review)&rft.jtitle=Proceedings%20of%20the%20National%20Academy%20of%20Sciences%20of%20the%20United%20States%20of%20America&rft.volume=74&rft.issue=11&rft.aufirst=J.&rft.aulast=Neyman&rft.au=J.%20Neyman&rft.date=1977&rft.pages=4714-4721&rft.spage=4714&rft.epage=4721"></span> </span><br /><div class="csl-entry"><span style="font-size: xx-small;">Neyman, J., Scott, E. L., & Smith, J. A. (1969). <a href="http://www.sciencemag.org/content/165/3893/618.short" target="_blank">Whitetop Experiment (response to Battan)</a>. 
<i>Science</i>, <i>165</i>(3893), 618.</span></div><span style="font-size: xx-small;"><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rfr_id=info%3Asid%2Fzotero.org%3A2&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=Whitetop%20Experiment%20(response%20to%20Battan)&rft.jtitle=Science&rft.volume=165&rft.issue=3893&rft.aufirst=J.&rft.aulast=Neyman&rft.au=J.%20Neyman&rft.au=E.%20L.%20Scott&rft.au=J.%20A.%20Smith&rft.date=1969&rft.pages=618"></span> </span><br /><div class="csl-entry"><span style="font-size: xx-small;">Neyman, J., Scott, E. L., & Wells, M. A. (1969). <a href="http://www.jstor.org/stable/1402287" target="_blank">Statistics in Meteorology</a>. <i>Review of the International Statistical Institute</i>, <i>37</i>(2), 119–148.</span></div><span style="font-size: xx-small;"><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rfr_id=info%3Asid%2Fzotero.org%3A2&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=Statistics%20in%20Meteorology&rft.jtitle=Review%20of%20the%20International%20Statistical%20Institute&rft.volume=37&rft.issue=2&rft.aufirst=J.&rft.aulast=Neyman&rft.au=J.%20Neyman&rft.au=E.%20L.%20Scott&rft.au=M.%20A.%20Wells&rft.date=1969&rft.pages=119-148&rft.spage=119&rft.epage=148"></span> </span><br /><div class="csl-entry"><span style="font-size: xx-small;">Neyman, J., Scott, E., & Smith, J. A. (1969). <a href="http://www.sciencemag.org/content/163/3874/1445.abstract" target="_blank">Areal Spread of the Effect of Cloud Seeding at the Whitetop Experiment</a>. 
<i>Science</i>, <i>163</i>(3874), 1445–1449.</span></div><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rfr_id=info%3Asid%2Fzotero.org%3A2&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=Areal%20Spread%20of%20the%20Effect%20of%20Cloud%20Seeding%20at%20the%20Whitetop%20Experiment&rft.jtitle=Science&rft.volume=163&rft.issue=3874&rft.aufirst=Jerzy&rft.aulast=Neyman&rft.au=Jerzy%20Neyman&rft.au=Elizabeth%20Scott&rft.au=Jerome%20A.%20Smith&rft.date=1969&rft.pages=1445-1449&rft.spage=1445&rft.epage=1449"></span></div></div><span class="Z3988" title="url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&rfr_id=info%3Asid%2Fzotero.org%3A2&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=Areal%20Spread%20of%20the%20Effect%20of%20Cloud%20Seeding%20at%20the%20Whitetop%20Experiment&rft.jtitle=Science&rft.volume=163&rft.issue=3874&rft.aufirst=Jerzy&rft.aulast=Neyman&rft.au=Jerzy%20Neyman&rft.au=Elizabeth%20Scott&rft.au=Jerome%20A.%20Smith&rft.date=1969&rft.pages=1445-1449&rft.spage=1445&rft.epage=1449"></span></div></div></div></div></div>Richard Moreyhttp://www.blogger.com/profile/11319149283079163004noreply@blogger.com2tag:blogger.com,1999:blog-50311760553201901.post-13265118823257849972015-09-24T13:52:00.000-07:002015-09-24T13:52:51.549-07:00BayesFactor version 0.9.12-2 released to CRANI've released BayesFactor 0.9.12-2 to <a href="https://cran.r-project.org/web/packages/BayesFactor/index.html" target="_blank">CRAN</a>; it should be available on all platforms now. 
The changes include:<br /><div><br /><ul style="background-color: white; box-sizing: border-box; color: #333333; line-height: 25.6px; margin-bottom: 0px !important; margin-top: 0px; padding: 0px 0px 0px 2em;"><li style="box-sizing: border-box;"><span style="font-family: inherit;">Added feature allowing fine-tuning of priors on a per-effect basis: see new argument rscaleEffects of lmBF, anovaBF, and generalTestBF</span></li></ul><ul style="background-color: white; box-sizing: border-box; color: #333333; line-height: 25.6px; margin-bottom: 0px !important; margin-top: 0px; padding: 0px 0px 0px 2em;"><li style="box-sizing: border-box;"><span style="font-family: inherit;">Fixed bug that disallowed logical indexing of probability objects</span></li></ul><ul style="background-color: white; box-sizing: border-box; color: #333333; line-height: 25.6px; margin-bottom: 0px !important; margin-top: 0px; padding: 0px 0px 0px 2em;"><li style="box-sizing: border-box;"><span style="font-family: inherit;">Fixed minor typos in documentation</span></li></ul><ul style="background-color: white; box-sizing: border-box; color: #333333; line-height: 25.6px; margin-bottom: 0px !important; margin-top: 0px; padding: 0px 0px 0px 2em;"><li style="box-sizing: border-box;"><span style="font-family: inherit;">Fixed bug causing regression Bayes factors to fail for very small R^2</span></li></ul><ul style="background-color: white; box-sizing: border-box; color: #333333; line-height: 25.6px; margin-bottom: 0px !important; margin-top: 0px; padding: 0px 0px 0px 2em;"><li style="box-sizing: border-box;"><span style="font-family: inherit;">Fixed bug disallowing expansion of dot (.) 
in generalTestBF model specifications</span></li></ul><ul style="background-color: white; box-sizing: border-box; color: #333333; line-height: 25.6px; margin-bottom: 0px !important; margin-top: 0px; padding: 0px 0px 0px 2em;"><li style="box-sizing: border-box;"><span style="font-family: inherit;">Fixed bug preventing cancelling of all analyses with interrupt</span></li></ul><ul style="background-color: white; box-sizing: border-box; color: #333333; line-height: 25.6px; margin-bottom: 0px !important; margin-top: 0px; padding: 0px 0px 0px 2em;"><li style="box-sizing: border-box;"><span style="font-family: inherit;">Restricted contingency prior to values >=1</span></li></ul><ul style="background-color: white; box-sizing: border-box; color: #333333; line-height: 25.6px; margin-bottom: 0px !important; margin-top: 0px; padding: 0px 0px 0px 2em;"><li style="box-sizing: border-box;"><span style="font-family: inherit;">All BFmodel objects have additional "analysis" slot giving details of analysis</span></li></ul></div>Richard Moreyhttp://www.blogger.com/profile/11319149283079163004noreply@blogger.com3tag:blogger.com,1999:blog-50311760553201901.post-62568862668010082322015-09-09T12:52:00.001-07:002015-09-09T12:52:12.344-07:00Please help: BayesFactor testimonialsI'm compiling a portfolio about the BayesFactor software, and I would love to have short comments (a few sentences to a paragraph) from people who have found the software useful. If you have used the software and you wouldn't mind sending me a short blurb about your experience, I'd love to hear from you! Please send your BayesFactor testimonial to<a href="mailto: richarddmorey@gmail.com" target="_blank"> richarddmorey@gmail.com</a>. 
Thanks in advance!Richard Moreyhttp://www.blogger.com/profile/11319149283079163004noreply@blogger.com6tag:blogger.com,1999:blog-50311760553201901.post-68922247386615329202015-08-10T12:36:00.000-07:002015-08-10T12:40:19.131-07:00On radical manuscript opennessOne of my papers that has attracted a lot of attention lately is "The Fallacy of Placing Confidence in Confidence Intervals," in which we describe some of the fallacies held by the proponents and users of confidence intervals. This paper has been discussed on <a href="https://twitter.com/search?src=typd&q=fallacy%20confidence%20intervals" target="_blank">twitter</a>, <a href="https://www.reddit.com/r/statistics/comments/2etb42/the_fallacy_of_placing_confidence_in_confidence/" target="_blank">reddit</a>, on blogs (eg, <a href="http://andrewgelman.com/2014/12/11/fallacy-placing-confidence-confidence-intervals/" target="_blank">here</a> and <a href="http://maximum-entropy-blog.blogspot.co.uk/2015/04/the-fundamental-confidence-fallacy.html" target="_blank">here</a>), and via email with people who found the paper in various places. A person unknown to me has used the article as the basis for edits to the <a href="https://en.wikipedia.org/wiki/Confidence_interval" target="_blank">Wikipedia article on confidence intervals</a>. I have been told that several papers currently under review cite it. Perhaps this is a small sign that traditional publishers should be worried: this paper has not been "officially" published yet.<br /><br /><a name='more'></a><br />I am currently wrapping up the final revisions on the paper, which has been accepted pending minor revisions at <a href="http://www.springer.com/psychology/cognitive+psychology/journal/13423" target="_blank">Psychonomic Bulletin & Review</a>. The paper has benefited from an extremely public revision process. 
When I had a new major version to submit, I published the text and all code on <a href="https://github.com/richarddmorey/ConfidenceIntervalsFallacy/" target="_blank">github</a>, and shared it via social media. Some of the resulting discussions have been positive, others negative; some useful and enlightening, others not useful and frustrating. Most scientific publications almost exclusively reflect input from the coauthors and the editors and reviewers. This manuscript, in contrast, has been influenced by scores of people I've never met, and I think the paper is better for it.<br /><br />This is all the result of my exploring ways to make my writing process more open, which led to the idea of releasing successive major versions of the text and R code on <a href="https://guides.github.com/activities/citable-code/" target="_blank">github with DOIs</a>. But what about after it is published? How can manuscript openness continue after the magic moment of publication?<br /><br />One of the downsides of the traditional scientific publishing model is that once the work is put into a "final" state, it becomes static. The PDF file format in which articles find their final form — and in which they are exchanged and read — enforces a certain rigidity, a rigor mortis. The document is dead and placed behind glass for the occasional passerby to view. It is of course good to have a citable version of record; we would not, after all, want a document to be a moving target, constantly changing on the whim of the authors. 
But it seems like we can do better than the current idea of a static, final document, and I'd like to try.<br /><br />I have created a <a href="http://learnbayes.org/papers/confidenceIntervalsFallacy/" target="_blank">website for the paper</a> that, on publication, will contain the text of the paper in its entirety, free to read for anyone. It also contains extra material, such as teaching ideas and interactive apps to assist in understanding the material in the paper. The version of the website corresponding to the "published" version of the paper will be versioned on github, along with the paper. But unlike the paper at the journal, a website is flexible, and I intend to take advantage of this in several ways.<br /><br />First, <b>I have enabled <a href="https://hypothes.is/" target="_blank">hypothes.is annotation</a> across the entire text. </b>If you open part of the text and look in the upper right hand corner, you will see three icons that can be used to annotate the text:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-g4XiHpRCaNc/Vcj2FRclZbI/AAAAAAAAKFo/w5D4B4paaeA/s1600/Screen%2BShot%2B2015-08-10%2Bat%2B20.05.24.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="200" src="http://4.bp.blogspot.com/-g4XiHpRCaNc/Vcj2FRclZbI/AAAAAAAAKFo/w5D4B4paaeA/s200/Screen%2BShot%2B2015-08-10%2Bat%2B20.05.24.png" width="94" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">The hypothes.is annotation tools.</td></tr></tbody></table>Moreover, highlighting a bit of text will open up further annotation tools:<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a 
href="http://4.bp.blogspot.com/-PmURNtcVt2E/Vcj2kesG2MI/AAAAAAAAKFw/vGjXi7DR8Ew/s1600/Screen%2BShot%2B2015-08-10%2Bat%2B20.07.22.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="69" src="http://4.bp.blogspot.com/-PmURNtcVt2E/Vcj2kesG2MI/AAAAAAAAKFw/vGjXi7DR8Ew/s320/Screen%2BShot%2B2015-08-10%2Bat%2B20.07.22.png" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Highlighting the text brings up more annotation tools.</td></tr></tbody></table>Anyone can annotate the document, and others can see the annotations you make. Am I worried that on the Internet, some people might not add the highest quality annotations? A bit. But my curiosity to see how this will be used, and the potential benefits, outweigh my trepidation.<br /><br /><div>Second, <b>I will update the site with new information, resources, and corrections</b>. These changes will be versioned on github, so that anyone can see what the changes were. Because the journal will have the version of record, there is no possibility of "hiding" changes to the website. So I get the best of both worlds: the trust that comes with having a clear record of the process, with the ability to change the document as the need arises. And the entire process can be open, through the magic of github.</div><div><br /></div><div>Third, <b>I have enabled <a href="https://togetherjs.com/" target="_blank">together.js </a>across every page of the manuscript. </b>together.js allows collaboration between people looking at the same website. Unlike hypothes.is, together.js is meant for small groups to privately discuss the content, not for public annotation. 
This is mostly to explore its possibilities for teaching and discussion, but I also imagine it holds promise for post-publication review and drafting critiques of the manuscript.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-EHXpsdRULrM/Vcj6jwHj33I/AAAAAAAAKF8/6F8KDy2LNX8/s1600/Screen%2BShot%2B2015-08-10%2Bat%2B20.24.28.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="186" src="http://3.bp.blogspot.com/-EHXpsdRULrM/Vcj6jwHj33I/AAAAAAAAKF8/6F8KDy2LNX8/s400/Screen%2BShot%2B2015-08-10%2Bat%2B20.24.28.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">The together.js collaboration tools allow making your mouse movements and clicks visible to others, text chat, and voice chat.</td></tr></tbody></table></div><div><br /></div><div>Critics could discuss the manuscript using together.js, chatting about the content of the manuscript. The communication in together.js is peer-to-peer, ensuring privacy; nothing is actually being managed by the website itself, except for making the collaboration tools available.</div><br />The best part of this is that it requires no action or support from the publisher. This is essentially a sophisticated version of a pre-print, which I would release anyway. We don't have to wait for the publishers to adopt policies and technologies friendly for post-publication peer review; we can do it ourselves. All of these tools are freely available, and anyone can use them. 
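To give a sense of how little is involved, the sketch below shows how one might embed both tools in an ordinary HTML page. It follows the embed instructions each project publishes; the exact script URLs and the <code>TogetherJS(this)</code> button pattern should be checked against the current hypothes.is and TogetherJS documentation rather than taken as definitive.<br /><br />

```html
<!-- Minimal sketch of embedding both tools on a plain HTML page.
     Script URLs follow the projects' published embed instructions;
     verify against the current hypothes.is and TogetherJS docs. -->
<!DOCTYPE html>
<html>
  <head>
    <!-- hypothes.is: a single script tag enables public annotation -->
    <script src="https://hypothes.is/embed.js" async></script>
    <!-- TogetherJS: load the library; it is started on demand below -->
    <script src="https://togetherjs.com/togetherjs-min.js"></script>
  </head>
  <body>
    <p>Manuscript text goes here...</p>
    <!-- Clicking starts a session and gives the reader a link to
         share with collaborators for private, peer-to-peer discussion -->
    <button onclick="TogetherJS(this); return false;">
      Start TogetherJS collaboration
    </button>
  </body>
</html>
```

Nothing here requires the publisher's cooperation: any author hosting a pre-print can add the same two script tags.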
If you have any more ideas for tools that would be useful for me to add, let me know; the experiment hasn't even started yet!<br /><br />Check out <a href="https://learnbayes.org/papers/confidenceIntervalsFallacy/" target="_blank">"The Fallacy of Placing Confidence in Confidence Intervals,"</a> play around with the tools, and let me know what you think.<br /><br />Richard Moreyhttp://www.blogger.com/profile/11319149283079163004noreply@blogger.com16tag:blogger.com,1999:blog-50311760553201901.post-71748944923787806842015-05-26T13:55:00.001-07:002015-05-26T13:55:35.988-07:00Call for papers: Bayesian statistics, at Zeitschrift für Psychologie<br />I am guest editing a special topical issue of <i>Zeitschrift für Psychologie </i>on Bayesian statistics. The complete call, with details, can be found here: <a href="http://www.hogrefe.com/fileadmin/redakteure/hogrefe_com/Periodicals/Zeitschrift_fuer_Psychologie/CfP_zfp_2016_03_web.pdf" target="_blank">[pdf]</a>. Briefly:<br /><blockquote class="tr_bq">As Bayesian statistics become part of standard analysis in psychology, the <i>Zeitschrift für Psychologie</i> invites papers to a topical issue highlighting Bayesian methods. We invite papers on a broad range of topics, including the benefits and limitations of Bayesian approaches to statistical inference, practical benefits of Bayesian methodologies, interesting applications of Bayesian statistics in psychology, and papers related to statistical education of psychologists from a Bayesian perspective. In addition to suggestions for full original or review articles, shorter research notes and opinion papers are also welcome. </blockquote><blockquote class="tr_bq">We invite scholars from various areas of scholarship, including but not limited to psychology, statistics, philosophy, and mathematics, to submit their abstracts on potential papers.</blockquote>Abstracts are due at the end of July. 
Critiques and articles about the history of Bayesian statistics are also welcome.<br />Richard Moreyhttp://www.blogger.com/profile/11319149283079163004noreply@blogger.com6tag:blogger.com,1999:blog-50311760553201901.post-51828688560543778562015-05-10T02:37:00.000-07:002015-05-10T02:39:34.677-07:00Visualizing statistical distributions with javascriptFor the past few years, I've been developing and using a library that allows me to easily generate visualizations of statistical distributions for teaching. One can specify a distribution along with a parametrization, and the library generates a table containing all the specified distributions, with links to interactive plots that let anyone see how changing the parameters affects each distribution. In addition, clicking on the plot lets users find areas under the distribution. Users can switch between PDF and CDF views. I've now opened the code on <a href="https://github.com/richarddmorey/stat-distributions-js" target="_blank">github</a>.<br /><a name='more'></a><br />You can also link directly to a visualization using URL parameters.
For instance:<br /><br /><a href="http://learnbayes.org/demo/stat-distributions-js/distributionDisplay.html?dist=normal&ptzn=2&plotxrng=50,150&rangesLo=50,3&rangesHi=150,45&starts=100,15">http://learnbayes.org/demo/stat-distributions-js/distributionDisplay.html?dist=normal&ptzn=2&plotxrng=50,150&rangesLo=50,3&rangesHi=150,45&starts=100,15</a><br /><br />See the <a href="http://learnbayes.org/demo/stat-distributions-js/distributionTable.html" target="_blank">live demo</a> and the <a href="https://github.com/richarddmorey/stat-distributions-js" target="_blank">github repository</a> for more details.<br /><br />Example screenshots:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-xBKPoJmk4F8/VU8k5kRgSTI/AAAAAAAAKDk/MwwFtsJ4Wqg/s1600/Screen%2BShot%2B2015-05-10%2Bat%2B09.22.12.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="261" src="http://2.bp.blogspot.com/-xBKPoJmk4F8/VU8k5kRgSTI/AAAAAAAAKDk/MwwFtsJ4Wqg/s400/Screen%2BShot%2B2015-05-10%2Bat%2B09.22.12.png" width="400" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-IoABPyUV0bw/VU8k5gJH2II/AAAAAAAAKDo/WHp3jUstWvw/s1600/Screen%2BShot%2B2015-05-10%2Bat%2B09.22.22.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="198" src="http://1.bp.blogspot.com/-IoABPyUV0bw/VU8k5gJH2II/AAAAAAAAKDo/WHp3jUstWvw/s400/Screen%2BShot%2B2015-05-10%2Bat%2B09.22.22.png" width="400" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-EDvIrNhSbfc/VU8k5hvolTI/AAAAAAAAKD0/w5r19Jd7xYE/s1600/Screen%2BShot%2B2015-05-10%2Bat%2B10.03.12.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="168" src="http://4.bp.blogspot.com/-EDvIrNhSbfc/VU8k5hvolTI/AAAAAAAAKD0/w5r19Jd7xYE/s400/Screen%2BShot%2B2015-05-10%2Bat%2B10.03.12.png" width="400" 
/></a></div><br /><br /><br />Richard Moreyhttp://www.blogger.com/profile/11319149283079163004noreply@blogger.com7tag:blogger.com,1999:blog-50311760553201901.post-74937908941655113222015-04-20T23:53:00.000-07:002015-04-20T23:54:59.889-07:00The fallacy of placing confidence in confidence intervals (version 2)My coauthors and I have submitted a new draft of our paper "The fallacy of placing confidence in confidence intervals". This paper is substantially modified from its previous incarnation. Here is the main argument:<br /><blockquote class="tr_bq"><blockquote class="tr_bq">"[C]onfidence intervals may not be used as suggested by modern proponents because this usage is not justified by confidence interval theory. If used in the way CI proponents suggest, some CIs will provide severely misleading inferences for the given data; other CIs will not. Because such considerations are outside of CI theory, developers of CIs do not test them, and it is therefore often not known whether a given CI yields a reasonable inference or not. For this reason, we believe that appeal to CI theory is redundant in the best cases, when inferences can be justified outside CI theory, and unwise in the worst cases, when they cannot."</blockquote></blockquote>The document, source code, and all supplementary material are available <a href="https://github.com/richarddmorey/ConfidenceIntervalsFallacy" target="_blank">here on github</a>.Richard Moreyhttp://www.blogger.com/profile/11319149283079163004noreply@blogger.com2tag:blogger.com,1999:blog-50311760553201901.post-70037436068278461062015-04-17T02:39:00.000-07:002015-04-17T02:58:44.400-07:00Guidelines for reporting confidence intervalsI'm working on a manuscript on confidence intervals, and I thought I'd share a draft section on the reporting of confidence intervals.
The paper has several demonstrations of how CIs may, or may not, offer quality inferences, and how they can differ markedly from credible intervals, even ones with so-called "non-informative" priors.<br /><br /><h3><a name='more'></a>Guidelines for reporting confidence intervals</h3><br /><b>Report credible intervals instead.</b> We believe any author who chooses to use confidence intervals should ensure that the intervals correspond numerically with credible intervals under some reasonable prior. Many confidence intervals cannot be so interpreted, but if the authors know they can be, they should be called "credible intervals". This signals to readers that they can interpret the interval as they have been (incorrectly) told they can interpret confidence intervals. Of course, the corresponding prior must also be reported. This is not to say that one can't also call them confidence intervals if indeed they are; however, readers are likely more interested in the post-data properties of the procedure -- not the coverage -- if they are interested in arriving at substantive conclusions from the interval.<br /><br /><b>Do not use procedures whose Bayesian properties are not known.</b> As <a href="http://projecteuclid.org/euclid.lnms/1215458835" target="_blank">Casella (1992)</a> pointed out, the post-data properties of a procedure are necessary for understanding what can be inferred from an interval. Any procedure whose Bayesian properties have not been explored can have properties that make it unsuitable for post-data inference. Procedures whose properties have not been adequately studied are inappropriate for general use.<br /><br /><b>Warn readers if the confidence procedure does not correspond to a Bayesian procedure.
</b>If it is known that a confidence interval does not correspond to a Bayesian procedure, warn readers that the confidence interval cannot be interpreted as having an X% probability of containing the parameter, that it cannot be interpreted in terms of the precision of measurement, and that it cannot be said to contain the values that should be taken seriously: the interval is merely an interval that, prior to sampling, had an X% probability of containing the true value. Authors who choose to use confidence intervals have a responsibility to keep their readers from invalid inferences, and it is almost certain that readers will misinterpret them without a warning (<a href="http://www.ejwagenmakers.com/inpress/HoekstraEtAlPBR.pdf" target="_blank">Hoekstra et al, 2014</a>).<br /><br /><b><i>Never</i> report a confidence interval without noting the procedure and the corresponding statistics.</b> As we have described, there are many different ways to construct confidence intervals, and they will have different properties. Some will have better frequentist properties than others; some will correspond to credible intervals, and others will not. It is unfortunately common for authors to report confidence intervals without noting how they were constructed. As can be seen from the examples we've presented, this is a terrible practice because without knowing which confidence interval was used, it is unclear what can be inferred. A narrow interval could correspond to very precise information or very imprecise information depending on which procedure was used. Not knowing which procedure was used could lead to very poor inferences. In addition, enough information should be presented so that any reader can compute a different confidence interval or credible interval.
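For instance, given a reported mean, standard deviation, and sample size, a reader can reconstruct the standard t interval; under the Jeffreys prior on the mean and variance, the same numbers also form a central credible interval. A minimal sketch (the function name and the summary statistics are illustrative):

```python
import numpy as np
from scipy import stats

def t_interval(mean, sd, n, conf=0.95):
    """Two-sided t interval for a normal mean, from summary statistics.
    Numerically, this is both the classical confidence interval and the
    central credible interval under the Jeffreys prior p(mu, sigma^2)
    proportional to 1/sigma^2."""
    se = sd / np.sqrt(n)
    crit = stats.t.ppf(1 - (1 - conf) / 2, df=n - 1)
    return mean - crit * se, mean + crit * se

lo, hi = t_interval(mean=100.0, sd=15.0, n=25)
print(f"95% interval: [{lo:.2f}, {hi:.2f}]")  # [93.81, 106.19]
```

Reporting the mean, standard deviation, and sample size alongside any interval makes this kind of recomputation possible.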
In most cases, this is covered by standard reporting practices, but in other cases more information may need to be given.<br /><br /><b>Consider reporting likelihoods or posteriors instead.</b> An interval provides fairly impoverished information. Just as proponents of confidence intervals argue that CIs provide more information than a significance test (although this is debatable for many CIs), a likelihood or a posterior provides much more information than an interval. Recently,<a href="http://pss.sagepub.com/content/25/1/7" target="_blank"> Cumming (2014)</a> [<a href="http://www.psychologicalscience.org/index.php/publications/observer/2014/march-14/theres-life-beyond-05.html" target="_blank">see also here</a>] has proposed so-called "cat's eye" intervals which are either fiducial distributions or Bayesian posteriors under a "non-informative" prior (the shape is the likelihood, but he interprets the area, so it must be a posterior or a fiducial distribution). With modern scientific graphics so easy to create, along with the fact that likelihoods are often approximately normal, we see no reason why likelihoods and posteriors cannot replace intervals in most circumstances. With a likelihood or a posterior, the arbitrariness of the confidence or credibility coefficient is avoided altogether.<br /><div><br /></div>Richard Moreyhttp://www.blogger.com/profile/11319149283079163004noreply@blogger.com3tag:blogger.com,1999:blog-50311760553201901.post-85377416823712446712015-04-10T02:34:00.001-07:002015-04-10T02:39:31.061-07:00All about that "bias, bias, bias" (it's no trouble)<style>div.definition { width: 60%; border-style: solid; border-width: medium; padding: 1em; margin-left: auto; margin-right: auto; } </style> <br />At some point, everyone who fiddles around with Bayes factors with point nulls notices something that, at first blush, seems strange: small effect sizes seem “biased” toward the null hypothesis. 
In null hypothesis significance testing, power simply increases as the true effect size increases. With Bayes factors, there is a non-monotonicity: increasing the sample size will at first slightly increase the degree to which a small observed effect size favors the null, and only then does the small effect size become evidence for the alternative. I recall puzzling over this with Jeff Rouder years ago when drafting <a href="http://drsmorey.org/bibtex/upload/Rouder:etal:2009a.pdf" target="_blank">our 2009 paper </a>on Bayesian t tests.<a name='more'></a><br /><br />Uri Simonsohn has a <a href="http://datacolada.org/2015/04/09/35-the-default-bayesian-test-is-prejudiced-against-small-effects/">blog post</a> critiquing default Bayes factors for their supposed “bias” toward the null hypothesis for small sample sizes. I have several brief responses:<ul><li> We do not suggest a “default” prior; we suggest a family of default priors, with an adjustable prior parameter (see also <a href="http://bayesfactor.blogspot.co.uk/2015/02/on-making-bayesian-omelet.html">this paper describing our view</a>, which is conditionally accepted at Multivariate Behavioral Research). If you're looking for a small effect, adjust the prior.</li><li> The whole point of a Bayes factor analysis is that you do not know what the true effect size is (see Jeff Rouder and Joe Hilgard's <a href="http://jeffrouder.blogspot.co.uk/2015/04/reply-to-uri-simonsohns-critique-of.html">response here</a>). Noting that the Bayes factor will mislead when you know there's a small effect but use a prior that says the effect size is probably moderate to large is not useful. Bayes factors just do what you ask them to do!</li><li> More broadly, though, I think it is helpful to think about this supposed “bias”. Is it what we would expect for a reasonable method? Sometimes our intuitions fail us, and we end up thinking something is undesirable, when actually we should be worried if that thing <em>didn't</em> happen.
</li></ul>The third point is what this blog post is about. Here, I show that the “bias” toward the null for small effect sizes is exactly what must happen for any reasonable method that meets four simple desiderata.<br /><br />We start with the idea of a measure of evidence comparing some composite alternative hypothesis to the null hypothesis. For our purposes here, it could be any measure of evidence; it does not have to be a Bayes factor. What we will do is set a number of reasonable desiderata on the properties of this evidence measure, and show that the so-called “bias” in favor of the null for small effect sizes must occur.<br /><br />We assume that our data can be summarized in terms of an effective sample size and an (observed) effect size measure. This effect size should have a “nullest” member (for instance, d=0, or R<sup>2</sup>=0). For any given sample size, the evidence against the null will be an increasing function of this observed effect size. We also need the concept of “no”, or equivocal, evidence; that is, that the data do not favor either hypothesis. This defines a 0 point on the evidence scale, whatever it is. <br /><br />The important concept for our demonstration is the idea of a bivariate space of sample size vs evidence. Sample size begins at 0 and increases along the x axis, and “no” evidence is marked on the y axis. We can think of sample size abstractly as indexing the amount of information in the data.
We are going to imagine fixing an observed effect size and varying the sample size, which will trace a curve through this bivariate space:<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-4y4IMAnWT74/VSeWs98GcAI/AAAAAAAAJ_Q/7GZsiG6X7wg/s1600/unnamed-chunk-2-1.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://1.bp.blogspot.com/-4y4IMAnWT74/VSeWs98GcAI/AAAAAAAAJ_Q/7GZsiG6X7wg/s1600/unnamed-chunk-2-1.png" height="400" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">A bivariate sample size / evidence space.</td></tr></tbody></table><br /><br />We can now give four desired properties that any evidence measure will have.<br /><br /><h3>Desiderata 1: The evidence with no data is “equivocal”.</h3><div><br /></div>If we observe no data, the strength of the evidence does not favor either hypothesis. Whatever the “0 evidence” point in the evidence space, having no data must put you there. <br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-PfLkJv_kIDk/VSeWvfzvypI/AAAAAAAAJ_Y/ziqVtYM23Vw/s1600/unnamed-chunk-3-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-PfLkJv_kIDk/VSeWvfzvypI/AAAAAAAAJ_Y/ziqVtYM23Vw/s1600/unnamed-chunk-3-1.png" height="400" width="400" /></a></div><br /><br />[For a Bayes factor, this means that prior odds and the posterior odds are the same — with no data, they don't change — and the log Bayes factor is 0.]<br /><br /><h3>Desiderata 2: The evidence for a “null” observed effect size is an increasing function of sample size, in favor of the null. 
However much evidence a “null” observed effect provides, no other observed effect size can exceed it.</h3><div><br /></div>For instance, if we observe d=0 with N=1000 participants, this is more convincing evidence in favor of the null than if we had observed d=0 with N=10. Obviously, this null observed effect should offer the most evidence possible, for a given sample size.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-bBbiECRs-CY/VSeWvZ-KlqI/AAAAAAAAJ_c/IYqjOF5-ySY/s1600/unnamed-chunk-4-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-bBbiECRs-CY/VSeWvZ-KlqI/AAAAAAAAJ_c/IYqjOF5-ySY/s1600/unnamed-chunk-4-1.png" height="400" width="400" /></a></div><br /><h3></h3><h3>Desiderata 3: A fixed non-null observed effect size must yield arbitrarily large amounts of evidence as sample size increases.</h3><div><br /></div>If we observe d=.3 with 10 participants, this isn't terribly convincing; but if we observed d=.3 with more and more participants, we are increasingly sure that the null hypothesis is false. In the bivariate space, this means that all non-null effect size curves eventually must end up either at -∞ or at an asymptote at some large value in favor of the alternative.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-fITMY_xKCTI/VSeWvUauGZI/AAAAAAAAJ_0/flmlYYswvmk/s1600/unnamed-chunk-5-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-fITMY_xKCTI/VSeWvUauGZI/AAAAAAAAJ_0/flmlYYswvmk/s1600/unnamed-chunk-5-1.png" height="400" width="400" /></a></div><br /><h3></h3><h3>Desiderata 4: The closer an observed effect size is to the null effect size, the more its curve “looks like” the null</h3><div><br /></div>This is just a smoothness assumption.
The conclusions we obtain from observing d=0 should be very close to the ones we obtain from d=.001 and even closer to those we obtain from d=.0000001. Of course, this smoothness should also hold for all other observed effect sizes, not just the null, but for our purposes here the observed null is what is important.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-aXPOfH4MjZQ/VSeWvwPSRvI/AAAAAAAAJ_w/SdlqW9i_mt4/s1600/unnamed-chunk-6-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-aXPOfH4MjZQ/VSeWvwPSRvI/AAAAAAAAJ_w/SdlqW9i_mt4/s1600/unnamed-chunk-6-1.png" height="400" width="400" /></a></div><br /><br />For small sample sizes, this means that the curves for small effect sizes must be near the null effect size lines in the bivariate space. As we increase the sample size, of course, those lines must diverge downward.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-oDcd9moR1nY/VSeWwLJBv_I/AAAAAAAAJ_k/etJCXdg-6HI/s1600/unnamed-chunk-7-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-oDcd9moR1nY/VSeWwLJBv_I/AAAAAAAAJ_k/etJCXdg-6HI/s1600/unnamed-chunk-7-1.png" height="400" width="400" /></a></div><br /><br />The effect of these four desiderata is to ensure that small effect sizes “look” null. This is not a consequence of the Bayes factor, or the prior, but rather of very reasonable conditions that any evidence measure would fulfil. For a Bayes factor, of course, <em>how</em> these lines move through the bivariate space — and how small an effect size will need to be in order to “look” null — will be sensitive to the prior on the alternative, as it must be. 
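The non-monotonicity itself is easy to demonstrate numerically. Here is a minimal sketch using a normal prior on the effect size in a z-test with known variance, a simplified stand-in for the Cauchy-prior default Bayes factor (the function name and the prior scale are illustrative assumptions):

```python
import numpy as np
from scipy import stats

def log_bf01(d, n, prior_sd=1.0):
    """Log Bayes factor favoring H0: mu = 0 over H1: mu ~ Normal(0, prior_sd^2),
    for an observed standardized effect size d with n observations of unit
    variance. Positive values favor the null."""
    se = 1.0 / np.sqrt(n)
    log_m0 = stats.norm.logpdf(d, loc=0.0, scale=se)  # marginal under H0
    log_m1 = stats.norm.logpdf(d, loc=0.0, scale=np.sqrt(prior_sd**2 + se**2))
    return log_m0 - log_m1

# A small fixed observed effect size: evidence for the null grows with n
# at first, then reverses sharply toward the alternative.
for n in [10, 100, 1000, 10000]:
    print(n, round(log_bf01(0.05, n), 2))
```

Tracing d=0.05 across sample sizes shows the rise-then-fall pattern described above; a larger d peaks earlier and falls sooner.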
But the behaviour described by Simonsohn is a natural consequence of very reasonable assumptions.<br /><br />Although it is counterintuitive, we would be worried if it <i>didn't</i> happen for some measure of evidence.Richard Moreyhttp://www.blogger.com/profile/11319149283079163004noreply@blogger.com10tag:blogger.com,1999:blog-50311760553201901.post-22881856110461779242015-04-09T15:23:00.001-07:002015-04-09T15:26:28.371-07:00Some thoughts on replicationIn a recent blog post, <a href="http://sometimesimwrong.typepad.com/wrong/2015/04/on-flukiness.html" target="_blank">Simine Vazire</a> discusses the problem with the logic of requiring replicators to explain when they reach different conclusions from the original authors. She frames it, correctly, as asking people to over-interpret random noise. Vazire identifies the issue as a problem with our thinking: that we underestimate randomness. I'd like to explore other ways in which our biases interfere with clear thinking about replication, and perhaps suggest some ways we can clarify it.<br /><br />I suggest two ways in which we fool ourselves in thinking about replication: the concept of "replication" is unnecessarily asymmetric and an example of overly-linear thinking, and a lack of distinction in <i>practice </i>causes a lack of distinction in <i>theory</i>.<br /><br /><a name='more'></a><h3>Fooled by language: the asymmetry of "replication"</h3><div>Imagine that a celebrated scientist, Dr. Smith, dies, and within her notes is discovered a half-written paper. Building on her previous work, this paper clearly lays out a creative experiment to test a theory. To avoid any complications such as post hoc theorising, assume the link between the theory and experiment is clear and follows from her previous work. On Dr. Smith's computer, along with the paper, is found a data set. Dr.
Smith's colleagues decide to finish the paper and publish it in her honor.</div><div><br /></div><div>Given the strange circumstances of this particular paper's history, another scientist, Dr. Jones, decides to replicate the study. Dr. Jones does his best to match the methods described in the paper, but obtains a different result. Dr. Jones tries to publish, but editors and reviewers demand an explanation: <i>why</i> is the replication different? Dr. Jones' result is doubted until he can explain the difference.</div><div><br /></div><div>Now suppose -- unbeknownst to everyone -- that the first experiment was never done. Dr. Smith simulated the data set as a pedagogical exercise to learn a new analysis technique. She never told anyone because she did not anticipate dying, of course, but everyone assumed the data was real. The second experiment is no replication at all; it is the first experiment done.</div><div><br /></div><div>Does this change the evidential value of Dr. Jones' experiment at all? Of course not. The fact that Dr. Smith's experiment was not done is irrelevant to the evidence in Dr. Jones' experiment. The evidence contained in a first experiment is the same, regardless of whether a second experiment is done (assuming, of course, that the methods are all sound). "Replication" is a useless label.</div><div><br /></div><div>Calling Dr. Jones' experiment a "replication" focuses our attention on the wrong relationship. One replicates an actual experiment that was done. However, the evidence that an experiment provides for a theory depends not on the relationship between the experiment's methods and an experiment that was done in the past. Rather, the evidence depends on the relationship between the experiment's methods and a <i>hypothetical </i>experiment that is designed to test the theory. One cannot replicate a hypothetical experiment, of course, because hypothetical experiments cannot be performed.
Instead, one <i>realizes</i> a hypothetical experiment, and there may be several realizations of the same hypothetical experiment.</div><div><br /></div><div>Thinking in this manner eliminates the asymmetric relationship between the two experiments. If both experiments can be realizations of the same hypothetical experiment designed to test a theory, which one came first is immaterial.* The burden is no longer on the second experimenter to explain why the results are different; the burden is on the advocates of the theory to explain the extant data, which now includes two differing results. (Vazire's caution about random noise still applies here, as we still don't want to over-explain differences; it is assumed that any post hoc explanation will be tested.)</div><div><br /></div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-Ssj4TCscL5I/VSbuk_K7OeI/AAAAAAAAJ_A/UtI08lIRGQg/s1600/replication2.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://1.bp.blogspot.com/-Ssj4TCscL5I/VSbuk_K7OeI/AAAAAAAAJ_A/UtI08lIRGQg/s1600/replication2.png" height="300" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Three hypothetical experiments that are tests of the same theory, along with five actually-run experiments. Hypothetical experiments B and C may be so-called "conceptual replications" of A, or tests of other aspects of the theory.</td></tr></tbody></table><div><br /></div><div>The conceptual distinction between a hypothetical experiment -- that is, the experiment that is planned -- and the actual experiment is critical.
That hypothetical experiment can be realized in many ways: different times, different labs, different participants, even different stimuli, if these are randomly generated or are selected from a large collection of interchangeable stimuli. Importantly, when the first realization of the hypothetical experiment is done,<i> it does not get methodological priority</i>. It is temporally first, but is simply one way in which the experiment could have been realized. </div><div><br /></div><div>Conceptualizing the scientific process in this way prevents researchers who did an experiment first from claiming that their experiment takes priority. If you are "replicating" their actual experiment, then it makes sense that your results will get compared to theirs, in the same way a "copy" might be compared to the "original". But conceptually, the two are siblings, not parent and child.<br /><br /></div><h3>Lack of distinction in practice vs. theory</h3><div>The critical distinction above is the distinction between a hypothetical experiment and an actual one. I think this is an instance where modern scientific practice causes problems. Although the idea of a hypothetical experiment arises in any experimental planning process, consider the typical scientific paper, which has an introduction, then a brief (maybe even just a few sentences!) segue describing the logic of the experiment, into the methods of an actually-performed experiment. </div><div><br /></div><div>This structure means that the hypothetical experiment and the actual experiment are impossible to disentangle. This is one of the reasons, I think, why we talk about "replication" so much, rather than performing another realization of the hypothetical experiment. We have no hypothetical experiment to work from, because it is almost completely conflated with the actual experiment.</div><div><br /></div><div>One initiative that will help with this problem is public pre-registration.
A hypothetical experiment is laid out in a pre-registration document. Note that from a pre-registration document, the structure in the figure becomes clear. If someone posts a public pre-registration document, why does it matter who does the experiment first (aside from the ethical issue of "scooping", etc)? No one is "replicating" anyone else; they are each separately realizing the <i>hypothetical</i> experiment that was planned.</div><div><br /></div><div>But in current practice, which does not typically distinguish between a hypothetical experiment and an actual one, the only way to add to the scientific literature about hypothetical experiment A is to try to "redo" one of its realizations. Any subsequent experiment is then logically dependent on the first actually performed experiment, and the unhelpful asymmetry crops up again.</div><div><br /></div><div>I think it would be useful to have a different word than "replication", because the connotation of the word "replication", as a facsimile or a copy of something already existing, focuses our attention in unhelpful ways.</div><div><br /></div><div>* Although logically which came first is immaterial, there may be statistical considerations to keep in mind, like the "<a href="http://andrewgelman.com/2011/09/10/the-statistical-significance-filter/" target="_blank">statistical significance filter</a>" that is more likely to affect a first study than a second. Also, as Vazire points out in the comments, the second study has fewer researcher degrees of freedom. </div><div><br /></div>Richard Moreyhttp://www.blogger.com/profile/11319149283079163004noreply@blogger.com7tag:blogger.com,1999:blog-50311760553201901.post-15808489906404854252015-04-09T08:38:00.000-07:002015-04-09T12:53:15.280-07:00My favorite Neyman passage: on confidence intervals<br />I've been doing a lot of reading on confidence interval theory. Some of the reading is more interesting than others.
There is one passage from Neyman's (1952) book "Lectures and Conferences on Mathematical Statistics and Probability" (available <a href="http://drsmorey.org/bibtex/upload/Neyman-1952.pdf" target="_blank">here</a>) that stands above the rest in terms of clarity, style, and humor. I had not read this before the last draft of our <a href="https://github.com/richarddmorey/ConfidenceIntervalsFallacy" target="_blank">confidence interval paper</a>, but for those of you who have read it, you'll recognize that this is the style I was going for. Maybe you have to be Jerzy Neyman to get away with it.<br /><br />Neyman gets bonus points for the footnote suggesting the "eminent", "elderly" boss is so obtuse (a reference to Fisher?) and that the young frequentists should be "remind[ed] of the glory" of being burned at the stake. This is just absolutely fantastic writing. I hope you enjoy it as much as I did.<br /><br /><a name='more'></a><br /><i>[begin excerpt, p. 211-215]</i><br /><br /><i>[Neyman is discussing using "sampling experiments" (Monte Carlo experiments with tables of random numbers) in order to gain insight into confidence intervals. $\theta$ is a true parameter of a probability distribution to be estimated.]</i><br /><br />The sampling experiments are more easily performed than described in<br />detail. Therefore, let us make a start with $\theta_1 = 1$, $\theta_2 = 2$, $\theta_3 = 3$ and $\theta_4 = 4$. We imagine that, perhaps within a week, a practical statistician is faced four times with the problem of estimating $\theta$, each time from twelve observations, and that the true values of $\theta$ are as above <i>[ie, $\theta_1,\ldots,\theta_4$]</i> although the statistician does not know this. We imagine further that the statistician is an elderly gentleman, greatly attached to the arithmetic mean and that he wishes to use formulae (22). 
However, the statistician has a young assistant who may have read (and understood) modern literature and prefers formulae (21). Thus, for each of the four instances, we shall give two confidence intervals for $\theta$, one computed by the elderly Boss, the other by his young Assistant.<br /><br /><i>[Formulas 21 and 22 are simply different 95% confidence procedures. Formula 21 has better frequentist properties; Formula 22 is inferior, but the Boss likes it because it is intuitive to him.]</i><br /><br />Using the first column on the first page of Tippett's tables of random<br />numbers and performing the indicated multiplications, we obtain the following<br />four sets of figures.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-8Akim6t_lc0/VSaTSpgFlII/AAAAAAAAJ-w/_-w7I2JUy9k/s1600/Screen%2BShot%2B2015-04-09%2Bat%2B15.30.45.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-8Akim6t_lc0/VSaTSpgFlII/AAAAAAAAJ-w/_-w7I2JUy9k/s1600/Screen%2BShot%2B2015-04-09%2Bat%2B15.30.45.png" height="322" width="400" /></a></div><br /><br />The last two lines give the assertions regarding the true value of $\theta$ made by the Boss and by the Assistant, respectively. The purpose of the sampling experiment is to verify the theoretical result that the long run relative frequency of cases in which these assertions will be correct is, approximately, equal to $\alpha = .95$.<br /><br />You will notice that in three out of the four cases considered, both assertions (the Boss' and the Assistant's) regarding the true value of $\theta$ are correct and that in the last case both assertions are wrong. In fact, in this last case the true $\theta$ is 4 while the Boss asserts that it is between 2.026 and 3.993 and the Assistant asserts that it is between 2.996 and 3.846.
Although the probability of success in estimating $\theta$ has been fixed at $\alpha = .95$, the failure on the fourth trial need not discourage us. In reality, a set of four trials is plainly too short to serve for an estimate of a long run relative frequency. Furthermore, a simple calculation shows that the probability of at least one failure in the course of four independent trials is equal to .1855. Therefore, a group of four consecutive samples like the above, with at least one wrong estimate of $\theta$, may be expected one time in six or even somewhat oftener. The situation is, more or less, similar to betting on a particular side of a die and seeing it win. However, if you continue the sampling experiment and count the cases in which the assertion regarding the true value of $\theta$, made by either method, is correct, you will find that the relative frequency of such cases converges gradually to its theoretical value, $\alpha= .95$.<br /><br />Let us put this into more precise terms. Suppose you decide on a number $N$ of samples which you will take and use for estimating the true value of $\theta$. The true values of the parameter $\theta$ may be the same in all $N$ cases or they may vary from one case to another. This is absolutely immaterial as far as the relative frequency of successes in estimation is concerned. In each case the probability that your assertion will be correct is exactly equal to $\alpha = .95$. Since the samples are taken in a manner insuring independence (this, of course, depends on the goodness of the table of random numbers used), the total number $Z(N)$ of successes in estimating $\theta$ is the familiar binomial variable with expectation equal to $N\alpha$ and with variance equal to $N\alpha(1 - \alpha)$. 
Thus, if $N = 100$, $\alpha = .95$, it is rather improbable that the relative frequency $Z(N)/N$ of successes in estimating $\theta$ will differ from $\alpha$ by more than<br /><br />\[<br />2\sqrt{\frac{\alpha(1-\alpha)}{N}} = .042<br />\]<br /><br />This is the exact meaning of the colloquial description that the long run relative frequency of successes in estimating $\theta$ is equal to the preassigned $\alpha$. Your knowledge of the theory of confidence intervals will not be influenced by the sampling experiment described, nor will the experiment prove anything. However, if you perform it, you will get an intuitive feeling of the machinery behind the method which is an excellent complement to the understanding of the theory. This is like learning to drive an automobile: gaining experience by actually driving a car compared with learning the theory by reading a book about driving.<br /><br />Among other things, the sampling experiment will attract attention to<br />the frequent difference in the precision of estimating $\theta$ by means of the two alternative confidence intervals (21) and (22). You will notice, in fact, that the confidence intervals based on $X$, the greatest observation in the sample, are frequently shorter than those based on the arithmetic mean $\bar{X}$. If we continue to discuss the sampling experiment in terms of cooperation between the eminent elderly statistician and his young assistant, we shall have occasion to visualize quite amusing scenes of indignation on the one hand and of despair before the impenetrable wall of stiffness of mind and routine of thought on the other.<sup><a href="#footnote">[See footnote]</a></sup> For example, one can imagine the conversation between the two men in connection with the first and third samples reproduced above. You will notice that in both cases the confidence interval of the Assistant is not only shorter than that of the Boss but is completely included in it.
Thus, as a result of observing the first sample, the Assistant asserts that<br /><br />\[<br />.956 \leq \theta \leq 1.227.<br />\]<br /><br />On the other hand, the assertion of the Boss is far more conservative and admits the possibility that $\theta$ may be as small as .688 and as large as 1.355. And both assertions correspond to the same confidence coefficient, $\alpha = .95$! I can just see the face of my eminent colleague redden with indignation and hear the following colloquy.<br /><br /><blockquote class="tr_bq"><b>Boss</b>: "Now, how can this be true? I am to assert that $\theta$ is between .688 and 1.355 and you tell me that the probability of my being correct is .95. At the same time, you assert that $\theta$ is between .956 and 1.227 and claim the same probability of success in estimation. We both admit the possibility that $\theta$ may be some number <i>between</i> .688 and .956 or <i>between</i> 1.227 and 1.355. Thus, the probability of $\theta$ falling within these intervals is certainly greater than zero. In these circumstances, you have to be a nit-wit to believe that<br />\[<br />\begin{eqnarray*}<br />P\{.688 \leq \theta \leq 1.355\} &=& P\{.688 \leq \theta < .956\} + P\{.956 \leq \theta \leq 1.227\}\\<br />&& + P\{1.227 \leq \theta \leq 1.355\}\\<br />&=& P\{.956 \leq \theta \leq 1.227\}.\mbox{''}<br />\end{eqnarray*}<br />\]</blockquote><blockquote class="tr_bq"><b>Assistant</b>: "But, Sir, the theory of confidence intervals does not assert anything about the probability that the unknown parameter $\theta$ will fall within any specified limits. What it does assert is that the probability of success in estimation using either of the two formulae (21) or (22) is equal to $\alpha$."</blockquote><blockquote class="tr_bq"><b>Boss</b>: "Stuff and nonsense! I use one of the blessed pair of formulae and come up with the assertion that $.688 \leq \theta \leq 1.355$. This assertion is a success only if $\theta$ falls within the limits indicated. 
Hence, the probability of success is equal to the probability of $\theta$ falling within these limits —."</blockquote><blockquote class="tr_bq"><b>Assistant</b>: "No, Sir, it is not. The probability you describe is the a posteriori probability regarding $\theta$, while we are concerned with something else. Suppose that we continue with the sampling experiment until we have, say, $N = 100$ samples. You will see, Sir, that the relative frequency of successful estimations using formulae (21) will be about the same as that using formulae (22) and that both will be approximately equal to .95."</blockquote><br />I do hope that the Assistant will not get fired. However, if he does, I would remind him of the <a href="http://en.wikipedia.org/wiki/Giordano_Bruno#Imprisonment.2C_trial_and_execution.2C_1593.E2.80.931600" target="_blank">glory of Giordano Bruno</a> who was burned at the stake by the Holy Inquisition for believing in the Copernican theory of the solar system. Furthermore, I would advise him to have a talk with a physicist or a biologist or, maybe, with an engineer. They might fail to understand the theory but, if he performs for them the sampling experiment described above, they are likely to be convinced and give him a new job. In due course, the eminent statistical Boss will die or retire and then —.<br /><br /><br /><a href="https://www.blogger.com/null" name="footnote"></a>[footnote] Sad as it is, your mind does become less flexible and less receptive to novel ideas as the years go by. The more mature members of the audience should not take offense. I, myself, am not young and have young assistants. 
Besides, unreasonable and stubborn individuals are found not only among the elderly but also frequently among young people.<br /><br /><i>[end excerpt]</i><br /><div><a href="https://www.blogger.com/null" name="footnote"><br /></a></div><a href="https://www.blogger.com/null" name="footnote"></a>Richard Moreyhttp://www.blogger.com/profile/11319149283079163004noreply@blogger.com3tag:blogger.com,1999:blog-50311760553201901.post-33500226680905565772015-03-29T05:26:00.000-07:002015-03-29T05:27:59.991-07:00The TES Challenge to Greg Francis<style>div.definition { width: 60%; border-style: solid; border-width: medium; padding: 1em; margin-left: auto; margin-right: auto; } </style> <br />This post is a follow-up to my previous post, <a href="http://bayesfactor.blogspot.co.uk/2015/03/statistical-alchemy-and-test-for-excess.html#comments">“Statistical alchemy and the 'test for excess significance'”</a>. In the comments on that post, Greg Francis objected to my points about the Test for Excess Significance. I laid out a challenge in which I would use simulation to demonstrate these points. Greg Francis agreed to the details; this post is about the results of the simulations (with links to the code, etc.)<br /><br /><a name='more'></a><br /><h2>A challenge</h2>In my previous post, I said this:<br /><br /><div class="definition"><b><i>Morey</i></b>: “…we have bit of a mystery. That $E$ [the expected number of non-significant studies in a set of $n$ studies] equals the sum of the expected [Type II error] probabilities is merely asserted [by Ioannidis and Trikalinos]. There is no explanation of what assumptions were necessary to derive that fact. Moreover, it is demonstrably false.” </div><br />Greg Francis replied:<br /><br /><div class="definition"><b><i>Francis</i></b>:“…none of your examples of the falseness of the equation are valid because you fix the number of studies to be n, which is inconsistent with your proposed study generation process. 
Your study generation process works if you let n vary, but then the Ioannidis & Trikalinos formula is shown to be correct…[i]n short, you present impossible sampling procedures and then complain that the formula proposed by Ioannidis & Trikalinos does not handle your impossible situations.” </div><br />To which I replied,<br /><br /><div class="definition"><b><i>Morey</i></b>:“If you don’t believe me, here’s a challenge: you pick a power and a random seed. I will simulate a very large ‘literature’ according to the ‘experimenter behaviour’ of my choice, importantly with no publication bias or other selection of studies. I will guarantee that I will use a behaviour that will generate experiment set sizes of 5. I will save the code and the ‘literature’ coded in terms of ‘sets’ of studies and how many significant and nonsignificant studies there are. You get to guess what the average number of significant studies are in sets of 5 via I&T’s model, along with a 95% CI (I’ll tell you the total number of such studies). That is, we’re just using Monte Carlo to estimate the expected number of significant studies in sets of experiments n=5; that is, precisely what I&T use as the basis of their model (for the special case of n=5).” “<b>This will answer the question of ‘what is the expected number of nonsignificant studies in a set of n?</b>’” </div><br />This challenge will very clearly show that my situations are not “impossible”. I can sample them in a very simple simulation. Greg Francis agreed to the simulation:<br /><br /><div class="definition"><b><i>Francis</i></b>: “Clearly at least one of us is confused. Maybe we can sort it out by trying your challenge. Power=0.5, random seed= 19374013” </div><br />I further clarified:<br /><br /><div class="definition"><b><i>Morey</i></b>: “Before I do this, though, I want to make sure that we agree on what this will show. 
I want to show that the expected number of nonsignificant studies in a set of n (=5) studies is not what I&T say it is, and hence, the reasoning behind the test is flawed (because ‘excess significance’ is defined as deviation from this expected number). I also want to be clear what the prediction is here: Since the power of the test is .5, according to I&T, the expected number of nonsignificant studies in a set of 5 is 2.5. Agreed?” </div><br />…to which Greg Francis agreed.<br /><br />I have performed this simulation. Before reading on, you should read the web page containing the results:<br /><ul><li>Web page (with code) outlining the results: <a href="http://learnbayes.org/talks/TES/TESsimulation.html">http://learnbayes.org/talks/TES/TESsimulation.html</a></li><li>Source <code>.Rmd</code> file: <a href="http://learnbayes.org/talks/TES/TESsimulation.Rmd">http://learnbayes.org/talks/TES/TESsimulation.Rmd</a></li></ul>The table below shows the results of the simulation of 1000000 “sets” of studies. All simulated “studies” are published in this simulation, no questionable research practices are involved. The first column shows \(n\), and the second column shows the average number of non-significant studies for sets of \(n\), which is a Monte Carlo estimate of I&T's \(E\). As you can see, it is not 2.5.<br /><br /><table><thead><tr class="header"><th align="left">Total studies (n) </th><th align="right">Mean nonsig. studies </th><th align="right">Expected by TES (E) </th><th align="right">SD nonsig. 
studies </th><th align="right">Count</th></tr></thead><tbody><tr class="odd"><td align="left">1</td><td align="right">1</td><td align="right">0.5</td><td align="right">0</td><td align="right">499917</td></tr><tr class="even"><td align="left">2</td><td align="right">1</td><td align="right">1.0</td><td align="right">0</td><td align="right">249690</td></tr><tr class="odd"><td align="left">3</td><td align="right">1</td><td align="right">1.5</td><td align="right">0</td><td align="right">125269</td></tr><tr class="even"><td align="left">4</td><td align="right">1</td><td align="right">2.0</td><td align="right">0</td><td align="right">62570</td></tr><tr class="odd"><td align="left">5</td><td align="right">1</td><td align="right">2.5</td><td align="right">0</td><td align="right">31309</td></tr><tr class="even"><td align="left">6</td><td align="right">1</td><td align="right">3.0</td><td align="right">0</td><td align="right">15640</td></tr><tr class="odd"><td align="left">7</td><td align="right">1</td><td align="right">3.5</td><td align="right">0</td><td align="right">7718</td></tr><tr class="even"><td align="left">8</td><td align="right">1</td><td align="right">4.0</td><td align="right">0</td><td align="right">3958</td></tr><tr class="odd"><td align="left">9</td><td align="right">1</td><td align="right">4.5</td><td align="right">0</td><td align="right">1986</td></tr><tr class="even"><td align="left">10</td><td align="right">1</td><td align="right">5.0</td><td align="right">0</td><td align="right">975</td></tr></tbody></table><br />(I have truncated the table at \(n=10\); see the HTML file for the full table.)<br /><br />I also showed that you can change the experimenter's behaviour and make it 2.5. This indicates that the assumptions one makes about experimenter behavior <em>matter</em> to the expected number of non-significant studies in a particular set. 
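The pattern in the table follows directly from a perfectly legitimate stopping rule. As an illustration, here is a minimal sketch in Python (the actual simulation code is R, in the linked <code>.Rmd</code> file; the specific rule below is one hypothetical behaviour chosen because it reproduces the table, with each set containing exactly one nonsignificant study and sets of size \(n\) occurring with probability \(0.5^n\)):

```python
import random

def simulate_literature(n_sets, power=0.5, seed=19374013):
    """Hypothetical experimenter behaviour: keep running studies while they
    come out significant (each study is significant with probability equal
    to its power), and stop -- publishing the whole set -- at the first
    nonsignificant result. No publication bias: every set is published."""
    rng = random.Random(seed)
    literature = []  # (total studies in set, nonsignificant studies in set)
    for _ in range(n_sets):
        n = 1
        while rng.random() < power:  # significant result: run another study
            n += 1
        literature.append((n, 1))  # each set ends with exactly 1 nonsig study
    return literature

lit = simulate_literature(1_000_000)
sets_of_5 = [nonsig for n, nonsig in lit if n == 5]
print(sum(sets_of_5) / len(sets_of_5))  # 1.0, not the 2.5 that n*(1-power) predicts
```

Note that under this rule, half of all studies in the whole literature are still significant, exactly as the power dictates; it is only the distribution of nonsignificant studies across sets of different sizes that departs from \(n(1-\mbox{power})\).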
Across <em>all</em> sets of studies, the proportion of significant studies is expected to equal the power. However, how this is distributed across sets of different sizes is a function of the decision rule.<br /><br />The expression for the expected number of non-significant studies in a set of \(n\) is not correct (without further very strong, unwarranted assumptions).Richard Moreyhttp://www.blogger.com/profile/11319149283079163004noreply@blogger.com34tag:blogger.com,1999:blog-50311760553201901.post-90472941995782010172015-03-28T03:18:00.000-07:002015-03-28T03:32:11.741-07:00Two things to stop saying about null hypotheses<style>div.definition { width: 60%; border-style: solid; border-width: medium; padding: 1em; margin-left: auto; margin-right: auto; } </style> <br />There is a currently fashionable way of describing Bayes factors that resonates with experimental psychologists. I hear it often, particularly as a way to describe a <em>particular use</em> of Bayes factors.
For example, one might say, “I needed to prove the null, so I used a Bayes factor,” or “Bayes factors are great because with them, you can prove the null.” I understand the motivation behind this sort of language, but please: stop saying one can “prove the null” with Bayes factors.<br /><br />I also often hear other people say “but the null is never true.” I'd like to explain why we should avoid saying both of these things.<br /><br /><a name='more'></a><br /><h3></h3><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-musB7WurlHg/VRaCuU9HSiI/AAAAAAAAJ-Y/rIIPeAcTEz4/s1600/mrt.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://3.bp.blogspot.com/-musB7WurlHg/VRaCuU9HSiI/AAAAAAAAJ-Y/rIIPeAcTEz4/s1600/mrt.jpg" height="163" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Null hypotheses are tired of your jibber jabber</td></tr></tbody></table><h3>Why you shouldn't say “prove the null”</h3><div><br /></div>Statistics is complicated. People often come up with colloquial ways of describing what a particular method is doing: for instance, one might say that a significance test gives us “evidence against the null”; one might say that a “confidence interval tells us the 95% most plausible values”; or one might say that a Bayes factor helps us “prove the null.” Bayesians are often quick to correct misconceptions that people use to justify their use of classical or frequentist methods. It is just as important to correct misconceptions about Bayesian methods.<br /><br />In order to understand why we shouldn't say “prove the null”, consider the following situation: You have a friend who claims that they can affect the moon with their mind. You, of course, think this is preposterous.
Your friend looks up at the moon and says “See, I'm using my abilities right now!” You check the time.<br /><br />You then decide to head to the local lunar seismologist, who has good records of subtle moon tremors. You ask her about what happened at the time your friend was looking at the moon, and she reports back to you that lunar activity at that time was stronger than it typically is 95% of the time (thus passing the bar for “statistical significance”). <br /><br />Does this mean that there is evidence for your friend's assertion? The answer is “no.” Your friend made no statement about what one would expect from the seismic data. In fact, your friend's statement is completely unfalsifiable (as is the case with the typical “alternative” in a significance test, \(\mu\neq0\)). <br /><br />But consider the following alternative statements your friend could have made: “I will destroy the moon with my mind”; “I will make very large tremors (with magnitude \(Y\))”; “I will make small tremors (with magnitude \(X\)).” How do we now regard your friend's claims in light of what happened?<br /><ul><li>“I will destroy the moon with my mind” is clearly inconsistent with the data. You (the null) are supported by an infinite amount, because you have completely falsified his statement that he would destroy the moon (the alternative).</li><li>“I will make very large tremors (with magnitude \(Y\))” is also inconsistent with the data, but if we allow a range of uncertainty around his claim, may not be completely falsified. Thus you (the null) are supported, but not by as much as in the first situation.</li><li>“I will make small tremors (with magnitude \(X\))” may support you (the null) or your friend (the alternative), depending on how the predicted magnitude compares with the observed one.</li></ul>Here we can see that the support for the null depends on the alternative at hand. This is, of course, as it must be. Scientific evidence is <em>relative</em>.
We can never “prove the null”: we can only “find evidence for a specified null hypothesis against a reasonable, well-specified alternative”. That's quite a mouthful, it's true, but “prove the null” creates misunderstandings about Bayesian statistics, and makes it appear that it is doing something it cannot do.<br /><br />In a Bayesian setup, the null and alternative are both models and the relative evidence between them will change based on how we specify them. If we specify them in a reasonable manner, such that the null and alternative correspond to relevant theoretical viewpoints or encode information about the question at hand, the relative <em>statistical</em> evidence will be informative for our research ends. If we don't specify reasonable models, then the relative evidence between the models may be correct, but useless.<br /><br />We never “prove the null” or “compute the probability of the null hypothesis”. We can only compare <em>a</em> null model to <em>an</em> alternative model, and determine the relative evidence. 
<br /><br />[See also <a href="http://www.stat.columbia.edu/~gelman/research/published/philosophy.pdf" target="_blank">Gelman and Shalizi (2013)</a> and <a href="http://pcl.missouri.edu/sites/default/files/Morey.etal_.2013.bjmsp_.pdf" target="_blank">Morey, Romeijn and Rouder (2013)</a>]<br /><br /><h3>Why you shouldn't say “the null is never true”</h3><div><br /></div>A common retort to tests including a point null (often called a 'null' hypothesis) is that “the null is never true.” This is backed up by four sorts of “evidence”:<br /><ul><li>A quote from an authority: “Tukey or Cohen said so!” (Tukey was smart, but this is not an argument.)</li><li>Common knowledge / “experience”: “We all know the null is impossible.” (This was Tukey's “argument”)</li><li>Circular: “The area under a point in a density curve is 0.” (Of course if your model doesn't have a point null, the point null will be impossible.)</li><li>All models are “false” (even if this were true --- I think it is actually a category error --- it would equally apply to all alternatives as well)</li></ul>The most attractive seems to be the second, but it should be noted that people almost never use techniques that allow finding evidence for null hypotheses. Under these conditions, how is one determining that the null is never true? If a null were ever true, we would not be able to accumulate evidence for it, so the second argument definitely has a hint of circularity as well. <br /><br />When someone says “The null hypothesis is impossible/implausible/irrelevant”, what they are saying in reality is “I don't believe the null hypothesis can possibly be true.” This is a totally fine statement, as long as we recognize it for what it is: an <em>a priori commitment</em>. We should not pretend that it is anything else; I cannot see any way that one can find universal evidence for the statement “the null is impossible”.<br /><br />If you find the null hypothesis implausible, that's OK.
Others might <em>not</em> find it implausible. It is ultimately up to substantive experts to decide what hypotheses <em>they</em> want to consider in their data analysis, and not up to methodologists or statisticians to tell experts what to think.<br /><br />Any automatic behavior — either automatically rejecting all null hypotheses, or automatically testing null hypotheses — is bad. Hypothesis testing and estimation should be considered and deliberate. Luckily, Bayesian statistics allows both to be done in a principled, coherent manner, so informed choices can be made by the analyst and not by the restrictions of the method.Richard Moreyhttp://www.blogger.com/profile/11319149283079163004noreply@blogger.com3tag:blogger.com,1999:blog-50311760553201901.post-54569378502150437852015-03-23T07:58:00.001-07:002015-03-23T07:58:46.583-07:00BayesFactor updated to version 0.9.11-1The BayesFactor package has been updated to version 0.9.11-1. The changes are:<br /><br /> CHANGES IN BayesFactor VERSION 0.9.11-1<br /><br />CHANGES<br /> * Fixed memory bug causing importance sampling to fail.<br /><br /> CHANGES IN BayesFactor VERSION 0.9.11<br /><br />CHANGES<br /> * Added support for prior/posterior odds and probabilities. See the new vignette for details.<br /> * Added approximation for t test in case of large t<br /> * Made some error messages clearer<br /> * Use callbacks at least once in all cases<br /> * Fix bug preventing continuous interactions from showing in regression Gibbs sampler<br /> * Removed unexported function oneWayAOV.Gibbs(), and related C functions, due to redundancy<br /> * gMap from model.matrix is now 0-indexed vector (for compatibility with C functions)<br /> * substantial changes to backend, to Rcpp and RcppEigen for speed<br /> * removed redundant struc argument from nWayAOV (use gMap instead)<br /><div><br /></div>Richard Moreyhttp://www.blogger.com/profile/11319149283079163004noreply@blogger.com3