Comments on "All about that 'bias, bias, bias' (it's no trouble)"
Blog: BayesFactor: Software for Bayesian inference
Author: Richard Morey (http://www.blogger.com/profile/11319149283079163004)
thom (2015-04-20):

This makes sense to me and I'm trying to put some of this into a simple summary. A fundamental problem with assessing evidence for small effects is that "small" is not a well-defined concept but depends on context. Thus it is essential that any tool for distinguishing a negligible/zero effect from a small effect be calibrated for the context in question.

Anonymous (2015-04-11):

Dr. R.,

Are you implying that P(effect exists) = 1 whenever one finds a single significant result, regardless of the size of that effect? It is hard to interpret your statement otherwise unless you provide clarification:

"When the aim of a study is simply to demonstrate that an effect exists, a simple significance test is more useful."

Anonymous (2015-04-10 05:53):

> But do we need an alternative to test the hypothesis that the true effect size is close to z?

I think you do. Under NHST you cannot distinguish between a false positive and a true effect near zero.

> No. As sample size increases, the observed effect size will be increasingly closer to zero.

Yes, and the CI will also become narrower. This relationship is on average more or less constant (I briefly discuss that in my BSE manuscript; I had some data on it too, but I felt that was a bit too much for it. Maybe I will post about it sometime). Therefore it follows the same principle as NHST, in that the false positive rate will remain constant regardless of sample size.

One approach that I think is appropriate, though, is to define a null range of irrelevant non-zero effects and to quantify how distinct the CI is from that range. This is basically what Zoltan Dienes describes here as an alternative to Bayes factors (he prefers the latter but seems to suggest that these are largely equivalent approaches):
journal.frontiersin.org/Journal/10.3389/fpsyg.2014.00781/abstract

Richard Morey (2015-04-10 05:26):

Dr. R,
> "Sampling error still produces an observed effect size that is not 0, but the tight confidence interval includes 0 and the null-hypothesis cannot be rejected. If you want to be more certain that the null-hypothesis is true, you need to collect more data and examine whether 0 is still included in an even tighter confidence interval."

No. This is a basic, longstanding fallacy about confidence intervals; see Neyman (1941, 1952).

> "What is gained by comparing the hypothesis d = 0 against an alternative hypothesis with an unknown effect size that is simply a guess by a researcher? 'the devil is in the selection of the prior'"

That's a great question, one that has been answered at length in many papers about Bayesian statistics over the past 70 years.

Jeff:

Yes, I saw that he simulated things. The purpose of the presentation here is different, but it addresses the same question: should small observed effect sizes yield evidence (even strong evidence!) for the null? The argument above shows that yes, they should. We can question how the curves should move through that bivariate space, but they must have certain properties, and one of them is that "small effect sizes look null", regardless of whether you're using Bayes factors or not.

Dr. R (2015-04-10 04:41):

Dear Richard Morey,
But do we need an alternative to test the hypothesis that the true effect size is close to z?

No. As sample size increases, the observed effect size will be increasingly closer to zero.

Isn't it sufficient to show that d = .04 with a 95% CI of d = -.01 to d = .09?

Sampling error still produces an observed effect size that is not 0, but the tight confidence interval includes 0 and the null-hypothesis cannot be rejected.

If you want to be more certain that the null-hypothesis is true, you need to collect more data and examine whether 0 is still included in an even tighter confidence interval.

What is gained by comparing the hypothesis d = 0 against an alternative hypothesis with an unknown effect size that is simply a guess by a researcher? "The devil is in the selection of the prior."

When people say that it is impossible to test the null-hypothesis, they really are saying it is impossible to test an exact point prediction, d = 1/infinity. But this is also true for any other effect size; d = .3 is not equal to d = .300000000000000001.

It is now clear to me when Bayesian statistics is useful. When you have two theories that make competing quantitative predictions (e.g., d = 0 vs. d = 3; d = .5 vs. d = 1, etc.), the BF is useful for assessing whether the evidence is more consistent with theory A or theory B.

When the aim of a study is simply to demonstrate that an effect exists, a simple significance test is more useful. The current crisis of confidence is not a problem of p-values, but a problem of not reporting non-significant results. This is not a statistical problem, so we cannot fix it by changing statistics.

Sincerely, Dr. R

Jeff Rouder (2015-04-10 04:37):

Great Post.
This info needs to get out to fully understand the appeal of BF.

That said, Uri's example is quite different. If you look carefully at his Figure 1 (http://datacolada.org/2015/04/09/35-the-default-bayesian-test-is-prejudiced-against-small-effects/), you see that the effect size is not constant. He is lowering it as sample size increases so that 50% power is preserved. So he is choosing a different one of your blue lines for different sample sizes. I haven't worked out the limiting behavior, largely because I don't care about 50% power curves, but it could be to the red line or not. What Uri simulated is different from what is presented here, but what is presented here is the far more understandable case.
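[Editor's note] The behavior the thread keeps circling (a fixed, just-significant result looking more and more "null" as n grows, versus a fixed observed d eventually swinging against the null) can be sketched numerically. The following is a minimal illustration under assumed simplifications, not the BayesFactor package's JZS test: it treats the observed standardized effect d as normal with standard error 1/sqrt(n) and puts a N(0, 1) prior on the effect under H1, so the Bayes factor reduces to a ratio of two zero-mean normal densities.

```python
import math

def normal_pdf(x, var):
    """Density at x of a zero-mean normal with the given variance."""
    return math.exp(-x * x / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def bf01(d, n, prior_var=1.0):
    """Bayes factor for H0: delta = 0 vs H1: delta ~ N(0, prior_var),
    given an observed standardized effect d with sampling variance 1/n.
    Values > 1 favor the null."""
    se2 = 1.0 / n
    # H0 predicts d ~ N(0, 1/n); marginally, H1 predicts d ~ N(0, 1/n + prior_var)
    return normal_pdf(d, se2) / normal_pdf(d, se2 + prior_var)

# A result held exactly at z = 1.96 ("just significant") favors the null
# more and more as n grows -- the Jeffreys-Lindley behavior:
for n in (100, 1000, 10000):
    d = 1.96 / math.sqrt(n)  # observed effect shrinks with n
    print(n, round(bf01(d, n), 2))
```

By contrast, holding the observed effect fixed at d = .04 (Dr. R's example) and growing n does not favor the null forever: once the standard error shrinks well below .04, the same ratio swings toward the alternative, which is why the curves in the post must bend through that bivariate (effect size, n) space rather than treat "small d" as null at every n.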