https://fivethirtyeight.com/features/science-isnt-broken/
Another study with the same goal of comparing the results from different research teams found similar disparities, though the graphs aren’t quite as pretty.
If we only look at those with p < 0.05 (green) and a 95% confidence interval, then there are 17 teams left. And they all(!) agree with more than 95% confidence.
And you missed the point in the very article about how the p value isn’t really as useful as it’s been touted.
That’s not the point. The point is that the results are indeed mostly very similar, unlike what OP claims.
I never said that only looking at p values is a good idea or anything else like that.
So ignore all non-significant results? What’s to say those methods produce findings closer to the truth than the methods with no significant results?
The issue is that so many seemingly legitimate methods produce different findings with the same data.
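A minimal, purely illustrative sketch of that point (this is not the study’s data or any team’s actual model; the variables, the outlier rule, and the specifications are all assumptions): fit the same simulated dataset with several defensible analysis choices and the estimated effect and p-value can shift noticeably.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# One simulated dataset: outcome y depends weakly on x, strongly on a
# possible confounder z, with heavy-tailed noise.
z = rng.normal(size=n)
x = 0.5 * z + rng.normal(size=n)
y = 0.1 * x + 0.6 * z + rng.standard_t(df=3, size=n)

def effect_of_x(y_, cols):
    """OLS fit; return the coefficient and p-value for x (first column in cols)."""
    X = sm.add_constant(np.column_stack(cols))
    res = sm.OLS(y_, X).fit()
    return res.params[1], res.pvalues[1]

keep = np.abs(y) < 3  # one of many plausible outlier-exclusion rules

analyses = {
    "x only":                effect_of_x(y, [x]),
    "x adjusted for z":      effect_of_x(y, [x, z]),
    "x only, outliers cut":  effect_of_x(y[keep], [x[keep]]),
}

for name, (beta, p) in analyses.items():
    print(f"{name:22s}  beta_x = {beta:+.3f}   p = {p:.4f}")
```

Each of those specifications looks reasonable on its own; whether the result crosses the p < 0.05 line can depend on which one you happen to pick, which is the problem being described.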