
The replication crisis is real, but I'm going to give some pushback on the "ssssh," as if this were some kind of conspiracy "they" don't want you to know about™. We live in an era of unprecedented and extremely dangerous anti-intellectualism, and framing this as a conspiracy is honestly really gross.

  • The entire reason the crisis became known is that scientists have had, and continue to have, the integrity to try to replicate results from existing studies. They want the science in their field to be sound, and they've been extremely vocal about this problem from the minute they found it. This wasn't some "whistleblower" situation.
  • Arguably, a major reason it took so long for this to come to the fore is that the government agencies which administer grants focus much less on replicating previous experiments and more on "new" work. Ironically, this would be much less of a problem if more funds were allocated for scientific research (i.e., if grants weren't so competitive that researchers feel compelled to publish "new" research lest their requests be denied). The "ssssh" rhetoric makes the voting public want the exact opposite of that, because it tells them their tax dollars are being funneled into some conspiratorial financial black hole.
  • This happens in large part because concrete, replicable research on humans is extremely hard, not because researchers lack integrity and just want to publish slop. In CS, I can control for basically everything on my computer and give you a mathematical proof that what I wrote works for every input, every time (see the first sketch after this list). In physics, I can give exact parameters for my simulation or literal schematics for my device. It is vastly more difficult to remove confounding variables from a psychological or sociological experiment, or even to document them properly.
  • This doesn't invalidate the soft sciences, as anti-intellectuals would have you believe. While some individual studies may not be replicable, this is exactly why meta-analyses and systematic reviews are so important in medicine, psychology, sociology, etc.: they pool the existing literature on a subject into a weighted "average" (see the second sketch below), so outliers get discovered, and the pooled result is far more likely to be correct or close to correct.
  • This is actively being worked on, and researchers are more aware of it than ever, which makes them more deliberate about how they design their experiments and document their methodologies.
  • One of the major reasons for replication failures isn't that the original studies were bunk within the populations they sampled. Rather, it's that once replication was attempted on people from diverse cultures, rather than the narrow range sampled in many (especially older) papers ("Western, educated, industrialized, rich, democratic"), the observed significance disappeared. As noted in the linked article, 50% is actually not half-bad given that fact. With far more extensive globalization today and greater awareness of this problem, it should become less and less severe.
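To make the CS comparison concrete, here's a minimal sketch (the "experiment" is a made-up toy, not anything from the linked article): in a computational study, the one source of variation can be pinned down completely, so a rerun is bit-for-bit identical.

```python
import random

def run_experiment(seed: int, trials: int = 1000) -> float:
    """Toy computational 'study': estimate the mean of a simulated measurement."""
    rng = random.Random(seed)  # the single source of randomness, fully controlled
    samples = [rng.gauss(0.5, 0.1) for _ in range(trials)]
    return sum(samples) / len(samples)

# Reruns with the same seed give the exact same number, down to the last bit.
# No study on human subjects gets anywhere near this level of control.
print(run_experiment(seed=42))
print(run_experiment(seed=42))  # guaranteed identical
```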
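And here's a minimal sketch of the "averaging" a fixed-effect meta-analysis does; the effect sizes and standard errors below are invented purely for illustration. Each study is weighted by 1/SE², so precise studies dominate and noisy outliers barely move the pooled estimate.

```python
import math

# (effect size, standard error) for five hypothetical studies of one question
studies = [
    (0.42, 0.10),
    (0.35, 0.08),
    (0.90, 0.40),  # small, noisy outlier study
    (0.38, 0.12),
    (0.41, 0.09),
]

# Fixed-effect (inverse-variance) pooling
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect = {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI)")
# The 0.90 outlier gets discovered by its distance from the pooled estimate,
# and its tiny weight (1/0.4**2) means it barely distorts the "average".
```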

EDIT: I just noticed that they also got their facts wrong in a subtle but meaningful way: the statistic is that 50% of the published papers aren't replicable, not that they aren't reproducible. Reproducibility is taking an existing dataset and using it to reach the same conclusions. For example, if I have a dataset of 500 pictures of tires and publish "Tires: Are they mostly round and black?" in Tireology, claiming based on the dataset that tires are usually round and black, then I would hope that Scientist B. couldn't take that same dataset of 500 tire pictures and conclude that they're usually square and blue.

Replication, on the other hand, would be Scientist B. collecting their own new dataset of, say, 800 tire pictures and attempting to reach my same findings. If they found from their dataset that tires are usually square and blue, but found from my dataset that they're usually round and black, then my results would be reproducible but not replicable. If Scientist B. got the same results as me from the new dataset, then my results would be replicable, but that wouldn't say anything about reproducibility. Here, a failure to replicate might come from my taking too narrow a sample of tires (I found them by camping out in a McDonald's parking lot in Norfolk, Nebraska over the course of a weekend), from the fact that I published my findings in 1985 and, 40 years later, tires really have changed, from some issue with how I took the pictures, etc.
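Since the two terms trip people up, here's a minimal sketch of the distinction (the tire data is fabricated for illustration, obviously):

```python
from statistics import mean

def published_analysis(roundness_scores: list[float]) -> bool:
    """My 'Tireology' analysis: are tires mostly round (mean roundness > 0.9)?"""
    return mean(roundness_scores) > 0.9

my_dataset = [0.95, 0.97, 0.93, 0.96]           # my McDonald's-parking-lot tires
scientist_b_dataset = [0.94, 0.96, 0.92, 0.95]  # B's freshly collected sample

# Reproducibility: B reruns my analysis on MY data and checks that it
# reaches my published conclusion.
reproduced = published_analysis(my_dataset)  # should be True, matching my paper

# Replicability: B runs the same analysis on NEW data and checks whether
# the conclusion still holds.
replicated = published_analysis(scientist_b_dataset) == published_analysis(my_dataset)

print(f"reproduced: {reproduced}, replicated: {replicated}")
```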