That wouldn’t be a problem at all if we had better science journalism. Every psychologist knows that “a study showed” means nothing on its own; consensus across several independent replications is how we approximate the truth.
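To make that concrete, here's a toy simulation (all numbers made up, nothing from any real literature) of why pooling replications beats any single study: twenty small studies of the same true effect scatter all over the place individually, but their pooled estimate lands close to the truth.

```python
import numpy as np

# Toy model: a true effect of d = 0.3, studied 20 times with small
# samples (n = 30 per group). Purely illustrative numbers.
rng = np.random.default_rng(42)
true_effect, n, n_studies = 0.3, 30, 20

estimates = []
for _ in range(n_studies):
    treatment = rng.normal(true_effect, 1.0, n)
    control = rng.normal(0.0, 1.0, n)
    estimates.append(treatment.mean() - control.mean())
estimates = np.array(estimates)

# Any single study can land far from 0.3 (some will even come out
# negative), which is why "a study showed" carries so little info ...
print(f"single studies: {estimates.min():.2f} to {estimates.max():.2f}")
# ... while the pooled mean of the replications converges on the truth.
print(f"pooled estimate: {estimates.mean():.2f} (true: {true_effect})")
```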
The psychological methodology is absolutely fine as long as you know its limitations and how to apply it correctly. In my experience that’s not a problem within the field. But since a lot of people think psychology = common sense, and most people think they excel at that, a lot of laypeople overconfidently interpret scientific results, which leads to a ton of errors.
The replication crisis is mainly a problem of our publication system (the journals, how impact factors are calculated, how peer review is done) and the economic reality of academia (namely that your livelihood depends on your publication record!), not of the methodology. The methods are perfectly usable for valid replication studies - including falsification of the bs results that currently get hyped en masse in pop science magazines.
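And that falsification really is cheap once someone actually runs the replication. A hedged sketch (hypothetical sample sizes, no real data): a “significant” finding fished out of pure noise via publication bias, followed by one adequately powered replication of the same protocol whose estimate collapses back toward zero.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Simulate publication bias: keep running tiny null studies (the true
# effect is exactly zero) until one happens to cross p < .05 -- the
# kind of fluke that gets published and hyped.
while True:
    a, b = rng.normal(0, 1, 20), rng.normal(0, 1, 20)
    p_orig = stats.ttest_ind(a, b).pvalue
    if p_orig < 0.05:
        break
print(f"original study (n=20/group): p = {p_orig:.3f}")

# One well-powered replication of the same protocol. With n = 500 per
# group the effect estimate will sit near zero, exposing the original
# "finding" as noise.
a_rep, b_rep = rng.normal(0, 1, 500), rng.normal(0, 1, 500)
diff = a_rep.mean() - b_rep.mean()
print(f"replication (n=500/group): effect = {diff:.3f}, "
      f"p = {stats.ttest_ind(a_rep, b_rep).pvalue:.3f}")
```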