this post was submitted on 13 Oct 2023

BecomeMe


Social Experiment. Become Me. What I see, you see.

[–] [email protected] 11 points 1 year ago (1 children)

I worked in medical research for a while. I was just a lowly technical assistant with a bachelor's degree, not a doctor or PhD.

But wow, it was eye opening. In the hunt for grant money, folks with letters after their names will massage that data and "reframe" the questions in myriad ways chasing desired outcomes.

Fortunately, peer review tears some of that bullshit apart. So, science works when done correctly. But, the replication crisis looms large, and I am skeptical of a lot of research papers and science journalism to this day.

[–] [email protected] 1 points 1 year ago

What sort of solution would work for this though?

Can we merit-restrict access to academia until the need for and availability of funding match?

Maybe have a separate verifying authority for experimental observations, one that needs to confirm experimental data before inferences can be drawn from it?

I'm surprised that even doctors would need to depend on such a flawed system for funding. I suppose when the stakes are high enough, everyone starts loosening up on principles, doctor or no doctor.

[–] [email protected] 9 points 1 year ago (1 children)

Here's the paper: https://doi.org/10.32942/X2GG62 Opening it and seeing 2 pages of authors is pretty weird.

The issue here isn't getting different results from different analytical methods: they're different methods. If I go to the store by car, I get a different trip than when I walk. That's completely expected.

My big question here is why some teams pick methods that result in very obvious outliers (apart from them not knowing they're outliers, of course). Why did they go to the store by pogo stick? Is it because these teams are simply bad at their job? Do they have a special reason why their method is the only accurate one in this case? Do they always use that method and simply never consider alternatives? The question "Why did you analyse your data with [method X] and not [method Y]?" is one of the worst questions you can get during peer review or a thesis defence, and it's also one of the most important ones. I'd love to know why the outlier teams made that choice.

I've got a gut feeling there's a lot of "We didn't care and this was fast" involved, simply because I see that so much in practice.
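The "car vs. walking" point above can be sketched with a toy example (hypothetical numbers, nothing to do with the actual paper): two perfectly defensible summary choices applied to the same skewed sample give noticeably different answers, and a team that never questions its default choice won't even notice.

```python
# Toy illustration of analytical flexibility: same data, two
# reasonable estimators, two different "effect sizes".
import statistics

# Made-up skewed sample with one extreme observation.
sample = [0.8, 1.1, 0.9, 1.2, 1.0, 0.9, 1.1, 9.5]

mean_estimate = statistics.mean(sample)      # pulled up by the outlier
median_estimate = statistics.median(sample)  # robust to the outlier

print(f"mean:   {mean_estimate:.2f}")   # → 2.06
print(f"median: {median_estimate:.2f}") # → 1.05
```

Neither estimator is "wrong"; the problem is reporting one without ever asking why you didn't use the other.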

[–] [email protected] 2 points 1 year ago

"We didn't get the results we wanted, so we tried another method." is the most common justification, but rarely admitted.

[–] [email protected] 3 points 1 year ago

Yeah this is just going to end up with a shrug and people calling ecology 'not a real science'.