How to Understand Scientific Papers/Judge Quality, Laypeople

  • #1
lesah
I'd like to learn how I can better understand research papers/scientific studies, judge the quality, the methods used, find out who funded it, etc. I'm a layperson.

In researching various topics, especially controversial subjects, I try to find the actual scientific papers behind everything. Everyone and every news article or blog usually says "studies show". I would like to know how to tell if a study is worth something, or if it's biased.
Certainly, knowing something about the subject matter may help, but that's usually not possible, seeing as there is so much out there.

Any tips would be very much appreciated. And I'll update the thread if I find more online.

Thank you in advance if you're able to chime in.
 
  • #2
I don't see how you could judge the paper if you don't understand the subject matter.
 
  • #3
I mean more along the lines of were the methods the researchers used sound? Maybe an example would be better.

There's all this talk that "vaccines cause autism." When I look up the research, all of the studies seem to show that there is no link between them. Yet, the issue doesn't go away.
Those claiming vaccines do cause autism say there are lots of studies showing that there is a link. How can I tell if it's a poor quality study, etc.?
Here is one I'm looking at: http://www.mdpi.com/1099-4300/14/11/2227

If every major health organization keeps saying research shows "no link", how can the studies that say "vaccines DO cause autism" be valid/correct?

Thanks.
 
  • #4
Usually, publication in a respected peer-reviewed journal is enough. However, there are exceptions; for the example you cited specifically, I recommend you take a look at this:

http://scholarlyoa.com/2014/02/18/chinese-publishner-mdpi-added-to-list-of-questionable-publishers/
https://en.wikipedia.org/wiki/MDPI

News outlets, blogs, etc. are usually not very reliable for checking these things. You basically have to be friends with an expert, or maybe e-mail a professor on the subject.
 
  • #5
micromass said:
I don't see how you could judge the paper if you don't understand the subject matter.
I think you can - not in the same way as it is judged during the journal submission/peer review process, for which you do need expertise on the subject matter, but in the following sense:
- has this paper passed the peer review in a reputable journal? - indicating there are no glaring errors
- has it been cited by other papers? - indicating it's relevant, it's been around for a while, and has been acknowledged as valuable research by the scientific community (unless the citations are all in critical papers ;)

The first step usually removes 99% of dubious 'research' cited on blogs and crackpot sites. Having few citations doesn't disqualify a paper; it just suggests that the topic discussed is obscure, hasn't been explored much since that paper, or is of little value.

Finally, when you read the paper as a layman, read the conclusions (you're unlikely to be qualified to judge the methods, and besides, that's what the peer review does for you). This is not to judge the paper itself, but rather to see whether whoever used the paper as a source for some claim has fudged or intentionally misrepresented what the paper actually says.

If you get through those steps, and find a peer-reviewed paper with some citations apparently concluding that white is black, then I agree with Hector that that's when you need to ask an expert in the field.
 
  • #6
lesah said:
I mean more along the lines of were the methods the researchers used sound? Maybe an example would be better.

There's all this talk that "vaccines cause autism." When I look up the research, all of the studies seem to show that there is no link between them. Yet, the issue doesn't go away.
Those claiming vaccines do cause autism say there are lots of studies showing that there is a link. How can I tell if it's a poor quality study, etc.?
Here is one I'm looking at: http://www.mdpi.com/1099-4300/14/11/2227

If every major health organization keeps saying research shows "no link", how can the studies that say "vaccines DO cause autism" be valid/correct?

Thanks.

There are two separate aspects here.

1. If it is a NEW, research-front result, then you have to be patient and WAIT, sometimes several years, for things to be verified and confirmed. You need to remember that publication doesn't automatically imply validity. Those are two separate issues. Scientific works are published so that others in the field can scrutinize them and then, over time, after being reproduced independently by others, confirm or refute them. It is part of the process. It isn't the end, but merely a means to an end.

2. You have to wait for a consensus, and in cases such as the link between autism and vaccination, or power lines and cancer, you have to rely on expert opinions, such as when the studies are produced by National Research Council/Academy of Sciences, etc.

So no, unless you have a specific expertise in certain areas, you will probably not be able to judge the validity or degree of certainty of an academic paper.

Zz.
 
  • #7
I think more of what the OP is asking though is how can an intelligent layperson know what to believe? Someone outside the field is not going to be able to judge the quality of a given paper, but there are things that a person can do to enhance their own critical thought.

  1. Consider the source of the material. As pointed out above, peer-reviewed journals are generally the most credible sources. But even among academic journals there are higher and lower quality ones. You can look to journal rankings to get a rough idea of the quality/credibility of the source. And sometimes even the best journals can publish errors, or even material that is later found to be intentionally misleading (as was the case for the vaccine-autism fiasco).
    Higher quality journals will require that authors disclose certain conflict-of-interest information, such as whether the authors have a financial interest in the outcome of the work. Authors will also be required to disclose any sources of funding.*
  2. Consider the objectives of the source. Why has someone reported a given fact? Are they trying to sell something? Are they feeding off of the attention they are getting?
  3. Consider what is actually said by the source rather than what is implied or what is relayed in the media. If you've ever played the telephone game you know how messages can be distorted with each degree of separation from the source. Scientists are usually pretty careful about what they say. So in academic articles you'll see measured phrasing like "the evidence presented is consistent with..." whereas in the mass media that can be translated into "scientists have shown that..." Scientists will also discuss the limitations of their studies (which are often left out of media reports), and these can be critically important.
  4. Learn some basic statistics. This will help you to appreciate what is meant by the "power" of the study or how significant a result can be.
  5. Do independent, credible sources agree? How many of them are there? If all of your information can essentially be traced back to a single source, that holds a lot less weight than lots of independent groups reporting the same thing.
    Along these lines it's important to look for published reviews as well. Journals will, from time to time, get experts to critically review certain topics. These will be summaries of research in a certain area and they will help to establish consensus.
  6. Learn to recognize flags for false claims. Such flags include: argument from authority, argument from mass consensus, non-specific language, testimonials, etc. Remember: the plural of anecdote is not data.

* One tricky thing to look out for, even when sources of funding are disclosed, is that in some cases the true source isn't obvious. For example, research from the Global Energy Balance Network has recently come into the media spotlight for its financial support from Coca-Cola. See: http://www.washingtonpost.com/news/...-with-a-biased-message-nutrition-experts-say/
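To make point 4 above concrete, here is a minimal sketch of why sample size matters, using only Python's standard library. It hand-rolls a two-proportion z-test (a common, simple significance test); the counts are invented purely for illustration. The same observed 10-percentage-point difference is "not significant" in a small study but highly significant in a large one:

```python
import math

def two_proportion_p_value(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two proportions.

    x1 successes out of n1 in group 1, x2 out of n2 in group 2.
    Returns an approximate two-sided p-value via the normal CDF.
    """
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)  # pooled proportion under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # standard normal CDF expressed with math.erf
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Same 30% vs 20% difference, two very different sample sizes:
small = two_proportion_p_value(6, 20, 4, 20)        # n = 20 per group
large = two_proportion_p_value(600, 2000, 400, 2000)  # n = 2000 per group
```

With 20 people per group the p-value is far above the usual 0.05 threshold, so the study simply lacks the power to detect the difference; with 2000 per group the same observed rates give an extremely small p-value. This is the kind of thing a "significant result" headline never tells you.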
 
  • #8
Hector Mata,
Thanks for those links. I bookmarked the scholarlyoa site.
ZapperZ said:
publication doesn't automatically imply validity.
Yeah, that sums up a main conflict I was having: thinking that publication in a scientific journal (or what appears to be a scientific journal) means that there is credibility to the study or claim.

This leads into a broader concern of mine, which is combating stupidity. The vaccines/autism is a good example. No matter how many studies come back and no, "vaccines do not cause autism" people just ignore it and insist that they do.
At the least, if I can know enough to spot poor quality or totally bogus "research", that will help.
Oh, and I like your quote, Choppy, that "the plural of anecdote is not data."
 
  • #9
lesah said:
No matter how many studies come back and no, "vaccines do not cause autism" people just ignore it and insist that they do.
I think there is a "not" at the wrong place or a comma missing.

We have a list of credible journals. Not everything published there is right, but those journals certainly do proper peer-review. This paper (and crackpot stuff) doesn't get posted there.
 
  • #10
lesah:

The replies by ZapperZ and Choppy are very informative. I would just summarize by saying that scientists are usually very conservative about their claims, or the impact of their findings. They constantly hedge their bets and qualify their statements ("it is probable...", "more research is needed", "that's not exactly my area of expertise..." etc) . Scientific journalism, however, is not subject to the same checks and balances as the actual science. Their mission is first to get people to click on a headline; accuracy and all the details and caveats of the actual science are usually relegated to the last paragraphs of the article (and most people don't make it past the opening paragraph), if they're even mentioned at all.

Also, science correspondents rarely report on other less glamorous, but equally important events in the scientific process: 1) reception/scrutiny by the wider scientific community after a result is published (it may take months or years, as ZapperZ mentioned) and 2) retraction of faulty publications (this is especially relevant in the vaccine example you bring up), or publication of follow-up studies criticizing or even refuting the original. In the case of the autism-vaccine issue, by the time the scientific community had evaluated and debunked the fraudulent study (and it eventually got retracted by the journal, too) the damage was already done, and has been with us ever since.
 
  • #11
lesah said:
This leads into a broader concern of mine, which is combating stupidity. The vaccines/autism is a good example. No matter how many studies come back and no, "vaccines do not cause autism" people just ignore it and insist that they do.

I'm not sure that "stupidity" per se is the issue. There are lots of very intelligent people out there who adhere to beliefs despite either a severe lack of evidence, or despite a very convincing amount of evidence supporting a contrary point of view.

I think the issue is more rooted in how people really think.

Take, for example, the phenomenon of confirmation bias. While many of us would like to think that we're objective and rational (and we can be at times, particularly when we put on our "scientist" hats), there is a lot of work suggesting we "naturally" tend to draw conclusions early and then seek information that supports those conclusions.

Or the fact that most people do not have a very good intuitive understanding of statistics. Or consider situations where social pressure may override conclusions that one may have (correctly) arrived at in their absence.

And then there's just general ignorance. Being ignorant doesn't mean someone is stupid. It just means they aren't aware of something, and in many cases would likely make different choices if they were aware of certain facts.

The good news is that unlike "stupidity" the above issues are correctable to varying degrees. The more that people learn about basic concepts in science, the more they become educated and aware of their own biases and patterns of thought that can lead to false conclusions, the more they can compensate for these.
 
  • #12
Thanks for the additional info, Choppy, Hector Mata, and mfb.
That's true, a lot of it is understanding logical fallacies and being aware of them. I agree that just because someone hasn't come across certain information, that doesn't mean they're stupid per se; I feel people are stupid more because of their willful and belligerent ignorance. Many people don't even try to understand, or listen to a different idea.
We (myself included, more than I like to admit) allow ourselves to be hijacked by our emotions, and think that if we just repeat the same thing with more intensity and incredulity, then it will somehow magically become true.

Or to quote Rory Miller, "People mistake intensity for truth."
 
  • #13
It's so hard to read a paper, interpret the data, and then judge how solid the conclusions are while you are reading it. It's so easy to just assume it is true because they say so, and very hard to consider the merit of alternatives and of things they don't mention but must have thought about.

There's a big gap between crackpot theories and conclusions that look right but may be wrong. Or conclusions that in the end turn out to be right, but aren't sufficiently borne out by the evidence. So did that scientist make a mistake? Or did she/he just trust their really good instincts?
 
  • #14
Almeisan said:
It's so hard to read a paper, interpret the data, and then judge how solid the conclusions are while you are reading it. It's so easy to just assume it is true because they say so, and very hard to consider the merit of alternatives and of things they don't mention but must have thought about.

There's a big gap between crackpot theories and conclusions that look right but may be wrong. Or conclusions that in the end turn out to be right, but aren't sufficiently borne out by the evidence. So did that scientist make a mistake? Or did she/he just trust their really good instincts?

This is exactly why I said that you have to WAIT and give it months to years as more and more responses to that paper come in.

There is such a thing called "citation index". It indexes ALL citations made to that paper. But this will only accumulate after a while, because it requires other papers citing that original paper to be published. You can then look up those citations and see if those follow-up papers agree or disagree with the original paper. And if you see most, if not all, of the citations were papers by one or more of the co-authors, and very few, if any, written by other independent authors, then you can smell a rat.

If you don't have access to extensive citation indexes, just use Google Scholar and look up the link that lists the papers citing that paper.
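As a small aside, part of this legwork can be done programmatically: Crossref's public REST API reports, for any work with a DOI, how many indexed works cite it (the `is-referenced-by-count` field). A minimal sketch using only Python's standard library; the DOI in the comment is a placeholder, not a real reference:

```python
import json
import urllib.request

CROSSREF_API = "https://api.crossref.org/works/"

def crossref_url(doi):
    """Build the Crossref REST API URL for a given DOI."""
    return CROSSREF_API + doi

def citation_count(record):
    """Extract the citation count from a Crossref work record (a parsed JSON dict)."""
    return record["message"]["is-referenced-by-count"]

def fetch_citation_count(doi):
    """Query Crossref for how many indexed works cite this DOI (needs network access)."""
    with urllib.request.urlopen(crossref_url(doi)) as resp:
        return citation_count(json.load(resp))

# Example usage (hypothetical DOI):
# fetch_citation_count("10.1000/xyz123")
```

Note that a raw count only tells you the paper has been noticed; as ZapperZ says, you still have to read the citing papers to see whether they confirm, refute, or are just self-citations.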

But this all requires EFFORT. You simply can't sit back and expect to be spoon-fed and told what is valid and what isn't. You need to do your own legwork to verify if you're being fed valid information, or a bunch of bull-crap. And this applies especially now with regards to US politics.

Zz.
 
  • #15
ZapperZ said:
But this all requires EFFORT. You simply can't sit back and expect to be spoon-fed and told what is valid and what isn't. You need to do your own legwork to verify if you're being fed valid information, or a bunch of bull-crap. And this applies especially now with regards to US politics.
Effort many want to avoid. Especially in the US (that might be a biased view from the other side of the Atlantic, but the US keeps delivering corresponding news).
 
  • #16
It's a good idea to check the credentials and publication history of the author. A poorly credentialed author, or one with few or no prior peer-reviewed papers, deserves suspicion. Citations to papers by controversial authors [kooks] should also be a red flag. Unless you are an accomplished mathematician or have expertise in the subject, it is difficult to judge the merits of a paper. Peer reviews are typically conducted by people with these qualities, which lends credibility [or at least plausibility] to the paper.
 

1. What makes a scientific paper reliable?

A reliable scientific paper is one that has been published in a reputable journal after being peer-reviewed by experts in the field. This means that the research has been critically evaluated for its methodology, data, and conclusions. Additionally, a reliable paper will provide clear and transparent information about its sources of funding, potential conflicts of interest, and limitations of the study.

2. How can I determine the quality of a scientific paper?

The quality of a scientific paper can be determined by assessing the credibility of the authors, the methodology used, and the validity and reliability of the results. It is important to look for papers written by reputable researchers or published in well-established journals. Additionally, examining the study design, sample size, and statistical analysis can give insight into the quality of the research. Reading reviews and critiques of the paper by other experts in the field can also help in determining its quality.

3. What are the key components of a scientific paper?

A scientific paper typically includes an abstract, introduction, methods, results, discussion, and conclusion. The abstract provides a brief overview of the study, while the introduction explains the background and purpose of the research. The methods section describes the procedures and techniques used to conduct the study, while the results present the data and findings. The discussion section interprets the results and relates them back to the research question, and the conclusion summarizes the main points and implications of the study.

4. How can I understand the terminology used in scientific papers?

Understanding scientific terminology can be challenging for laypeople. It is helpful to look up unfamiliar terms in a scientific dictionary or online resource. Additionally, reading related studies or reviews can provide context for the terminology. It is also important to ask questions and seek clarification from experts in the field.

5. Why is it important for laypeople to read and understand scientific papers?

Reading and understanding scientific papers can help laypeople stay informed about current research and advancements in various fields. It also allows for critical evaluation of information and helps individuals make informed decisions about their health, environment, and society. Additionally, understanding scientific papers can improve critical thinking and analytical skills, which are valuable in many aspects of life.
