Using science to argue your case
VBnotbitter
Posts: 820 Member
Every day the MFP boards bring us a selection of threads from people claiming this food group is bad or that food group is wonderful. Every thread then fills with arguments for and against the OP's initial supposition. Many posters will claim a variety of evidence or anecdotes to support their stance, though rarely will that evidence be produced or linked. On the odd occasion there is a link, the source is often a YouTube video or a blog post. This irritates me, and I think you all can do better.
A quick search on Google or Google Scholar will bring you all sorts of information on how to critique medical and scientific claims and research, and how to use them properly. A lot of this is a bit wordy, so here are some highlights from a Wikipedia article, as it's very good at simplifying language – although I wouldn't necessarily use Wikipedia itself to support a scientific argument, as you will see. (Tip: I've referenced the article at the bottom so others can read it.)
1. “Press releases, blogs, newsletters, advocacy and self-help publications, and other sources contain a wide range of biomedical information ranging from factual to fraudulent, with a high percentage being of low quality”
2. “The popular press is generally not a reliable source for scientific and medical information in articles. Most medical news articles fail to discuss important issues such as evidence quality,[11] costs, and risks versus benefits,[12] and news articles too often convey wrong or misleading information about health care.[13] Articles in newspapers and popular magazines generally lack the context to judge experimental results. They tend to overemphasize the certainty of any result, for instance, presenting a new and experimental treatment as "the cure" for a disease or an every-day substance as "the cause" of a disease. Newspapers and magazines may also publish articles about scientific results before those results have been published in a peer reviewed journal or reproduced by other experimenters. Such articles may be based uncritically on a press release, which can be a biased source even when issued by an academic medical center.[14] News articles also tend neither to report adequately on the scientific methodology and the experimental error, nor to express risk in meaningful terms. For Wikipedia's purposes, articles in the popular press are generally considered independent, primary sources.”
3. In-vitro studies and animal models serve a central role in biomedical research, and are invaluable in elucidating mechanistic pathways and generating hypotheses. However, in vitro and animal-model findings do not translate consistently into clinical effects in human beings. Where in vitro and animal-model data are cited on Wikipedia, it should be clear to the reader that the data are pre-clinical, and the article text should avoid stating or implying that the reported findings necessarily hold true in humans. The level of support for a hypothesis should be evident to the reader.
Small-scale, single studies make for weak evidence and allow for easy cherry-picking of data.
4. Scientific journals are the best place to find primary source articles about experiments, including medical studies. Every rigorous scientific journal is peer reviewed. Be careful of material published in a journal that lacks peer review or that reports material in a different field. (See: Martin Rimm.) Be careful of material published in disreputable journals or disreputable fields.
http://en.wikipedia.org/wiki/Wikipedia:Identifying_reliable_sources_(medicine)
OK, so once you have found your peer reviewed study published in a respected scientific journal, how can you tell whether the study gives you enough evidence to argue your case on an MFP thread? Medical scientific evidence is graded from 1 to 5, with level 1 evidence being the most robust. That's not to say that level 1 evidence dating back 20 years is necessarily still relevant, as science is constantly updated based on what's observed in current data. Anyhow, what are these levels of evidence? I've copied a table from the National Health and Medical Research Council of Australia, but the terminology is standard throughout the world.
In simple terms, one way of looking at levels of evidence is as follows (the higher up this list the evidence sits, the better the quality; the further down, the greater the risk of bias):
I. Strong evidence from at least one systematic review of multiple well-designed randomised controlled trials.
II. Strong evidence from at least one properly designed randomised controlled trial of appropriate size.
III. Evidence from well-designed trials such as pseudo-randomised or non-randomised trials, cohort studies, time series or matched case-control studies.
IV. Evidence from well-designed non-experimental studies from more than one centre or research group or from case reports.
V. Opinions of respected authorities, based on clinical evidence, descriptive studies or reports of expert committees.
There is a lot more to it than that, of course, but all we are trying to do is win an argument on a social media page about diet, so that should be enough to be going on with.
One final tip – avoid logical fallacies such as straw man arguments, ad hominem attacks, appeals to authority and so on. Not only will you be shot down, but you will look like a pillock.
Replies
PubMed is a good place to search for peer-reviewed research articles: www.ncbi.nlm.nih.gov/pubmed . When you come across an article in the popular press, it usually states the last name and university of the scientist who did the study somewhere in the piece, and you can use this plus keywords to look up the actual research paper. That step is important, because the results are almost always oversimplified in the popular press, and sometimes the popular account even says the opposite.
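For the curious, that author-plus-keywords tip can be turned into an actual PubMed query via NCBI's public E-utilities `esearch` endpoint. This is only a sketch: the function name and the example search terms are made up for illustration, though the endpoint and its `db`/`term`/`retmode` parameters are standard E-utilities usage.

```python
# Sketch: build a PubMed E-utilities search URL from an author
# surname plus keywords pulled out of a news article.
from urllib.parse import urlencode

def pubmed_search_url(author, keywords):
    """Return an esearch URL querying db=pubmed for author + keywords."""
    term = f"{author}[Author] AND " + " AND ".join(keywords)
    params = {"db": "pubmed", "term": term, "retmode": "json"}
    return ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
            + urlencode(params))

url = pubmed_search_url("Smith", ["eggs", "cardiovascular"])
print(url)
```

Fetching that URL returns a JSON list of PubMed IDs, from which you can find the abstract and, ideally, the full paper.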
There is a lot of crap in scientific journals too, and peer reviewed ones at that.
Quacks have their own journals with very convincing looking 'studies'.
Most studies in peer reviewed journals can't ever be repeated.
Listen to your own body; it's your best adviser. Decide for yourself what's best for you!
Also, if people could just stop saying "Because science" as an answer, that would be great.
Nice earnest post. Let's face it, most people don't even read the articles they reference.
Lumping together cohort studies is an unacceptable error - retrospective studies and the conclusions they draw are in general crap. They are the lowest denominator of questionnaire driven pap sold as science. They shouldn't be lumped together in your classification with prospective or trial series studies.
Because scientist.
Nice thread.
Also, if people could just stop saying "Because science" as an answer, that would be great.
Especially when they don't quote anything from their "because science" :laugh:
There is a lot of crap in scientific journals too, and peer reviewed ones at that.
Quacks have their own journals with very convincing looking 'studies'.
Most studies in peer reviewed journals can't ever be repeated.
Yes. The public at large doesn't seem to grasp exactly how messed up the peer review process can be.
Nice earnest post. Let's face it, most people don't even read the articles they reference.
Lumping together cohort studies is an unacceptable error - retrospective studies and the conclusions they draw are in general crap. They are the lowest denominator of questionnaire driven pap sold as science. They shouldn't be lumped together in your classification with prospective or trial series studies.
Because scientist.
You mean quickly skimming an abstract doesn't give you the full understanding of the study? Man, there goes 99.948% of all MFP arguments.
Also, I want to friend you solely on the basis of this post.
There is a lot of crap in scientific journals too, and peer reviewed ones at that.
Quacks have their own journals with very convincing looking 'studies'.
Most studies in peer reviewed journals can't ever be repeated.
Yes. The public at large doesn't seem to grasp exactly how messed up the peer review process can be.
Yes, but at least if it's peer reviewed and confirmed independently, you have a much better idea of whether the information presented is valid. The larger problem is that most of the radical claims AREN'T SUPPORTED in any way, shape or form by any science. They're based on blog posts that agree with what a person wants to believe; the posters never consider that anything else could be right, or whether the science supports their argument... which is why you constantly see the common myths resurface here: don't eat late, gotta eat breakfast, more meals is better than fewer, "starvation mode", etc.
Most studies in peer reviewed journals can't ever be repeated.
False. If you don't believe in science, go ahead and go with personal anecdote or whatever. How else would you suggest that research findings be communicated?
People respond with "Just science" because it becomes a chore to dig up studies 50 times a day for ignorant people.
First port of call would be to Google Scholar the type of information you want; then find the correct studies, mainly repeatable ones (hopefully with double-blind controls); then check the sources and the reviews.
Many scientists would immediately disregard the OP as they used Wikipedia; many scholars say Wikipedia is unreliable.
So, as we see from the posts written, anything can be critiqued and anything can be disregarded. The main thing is to find what works and stick with it. That is what Ancel Keys did: he may have got it wrong several times, but in the end his hypothesis was proved. This is what science is all about – proving and disproving hypotheses and recreating studies to test them.
Yes, but at least if it's peer reviewed and confirmed independently, you have a much better idea of whether the information presented is valid. The larger problem is that most of the radical claims AREN'T SUPPORTED in any way, shape or form by any science. They're based on blog posts that agree with what a person wants to believe; the posters never consider that anything else could be right, or whether the science supports their argument... which is why you constantly see the common myths resurface here: don't eat late, gotta eat breakfast, more meals is better than fewer, "starvation mode", etc.
I see that about as frequently as I see people claiming things as absolute truth because there's been a couple studies done that agree with them. Or, rather, they've read the abstracts of a couple studies that agree with them, so they just roll with it, and immediately ignore any contradictory studies as garbage.
Or people basing conclusions on studies that marginally align with their thoughts, but don't go nearly as far as their conclusions would have you believe.
People respond with "Just science" because it becomes an effort to dig up studies 50 times a day for ignorant people.
Or because they've been hit enough times for making completely unsupportable and untruthful claims that they decide to revert back to the "well, if I say it frequently enough, people might begin to believe it" strategy.
People respond with "Just science" because it becomes an effort to dig up studies 50 times a day for ignorant people.
Or because they've been hit enough times for making completely unsupportable and untruthful claims that they decide to revert back to the, "well, if I say it frequently enough, people might begin to believe it" strategy.
Nope. Just what I said was sufficient.
So, as we see from the posts written, anything can be critiqued and anything can be disregarded. The main thing is to find what works and stick with it. That is what Ancel Keys did: he may have got it wrong several times, but in the end his hypothesis was proved. This is what science is all about – proving and disproving hypotheses and recreating studies to test them.
Ancel Keys is the guy that you use as an example of good science? Interesting.
Snark aside, I am a neuroscientist and I am genuinely curious why the general public does not trust science/peer review, and what you trust instead. It makes me sad to see that people basically think that my life's work is worthless.
Also, serious offer: if anyone on here wants help with reading and interpreting any particular research article, please PM me, I'm happy to help. Sometimes if you only read the abstract, that's not helpful. The most important sections are the methods and results.
Also, if people could just stop saying "Because science" as an answer, that would be great.
Also, a lot of studies are crap. Correlation does not equal causation, etc...
Snark aside, I am a neuroscientist and I am genuinely curious why the general public does not trust science/peer review, and what you trust instead. It makes me sad to see that people basically think that my life's work is worthless.
People do not trust diet science because we have had years of bad science, or bad conclusions, pushed on the public that did not work. That doesn't mean all science is bad.
Snark aside, I am a neuroscientist and I am genuinely curious why the general public does not trust science/peer review, and what you trust instead. It makes me sad to see that people basically think that my life's work is worthless.
People do not trust diet science because we have had years of bad science, or bad conclusions, pushed on the public that did not work. That doesn't mean all science is bad.
There is a difference between 'studies' and 'science' and 'good/bad science' - Jus' sayin.
Snark aside, I am a neuroscientist and I am genuinely curious why the general public does not trust science/peer review, and what you trust instead. It makes me sad to see that people basically think that my life's work is worthless.
I am in medical R&D and am intimately familiar with the peer review process as well, and I don't think criticism of it invalidates our work. There are legitimate concerns, especially with regard to an actual operational definition of peer review, that sometimes are glossed over:
1. Ask someone, especially a layperson, what peer review means, and you're likely to get a somewhat vague answer about other scientists looking over your paper and certifying it as "good science".
2. Try to define "peer" -- is it someone performing similar science to me? Similar field? An expert in experimental methodologies? The guy that the editor of Science played golf with that one time?
3. What constitutes a review? A quick read and nothing jumping out at me? An analysis of my experiment, data collection and analysis, conclusion? At what depth? The latter definition is not regularly employed.
People treat peer review as if there are a bunch of well respected scientists sitting around a table waiting for a paper to hit it, then poring over it for days and finally giving it a thumbs-up or thumbs-down. This is rarely, if ever, what happens. If the general goal of peer review is to increase the quality of published work and identify any flaws in the paper...well, it might achieve those aims slightly better than no review process at all, but it does so at the risk of fostering the belief that if something is "peer reviewed", it must be good and methodologically sound. As a scientist, you are no doubt aware that there are absolute truckloads of papers published every year of dubious quality.
The bigger concern, though, is how the public interprets the meaning of "peer review", as I mentioned above. The belief that "peer reviewed" is synonymous with "correct" is particularly damaging from the perspective of public policy. Proper interpretation of a paper, in my opinion, should view the findings narrowly. People, and researchers, often lose sight of this in favor of more grandiose conclusions - and I think policy has suffered because of it.
Most studies in peer reviewed journals can't ever be repeated.
False. If you don't believe in science, go ahead and go with personal anecdote or whatever. How else would you suggest that research findings be communicated?
I am a scientist, so obviously I'd consider myself more pro-science than most, and any personal anecdotes I could come up with would be to do with my own experience of the peer review process. Nowhere did I say that there is a better alternative, and certainly peer review is far better than linking to a random blog. I am simply trying to point out that reading peer reviewed journals is not enough - there is still plenty to sift through even within these, and people need to be able to spot the warning signs that suggest a study is not up to scratch, which applies to a LOT more than you might think.
There is a lot of crap in scientific journals too, and peer reviewed ones at that.
Quacks have their own journals with very convincing looking 'studies'.
Most studies in peer reviewed journals can't ever be repeated.
Yes. The public at large doesn't seem to grasp exactly how messed up the peer review process can be.
Yes, but at least if it's peer reviewed and confirmed independently, you have a much better idea of whether the information presented is valid. The larger problem is that most of the radical claims AREN'T SUPPORTED in any way, shape or form by any science. They're based on blog posts that agree with what a person wants to believe; the posters never consider that anything else could be right, or whether the science supports their argument... which is why you constantly see the common myths resurface here: don't eat late, gotta eat breakfast, more meals is better than fewer, "starvation mode", etc.
Totally agree!
People do not trust diet science because we have had years of bad science, or bad conclusions, pushed on the public that did not work. That doesn't mean all science is bad.
It's not just diet science that people don't trust. From my meanderings across the Internet, it's basically any science... though some of the big "untrusted" fields are diet, climate, and health (not diet related).
It also seems that, on a personal level, the degree of distrust varies in proportion to the degree of investment someone has in an idea.
One non-diet example of this for me is Dark Matter and Dark Energy. To me, it sounds like a cop-out... "Hmmm, the mass numbers are wrong on a galactic scale, but if we change them by adding in 'dark matter' that does nothing but add mass, then the numbers work! Therefore dark matter! Now we just have to find some."* However, it is the general consensus, and nobody has come up with a better explanation, so I will accept it for now. Were I an astrophysicist, I would probably be trying to figure out if there is an alternate explanation that makes more sense to me (but I may also have come to understand the equations and why dark matter is the best option).
*I admit that is a way-over-simplified view of the whole dark matter argument.
Some areas of study have very strong scientific research: they use the scientific method and actually rerun experiments to see if conclusions can be reproduced... Diet has not really been like that. It's been, what, 50 years of (1) throw out butter and eat margarine and trans fats instead, followed by "whoops"; (2) eggs are bad, followed by eggs are good; etc.
People do not trust diet science because we have had years of bad science, or bad conclusions, pushed on the public that did not work. That doesn't mean all science is bad.
One of the things that should be considered -- and I really glossed over it -- is that lines should be drawn between the conclusions drawn from the data and the policy conclusions built on top of those. There can be errors at any point in the equation. For a very simple example:
Let's say that I think egg intake is the cause of a disease state, CVD.
I design a cohort study that follows along a group of similar people for a period of time, and collect data on their eating habits and health outcomes.
I analyze this data, controlling for certain factors, and it turns out that people who ate more than 5 eggs per day had a 10% increased risk of developing CVD.
As a conclusion, I recommend that, in order to protect against CVD, people should lower egg intake below five eggs per day.
Seems reasonable, right? I think so. The question, though -- is what does this actually mean?
If you're a newspaper, it likely means "EGGS: THE SILENT KILLER".
If you're a normal person, it might mean, "Eggs can cause CVD!"
In my opinion, how you should approach this study is as follows:
1. Was the study done appropriate for the hypothesis that I was testing? Any inherent biases? Sample size issue? Randomization issues? Any other issues in the study design methodologies?
2. Are my data analysis methodologies sound and in keeping with standard acceptable practices?
3. What gaps exist in the conclusion?
Your takeaway from the study should be, "huh, we should do some more science to start looking into what exactly the mechanisms are that might cause eggs to correlate to a higher incidence of CVD, because this correlation bears some more studying."
What the public policy people do is hold a congressional hearing in which they recommend that the average American eat 3 eggs a day or less -- that is, unless the egg lobby has more money than whatever lobby is pushing them to issue the recommendation in the first place. Then people come on MFP, make a post about how "eggs will kill you," link the study, and spike the ball with "science!"
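For what it's worth, a claim like the hypothetical "10% increased risk" in the egg example above is just a ratio of incidences between two groups. A minimal sketch, with cohort counts invented purely to illustrate the arithmetic:

```python
# Sketch: the relative risk behind a "10% increased risk" headline.
# All counts below are made up for illustration only.

def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Incidence in the exposed group divided by incidence in the unexposed group."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

# e.g. 110 CVD cases among 1,000 heavy egg eaters vs 100 among 1,000 others
rr = relative_risk(110, 1000, 100, 1000)
print(f"RR = {rr:.2f}")  # RR = 1.10, i.e. a "10% increased risk"
```

Note that this says nothing about absolute risk (here, one extra case per hundred people) or about confounders, which is exactly why the narrow reading of such a finding matters.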
Science is not trusted at all, which isn't that surprising when scientists have been very poor in communicating their subject to the public, and the media haven't bothered to try and report it accurately. Science comms often boils down to 'Clever scientist man said this so you should believe him'.
Surveys of the general public suggest that the most trusted professionals are doctors, and the least trusted are journalists. Yet still we get things like the MMR scare, and conspiracies abound regarding Big Pharma etc. It's all very strange.
I'd like to point out something that is definitely true for my area of expertise (physics) but perhaps in bio-medical sciences that's different, so please correct me if that's not the case. The frontline of research deals with things that are not well understood by scientists and there are conflicting theories and experiments and an ongoing scientific discussion about what's going on. These papers, even though they are important and sometimes ground breaking for the scientific world, are basically useless for general public. Well, the public has no interest for physics research, but high-tech industry is interested for sure. For example, a large part of today's research (in solid state physics) is related to / motivated by the physical realization of quantum computers. There's a huge number of papers published everyday, but there is really no point for people in industry following all these. Once things get clarified a bit, once there are some promising candidates then, yes, for sure. But until then the discussion stays within the scientific circles and for good reason: things are so unclear, there's no practical use of the work done up to now. I'd expect the same would be more or less true for other disciplines. The frontline of research is interesting because of the progress being made, but for practical uses, one needs to go one step back to knowledge that is well founded. If there is an ongoing discussion about an issue, you can't really pick one paper and draw conclusions out of it. You need to understand and follow the work done by the whole community and even then, that doesn't mean you can reach any conclusion.
(Edited for typos)
If you're a newspaper, it likely means "EGGS: THE SILENT KILLER".
If you're a normal person, it might mean, "Eggs can cause CVD!"
I'd like to point out something that is definitely true for my area of expertise (physics) but perhaps in bio-medical sciences that's different, so please correct me if that's not the case. The frontline of research deals with things that are not well understood by scientists and there are conflicting theories and experiments and an ongoing scientific discussion about what's going on. These papers, even though they are important and sometimes ground breaking for the scientific world, are basically useless for general public. Well, the public has no interest for physics research, but high-tech industry is interested for sure. For example, a large part of today's research (in solid state physics) is related to / motivated by the physical realization of quantum computers. There's a huge number of papers published everyday, but there is really no point for people in industry following all these. Once things get clarified a bit, once there are some promising candidates then, yes, for sure. But until then the discussion stays within the scientific circles and for good reason: things are so unclear, there's no practical use of the work done up to now. I'd expect the same would be more or less true for other disciplines. The frontline of research is interesting because of the progress being made, but for practical uses, one needs to go one step back to knowledge that is well founded. If there is an ongoing discussion about an issue, you can't really pick one paper and make conclusions out of it. You need to understand and follow the work done by the whole community and even then, that doesn't mean you can reach any conclusion.
I agree with this completely. I think one benefit of working in hard science research is that there's a certain amount of technical abstraction which would exclude the majority of the public from being able to properly digest the information in the first place. When one works in a "soft-science" realm, such as much of nutritional research and public policy research, that divide is not as great. It reads in a way that is approachable for a layperson, which can be problematic.
Specifically in regards to the state of nutritional research, I think the science has been generally poor over the last 50 or 60 years, so it's difficult to even step back in that field.