New Scientific Study Shows….

This just in. A new scientific study shows that no matter what you say after saying “New scientific study shows,” many people will believe it.

With all the claims and counter-claims made about what a new scientific study shows, it is no wonder that a lot of people are confused. I will attempt to clarify how some of the confusion comes about, and hopefully help guide people toward becoming better informed and making better decisions about science in the news.

First of all, a scientific study is one where researchers look at one item and compare it to another item. They then derive a statistical estimate of the degree of correlation between the two factors and make a claim based on that. They attempt to isolate the variables under study from other possible variables; this is known as controlling for the variables. It can be done in a variety of ways, some related to how the population is selected and divided into groups; some of these methods can be statistical, to account for differences between the two groups.
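
As a rough sketch of what such a correlation estimate looks like in practice (entirely made-up data and numbers, not from any real study), here is a minimal Python example:

```python
# Hypothetical sketch: estimating the correlation between two measured factors.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
hours_exercised = rng.normal(5, 2, size=200)  # made-up predictor
# Made-up outcome loosely related to the predictor, plus noise.
fitness_score = 50 + 3 * hours_exercised + rng.normal(0, 10, size=200)

r, p_value = stats.pearsonr(hours_exercised, fitness_score)
print(f"correlation r = {r:.2f}, p-value = {p_value:.3g}")
```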

For example, in testing a new drug, the researchers will divide a group of people into two groups, randomly assigning the individuals but trying to maintain a balance of age, gender, ethnic background, etc., so that the two groups are indistinguishable from each other. Then they will administer the test drug to one population and a placebo to the other. Neither the participants nor the researchers will know which group is receiving the real drug.

They will monitor the groups and compare the outcomes. This is known as a double-blind trial. It is the gold standard of scientific studies, and with a large enough group, a long enough study period, and faithful and accurate observation of enough outcomes, a reliable claim can be made regarding the suitability of the new drug for its intended purposes, as well as a good idea of possible side effects, both positive and negative.
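
Here is a minimal sketch of that two-group comparison (simulated, with made-up effect sizes; assuming SciPy is available):

```python
# Hypothetical sketch of a randomized two-group comparison like the one above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500  # participants per group

# Random assignment: neither group is self-selected.
drug_group = rng.normal(loc=0.8, scale=1.0, size=n)     # simulated improvement on drug
placebo_group = rng.normal(loc=0.5, scale=1.0, size=n)  # simulated improvement on placebo

# Compare the outcomes of the two groups.
t_stat, p_value = stats.ttest_ind(drug_group, placebo_group)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # a small p suggests a real difference
```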

However, this sort of study is very expensive and time-consuming. In a lot of cases it is impossible to do because of ethical concerns about human experimentation. Most researchers are pressured into a publish-or-perish cycle where they need to publish as many articles as possible because their incomes depend on it, so they tend to pick study methods that are less time-consuming and less expensive.

So what happens in a lot of studies is that a small sample size is chosen. The groups are not randomly assigned or balanced. The trial is not double-blind; sometimes the groups are self-chosen, and the test procedure is not administered by the researchers but self-administered. The worst case is where the participants self-identify for the study, pick the group that they are in, and supply data based on their recall of events that was never observed by the researchers. An example would be asking people how much they exercised every day for the last year, and then using that to demonstrate that exercise influenced an outcome.

While this sort of study has merit, mainly in identifying areas for further research, it is not a dependable or reliable way to make scientific claims.

When I was in grad school taking a research methods course, one of our first assignments was to go to the peer-reviewed literature and find at least five studies with significant errors in their statistical analysis. I thought that would be a time-consuming task; these were significant scientific studies, peer reviewed prior to publication.

I went to the library, took out one of the volumes of research studies, and started reading. The first five articles I read all had major mistakes. Since one type of error was repeated, I looked at the sixth article in the volume and found a different error. The whole task took less than an hour.


Confusion of causation and correlation.

Incorrect discussion of what the statistical indicators mean: stuff a second-year statistics student should know, but missed in peer-reviewed studies by people with PhDs. Just because two indicators move together, you cannot say statistically that one caused the other.
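
A small sketch of why (hypothetical numbers; the two series are correlated only because a third variable drives both):

```python
# Hypothetical sketch: two variables that move together because a third drives both.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
temperature = rng.normal(20, 8, size=365)  # the hidden common cause
ice_cream_sales = 100 + 5 * temperature + rng.normal(0, 20, size=365)
drownings = 2 + 0.3 * temperature + rng.normal(0, 2, size=365)

r, _ = stats.pearsonr(ice_cream_sales, drownings)
print(f"r = {r:.2f}")  # strongly correlated, yet neither causes the other
```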

Over extrapolation of the results.

This study took results based on a population with ages from 20 to 45, extrapolated them, and made predictions for the 65-year-old group.
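
Here is a toy illustration of the problem (made-up data; any resemblance to the actual study is coincidental):

```python
# Hypothetical sketch: a model fit on ages 20-45 says nothing reliable about age 65.
import numpy as np

rng = np.random.default_rng(2)
ages = rng.uniform(20, 45, size=100)  # everyone observed is between 20 and 45
outcome = 100 - 0.5 * ages + rng.normal(0, 3, size=100)  # made-up outcome

slope, intercept = np.polyfit(ages, outcome, 1)  # straight-line fit
print(f"predicted outcome at age 65: {slope * 65 + intercept:.1f}")
# The model happily produces a number for 65, but no one over 45 was ever
# observed, so there is no evidence the straight-line trend holds out there.
```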

Failure to control for the variables.

This study had two groups with widely different ages, and no consideration was paid to the degree that age impacted the outcomes. One group had an average age in the 20s, the other in the 40s.

Small sample size.

One study used an unusually small sample size and then failed to use an appropriate statistical model for that size. They used a model better suited to a large sample, without adjustments.
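
A quick sketch of what that mistake costs. The study's actual model wasn't named above, so this assumes one common version of the error: using the large-sample normal (z) model where Student's t is needed.

```python
# Hypothetical sketch: with n = 8, a large-sample (normal) model understates uncertainty.
from scipy import stats

n = 8
z_crit = stats.norm.ppf(0.975)         # large-sample critical value
t_crit = stats.t.ppf(0.975, df=n - 1)  # small-sample (Student's t) critical value
print(f"z = {z_crit:.2f} vs t = {t_crit:.2f}")
# z is about 1.96 but t is about 2.36: the large-sample model makes confidence
# intervals too narrow, so "significant" results appear more often than they should.
```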

The claim was unsupported by the evidence.

In one of the studies, the major claim was not what the evidence indicated. It was as if they had conducted a study to test a diet, then made a claim about the benefits of exercise when they never measured exercise.



When you hear about a claim from a new scientific study, there are some steps you should always take to determine what action, if any, you are going to take as a result of the claims.


I recall one study that claimed morning exercise was better. When I looked at the actual study, it was based on a small group of elite sprinters and was a recall study. That is, they were asked when they trained, that was compared to their performance, and the comparison showed a correlation between morning training and higher levels of performance.

What works for elite athletes may not work for everyone.

The group was very small, making it difficult to extrapolate findings to the general public.

The athletes in question were sprinters; does sprinter performance correspond to better general fitness or not?

Every time I see world-class sprinters in action I notice that most of them are of African descent. Did this study focus on athletes of African descent, or was it balanced across other ethnic groups? Does this result apply to other ethnic groups? The study never mentioned possible ethnic factors.

First, ask: did the study actually say that, or did the news get it wrong (again)?

Not fake news, but an honest mistake made by a generally scientifically illiterate reporter about what the study actually said. Remember that the news industry is also based on publish or perish, and reporters are always looking for a lead to make their name.

You often see questions for further study at the end of a study. Researchers like to do that because it allows them to write proposals based on their identified need for further study and get grants to continue working. I have seen a study suggest that there may be a connection between two variables worthy of further study, and seen it get reported as “they found a connection.”

Lots of terms used in science have vastly different meanings in the general population. For example, to a scientist a theory is something supported, or at least not disproved, by all the evidence available. To a layman, “theory” means an idea, what a scientist would call an untested hypothesis.

Second, look at the sample size. Larger is better. There is no hard and fast rule about sample size, but several hundred is better than a couple of dozen, and many studies run to thousands of participants. Picking a viable sample size can be a major piece of statistical analysis in itself. This is especially important when looking for a small difference in outcomes.
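
For a feel of the numbers, here is a rough sample-size calculation using the standard two-proportion formula (the 50% and 55% response rates are made-up illustration values):

```python
# Hypothetical sketch: participants per group needed to detect a small difference
# in proportions (50% vs 55%) at 95% confidence with 80% power.
from scipy import stats

p1, p2 = 0.50, 0.55
alpha, power = 0.05, 0.80
z_a = stats.norm.ppf(1 - alpha / 2)  # critical value for the confidence level
z_b = stats.norm.ppf(power)          # critical value for the desired power

n = ((z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))) / (p1 - p2) ** 2
print(f"about {int(n) + 1} participants per group")  # about 1562 per group
```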

Third, is the study population representative of the population in general, or is it specific to one group, whether that group is defined by gender, ethnic background, age, or some other identifying factor? Were adjustments made for the differences?

Fourth, did the study over-extrapolate its claims? Did the claims made go outside the study parameters?

Fifth, remember that correlation does not equal causation. Did early-morning training cause better performance, or did better performers prefer morning training, or was it a random result? Perhaps the better performers just happened to have time in their schedules to train in the morning and would have had the same results training in the afternoon.

Sixth, the usual standard is 95% confidence that the results were not random, but that means roughly one out of every 20 studies will find its correlation purely through random factors.
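
You can watch that happen in a simulation (made-up data with no real effect at all):

```python
# Hypothetical sketch: when there is truly no effect, about 5% of studies
# will still report a "significant" result at the 95% confidence standard.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
trials, false_positives = 1000, 0
for _ in range(trials):
    a = rng.normal(0, 1, size=50)  # both groups drawn from the SAME distribution
    b = rng.normal(0, 1, size=50)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} of {trials} null studies came out 'significant'")  # ~50
```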

If a study goes against the established wisdom in a field, it is most likely a statistical outlier and not a valid connection. There are meta-analyses available that look at a number of related studies and reach a consensus conclusion based on the convergent validity of the original studies (all the studies generally leading to the same or a similar conclusion).

Let’s look at the different diets out there. There is no end of reported studies claiming one or another is better. Lots of them are recall studies: people are asked to recall what they ate over a period of time, which makes the data suspect. Many are self-selecting: people volunteer to participate, and people with good experiences on a diet may be more inclined to participate than those who did not lose weight. Weight loss is just one impact of your diet; overall health, energy levels, etc. are also consequences of your diet and need to be considered.

The meta-analyses show that every possible diet will lead to weight loss if calorie consumption is reduced. To state that a low-carb or a high-protein diet is better for weight loss is not supported, because both reduce calories, which is what results in the weight loss. The reports on the different studies seldom mention the health impacts of the different diets.

Finally, ask yourself who paid for the study. I am old enough to remember the claims made by the tobacco industry that smoking had health benefits. Today the coal industry publishes studies purporting to disprove man-made global warming. The diet industry publishes claims that the new wonder drug or workout scheme will magically cause weight loss. The exercise industry sells thigh masters or hot yoga or stair climbers or workout machines or ... based on scientific studies showing that they are the best workout around. Who benefits from the study? If it looks more like marketing than science, then it is most likely marketing.

Just as you should educate yourself to be a knowledgeable consumer of anything tangible, you should also educate yourself to be a knowledgeable consumer of information and scientific studies.

I hope this helps.

Replies

  • JeromeBarry1
    JeromeBarry1 Posts: 10,182 Member
    If it's got more math than a journalist can understand, basically anything with an e, it's scientific.
  • Themajez
    Themajez Posts: 61 Member
    Statistics is hard, harder than what most people think.
  • rickdkitson
    rickdkitson Posts: 86 Member
    JeromeBarry1 wrote: »
    If it's got more math than a journalist can understand, basically anything with an e, it's scientific.

    People go into journalism because they are good with words. Anyone good with numbers can make a lot more than all but the top elite journalists.

    That is why science reporting in any mainstream media sucks: no one there can really understand it, so they fake it.
  • mkculs
    mkculs Posts: 316 Member
    @rickdkitson Is this going to be on the exam?

    @AnvilHead That's hilarious; can't believe I've never seen it before. So true!
  • rickdkitson
    rickdkitson Posts: 86 Member
    mkculs wrote: »
    @rickdkitson Is this going to be on the exam?

    ...

    Yes
  • MaybeLed
    MaybeLed Posts: 250 Member
    Themajez wrote: »
    Statistics is hard, harder than what most people think.

    Sorry, but after 62 years on this planet I am sick of hearing that. All my life I have heard that spelling is easy, music is easy, learning a language is easy, but I find all that impossible.

    On the other hand, numbers speak to me and reveal the underlying truth in them. They are easy to understand and honest in what they are saying to you.

    Everyone finds some things easy and some things harder; however, as a society we consider being mathematically illiterate something to be proud of. I simply don't get it, and as a crotchety old guy I will speak up against that now.

    I have worked on my spelling, I have forced myself to learn at least the basics of a few languages, and I have stopped dreaming of ever becoming a rock star. You can stop hiding behind that excuse and learn what statistics is telling you. You don't have to be a statistician, but you should be able to discern what you are being told and become a knowledgeable consumer of statistically based information.

    I don't understand why anyone would be proud to be unable to understand basic concepts in any field, especially one that impacts just about everything you do in life.

    Statistics is straightforward. If you put some effort into understanding what the different terms mean and how they relate, you will at the very least get an appreciation and understanding of what the statisticians are saying. It is never black and white; it is very nuanced.

    Don't dive into the deep end and expect to learn how to swim. Start off by reading "How to Lie with Statistics" by Darrell Huff; it is available on Kindle. It is a good explanation of what the terms mean and how some people misuse them, whether to deliberately mislead you or out of their own ignorance and stupidity.

    I'm not as proficient at stats as I want to be (because I forget stuff). But many times I've been discouraged from learning or pursuing Maths because I'm female. It breaks my heart that there are people with fantastic potential who have been put off.

    Due to my job I have a better working knowledge than most, and yet still people are surprised that I understand the basics (probably the female thing).

    I also find languages and music hard... apart from Music Theory because it's basically maths.
  • rj0150684
    rj0150684 Posts: 227 Member
    rickdkitson wrote: »
    If it's got more math than a journalist can understand, basically anything with an e, it's scientific.

    People go into journalism because they are good with words. Anyone good with numbers can make a lot more than all but the top elite journalists.

    That is why science reporting in any mainstream media sucks: no one there can really understand it, so they fake it.

    Google the “Gell-Mann amnesia effect”. It’s not scientific at all (Michael Crichton - the author - coined the term), but it rings true.
  • AnnPT77
    AnnPT77 Posts: 32,200 Member
    edited June 2018
    Themajez wrote: »
    Statistics is hard, harder than what most people think.

    At the advanced margins, maybe yes. The basics - the parts a person needs most to avoid the wool being pulled over their eyes (mostly) - are pretty manageable. A non-specialist's basic understanding of statistics is easy, easier than what most people think. (I've taught basic stats in my workplace as part of a quality management initiative.)

    The book "How to Lie With Statistics" (Darrell Huff) is an oldie but goodie, short (144p with lotsa pictures!) and amusing. It's not going to clarify all the mysteries of p-values, but it gives a non-statistician reader enough understanding to see when popular-press accounts don't pass the sniff test. And it's a fun read. Highly recommended.

    You can buy or borrow it in many places, in many formats. The link is just to supply more specifics, not to stump for this seller.

    https://www.amazon.com/How-Lie-Statistics-Darrell-Huff/dp/0393310728/

    BTW: "X is hard" is a path to disempowerment. Always makes me think of that disastrous "Math is hard" talking Barbie doll. ;)

    E.T.A.: Good post, OP! :)
  • stanmann571
    stanmann571 Posts: 5,728 Member
    AnnPT77 wrote: »
    Themajez wrote: »
    Statistics is hard, harder than what most people think.

    At the advanced margins, maybe yes. The basics - the parts a person needs most to avoid the wool being pulled over their eyes (mostly) - are pretty manageable. A non-specialist's basic understanding of statistics is easy, easier than what most people think. (I've taught basic stats in my workplace as part of a quality management initiative.)

    The book "How to Lie With Statistics" (Darrell Huff) is an oldie but goodie, short (144p with lotsa pictures!) and amusing. It's not going to clarify all the mysteries of p-values, but it gives a non-statistician reader enough understanding to see when popular-press accounts don't pass the sniff test. And it's a fun read. Highly recommended.

    You can buy or borrow it in many places, in many formats. The link is just to supply more specifics, not to stump for this seller.

    https://www.amazon.com/How-Lie-Statistics-Darrell-Huff/dp/0393310728/

    BTW: "X is hard" is a path to disempowerment. Always makes me think of that disastrous "Math is hard" talking Barbie doll. ;)

    And percentages, and “statistically significant”, are two big ways of bringing those lies to bear.

    For example "20% increase" from what to what.... 0 to 20% is a 20% increase. so is 5-6%. The lie is in the details.

    There was a study earlier this year that (loosely) correlated states, weights, run times, and general fitness levels for soldiers being inducted into the US Army. Interesting, until you dig into the details and discover that the variability between run times was in the neighborhood of 5-8 seconds per quarter mile over 2 miles, and the weight ranges were in the neighborhood of 10-20%. Further, the correlation was based on the averages of the weights and run times.

  • ladyhusker39
    ladyhusker39 Posts: 1,406 Member
    AnnPT77 wrote: »
    Themajez wrote: »
    Statistics is hard, harder than what most people think.

    At the advanced margins, maybe yes. The basics - the parts a person needs most to avoid the wool being pulled over their eyes (mostly) - are pretty manageable. A non-specialist's basic understanding of statistics is easy, easier than what most people think. (I've taught basic stats in my workplace as part of a quality management initiative.)

    The book "How to Lie With Statistics" (Darrell Huff) is an oldie but goodie, short (144p with lotsa pictures!) and amusing. It's not going to clarify all the mysteries of p-values, but it gives a non-statistician reader enough understanding to see when popular-press accounts don't pass the sniff test. And it's a fun read. Highly recommended.

    You can buy or borrow it in many places, in many formats. The link is just to supply more specifics, not to stump for this seller.

    https://www.amazon.com/How-Lie-Statistics-Darrell-Huff/dp/0393310728/

    BTW: "X is hard" is a path to disempowerment. Always makes me think of that disastrous "Math is hard" talking Barbie doll. ;)

    E.T.A.: Good post, OP! :)

    As an Econ major, I had a lot of fun with that book. I need to read it again; it's been more than a few years.
  • CarvedTones
    CarvedTones Posts: 2,340 Member
    tl;dr: just dropped in to say I really don't like clickbait-titled threads