My friend James Krieger posted this on Facebook, so I wanted to share it here since there's a lot of buzz going around about this study.
James is a researcher and quite brilliant. He also recently spoke at the PTC conference in the UK on Non-Exercise Activity Thermogenesis (NEAT), and it was outstanding.
Anyway, dude is brilliant, so anytime he says something I pay close attention.
"A lot of talk has been going around about the recent Biggest Loser study. I've been discussing it with Spencer Alan and Evelyn Carbsane, and the thing about studies like this is that the devil is in the details.
Those of you who saw my recent presentation in the UK may remember me discussing the measurement of RMR, and why it is absolutely critical that subjects be weight stable when you measure it. RMR is very sensitive to energy surpluses or deficits, and can give the illusion of being higher or lower than normal if your subjects are not truly weight stable.

If you look at the data in the Biggest Loser study, you will see that the researchers had the subjects weigh themselves daily at home on a scale that transmitted data back to the researchers. They had 16 days of data, and used statistical regression to see if weight was stable over that time. Basically, they looked at whether the slope of the line was different from 0 (a flat line). It was not significantly different from 0. However, the kicker is that the P value was 0.1, which is not far from the conventional threshold for statistical significance (0.05 or less). The thing is, statistical significance is nothing more than an arbitrary threshold, and with small sample sizes like in this study, you can often mistakenly call things "not different" when they really are different (a type II error).
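To make that stability check concrete: it amounts to fitting a least-squares line through the daily weigh-ins and t-testing whether the slope differs from zero. Here's a minimal sketch with hypothetical numbers (these weights are made up for illustration; the study's raw data isn't shown here):

```python
# Sketch of the weigh-in stability check: regress daily weight on day number
# and t-test whether the slope differs from 0.
# NOTE: the weights below are hypothetical, NOT the study's data.

def slope_t_test(y):
    """OLS regression of y on day index 0..n-1; returns (slope, t_statistic)."""
    n = len(y)
    x = range(n)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    # residual sum of squares around the fitted line
    sse = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    se_slope = (sse / (n - 2) / sxx) ** 0.5  # standard error of the slope
    return slope, slope / se_slope

# 16 hypothetical daily weigh-ins drifting down roughly 0.5 lb per week
weights = [220.0, 219.9, 220.1, 219.8, 219.9, 219.6, 219.8, 219.5,
           219.6, 219.4, 219.5, 219.2, 219.4, 219.1, 219.2, 219.0]
slope, t = slope_t_test(weights)
t_crit = 2.145  # two-sided 5% critical value for t with 16 - 2 = 14 df
print(f"slope = {slope:.3f} lb/day ({slope * 7:.2f} lb/week), |t| = {abs(t):.2f}, "
      f"'weight stable' under the 0.05 rule: {abs(t) < t_crit}")
```

In this toy data the noise is small, so the drift is easy to detect; with realistic day-to-day water-weight swings of around a pound, the same real downward drift can easily fail to reach p < 0.05 with only 16 points, which is exactly the type II error risk described above.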
On average, the subjects were losing 0.5 pound per week. Yeah, it's not large, and it may not have met the threshold for statistical significance, but this data doesn't give me much confidence that the subjects were weight stable. It tells me the subjects may have been in an energy deficit when they were measured, which would make RMR appear artificially lower than it really is.
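For a sense of scale, using the rough rule of thumb of about 3500 kcal per pound of body weight (an assumption and a simplification, not a figure from the study), a 0.5 lb/week loss implies an ongoing daily deficit:

```python
# Back-of-envelope conversion: weekly weight loss -> implied daily deficit.
# Assumes ~3500 kcal per lb of weight change (a rough convention).
KCAL_PER_LB = 3500
loss_per_week_lb = 0.5
deficit_per_day = loss_per_week_lb * KCAL_PER_LB / 7  # kcal/day
print(deficit_per_day)  # -> 250.0
```

A deficit on the order of 250 kcal/day during the measurement window is consistent with the concern that measured RMR would read artificially low.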
The other thing is that this study is at odds with other research in this area, which has shown that downregulation of NEAT/spontaneous activity is much greater than adaptations in RMR with weight loss. The Biggest Loser study showed no downregulation of physical activity, yet a large reduction in RMR. That makes me suspect that the subjects, knowing they were going to be measured in a follow-up, were actively trying to lose weight and exercising heading into the follow-up. This would explain the lower RMR (because they were in a deficit), yet the lack of reduction in physical activity (because they were exercising).
I've always considered the data out of Rudolph Leibel's lab to be the "gold standard" in this area, because he has subjects housed in metabolic wards for long periods of time, matches subjects to controls, and uses formula diets to meticulously control their calorie intake and ensure weight stability. Leibel's work has shown only minor reductions in RMR, with most of the adaptation occurring in NEAT/SPA. Unfortunately, Leibel has never had subjects with weight losses on the scale of the Biggest Loser contestants, so it's still possible that extreme losses result in more extreme adaptation. Still, I don't think the adaptation is as large as what is being reported in this study, due to the limitations discussed here.
The thing is, even with the large reduction in RMR, total daily energy expenditure (TDEE) did not show any signs of adaptation, and TDEE is what really matters anyway, not RMR."
May 4, 2016 10:15PM