Newsletter report on this paper by Dr Peter Attia:
I read a study published in Cell Metabolism (September 3, 2019) on alternate day fasting (ADF) that got a lot of lay press attention. One of the things that struck me right away is that this was a literal ADF.
You might think it’s reasonable to assume that any ADF study you pluck out of PubMed is investigating what happens when individuals fast every other day. But many published studies on ADF in humans actually allow up to 25% of “total energy needs” on fasting days. For an individual consuming 3,000 kcal daily to maintain body weight, this means that on a “fasting” day, he eats 750 kcal. (Imagine that I told you I fast every other day, and on my fasting days, I include a Double Quarter Pounder with Cheese.) In this study, however, participants were essentially limited to water, coffee, or tea every other day.
This study included relatively healthy, non-obese participants. Many of the studies in the literature look at people who have obesity, type 2 diabetes, or a history of metabolic disorders, so this is a rare chance to look at what ADF can do in “healthy” folks.
This study was actually two separate studies rolled into one paper. Of the 90 healthy participants enrolled, 30 (the first study) were instructed to practice ADF for more than 6 months. The other 60 participants (the second study) first served as controls (i.e., 60 people carrying on as usual versus the 30 people doing ADF) and were then randomized to either 4 weeks of ADF or 4 weeks of their usual diet as the control group. To be clear, this is a pretty unusual design, and it created some problems, at least for me, described below.
Based on the 6-month cohort, the investigators concluded that ADF is safe to practice for several months and improved markers of health in healthy, middle-aged humans. The most striking finding of the entire study is that, of the 30 individuals who started ADF, more than 6 months later there were zero dropouts. In other words, apparently 100% compliance with a regimen that calls for strict fasting every other day for more than 180 days straight.
In the 4-week trial, the results were similar: two dropouts in the control group and just one in the ADF group. That’s a 97% compliance rate (29/30) in the ADF group after fasting every other day for just under a month. And, according to their protocol paper, the investigators put the 4-week ADF group on continuous glucose monitors to verify adherence. What about the 6-month ADF group? Unfortunately, it’s not at all clear how well they stuck to the diet; we just know that they were all accounted for and analyzed at follow-up.
Even more unfortunately, and honestly, I’m at a loss for what’s going on here, the investigators stated that they “do not have access to baseline values of the long-term ADF cohort.” I’m just going to park this right here and come back to it in a moment, but suffice it to say, I wasted a lot of time reading and re-reading the paper to figure out how the hell this could be the case.
Something else that’s puzzling to me is that I can make no sense of Table 1 in terms of many of the deltas (i.e., changes) reported at 4-week follow-up. For example, the control group went from a reported 75.93 kg in body weight at baseline to 76.23 kg at 4 weeks. What do you think the reported change in body weight was? If you answered −0.196, can you please tell me how you arrived at that number? Seriously, hit me up on Twitter. I’m thinking there’s something I’m missing here, but I’m also thinking it shouldn’t be this difficult to interpret Table 1 in what should be a straightforward trial.
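To make the discrepancy concrete, here’s a minimal sanity check using only the two group means quoted above. This is the naive follow-up-minus-baseline arithmetic; the paper may well have computed its delta differently (e.g., as a mean of per-participant changes over a slightly different analysis sample, which need not equal the difference of group means), so treat this as illustrating the puzzle, not resolving it:

```python
# Sanity check on the Table 1 body-weight delta (control group).
# The two means are taken from the paper as quoted above; "reported_delta"
# is what Table 1 lists, which I cannot reproduce from those means.
baseline_kg = 75.93      # control group mean, baseline
week4_kg = 76.23         # control group mean, 4-week follow-up
reported_delta = -0.196  # delta as reported in Table 1

naive_delta = week4_kg - baseline_kg  # simple follow-up minus baseline
print(f"naive delta:    {naive_delta:+.2f} kg")       # +0.30 kg
print(f"reported delta: {reported_delta:+.3f} kg")    # -0.196 kg
```

The naive calculation says the control group *gained* about 0.30 kg, while the table reports a *loss* of 0.196 kg. The sign alone means the reported delta can’t simply be the difference of the two printed means.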
When I read a paper, I generally try to think about what the study adds in terms of our knowledge. I want to say that this is an important study showing that ADF appears to be safe, and the compliance suggests that a very simple (don’t eat anything on Monday, eat whatever you want on Tuesday…) protocol also looks easy. I’m sure most of you are thinking that not eating anything every other day does not exactly sound easy, and I’m right there with you. It’s not easy, at least for me, but it is simple to understand (versus, say, some elaborate calorie or macro counting plan). In fact, the senior author on the paper says that strict ADF “is one of the most extreme diet interventions. . .” And Krista Varady, someone who has investigated a “modified” ADF (of the 25%-of-total-energy-needs variety mentioned above), said, “I think [strict ADF] would be really difficult for people to follow.”
But this study certainly suggests that if ADF is efficacious, if it clearly can improve health, we certainly shouldn’t just dismiss the intervention because we don’t think anyone could do it. In other words, it may be both efficacious (it works) and effective (it can actually be adhered to).
However, I must say, after looking over how poor the design is, I’m not comfortable making any strong claim based on this study.
If you look at Figures 3A through 3I in the study, they’re clearly comparing (in what are called box plots):
changes in the control group from baseline to 4-weeks, to
changes in the ADF group from baseline to 4-weeks
This is straightforward and logical. In other words, split up a bunch of similar people, change one variable in one group, run the experiment, plot the results, and see how they compare. Figures 3J through 3T add another element, but they, too, are pretty straightforward.
In stark contrast, there’s Figure 5. This, according to the authors, is comparing “Differences in the Lipid Profile of Healthy, Non-obese Humans on ADF for >6 Months.” This one is more difficult to interpret, and potentially misleading. On the surface, it looks straightforward. It’s comparing the participants who served as the control group (see above) next to the >6-months ADF group. (OK, it might seem more straightforward if you spend your weekend deciphering this!)
Remember, for reasons not disclosed, the investigators said they don’t have access to baseline values of the long-term (i.e., >6-months) ADF cohort. So, Figure 5 (A-Q) is comparing differences in “the Lipid Profile” of these two groups at the 6-month (technically, “>6-months”) timepoint.
Quick thought experiment to express my exasperation. Imagine you want to test the efficacy of a 6-month squat program in terms of improving your 1-rep max (1RM). You complete the program, and at the end of 6 months, you hit 300 lb. Is that good? What was your 1RM when you started the program? Not having access to your baseline values is another way of saying you don’t know where you started. Perhaps, you think, I compared myself to a friend, and he registered a 275 lb 1RM after 6 months of his usual routine, and he’s comparable to me in age, body weight, height, and body composition, so I probably improved.
To put it mildly, it doesn’t seem like you have complete information to start comparing “performance” metrics. And to step back into the real world: if the investigators don’t have access to baseline values in the >6-month ADF cohort, how can they say they’re comparing apples to apples (as they suggest in their supplementary information)? I can’t stress enough how important it is to verify that the groups you’re comparing in an experiment/study are the same (i.e., homogeneous, in the lingo) on the metrics you choose to compare over time, and you must know the starting point of each group.
I’m left bewildered: there are more questions than answers in most scientific pursuits, but I want to stress the point here that these aren’t rhetorical questions. What do the investigators mean when they say they don’t have access to baseline values of the long-term ADF cohort? Why doesn’t the math add up in Table 1 (see above)? If readers have some light to shed on the issues above, please send it my way.