
Protein and Longevity: The Unproven Relationship

I think the best way to assess whether the study I’ve been examining this week on protein and longevity is meaningful is to look at the interview with its primary author, Christopher Proud, PhD. I’m going to give a series of his statements and consider whether the research actually addressed each claim.

“Science has shown for some time that eating too much, in particular protein, reduces lifespan; and now we know why.”

This statement isn’t exactly true. While there does seem to be a relationship between calorie restriction and longevity in fruit flies and some species of mice, it hasn’t been proven in humans. There’s evidence in longitudinal studies on relationships between animal protein intake and some diseases, but it’s not accepted that high protein intake leads to an early grave. More likely, there are genetic and environmental factors to consider, but to suggest that eating less overall increases longevity for humans is not correct at this time.

“Eating high-fiber carbohydrate, such as those found in fruit, vegetables, and unprocessed grains and seeds, will produce the healthiest benefits. This is similar to the traditional Mediterranean diet which has well-established links to longevity. We already knew that lower food intake extends lifespan.”

Same as before—it’s an overstatement. There are some studies that show a decreased rate of diseases using the Mediterranean diet, but that doesn’t mean it will result in people living longer. It may mean they live better for the time they’re alive.

“Our team demonstrated that increased [protein] nutrient levels speed up protein synthesis within cells. The faster this process occurs, the more errors are made.”

Based on the way I understand the methods, they did impact protein synthesis by knocking out the eEF2K enzyme. As of this writing, I haven’t heard back from Dr. Proud, so I have yet to find how they overfed the cells, flies, or worms to effect that change.


The Bottom Line

I don’t think that the research done in this series of studies proves that high protein intake decreases longevity. As excited as the corresponding author was during the interview, the evidence isn’t as clear as he made it out to be. The research also did nothing to help set a target for human protein intake. How is it supposed to help without practical applications?

What is important is that we need to seek balance in our nutritional intake. It may be true that too much protein will impact the correct production of proteins, which would have long-term effects, and it’s hard to go wrong eating more fruits and vegetables. But longevity isn’t tied to a single nutrient or a single habit. We need to strike a balance. Eat less. Eat better. Move more.

What are you prepared to do today?

        Dr. Chet

Reference: Current Biology 2019. https://doi.org/10.1016/j.cub.2019.01.029.

Is Protein Bad? The Research

Let’s think about this logically. If we wanted to prove that a high-protein diet would decrease lifespan, we would have to feed some type of animal a diet high in protein until all the animals had died. It would be preferable to have animals that don’t live very long such as rodents. Then we compare the lifespans and causes of death with a control group. Simple and straightforward.
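As a concrete picture of that idealized design, here’s a minimal sketch in Python of how the comparison might look once the data were in. Everything here is invented for illustration; the group sizes, lifespans, and variability are assumptions, not numbers from any real study.

```python
# Hypothetical sketch of the idealized lifespan experiment described above.
# All numbers are invented; they do not come from the study being discussed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated lifespans in weeks for two groups of 30 mice each.
control_diet = rng.normal(loc=110, scale=12, size=30)       # standard chow
high_protein_diet = rng.normal(loc=104, scale=12, size=30)  # high-protein chow

# Compare the two groups with a rank-based test that doesn't assume normality.
stat, p = stats.mannwhitneyu(control_diet, high_protein_diet, alternative="two-sided")

print(f"median control lifespan: {np.median(control_diet):.1f} weeks")
print(f"median high-protein lifespan: {np.median(high_protein_diet):.1f} weeks")
print(f"p-value for the difference: {p:.3f}")

# A real study would also record causes of death and use survival analysis
# (Kaplan-Meier curves, a log-rank test) rather than this simple comparison.
```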

That’s not what the research group did. As I said in Tuesday’s Memo, they identified an enzyme called eukaryotic elongation factor 2 kinase (eEF2K) that slows the rate of protein synthesis. That enzyme is also found in C. elegans, a nematode, as well as in fruit flies and humans. They knocked out the eEF2K enzyme, thereby causing protein synthesis to happen faster; this is supposed to be what happens when too much protein is eaten. They noted more mistakes in protein synthesis as a result.

The methodology for this series of experiments is beyond my expertise. By a lot. Whether the research was on cancer cell lines, the nematodes, or the fruit flies, what I could not find in the Methods section is where they added protein or amino acids to the food for any culture or animal to mimic a high-protein diet. I wrote to the study’s lead author to see if my analysis was correct, but I haven’t gotten a response yet.

What does all this mean? I’ll wrap it up on Saturday.

What are you prepared to do today?

        Dr. Chet

Reference: Current Biology 2019. https://doi.org/10.1016/j.cub.2019.01.029.

Does Protein Decrease Lifespan?

Just when we’ve accepted that carbohydrates are bad for us and everyone seems to be doing the paleo or ketogenic diets, a new study from an Australian research group created headlines by suggesting that high-protein diets are unhealthy because they decrease longevity.

For years we’ve been told that high-fat diets are bad. Then scientists suggested that it’s carbohydrates that are bad, which led to this keto-everything dietary phase we’re in right now. Now we’re being told that high-protein diets are bad for us as well? What the heck are we supposed to eat? Before we panic, let’s take a look at the research to see if it’s meaningful or not.

Researchers identified an enzyme called eukaryotic elongation factor 2 kinase (eEF2K) that slows the rate of protein synthesis. By so doing, it reduces the number of mistakes made in making or folding proteins. If a long-chain protein such as insulin has mistakes in the location of amino acids, the protein will not work as it should. When you consider the number of proteins the body has to make to function every second, too many mistakes could lead to disease and thus reduce our lifespan.

Is this real? Let’s take a look at how the research was conducted to figure out whether we have to be concerned or not.

What are you prepared to do today?

        Dr. Chet

Reference: Current Biology 2019. https://doi.org/10.1016/j.cub.2019.01.029.

Parachute or Backpack?

You’re sitting on a plane. The person next to you says he’s conducting a study about parachute safety and asks if you would be willing to be randomly assigned to one of two groups: jumping out of the plane with a parachute or with an empty backpack at its current altitude and speed.

Yes, this is a real study, and it included both commercial and private aircraft. The researchers enrolled 23 subjects after screening 92 people; 69 were unwilling to participate or were otherwise excluded. After randomization, the experiment was conducted at two locations in the U.S., and all subjects completed the study. The results indicated that no subject in either group was injured or killed, and there were no differences between the backpack and parachute groups.

What?

How is that possible? The subjects approached on commercial aircraft at 450 mph and 30,000 feet would not volunteer (and a couple who did were excluded due to mental health concerns). Those approached on a stationary private aircraft at zero altitude and zero speed all agreed to participate.

The Purpose

The researchers wanted to highlight that, whether investigators realize it or not, randomized clinical trials can be biased by the subjects who are recruited and the way they’re recruited. You can spot this only by drilling down into the research to see how subjects were selected.

Remember the study on bitter orange and caffeine? All the subjects were young and healthy. That happens often in exercise studies testing dietary supplements, but people of all ages are active and use products designed to improve performance. Aging brings differences in muscle mass, hormone levels, and other systems that may reduce or exaggerate the effects of a supplement, so the generalizability to other populations is often limited.
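To make that generalizability concern concrete, here’s a hypothetical simulation in Python. The effect sizes and sample sizes are assumptions I made up for illustration; they have nothing to do with the actual bitter orange study.

```python
# Hypothetical illustration of recruitment bias: if a supplement helps young
# adults more than older adults, a young-only trial overstates the benefit.
import numpy as np

rng = np.random.default_rng(1)

def simulate_group(n, true_effect):
    """Return treatment-minus-control performance changes for n subjects."""
    return rng.normal(loc=true_effect, scale=3.0, size=n)

# Assumed true effects: +5 units in young adults, +1 unit in older adults.
young = simulate_group(50, true_effect=5.0)
older = simulate_group(50, true_effect=1.0)

print(f"estimated effect, young-only trial: {young.mean():.1f}")
print(f"estimated effect, all-ages trial:   {np.concatenate([young, older]).mean():.1f}")
# The young-only estimate is noticeably larger than the mixed-age estimate,
# so it generalizes poorly to everyone who actually uses the product.
```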

This parachute study was done tongue in cheek; no one would let people jump out of a commercial airplane with just a backpack, and no human-subjects ethics committee would ever approve it. But subject selection can impact results, and that can undermine the whole point of doing a randomized clinical trial in the first place.

What are you prepared to do today?

        Dr. Chet

Reference: BMJ 2018;363:k5094. https://doi.org/10.1136/bmj.k5094.

A Look Back to the Future

I hope you’ve all had a happy holiday and are looking forward to the New Year. I’m going to finish this year with my opinion of the most significant research of 2018, studies that illustrate how diseases will be treated and managed in the future. How far in the future? Hard to say, but sooner than we think.

https://fccdl.in/WxAqaFtG8c

Just like last week’s audio Memo, you can listen . . .


Absolutely True, Relatively Meaningless

In this final Memo about the retraction of several of Dr. Brian Wansink’s publications, there are important questions to address. What do the errors mean? Did Dr. Wansink intend to deceive? And who was making the accusations?

Publication Errors and What They Mean

Let’s take a look at the errors I mentioned on Thursday. The first was continually analyzing the data to come up with new hypotheses—that’s not the way research is supposed to be done. In this case, the data were collected via questionnaires after eating at a buffet in a small town restaurant. The purpose was to see if the price of the buffet influenced whether people felt better or worse about their food choices. This wasn’t the best study idea Wansink ever dreamed up; I don’t see the results of this study impacting the obesity epidemic in any way, even if the data were pristine and analyzed precisely without the data churn that came afterward.

The second set of errors pointed out by the post-publication reviewers all related to statistics. They questioned data being reported to the hundredths of a point and stated that there were errors in calculating the means. These were survey results on a Likert scale and should never have been presented as anything other than whole numbers or, at best, half points; going to the hundredths just makes no sense. It would have been better to recommend that to the authors than to make a big deal about deceptive statistical errors. The reviewers also found that the number of subjects changed from one analysis to another: one test said 122 subjects while another said 124. There may have been a degree of ineptitude but, again, no attempt to overtly deceive.

Finally, there’s the plagiarism accusation. Wansink did what many authors do: he took text from his own prior publications and inserted it into new articles where appropriate. Those passages should have been cited, but re-using what you’ve already written isn’t plagiarism; it’s an oversight or a bone-headed error.

Did Dr. Wansink Attempt To Deceive?

I read Dr. Wansink’s blog post that started this whole mess. What he attempted to do was illustrate how new researchers can get published; there was no attempt to deceive anyone. As the post got more play in the scientific universe, he took some pretty big hits, and not in a nice way. He answered every criticism with respect, including several from one of the accusers.

I came away thinking that Dr. Wansink didn’t understand the ramifications of continually re-analyzing data under changing hypotheses. In addition, he was not the best statistician and would have benefited tremendously from someone who really understood numbers.

Who Were His Accusers?

I checked out three of the primary accusers, two of whom had published the article mentioned in Thursday’s Memo (1). They all seem to be fascinated with numbers and scientific purity.

One was a retired physics professor; in his entire academic career, he published two research papers, and that was over 30 years ago. He was primarily a physics teacher and retired 19 years ago. My problem isn’t his age; it’s that he hadn’t done much in his own field, let alone Wansink’s.

A second is a PhD candidate in the social sciences. If anyone should understand the problems that observation and questionnaire methods can present in behavioral research, it would be him. It’s curious that he wouldn’t make that a key element of his paper.

The final accuser was a PhD/MD candidate who, according to his blog, was kicked out of his program by his advisors. That gave him plenty of time to do the longest and most in-depth review of Wansink’s papers.

This may sound cruel, but what we have is a never-was, a wannabe with very limited experience, and a never-will-be. Not exactly a stellar cast of accusers. They were absolutely correct, but what they showed is relatively meaningless.

What I didn’t see was a review of Wansink’s paper on the never-ending soup bowl (2), maybe because it was based on actual numbers: the amounts of soup were measured. They also failed to mention that another study was replicated and confirmed by another lab (3); it examined how the names given to foods influence whether children and young adults will eat more vegetables.

The Bottom Line

In spite of the retracted publications and, with them, Dr. Wansink’s forced retirement from his lab, there’s still value in the research he published. He seemed to be more an idea guy than a bench scientist. That makes sense: his PhD was in marketing, so he tried to research people’s attitudes about food. He just didn’t know how to do it very well from a science perspective.

If you want to control your eating, use a smaller plate, plate your food away from the table and don’t add any more, and keep all snacks out of sight. Proof or no proof, those are still good recommendations.

What are you prepared to do today?

Dr. Chet

 

References:
1. BMC Nutrition. https://doi.org/10.1186/s40795-017-0167-x.
2. Obes Res. 2005 Jan;13(1):93-100.
3. JAMA Intern Med. 2017;177(8):1216-1218. doi:10.1001/jamainternmed.2017.1637.

 

The Reasons for Retraction

Publications related to food habits are important if you’re in the weight loss field; I rely on them to help people achieve their weight loss goals. If the studies were poorly done, that’s unfortunate, but behavioral science is an inexact science anyway. If someone intentionally manipulated the data to get a specific outcome, though, that’s just not right. Let’s see what several scientists found when they examined Dr. Wansink’s data more closely. What were the problems?

It seems there were three. First, as I mentioned on Tuesday, he had a graduate volunteer keep examining the data to come up with hypotheses that would turn out significant. That means they organized the data differently and kept running statistical analyses until they came up with something that was statistically significant. As I said, that’s a no-no because of the potential for finding something by chance; you get the best answers to the questions you actually ask, so a result stumbled on by accident carries less weight among scientists.
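To see why re-analyzing the same data until something turns up is a problem, here’s a small simulation in Python. The sample size and the number of re-analyses are arbitrary assumptions; the point is only that pure noise, sliced enough ways, will eventually look significant.

```python
# Simulate repeatedly re-analyzing data that contain no real effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_subjects = 120       # hypothetical survey sample
n_reanalyses = 20      # arbitrary number of subgroupings or recodings to try

# An outcome that is pure noise: there is nothing real to find.
outcome = rng.normal(loc=0.0, scale=1.0, size=n_subjects)

false_positives = 0
for _ in range(n_reanalyses):
    # Each "analysis" splits the subjects by an arbitrary grouping and tests for a difference.
    grouping = rng.integers(0, 2, size=n_subjects).astype(bool)
    _, p = stats.ttest_ind(outcome[grouping], outcome[~grouping])
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_reanalyses} analyses came out 'significant' by chance alone")
# At a 5% threshold, roughly 1 in 20 of these analyses will look significant
# even though the outcome is random noise.
```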

Second, there were errors in the way data were displayed. The reviewers made a very big deal of the granularity of the reported values and how some of the displayed means weren’t mathematically possible. I’ll leave that to the people who specialize in statistics.
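For readers who do like the statistics, here’s a hypothetical sketch of the kind of granularity check that can flag an impossible mean, sometimes called a GRIM-style test. It illustrates the general idea only; it is not a reproduction of the reviewers’ actual analysis, and the numbers in the example are made up.

```python
# Check whether a mean reported to a given number of decimal places could have
# come from n whole-number (e.g., Likert-scale) responses.

def mean_consistent_with_integers(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """Return True if some integer sum of n responses rounds to the reported mean."""
    approx_sum = int(reported_mean * n)
    candidate_sums = range(approx_sum - 1, approx_sum + 2)
    return any(round(s / n, decimals) == reported_mean for s in candidate_sums)

# Made-up examples: with 25 respondents, attainable means are multiples of 1/25 = 0.04,
# so a reported mean of 3.47 is impossible; with 122 respondents it is attainable.
print(mean_consistent_with_integers(3.47, 25))   # False
print(mean_consistent_with_integers(3.47, 122))  # True
```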

Finally, they accused him and his colleagues of plagiarism. If there were an absolute violation of science, that would be it.

But as you might expect, not everything is always as clear as people make it out to be, and I’ll explain that on Saturday. Until then, I would still keep the snacks out of sight and continue using that salad plate instead of a dinner plate in order to eat less.

What are you prepared to do today?

Dr. Chet

 

Reference: BMC Nutrition. https://doi.org/10.1186/s40795-017-0167-x.

 

Scientific Retractions

One of my favorite observational scientists has been Dr. Brian Wansink, former Director of the Food and Brand Lab at Cornell University. I’ve written about his research and used it in presentations several times over the years. One of my favorite tips came from one of his studies: use a salad plate instead of a dinner plate. It cuts down the food you take in one plateful by about 25%.

That’s why I was dismayed when I read that several of his papers had been retracted from JAMA and other publications. There are many reasons a paper can be retracted: problems with data and statistics, questionable research techniques, or unsubstantiated conclusions. Evidently all of those accusations played a part, and the papers were retracted.

Of course I had to check this out. What did he do? How did his papers become suspect to begin with? Who was involved in this process? I’ll answer part of the who right now. It was Wansink himself with a blog post talking about collecting data and then using multiple statistical analyses to get to a hypothesis in a couple of studies. That’s a very big no-no in science.

But due to the nature of his observational research, does it mean all of his work on relationships between habits and food is worthless? We’ll find out this week.

What are you prepared to do today?

Dr. Chet