
In 2001, when the pediatric allergist Gideon Lack asked a group of some 80 parents in Tel Aviv if their kids were allergic to peanuts, only two or three hands went up. Lack was puzzled. Back home in the UK, peanut allergy had fast become one of the most common allergies among children. When he compared the peanut allergy rate among Israeli children with the rate among Jewish children in the UK, the UK rate was 10 times higher. Was there something in the Israeli environment—a healthier diet, more time in the sun—preventing peanut allergies from developing?

He later realized that many Israeli kids started eating Bamba, a peanut butter-flavored puffed snack, as soon as they could handle solid foods. Could early peanut exposure explain it? The idea had never occurred to anyone because it seemed so obviously wrong. For years, pediatricians in the UK, Canada, Australia, and the United States had been telling parents to avoid giving children peanuts until after they’d turned 1, because they thought early exposure could increase the risk of developing an allergy. The American Academy of Pediatrics even included this advice in its infant feeding guidelines.

Lack and his colleagues began planning a randomized clinical trial that would take until 2015 to complete. In the study, published in The New England Journal of Medicine, some infants were regularly fed peanut protein starting in the first year of life while others avoided it. Children in the first group had an 81 percent lower risk of peanut allergy by age 5. The earlier guidelines, developed by expert committees, may have inadvertently contributed to a slow increase in peanut allergies.

As a doctor, I found the results unsettling. Before the findings were released, I had counseled a new parent that her baby girl should avoid allergenic foods such as peanut protein. Looking back, I couldn’t help but feel a twinge of guilt. What if she now had a peanut allergy?

The fact that medical knowledge is always shifting is a challenge for doctors and patients. It can seem as though medical knowledge comes with a disclaimer: “True … for now.”

Medical school professors sometimes joke that half of what students learn will be outdated by the time they graduate. That shifting half often includes clinical practice guidelines (CPGs), and the churn has real-life consequences.

A CPG, usually drawn up by expert committees from specialized organizations, exists for almost any ailment a patient can be diagnosed with. The guidelines aren’t rules, but they are widely consulted and can be cited in medical malpractice cases.

When medical knowledge shifts, guidelines shift. Hormone replacement therapy, for example, used to be the gold-standard treatment for menopausal women struggling with symptoms such as hot flashes and mood changes. Then, in 2013, findings from the Women’s Health Initiative showed that the therapy carried greater risks than previously thought, and many guidelines were revised.

Similarly, for many years women over 40 were urged to get annual mammograms, until new data in 2009 showed that early, routine screening was leading to unnecessary biopsies while doing little to reduce breast cancer deaths. Regular mammograms are now recommended mainly for women over 50, every other year.

Medical reversals usually happen slowly, after multiple studies shift old recommendations. Covid-19 has accelerated them, and made them both more visible and more unsettling. Early on, even some medical professionals presented the coronavirus as no more severe than the flu, before its true severity was widely recognized. For a time, people were told not to bother with masks, but then they were advised to try double-masking. Some countries are extending the intervals between the first and second vaccine doses. Of course, the state of the pandemic, and of our knowledge about it, has been shifting constantly. Still, throughout the past year and a half, we’ve all experienced medical whiplash.

It’s too early to say how these reversals will affect the way patients perceive the medical profession. On the one hand, seeing debate among medical experts conducted openly could give people a heightened understanding of how medical knowledge evolves. On the other, it could inculcate a lasting skepticism. In 2018, researchers analyzed 50 years’ worth of polling data on trust in medicine. In 1966, 73 percent of Americans reported having confidence in “the leaders of the medical profession.” By 2012 that number had dropped to 34 percent, in part, the authors surmised, because of the continued lack of a universal health care system.