Christopher M. Johnson's Blog
May 2, 2016
This is a post I wrote some years ago for Maggie Mahar’s excellent blog, HealthBeat. With the now nearly universal adoption of the electronic medical record (EMR) I think it matters more than ever.
The EMR is here to stay. Its adoption was initially slow, but over the past decade those hospitals that do not already have it are making plans for implementing it. On the whole this represents progress: the EMR has the ability to greatly improve patient care. Physicians, as well as all other caregivers, no longer have to puzzle over barely legible handwritten notes or flip through pages and pages of a patient’s paper chart to find important information.
With the EMR, it is easy to see what medications a patient is taking, when they were started, and when they were stopped. Physicians can easily find key vital signs – temperature, pulse, respirations, and blood pressure – plotted over any time frame they wish. All the past laboratory data are displayed succinctly. But it is not all gravy.
I use the EMR every day, and I am old enough to have trained and practiced when everything was on paper. While overall I am happy to have electronic records, there is a problem: the EMR is trying to serve too many masters. The needs of these various masters are different, and sometimes they are incompatible, even hostile to one another. These masters include other caregivers, the agencies paying for the care, and those interested in the medico-legal aspects of care.
What can happen, and I have seen it many times, is that the needs of the caregivers take a back seat to the needs of the payers and the lawyers. The EMR is supposed to improve patient care, but sometimes it makes it worse. Physician progress notes illustrate how this happens.
Progress notes are the lifeblood of the medical record. They tell, from day to day, what physicians did to a patient and why. They are a narrative of the patient’s care. Three decades ago we sat down, pulled out a pen, and wrote out our daily progress notes. There were standard ways of doing this, but physicians were free to organize their notes however they liked. That was both a blessing and a curse. It was a blessing because not all patients fit the standard way of note writing, so you could modify how you recorded things; it was a curse because every physician was different, and some wrote very sketchy notes indeed, notes from which it was very difficult to figure out what happened.
I once did a research project for which I was reading physician notes from the nineteen twenties, thirties and forties. I recall one patient in particular who was clearly desperately ill. He had critically abnormal vital signs (which I could tell from the nurses’ graphic chart), needed several blood transfusions, and even stopped breathing once. His progress note for the day, written by a very famous and distinguished physician, was one line: “Mustard plaster didn’t work.”
Physician notes have evolved a great deal since 1930. Certainly in my medical career, which began in 1974, physicians were expected to make some reference to what they were thinking, why they did or did not do what they did. Sometimes the notes were cryptic jottings that made it very hard to follow what was happening. But most of the time you could understand what your colleagues were thinking.
But while this worked reasonably well for physicians, other users of the medical record complained loudly. Payers, such as insurance companies and Medicare, based their reimbursement upon those notes. They were unwilling to pay for anything that was not clearly documented. They also increasingly based their payment structure on the complexity of the medical decision making; if physicians wanted to be paid at a higher rate for managing a complex and difficult patient they needed to show in their note just why that patient was complicated. They needed to show what they were thinking, and what information, such as laboratory data and the physical examination, they used to make their decisions.
Finally, for the lawyers, the operative phrase was “if it’s not documented, it didn’t happen.” In theory, the goals of all three users – caregivers, payers, and lawyers – should be in alignment. But with the EMR the needs of the caregivers, which should be paramount, are losing ground.
The EMR, since it is on a computer, can be manipulated in all the ways a computer allows. Hospitals are laying out millions to implement the EMR, and to ensure maximum payment they want to make sure it is easy for the payers to find in the EMR all the things the payers want there. This is accomplished, among other things, through the use of templates and “smart text” for progress notes. For example, a physician writing a progress note in Epic, a popular EMR system, can open a template that has many components of the evaluation already filled in. The program can bring into the note all the previous laboratory values. It has all the categories of the physical examination sitting on the screen for the physician to fill in.
It is easy to “drag and drop” information from previous notes with simple keystrokes. There’s nothing intrinsically wrong with all this. It can make producing a complete progress note quick and easy. But it also can destroy the original purpose of the progress note – to give a narrative of the patient’s progress. It can stifle the conversation between physicians embodied in traditional progress notes.
Recently I saw an example of the problems this can cause. A couple of weeks ago I learned I was getting a patient into the pediatric intensive care unit with multiple problems, most acutely a blood problem. Among the lesser issues was a heart problem that required surgery. Because of the other serious problems, though, the surgery had been postponed. I read about all this in the patient’s EMR before she even arrived in the PICU, which is one of the great aspects of the EMR: we no longer have to wait for a clerk pushing a cart around the hospital to deliver the paper chart. The patient had been seen just that morning by her hematologist for the blood issue, and the progress note in the EMR told me the plan for her heart problem was surgery sometime in the future, when the child’s other problems had improved. It said so right there on the screen. In fact, all the notes had been saying that for over a year.
So imagine my surprise when I went in to see the child and saw an obvious and well-healed surgical scar on her chest, clearly from heart surgery. She had had her heart fixed two months before at another institution. I gave her hematologist the benefit of the doubt and assumed her doctor knew the surgery had been done, and that what had happened (I hope) was that the doctor had used the beguiling convenience of drag and drop on the progress note template to do the note. This particular incident was innocuous, but I think you can see the potential for mischief with this sort of thing.
This is not an isolated event. I have seen many examples – so many that I now cast a suspicious eye on all those uniformly formatted progress notes. The ease with which mounds and mounds of verbiage and laboratory data can be stuffed into a progress note may give the payers what they want, but it often does not give me what I want – some evidence that all this information was processed through a physician’s brain and led to a carefully considered decision about what to do. I want a human voice, and that is getting harder and harder to find in the EMR’s stereotypic and bloodless documentation.
Medicine is about stories – patients’ stories. I was taught forty years ago that most of the time the history gives us the diagnosis. Osler reputedly said: “Listen to the patient. He is telling you the diagnosis.” (That attribution has been questioned, but the spirit is definitely Osler’s.)
Of course these days our wonderful scientific tools often give us the answer, and I certainly do not wish to toss all those things aside to go back to using only what Osler had. But medicine is not really a science. It is based on science, uses science, and is increasingly more scientific. But medicine also contains large measures of intuition, educated guessing, and blind luck. I do not think that aspect of medicine will ever completely disappear. When I read (or wade) through a patient’s record, I look for the story. When I cannot find a coherent story, I cannot give the best care.
For myself, even though I of course use the EMR, I refuse to use all those handy smart text templates. I type out my progress notes, organized as I did when I used a pen and chart paper. It takes me a little longer, but it makes me think things through. No billing coder has ever complained. More than a few colleagues have told me that, when we share patients, they search through the EMR to find one of my notes to understand what is happening with the patient.
My advice to other doctors is this: don’t let the templates get in your way. Tell the story.
April 28, 2016
A recent editorial in the New England Journal of Medicine asked a good question: Is it acceptable for children to play tackle football? The background for this question is the emerging understanding that repeated blows to the head, even a helmeted head, can cause brain injury, and that the injury is cumulative over time. There has recently been a lot of media attention about this in the wake of well-documented examples of professional football players suffering from an entity now called chronic traumatic encephalopathy (CTE). The disorder has actually been known about for a long time. Previously, however, it was believed mainly to be associated with boxers, who of course sustain many hundreds of blows to their exposed heads over the course of their careers. The presumption is that CTE results from the cumulative effects of a long string of concussions, even minor ones.
Although we now know professional football players are at risk for CTE, it is less clear where the risk begins — how many blows to the head it takes to put a person at risk. In other words, is there a safe threshold? Another key question is whether a child’s brain has unique properties that affect the risk. Some have said teaching “safe” tackling techniques will reduce the risk, but we have no information on whether this is true. We do have some recent data regarding the effects of playing football on normal adolescent boys.
Investigators fitted special helmets on high school football players that collected data on all head impacts, whether or not the individual experienced an actual concussion. Key aspects of brain cellular microstructure were studied before, during, and after the end of the football season. The results are concerning. In the authors’ words:
Our findings add to a growing body of literature demonstrating that a single season of contact sports can result in brain changes regardless of clinical findings or concussion diagnosis.
Of course this is potentially a huge issue, since millions of children and adolescents play tackle football. Knowing what we know now, is this safe? Or, if not totally safe, is the risk of brain damage sufficiently tiny that parents who want their children to play football can realistically allow it? The American Academy of Pediatrics has issued a statement about it, but the recommendations waffle on the big question: they call for more adult supervision, teaching of proper tackling techniques, and strength training, particularly of the neck muscles. (The article is also good because it reviews all of what we know about the question.) The editorial from the New England Journal is more blunt. It nearly, but not quite, comes down on the side of recommending that tackling be eliminated. This reticence is understandable because it’s a potentially inflammatory opinion. Yet the editorialist makes some good points:
The AAP committee shied away from endorsing the elimination of tackling in youth football, because doing so would fundamentally change the way the game is played. Yet evidence indicates that tackle football in its current form is inconsistent with the AAP mission “to attain optimal physical, mental, and social health and well-being for all infants, children, adolescents and young adults.” Repetitive brain trauma can have serious short- and long-term consequences, including cognitive and attention deficits, headaches, mood disorders, sleep disturbances, and behavioral problems. To significantly reduce the incidence of brain trauma in young people, I believe that physicians should consider endorsing strategies that alter the way football is played.
My own view is that we know repetitive blows to the head, and certainly multiple concussions, are associated with permanent brain damage. We also know that the brain of a child or early adolescent takes longer to recover from a concussion than does that of an adult. We don’t know how many blows are safe, particularly how many severe ones. It appears even a single season of play causes changes in the brain, but we don’t know whether those changes are transient or lasting. For these reasons I wouldn’t let my son play tackle football. That’s a personal decision; other parents can make their own choices. But I would not be surprised if further research shows that tackle football is unacceptably dangerous for children.
April 21, 2016
There are over 400 pediatric intensive care units (PICUs) in the USA, as most recently estimated by the Society of Critical Care Medicine. These units vary widely in size, from 4 or 5 beds to 50 or more. The smaller units are generally found in community hospitals; the larger ones are usually in academic medical centers, often in designated children’s hospitals, of which there are 220. Given this size range, it is not surprising that the services provided at PICUs vary widely. There are no defined standards for what a PICU should be, although the American Academy of Pediatrics (AAP) suggested some over a decade ago. The AAP also suggested dividing PICUs into two categories, Level I and Level II, although in my experience no one pays much attention to the distinction. A great many of the recommendations concern what the equipment and staffing for a PICU should be; there is little if anything about the crucial issue of range of practice. Right now a PICU cares for whatever patients the facility wishes to care for. I don’t think this is the best way to do things, and there are a couple of examples from other specialties with solid recommendations regarding the appropriate scope of intensive care practice.
The oldest example is neonatology, which is practiced in neonatal intensive care units (NICUs) by pediatric specialists known as neonatologists. NICUs care for sick newborn babies, the overwhelming majority of which are infants born prematurely. The neonatal guidelines date back to 1976, when the March of Dimes Foundation spearheaded an effort to classify and sort out newborn care. They proposed 3 levels of units: level I was for normal newborns, level III was for the sickest babies, and level II was somewhere in between. These designations have been broadly adopted, carrying along with them the specific expectations of just what care a level III NICU should be able to offer. The guidelines were revisited and reaffirmed in 2012. Importantly, the guidelines also stated that Level III NICUs had an obligation to provide outreach and training to their surrounding region to help smaller hospitals resuscitate and stabilize sick infants for transfer to a Level III unit.
This system has been successful in that it has been associated with greatly improved outcomes for premature infants. A new category, Level IV, has more recently been added, indicating an even higher level of care for the sickest of the sick. All of these NICUs are in major medical centers. In the beginning, this was also true of Level III units. Since then, however, neonatology has expanded from its origins into medium-sized community hospitals, many of which now have Level III units. This has been good for babies, in that it brought the skills of neonatologists to more infants. But there are some concerns it may dilute the expertise of providers, since there is good evidence that units with more admissions have better outcomes. The optimal number of admissions and NICU size is still a matter of debate. Even so, the situation is a lot more worked out than is the case with older children and PICUs. An unspoken subtext in this discussion is that NICUs, for reasons I won’t get into here, are generally money makers for hospitals, tempting them to start one for mainly that reason. PICUs generally break even financially at best, and often not.
Trauma surgery is another example of a classification system that has improved patient care and outcomes. Trauma center classification runs in the opposite direction from NICUs: Level I is the highest, ranging down to Level V for facilities that are only equipped to stabilize patients and send them on to a higher-level center. Those in between have progressively more capability as they approach Level I. Unlike NICUs, there is a process for certifying the qualifications of trauma centers. A state can designate a trauma center wherever and however it likes, but the American College of Surgeons verifies that the facility meets its criteria, so it’s a two-stage process. There are standard guidelines for what sorts of patients the various levels can care for, and centers below Level I must have an ongoing relationship with higher centers to assure smooth transfer when needed. As with NICUs, trauma centers engage in outreach and teaching to help their regions improve care. Trauma centers also have extensive programs of quality control and outcome measurement to see how they are doing and how they measure up to national benchmarks. There are parallel pathways for adult and pediatric trauma centers. That is, a facility can be Level I for both, or mixed: for example, Level I for adults and Level II for children.
In contrast, PICUs, which care for critically ill and injured children from infancy up to adolescence, are kind of a disorganized mess. There is no classification system accompanied by guidelines to sort facilities into different groups according to what patients are most appropriate where. Large PICUs at children’s hospitals are de facto highest level facilities. But how should we stratify smaller places? For example, one major determinant would be if the facility offers pediatric heart surgery. Many, even most PICUs don’t. The smaller PICUs I know have a more or less worked out arrangement with a higher level PICU to transfer children they cannot care for. But, unlike the case with trauma centers, there is no requirement to have this. There’s no requirement for anything, really.
I think it is past time for some order in this chaos. Many of my colleagues believe the same. The obvious leader for such a process would be the American Academy of Pediatrics. The AAP has been active in this for decades in NICUs. There already exists an AAP Section on Critical Care, of which I am a member. Perhaps some AAP-sanctioned group is already working on the issue. If there is, I haven’t heard anything about it.
February 21, 2016
The safety of giving birth at home, both for the mother and for the infant, has been debated for years. I’ve written about the issue myself. From time immemorial until about 75 years ago most babies were born at home. Now it’s around 1% in the USA, although it’s much higher than that in many Western European countries. The shift to hospital births paralleled the growth of hospitals, pediatrics, and obstetrics. With that shift there has been a perceived decrease in women’s autonomy over their healthcare decisions. There has also been an unsurprising jump in the proportion of caesarian section deliveries, an operative procedure, and in various other medical interventions in labor and delivery, even though current data suggest the recent jump in caesarian delivery (now around 30%) has not added any benefit. The debate over whether the dominance of hospital births is a good thing or a bad thing (or neither) is much more than a medical debate; it is also a social and political one. It is also to some extent an issue of medical power, a struggle between physician obstetricians who deliver babies in the hospital and nurse midwives who often deliver babies at home. I’m very interested in the social and political aspects, but as a pediatrician I’m particularly concerned with the safety question: Is it more dangerous for your baby to be born at home?
One problem in answering this question is that most of the studies about the safety of home birth have come from abroad. But now we have some data from the USA, published in a recent issue of the prestigious New England Journal of Medicine under the title “Planned Out-of-Hospital Birth and Birth Outcomes.”
One big problem with evaluating previous data has been that vital statistics from birth certificates counted home births and hospital births but did not identify, as a separate category, women who planned to deliver at home and were then admitted to a hospital because of some issue with the labor. Such women were simply counted as hospital births. Also, the recent growth of birthing centers has introduced a setting intermediate between home and hospital. A recent large study from Oregon, using the years 2012 and 2013, gives some useful information.
The bottom line is that children born to women who intended to give birth at home had an infant mortality rate of 3.9 deaths per 1,000 deliveries. This was significantly higher than the rate for infants born in a hospital, which was 1.8 deaths per 1,000 deliveries. Not surprisingly, women who delivered in the hospital had a far higher rate of some kind of intervention, such as caesarian section.
What should we make of this? Thinking about risk can be difficult, and it is important to understand the difference between relative and absolute risk. (I’ve written about that, too.) Media reports often obscure this key point. For example, in this study the risk of infant mortality increased 100% with home birth. 100%!! But twice a very small number is still a very small number. The absolute risk of a baby dying in a home delivery is very small. Still, it is higher.
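The arithmetic behind the relative-versus-absolute distinction is worth making explicit. Here is a minimal Python sketch using the mortality rates reported in the Oregon study; the function name and structure are mine, purely for illustration:

```python
def risk_summary(rate_exposed, rate_baseline, per=1000):
    """Compare two event rates expressed as events per `per` deliveries.

    Returns the relative risk (a ratio) and the absolute risk
    difference (a proportion)."""
    relative_risk = rate_exposed / rate_baseline
    absolute_difference = (rate_exposed - rate_baseline) / per
    return relative_risk, absolute_difference

# Oregon study rates: 3.9 vs 1.8 deaths per 1,000 deliveries
rr, ard = risk_summary(3.9, 1.8)
print(f"Relative risk: {rr:.2f}x")        # roughly a doubling of risk
print(f"Absolute difference: {ard:.2%}")  # about 0.2 percentage points
```

The point the sketch makes: the relative risk is striking (more than double), yet the absolute excess risk is about 2 extra deaths per 1,000 deliveries, which is why the same numbers can honestly be described as both "risk doubled" and "risk remains very small."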
What this means is that a woman deciding to deliver at home should understand all the facts. Some will not want to accept this increased risk, however small it is in absolute terms; some will accept it. The same issue of the Journal had a good editorial discussing how to think about the issue. It is a very good summary of the fundamental question: acceptable risk, and how tolerance for it varies with the person. The conclusion:
Ultimately, women’s choices for place of delivery will be determined by the extent of their tolerance for risk and which risks they most want to avoid.
February 14, 2016
This has been a topic of discussion for years. The white coat has long been traditional garb for physicians. Medical schools often have some sort of ceremony in which students receive their coats, as shown in the picture above. Medical student coats are really short jackets — they get longer ones when they graduate. It’s a quick way of telling the students from the doctors. This emphasizes that, to a large degree, the white coat is a symbolic thing, part of the regalia of being a physician. It is interesting to me how, over the past several decades, this symbolism has been diluted a bit by lots of health care workers who are not physicians wearing long coats — everybody from dietitians to technicians who draw blood. Still, a long white coat with one’s name embroidered over the left chest pocket remains a standard marker for physicians.
I have to say that white coats can be handy. They have lots of pockets, for one thing, and these days physicians have to lug around a lot of stuff: papers, phones, pagers, badges, etc. It is nice to be able to answer a phone call and have the tools immediately available to scribble down messages and notes and then stuff these in a pocket.
It is also true that hospital dress codes have changed dramatically over the 30 plus years I have been in medicine. Nurses no longer wear starched white dresses, white hose, and caps. When I was in training we were forbidden to wear scrub (surgical) attire outside the operating room area; now everybody wears scrubs everywhere.
The problem is that articles of clothing can carry germs. Nobody wears the same shirt day after day, but many physicians wear the same white coat for days, even weeks. One study showed that most physicians change their coats only monthly at best. That can’t be a good thing. If you want to wear your coat you should at least have enough of them in your closet so you can get them cleaned frequently.
What do I do? Although my picture on the top of the page shows me in a white coat, I don’t wear them much these days. I work in several hospitals, one of which is a university hospital, and there I wear one out of tradition, I suppose. But I change it every other day or so. At the hospital where I spend most of my time I don’t wear one anymore. As a pediatrician, I also think white coats tend to scare kids, so that’s an added reason.
December 20, 2015
Asthma is a common childhood condition. Some estimates are that around 10% of all children have it. The incidence has been steadily increasing for many years, but some recent data suggest the burden of the disease in children may have leveled off over the past couple of years. That’s encouraging, but the number of children with asthma is still huge. The best way to think of asthma is as an exaggerated reaction of the small airways in the lungs to common irritants, making them constrict and reducing airflow. These irritants include viral infections, environmental triggers, and poorly understood things intrinsic to the individual. There is a strong familial tendency toward developing asthma. Additionally, some things predispose to it, including a sedentary lifestyle and obesity.
A big push in pediatric practice over the past decade or so has been to try to keep kids with asthma out of the hospital. This can be accomplished by a good asthma care plan for the family to use when a child’s symptoms get worse. Another key component is a team approach to managing this chronic disease. Asthma is the most common preventable reason for a child to be admitted to the hospital. A recent report from the American Academy of Pediatrics shows how effective these care plans can be.
The investigators looked at 3,510 children with asthma treated over the years at Primary Children’s Hospital in Utah. The notion was to see if increased compliance with asthma control measures by the family would reduce the number of hospital admissions. That turned out to be the case, significantly so. Interestingly, one of the biggest problems for the research project was to get physicians to accept and go along with the best current evidence-based information about how to manage asthma. I’m actually not surprised by this. Asthma management has changed over the years and current best practice is not what I was taught years ago. Things change, but many physicians don’t.
The key for any parent who has a child with asthma is to have a clear understanding of exactly what to do if your child has worse breathing problems. Many visits to the hospital could be headed off if all parents had such a plan, as well as a resource person to call if the plan is not working.
December 8, 2015
I posted about this last year when I was once again wading through physician credentialing. I recently had occasion to do it again because I’m helping out a friend at a new hospital, and, if anything, the process is even worse.
Everyone wants to be sure their physician is competent and appropriately trained. The way this is done is through credentialing. A new applicant for privileges to practice at a hospital or other healthcare facility fills out an application and submits a curriculum vitae detailing when and where the applicant trained, the certifications obtained (such as specialty boards), and a work history (if any). Copies of key documents — medical degrees, residency certificates, and the like — accompany the application. The applicant also provides the names of professional references who can attest to competency. Also required are declarations that the applicant has never been fired (or asked to resign) from a medical job for competency issues. The applicant must also swear to a long list of other things: that he or she is not a drug addict (who would answer yes to that?), not a convicted felon, and has never been disciplined for questionable or illegal activity. A committee then reviews the application and grants (or not) privileges to practice medicine at that facility.
Before the committee grants privileges, however, all the information gets verified. This makes perfect sense because, regrettably, there are more than a few documented instances of people embellishing or even outright lying on their applications. I have been on enough selection committees to know that folks occasionally stretch the truth. Flagrant examples of this occasionally make the news. The job of credentialing departments is to check up on all this. Interestingly, in the example I just linked to, the guy hoodwinked all the verifiers; it was only picked up later by accident.
It gets more complicated because not just hospitals and healthcare facilities want their practitioners credentialed. All of the people who pay the bills, such as insurance companies and the government — Medicare, Medicaid — want to make sure they are paying legitimate costs to legitimate practitioners. So they have their own credentialing departments, all different in how they do things. A typical physician has to be credentialed by every single one of the payers covering every single one of his or her patients. That can mean a dozen payers or more. So, for example, besides having privileges at the hospitals at which I practice, my background is verified by all the people who pay the bills for my patients. And believe me, the requirements of all these entities are not the same and all have their own sheaf of forms to fill out and supporting documents to submit.
This situation cries out for a central clearing house for credentialing information. Some examples of this exist, such as this one, if nothing else because collecting all this information is tedious and expensive. Credentialing departments at many facilities are getting larger all the time. Credentialing is also a major industry, with overwhelmed facility credentialing staffs farming out the process to outside contractors. The problem is that, in our disorganized healthcare “system,” no facility or entity wants to surrender the right to collect their own data in their own way. Attempts to institute a more global process, at least in my experience, have simply added another layer of bureaucracy to slog through. The convenience, or even the sanity, of the physicians wrestling with this unholy mess is not their concern. For physicians like me, who practice at several hospitals in different parts of the country with little overlap in who the regional payers are, the expense and hassle of it all are large. And even when you think you’re done, you’re not: many entities require frequent updates, often meaning a whole new application. One that I deal with demands this every three months.
Okay — rant over. But what prompted this was my agreeing recently to help out some people for a few weeks at a new hospital. I’m now four months into the credentialing “process.” During that time I’ve dealt with three separate organizations, none of which communicate with each other. I’ve worn out my fax machine submitting extraneous document after document. Nearly every day my email inbox has strident demands for still something else IMMEDIATELY! If I hadn’t promised my time to people I like, I think at this point I would just say: no, I’m done — good luck.
I’ve been practicing medicine for over 35 years. For my first job I just showed up for work. People checked that I had graduated from medical school, done a residency, and passed my exams, but that was about it. I realize we physicians have to some extent brought all this on ourselves, with a few of us scamming the system over the years or just lying. I recall a case some years ago of a physician lying about a five-year gap in his work history, a gap that turned out to be because he was serving time in prison for third-degree murder. (I looked for a link to this incident but couldn’t find one — it most likely was pre-Google.)
Anyway, I think this credentialing mess has got to get better organized somehow. We need a central authority of some sort, accepted by all. The current trajectory is unsustainable. Healthcare is expensive enough, and all this adds many millions to the total costs for little benefit.
The credentialing process for physicians has become a cumbersome, chaotic, and unholy mess
November 29, 2015
The link between salt intake and high blood pressure has been known for decades. That’s why, if you have high blood pressure, your doctor will tell you to reduce your salt intake. The reason is that excess salt makes your body retain more water, and more water in your circulation means more fluid in your vascular pipes, raising the pressure in those pipes. That’s also why one of the first-line treatments for high blood pressure is a diuretic, a drug that makes your kidneys release more water into the urine.

It’s one thing to put down the salt shaker and reduce (or eliminate) your consumption of salty snacks. But there is a hidden source of salt we often don’t think about: the salt in processed foods. When you cook raw food, you control how much salt you add. But you are not in control of how much salt is in processed foods, such as prepared items from the frozen food section or out of a box. Food companies add quite a bit of salt to these because the perception is that doing so makes the foods tastier. Do they need to do that? How different would foods taste if they didn’t? A recent study gives some answers to those questions.
The research was published in a recent edition of the British Medical Journal. The investigators used a national survey database to look at changes in the incidence of high blood pressure, stroke, and heart attacks over the past decade and found improvements in all three. There are several possible explanations, including better treatment for these conditions. However, the investigators also had access to urinary salt values in many patients, and the improvement, particularly in blood pressure, correlated with lower salt consumption. Of note, the amount of salt in processed foods has been gradually reduced in the United Kingdom since 2003, and overall salt consumption fell by 15%. It is reasonable to conclude that at least part of the reduction in cardiovascular disease they observed was due to this salt reduction.
Although it would be difficult to accomplish, I don’t see any reason such improvements couldn’t be achieved in the USA. From everything we know, high blood pressure in particular is a long-term killer, and most people who have it don’t know it, because blood pressure must become very high before it causes any symptoms that would bring a person to the doctor.
October 17, 2015
A recent article in the journal Pediatrics is both intriguing and sobering. It is intriguing because it lays bare something we don’t talk much about or teach our students about; it is sobering because it describes the potential harm that can come from it, harm I have personally witnessed. The issue is overdiagnosis, and it’s related to our relentless quest to explain everything.
Overdiagnosis is the term the authors use to describe a situation in which a true abnormality is discovered, but detection of that abnormality does not benefit the patient. It is not the same as misdiagnosis, meaning the diagnosis is inaccurate. It is also distinct from overtreatment or overuse, in which excessive treatment is given to patients for both correct and incorrect diagnoses. Overdiagnosis means finding something which, although abnormal, doesn’t help the patient in any way.
Some of the most controversial, and most compelling, examples of overdiagnosis come from cancer research. Two of the most common cancers, prostate cancer in men and breast cancer in women, run smack into the issue. It is generally true that early diagnosis and treatment of cancer is better than late diagnosis and treatment . . . usually, but not always. A problem can arise when we treat screening tests for early cancer as a mandate to treat aggressively whatever we find. The PSA (prostate-specific antigen) blood test was developed when researchers noticed its value went up in men with prostate cancer. From that observation it was a short, but significant, leap to using the test in men not known to have cancer to screen for its presence. The problem is at least two-fold: there is overlap between cancer and normal, and many small prostate cancers do not progress quickly. Since the treatment for prostate cancer is seriously invasive and has several bad side effects, the therapy may be worse than the disease, especially in older men who will likely die of something else first. You can read more about the PSA controversy here. There are similar questions about screening for breast cancer; you can read a nice summary here. The controversy has caused fierce debates.
Children don’t get cancer very often, but there are plenty of examples of overdiagnosis causing mischief with them, too. The linked article above describes several common ones. A usual scenario is getting a test that, even if abnormal, will not lead to any meaningful effect on the child’s health. Additionally, an abnormal test then typically leads to getting other tests, which can lead to other tests, and so on down the rabbit hole. I have seen that many times. As the authors state:
Medical tests are more accessible, rapid, and frequently consumed than ever before. Discussions between patients [or their parents] and providers tend to focus on the potential benefits of testing, with less regard for the potential harms. Yet a single test can give rise to a cascade of events, many of which have the potential to harm.
This is something of a new frontier in medicine, and the issue grows larger as the number of diagnostic tests available to us mushrooms every year. For a parent, a good rule of thumb is to ask the doctor not just about the benefits of a proposed test, but also about its risks. Most importantly, ask what the doctor will actually do with the result. We are prone to think more information is always a good thing, but that is clearly not the case. And never, ever get a test just because you (or your doctor) are merely curious.
October 7, 2015
California has recently ended most exemptions from childhood vaccinations. Only exemptions for medical conditions remain, and such exemptions must be certified by a physician. The requirement applies to children attending elementary or secondary school, as well as day-care centers; home schooled children are not included. A recent editorial in the New England Journal of Medicine reviews the politics behind passage of the new law.
Clearly the recent outbreak of measles in the state played a large role in convincing the legislature to pass the law. That, plus the progressive fall in the percentage of children vaccinated. Epidemiological research has shown that when the vaccinated percentage of the population falls below a certain number, what is termed herd immunity no longer functions. The concept is that, if the great majority of the population is immune to a disease, the few who are not are protected by the overall rarity of the infection. The particular threshold for herd immunity varies with the disease, but it is usually in the neighborhood of 80-95%: the more infectious the disease, the higher the percentage of immune people must be to prevent spread. If sufficient herd immunity can be maintained for long enough, the disease can actually be eradicated. Thus far only smallpox and rinderpest (a disease of cattle) have been eliminated in this way.

Perhaps the purest example of the importance of herd immunity is whooping cough, or pertussis. The people most prone to contract severe, even lethal infection are small infants. Yet they cannot begin to get the vaccine (it takes several doses) until they are several months old because it doesn’t work before that age. They are entirely dependent upon not encountering older persons who have the disease.
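For readers who like to see the arithmetic behind that 80-95% figure: the standard epidemiological rule of thumb is that the herd immunity threshold is 1 − 1/R0, where R0 (the basic reproduction number) is the average number of people one infected person infects in a fully susceptible population. A minimal sketch of that calculation follows; the R0 values used are commonly cited rough estimates for illustration, not figures from the editorial.

```python
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune to halt spread.

    From the standard result: transmission dies out when the effective
    reproduction number r0 * (1 - p) falls below 1, i.e. when the
    immune fraction p exceeds 1 - 1/r0.
    """
    if r0 <= 1:
        return 0.0  # an outbreak cannot sustain itself even with no immunity
    return 1 - 1 / r0

# Commonly cited rough R0 estimates (illustrative assumptions only)
for disease, r0 in [("seasonal influenza", 1.5), ("pertussis", 15), ("measles", 15)]:
    print(f"{disease}: immune fraction needed = {herd_immunity_threshold(r0):.0%}")
```

With R0 around 15, the threshold comes out near 93%, which is why highly infectious diseases like measles and pertussis sit at the top of the 80-95% range the post describes, and why even a modest drop in vaccination rates can let them spread again.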
In my view, vaccine requirements are lawful applications of the state’s interest in public health. Adults have a right to do whatever they like to their bodies (although not their children’s) as long as their actions don’t affect others. In the case of vaccines, not participating in maintenance of herd immunity has significant and potentially serious effects on the health of others.