Wednesday, August 16, 2017

How about ranking how well hospitals serve their communities?

I'm sure that many fabulously talented, skillful, compassionate physicians work at the Cleveland Clinic. If I lived in Cleveland or a nearby town and suffered from a rare or life-threatening disease, I would strongly consider going to the Clinic for specialty care. Maybe I would even work there. But ranking the Cleveland Clinic the #2 hospital in America, as U.S. News and World Report did last week, is outrageous. (Full disclosure: I once blogged for U.S. News.)

My use of the term "outrageous" has little to do with deficiencies in U.S. News's ranking methodology, whose past versions have been criticized for relying more on subjective reputation than on objective data about safety and quality, and for correlating poorly with other ratings such as those on Medicare.gov's Hospital Compare website. As Elisabeth Rosenthal has previously reported, hospital rankings are mostly about hype, and it's questionable how much impact they really have on patient choices when every academic or community hospital can probably find at least one high-ranked specialty or service line to brag about.

No, I think this top-notch ranking is outrageous because it only accounts for the patient care that the hospital and its affiliated practices provide, rather than including the health status of the surrounding community - which is awful. Although the Clinic may provide excellent care to patients who walk or are wheeled through its doors, Dan Diamond's recent article in Politico sharply contrasted the overflowing wealth of the medical institution with the barren, crumbling neighborhoods that surround it:

Yes, the hospital is the pride of Cleveland, and its leaders readily tout reports that the Clinic delivers billions of dollars in value to the state. ... But it’s also a tax-exempt organization that, like many hospitals, fought to preserve its not-for-profit status in the years leading up to the Affordable Care Act. As a result, it doesn’t have to pay tens of millions of dollars in taxes, but it is supposed to fulfill a loosely defined commitment to reinvest in its community. That community is poor, unhealthy and — in the words of one national neighborhood-ranking website — “barely livable.”

Hospitals and health systems can't be expected to shoulder the entire burden of improving a community's economic prospects, and many hospitals were originally located in poor neighborhoods because that's where more sick people live. But according to Diamond, the Cleveland Clinic's financial figures indicate that it hasn't been doing nearly enough for the community to offset the tax benefits it receives:

[The Clinic's] hospital system cleared $514 million in profit last year and $2.7 billion the past four years, when accounting for investments and other sources of revenue. And since the ACA coverage expansion took full effect, the Clinic’s been able to spend a lot less to cover uninsured patients; its annual charity care costs fell by $106 million from 2013 to 2015. But its annual community benefit spending only went up $41 million across the same two-year period, raising a $65 million question: Did the Clinic just pocket the difference in savings?

“I think we have more than fulfilled our duties,” [Clinic CEO Toby] Cosgrove said in response, pointing to the system’s total community benefit spending, which was $693 million in 2015. The majority of that spending, however, wasn’t free care or direct investments in community health; about $500 million, or more than 70 percent, represented either Medicaid underpayments — the gap between the Clinic’s official rate, which is usually higher than the rate insurers pay, and what Medicaid pays — or Clinic staffers’ own medical education.
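For readers who want to follow the arithmetic, here is a minimal sketch (in Python) that reproduces the two calculations implied by the figures quoted above. Every number comes directly from the Politico reporting; the script itself is only illustrative.

    # Quick check of the arithmetic behind the figures quoted above.
    # All dollar amounts are the ones reported by Politico, in millions.
    charity_care_drop = 106      # decline in annual charity care costs, 2013 to 2015
    community_benefit_rise = 41  # rise in annual community benefit spending, same period
    print(charity_care_drop - community_benefit_rise)  # 65 -> the "$65 million question"

    total_benefit_2015 = 693     # total reported community benefit spending in 2015
    indirect_share = 500         # Medicaid underpayments plus staff medical education
    print(round(indirect_share / total_benefit_2015 * 100))  # ~72, i.e. "more than 70 percent"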


It's not that the Cleveland Clinic is blind to the health crisis occurring outside of its doors. Like all nonprofit hospitals, it is required to perform a community health needs assessment (CHNA) every three years. The 189-page document it issued in 2016 provides a dismal accounting of all of the ways in which its local neighborhoods have worse indicators of health than other counties in Ohio and the vast majority of the nation. When Diamond suggested that the Clinic consider increasing its investments in population health, "where fixing community problems like lead exposure and food deserts are viewed as equally important as treating heart attacks," CEO Cosgrove sounded doubtful about what his hospital could or should do about these problems:

"That’s a good direction to go," he allowed. “But how much can we do in population health? We don’t get paid for this, we’re not trained to do this, and people are increasingly looking to us to deal with these sorts of situations,” Cosgrove added. “I say that society as a whole has to look at these circumstances and they can’t depend on just us.”

Judging from readers' comments posted at the end of the article, Cosgrove is far from alone in thinking that it isn't the place of medical institutions to solve the problems of distressed neighborhoods. Physicians and health executives have long believed that the responsibility of medicine is solely to provide health care, not social services or economic benefits outside of employment. But it's 2017, not 1967. As Susan Heavey reported for the Association of Health Care Journalists, in many parts of the U.S. health professionals have successfully partnered with advocates, local officials, and housing developers to "reinvent neighborhoods with [an] eye on health." If the leaders of the Cleveland Clinic wanted a road map for how to help rebuild the surrounding community, they could review one of 10 recent case studies posted by the Build Healthy Places Network, an organization whose mission "is to catalyze and support collaboration across the health and community development sectors, together working to improve low-income communities and the lives of people living in them."

On a national level, instead of allowing CHNAs to gather dust on a shelf (or the online equivalent), health policymakers could use them to allocate public funding for graduate medical education where it is needed most, rather than where it currently goes. As Dr. Melanie Raffoul, one of my past Policy Fellows, wrote recently in an analysis of Texas CHNAs and regional health partnership plans in the Journal of the American Board of Family Medicine:

Many [CHNAs] mentioned problems such as “low literacy,” “food deserts,” or “high levels of teen pregnancy.” Many of these concerns cannot be meaningfully addressed by hospitals, but they can be tackled through increased access to primary care and mental health services, and residency training sites are one way to provide this to the community. This should increase institutions' thinking about their role in larger community strategies to tackle community issues that affect health. Workforce gaps similarly need to be seen in this context—a community resource meant to resolve community needs. ... Community assessments could help refocus the use of publicly funded physician training as part of a broader hospital-community partnership for resolving health needs.

I began by stating that I didn't think the Cleveland Clinic deserved to be ranked the #2 hospital in the nation, but since U.S. News and World Report has already put it on that pedestal, the Clinic should live up to it, not only by providing the best health care for its patients but also by getting serious about improving the health of its community.

Thursday, August 10, 2017

On liberty and health reform in America

Since 2007, I've participated in more than a dozen American Civil War battlefield tours sponsored by the Smithsonian Associates. Even though a handful of Chinese Americans fought on both sides of the Civil War, none of my ancestors did, and friends and family are often perplexed by my endless fascination with this conflict. In Civil War museums and sites thronged by overwhelmingly white tourists, I'm even more of an oddity than the rare African American. This realization got me wondering why so few African Americans are passionate about the history of the war that freed so many of their ancestors from slavery. To Atlantic columnist and fellow Civil War buff Ta-Nehisi Coates, this antipathy stems from the efforts of white Americans over the past 150 years to write them out of the story:

For my community, the message has long been clear: the Civil War is a story for white people—acted out by white people, on white people’s terms—in which blacks feature strictly as stock characters and props. We are invited to listen, but never to truly join the narrative, for to speak as the slave would, to say that we are as happy for the Civil War as most Americans are for the Revolutionary War, is to rupture the narrative. Having been tendered such a conditional invitation, we have elected—as most sane people would—to decline.

As reflected in the Presidential election of 2016, economic and racial divisions keep resurfacing, with the perennial Republican versus Democratic contest portrayed in the media as a battle between the "rich" and the "poor," or between white citizens and those of every other color. But these stereotypes ignore the inconvenient fact that plenty of low-income rural whites who bear no racial grudges, along with a few minority voters in heavily Democratic states and the District of Columbia, dependably vote Republican.


In his most recent book, subtitled "Why the Civil War Still Matters," historian James McPherson shed some light on this present-day paradox by explaining that liberty meant two different things to Southern and Northern leaders in 1861. To white Democrats in the pre-Civil War South (slaveholders or not - and the vast majority were not), liberty meant "freedom from" interference by a distant federal government. Historical figures such as Confederate general Robert E. Lee traced their cause back to the Virginian Founding Fathers and slaveholders George Washington and Thomas Jefferson, whose Revolutionary War was fought to break away from a distant British ruler whose arbitrary actions offended colonial sensibilities.

On the other hand, the Republican Party in the North viewed liberty as "freedom to," arguing that it's hard to achieve anything noteworthy when one is penniless, starving, or a slave. Even though the North won the Civil War, achieving full citizenship for African Americans took nearly a century after passage of the Fifteenth Amendment to the U.S. Constitution. Only after the hard-won passage of the 1965 Voting Rights Act, which outlawed literacy tests and gave the federal government the power to end various discriminatory practices that prevented most Black citizens in Southern states from registering to vote, did African Americans finally gain freedom to participate in the political process.

The more recent history of how and why African Americans turned away from the party of Lincoln to embrace the party of their former oppressors is too long to recount here, but these differing views of personal liberty - "freedom from" versus "freedom to" - go a long way toward explaining the two political parties' diametrically opposed views of the Affordable Care Act. For the most part, Republican governors have resisted health insurance exchanges and rejected Medicaid expansions because they and their constituents have perceived these provisions of the law as encroachments on freedom by the Washington bureaucracy, while Democratic governors have recognized that it's hard to have freedom to achieve personal success if one is too ill, or too worried about the financial implications of unexpected illness or injury, to plan confidently for the future.

Not only can you find the roots of modern medicine in the American Civil War, but also the roots of our current national health policy debate.

**

This post first appeared on Common Sense Family Doctor on October 2, 2015.

Tuesday, August 1, 2017

Pushing back against prescription drug price gouging

Sometimes missed in the headlines about the stratospheric costs of new specialty drugs is the contribution that price hikes for older, established drugs, including generics, make to rising prescription spending. In an editorial in the July 1 issue of American Family Physician, Dr. Allen Shaughnessy described several situations that drug manufacturers exploit to raise prices excessively (also known as price gouging):

- Limited to no alternatives
- Older products with few producers
- Same product, different use
- Single producer, no generic available
- Evergreening (minor changes to gain patent exclusivity)
- Pay for delay (paying generics manufacturers not to sell a generic version of an off-patent drug)

In the United States, Dr. Shaughnessy observed, "The biggest driver of the cost hike is, simply put, that pharmaceutical companies can charge whatever they want. Drugs cost what the market will bear. Many medications could be a lot less expensive, but because an insurance company, the government, or a patient is willing to pay the asking price, there is no push to lower the costs."

Price gouging has become such a problem for patients and insurers that the Maryland General Assembly recently passed legislation to discourage price gouging on essential off-patent or generic drugs. As explained by Drs. Jeremy Greene and William Padula in the New England Journal of Medicine:

The law authorizes Maryland’s attorney general to prosecute firms that engage in price increases in noncompetitive off-patent–drug markets that are dramatic enough to “shock the conscience” of any reasonable consumer. ... To establish that a manufacturer or distributor engaged in price gouging, the attorney general will need to show that the price increases are not only unjustified but also legally unconscionable. ... A relationship between buyer and seller is deemed unconscionable if it is based on terms so egregiously unjust and so clearly tilted toward the party with superior bargaining power that no reasonable person would freely agree to them. This standard includes cases in which the seller vastly inflates the price of goods.

The scope of the Maryland law is limited. It restricts action to off-patent drugs that are being produced by three or fewer manufacturers, and requires that manufacturers be given an opportunity to justify a price increase before legal proceedings are initiated. It is too early to know if the law will be effective against price gouging, or if it will be copied by other states that are also struggling to contain prescription drug cost increases in their Medicaid programs.

In the meantime, what can family physicians do to help patients lower their medication costs? In a 2016 editorial on the why and how of high-value prescribing, Dr. Steven Brown recommended five sound strategies: be a healthy skeptic, and be cautious when prescribing new drugs; apply STEPS and know drug prices; use generic medications and compare value; restrict access to pharmaceutical representatives and office samples; and prescribe conservatively.

**

This post first appeared on the AFP Community Blog.

Tuesday, July 25, 2017

Community health workers can complement primary care

Several years ago, I attended an academic meeting where the subject of community health workers came up in a discussion. Earlier that year I had read about Vermont's ambitious blueprint for medical homes integrated with community health teams, so I volunteered that we needed fewer specialists and more trained laypersons with ties to their communities to implement prevention strategies. Another physician objected that while community health workers might work well in lower-income countries like India, we didn't need to deploy them in America, where people already know from their doctors that they should eat healthy foods, watch their weight, exercise, and not smoke, and don't need others nagging them about it.

But should community health workers be viewed merely as extensions of medical institutions when large proportions of the population will not visit a doctor in a given year? An alternative model, wrote Health Affairs editor Alan Weil,

views CHWs as part of the communities in which they work. The roles of community health workers are defined by the community and CHWs through a process of community engagement. CHWs are valued for their contribution to community health, not for the savings they generate for health plans or providers. CHWs are embedded in the community, not in a clinician’s office or hospital. Advocacy is required to effect a transfer of resources out of clinical care into the community.

On the other hand, a New England Journal of Medicine commentary observed that the absence of connections between community health workers and family physicians can leave them working at cross-purposes:

CHW services are commonly delivered by community-based organizations that are not integrated with the health care system — for example, church-based programs offering blood-pressure screening and education. Without formal linkages to clinical providers, these programs face many of the same limitations — and may produce the same disappointing results — as stand-alone disease-management programs. CHWs cannot work with clinicians to address potential health challenges in real time, and clinicians can't shift nonclinical tasks to more cost-effective CHWs. Indeed, clinicians often don't recognize the value of CHWs because they don't work with them.


How can we bridge this gap? A review in the Annals of Family Medicine provided a list of structure, process, and outcome factors for patient-centered medical homes to consider when partnering with peer supporters (a.k.a. community health workers).

For complex patients with multiple health conditions, care coordination is a key role in which community health workers could prove more successful and cost-effective than expensive programs led by registered nurses or physicians. Reviewing the past decade of Medicare demonstration projects, researchers from the Robert Graham Center drew five lessons for future coordinated care models:

(1) Minimize expenses by sharing resources and avoiding cost-ineffective interventions
(2) Concentrate on high utilizers
(3) Foster relationships with both providers and patients
(4) Track patients across the medical neighborhood in real time
(5) Extend rather than duplicate the efforts of primary care practices

Although optimal integration between the roles of community health workers and primary care teams is easier to describe than to achieve, moving both groups toward the common goal of communities of solution will be essential to protecting the health of the whole population.

**

This post first appeared on Common Sense Family Doctor on September 11, 2015.

Thursday, July 20, 2017

Unequal treatment: disparities in how physicians are paid

As a family physician and medical school faculty member, I'm naturally a big booster of primary care. America needs more generalist physicians, not fewer, and much of my professional activity involves encouraging medical students to choose family medicine, or, failing that, general pediatrics or general internal medicine. But it's an uphill battle, and I fear that it's one that can't be won without major structural changes in the way that generalist physicians are paid and rewarded for their work.

In a recent Medicine and Society piece in the New England Journal of Medicine, Dr. Louise Aronson (a geriatrician) described visits with two of her doctors, a general internist and an orthopedist. The primary care physician worked in a no-frills clinic, often ran behind schedule, and devoted much of the visit and additional post-visit time to electronic documentation. The orthopedist worked in a newer, nicer office with an army of medical and physician assistants; generally ran on time; and was accompanied by a scribe who had completed most of the computer work by the end of the visit. Although there are undoubtedly a few family doctors with income parity to lower-earning orthopedists, according to Medscape's 2017 Physician Compensation Report, the average orthopedist makes $489,000 per year, while an average general internist or family physician makes around $215,000 per year. Here's what Dr. Aronson had to say about that:

It would be hard, even morally suspect, to argue that the salary disparities among medical specialties in U.S. medicine are the most pressing inequities of our health care system. Yet in many ways, they are representative of the biases underpinning health care’s often inefficient, always expensive, and sometimes nonsensical care — biases that harm patients and undermine medicine’s ability to achieve its primary mission. ...

Those structural inequalities might lead a Martian who landed in the United States today and saw our health care system to conclude that we prefer treatment to prevention, that our bones and skin matter more to us than our children or sanity, that patient benefit is not a prerequisite for approved use of treatments or procedures, that drugs always work better than exercise, that doctors treat computers not people, that death is avoidable with the right care, that hospitals are the best place to be sick, and that we value avoiding wrinkles or warts more than we do hearing, chewing, or walking.


Medical students are highly intelligent, motivated young men and women who have gotten to where they are by making rational decisions. For the past few decades, as the burden of health care documentation has grown heavier and the income gap between primary care physicians and subspecialists has widened, they have been making a rational choice to flee generalist careers in ever-larger numbers.

The cause of these salary disparities - and the reason that more and more primary care physicians are choosing to cast off the health insurance model entirely - is a task-based payment system that inherently values cutting and suturing more than thinking. I receive twice as much money from an insurer when I spend a few minutes freezing a wart as when I spend half an hour counseling a patient with several chronic medical conditions. That's thanks to the Resource-Based Relative Value Scale, a system mandated by Congress and implemented by Medicare in 1992 in an attempt to slow the growth of spending on physician services. Every conceivable service that a physician can provide is assigned a number of relative value units (RVUs), which directly determines how much Medicare (and indirectly, private insurance companies) will pay for that service.
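To make the RVU arithmetic concrete, here is a minimal sketch of how a fee-schedule payment is calculated. The structure of the formula (work, practice-expense, and malpractice RVUs, each geographically adjusted, then multiplied by a dollar conversion factor) follows Medicare's published method, but the specific RVU values, adjustments, and service times below are made-up placeholders for illustration, not actual fee-schedule figures.

    # Sketch of the Medicare physician fee schedule payment formula:
    #   payment = (work RVU x work GPCI + practice-expense RVU x PE GPCI
    #              + malpractice RVU x MP GPCI) x conversion factor
    # All RVUs, GPCIs, and times below are illustrative placeholders.

    def fee_schedule_payment(work_rvu, pe_rvu, mp_rvu,
                             work_gpci=1.0, pe_gpci=1.0, mp_gpci=1.0,
                             conversion_factor=35.89):  # roughly the 2017 conversion factor
        """Return the allowed payment (in dollars) for one service."""
        total_rvu = work_rvu * work_gpci + pe_rvu * pe_gpci + mp_rvu * mp_gpci
        return total_rvu * conversion_factor

    # Hypothetical comparison: a five-minute procedure vs. a 30-minute counseling visit.
    procedure = fee_schedule_payment(work_rvu=0.7, pe_rvu=1.6, mp_rvu=0.1)
    office_visit = fee_schedule_payment(work_rvu=1.5, pe_rvu=1.0, mp_rvu=0.1)

    print(f"Procedure:    ${procedure:.2f} for ~5 minutes  (${procedure / 5:.2f}/min)")
    print(f"Office visit: ${office_visit:.2f} for ~30 minutes (${office_visit / 30:.2f}/min)")

Even when the total payments look similar, the per-minute return on a quick procedure dwarfs that of a long cognitive visit, which is the disparity described above.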

As new types of services are developed and older ones modified, the RVUs need to be updated periodically. Since the Centers for Medicare and Medicaid Services (CMS) chose not to develop the in-house expertise to do this itself, it farms out the updating task to the Relative Value Scale Update Committee (RUC), a 31-member advisory body convened by the American Medical Association (AMA) whose members are nominated by various medical specialty societies. Here is where the fix is in. Only 5 of the 31 members represent primary care specialties, and over time, that lack of clout has resulted in an undervaluing of Evaluation and Management (E/M) and preventive services (the bulk of services provided by generalist physicians) compared to procedural services. Although an official AMA fact sheet pointed out that some RUC actions have increased payments for primary care, a 2013 Washington Monthly article countered that these small changes did little to alter the "special deal" that specialists receive:

In 2007, the RUC did finally vote to increase the RVUs for office visits, redistributing roughly $4 billion from different procedures to do so. But that was only a modest counter to the broader directionality of the RUC, which spends the vast majority of its time reviewing, updating—and often increasing—the RVUs for specific, technical procedures that make specialists the most money. Because of the direct relationship between what Medicare pays and what private insurers pay, that has the result of driving up health care spending in America—a dynamic that will continue as long as specialists dominate the committee.


We teach our medical students to recognize that inequities in where patients live, work and play are far more powerful in determining health outcomes than the health care we provide. A child living in a middle-class suburb has built-in structural advantages over a child living in a poor urban neighborhood or rural community, due to disparities in economic and social resources. The same goes for how physicians are paid in the U.S. Until the RUC is dramatically reformed or replaced with an impartial panel, the $3 trillion that we spend on health care annually (20 percent of which pays for physician services) will continue to produce shorter lives and poorer health compared to other similarly developed nations.

Monday, July 17, 2017

Self-monitoring doesn't improve control of type 2 diabetes

"Have you been checking your sugars?" I routinely ask this question at office visits involving a patient with type 2 diabetes, whether the patient is recently diagnosed or has been living with the disease for many years. However, the necessity of blood glucose self-monitoring in patients with type 2 diabetes not using insulin has been in doubt for several years.

A 2012 Cochrane for Clinicians published in American Family Physician concluded that "self-monitoring of blood glucose does not improve health-related quality of life, general well-being, or patient satisfaction" (patient-oriented outcomes) and did not even result in lower hemoglobin A1C levels (a disease-oriented outcome) after 12 months. In their article "Top 20 Research Studies of 2012 for Primary Care Physicians," Drs. Mark Ebell and Roland Grad discussed a meta-analysis of individual patient data from 6 randomized trials that found self-monitoring improved A1C levels by a modest 0.25 percentage points after 6 and 12 months of use, with no differences observed in subgroups. Based on these findings, the Society of General Internal Medicine recommended against daily home glucose testing in patients not using insulin as part of the Choosing Wisely campaign.

Still, the relatively small number of participants in trials of glucose self-monitoring, and the persistent belief that it could be useful for some patients (e.g., recent type 2 diabetes diagnosis, medication nonadherence, changes in diet or exercise regimen), have meant that many physicians continue to encourage self-monitoring in clinical practice. In a 2016 consensus statement, the American College of Endocrinology stated that in patients with type 2 diabetes and low risk of hypoglycemia, "initial periodic structured glucose monitoring (e.g., at meals and bedtime) may be useful in helping patients understand effectiveness of medical nutrition therapy / lifestyle therapy."

In a recently published pragmatic trial conducted in 15 primary care practices in North Carolina, Dr. Laura Young and colleagues enrolled 450 patients with type 2 non-insulin-treated diabetes with A1C levels between 6.5% and 9.5% and randomized them to no self-monitoring, once-daily self-monitoring, or once-daily self-monitoring with automated, tailored patient feedback delivered via the glucose meter. Notably, about one-third of participants were using sulfonylureas at baseline. After 12 months, there were no significant differences in A1C levels, health-related quality of life, hypoglycemia frequency, health care utilization, or insulin initiation. This study provided further evidence that although glucose self-monitoring may make intuitive sense, it improves neither disease-oriented nor patient-oriented health outcomes in patients with type 2 diabetes not using insulin. So why are so many clinicians still encouraging patients to do it?

**

This post first appeared on the AFP Community Blog.