NCQA's Quality Compass Points to Plan Differences
MANAGED CARE November 1997. ©1997 Stezzi Communications
The first comparative report of health plan performance produces surprising revelations — and a starting point for assessing a plan's quality. But some people caution not to read too much into the numbers alone.
In the nation's most competitive managed care markets, health plans claw one another for members by touting their quality of care. One plan after the next proclaims itself better than the rest. And now, we find that some are right — and some just have good ad agencies.
The National Committee for Quality Assurance released figures in October revealing wide differences among plans in terms of performance on Health Plan Employer Data and Information Set (HEDIS) measures. Published in NCQA's Quality Compass 1997, the numbers suggest that however forcefully managed care touts preventive treatment, some plans do a demonstrably better job of execution than others.
Of course, preventive care is not all that employers demand in plans. For that reason, Quality Compass may not give purchasers all they need to make informed decisions. For instance, the relatively small populations of some plans preclude statistically valid outcomes comparisons.
Also, users evaluating a plan on only one or two measures risk developing a distorted picture of its value. In that sense, Quality Compass is a powerful tool not only for use, but for misuse. That matters, because its data have significant industry and public policy implications. Further, NCQA expects that within two years, 60 percent of employers will use HEDIS to make health plan decisions.
Quality Compass contains HEDIS 3.0 data reported by 329 managed care plans. A summary report released publicly, The State of Managed Care Quality, charts member satisfaction and plan performance on eight HEDIS measures (see chart on page 52A). Employers may purchase the full Quality Compass, listing data from HEDIS and NCQA's health plan accreditation program, to track plans on 51 measures including preventive care, utilization and customer satisfaction.
"This is the first time we have been able to share standardized information," says NCQA Executive Vice President Cary Sennett, M.D., Ph.D. "We found significant variation across plans."
That variance may be most starkly illustrated by Quality Compass statistics on beta blocker use. Studies demonstrate that post-discharge beta blocker therapy can prevent second heart attacks. Blue Cross and Blue Shield of Maine ranked best, employing beta blocker treatment in 100 percent of patients for whom it was not contraindicated. The lowest-scoring plan (which chose not to reveal its identity) administered beta blockers 13.9 percent of the time. If industrywide performance were 85 percent on average instead of the current 62 percent, say NCQA officials, 1,600 cardiac deaths could be avoided annually.
"If managed care organizations are aggressive about improving quality of care — inexpensively — then this is something they can easily do," observes Stephen Soumerai, Sc.D., a researcher at Harvard Medical School and Harvard Pilgrim Health Plan in Cambridge, Mass. Soumerai's study demonstrating beta blocker effectiveness in people over age 75, published in the Journal of the American Medical Association [JAMA. 1997;277:115–121], prompted NCQA to change its beta blocker measurement to include the elderly.
Harvard Community Health Plan and Pilgrim Health Care, both part of Harvard Pilgrim, scored high on beta blocker use — 90.7 and 87.4 percent respectively. Soumerai says Harvard Pilgrim cardiologists "make it their job to be sure primary care doctors know about ways in which beta blockers can be prescribed. Good integration between specialists and generalists may explain why some HMOs have high beta blocker utilization."
Prevention — a hallmark of managed care — is at the core of seven measures in The State of Managed Care Quality. By the numbers, most plans outperform fee-for-service medicine; on six of the seven measures, the average of all managed care plans exceeds the fee-for-service benchmark (based on published clinical studies). For instance, 84.5 percent of maternity patients in managed care saw a doctor during the first trimester, compared with 76 percent of fee-for-service patients.
But the data also suggest that many plans don't keep up with fee-for-service care. On the prenatal care measure, at least 40 plans scored below the fee-for-service benchmark, some as low as 40 percent.
Beyond raw numbers
Some argue that not all figures are what they seem. A plan's low score on any given measure may reflect quality of care, or it may be an artifact of technicalities, such as how the data were collected.
Group Health Cooperative of Puget Sound's two HMO networks, Group Health Northwest, in eastern Washington, and Group Health Cooperative, in the western half of the state, ranked 95.0 and 83.9 percent respectively on prenatal care. But GHC's Options Health Care point-of-service plan — whose members utilize the same two networks — clocked in at 55.5 percent.
Peter Frawley, GHC's manager of external performance, says there simply wasn't enough time to review Options members' charts to meet NCQA's reporting deadline. "That meant visiting physicians across eastern Washington, a mostly rural area, and pulling charts. Because of this and because of complexities in integrating Group Health Northwest data, we weren't able to look at enough charts in time to move the rate up."
Other plans learned a hard lesson: No matter how good your care, how you document it counts. Pat Scheer, accreditation process manager for Springfield, Mass.-based Health New England, says her HMO's 59.3-percent prenatal care rating reflects an NCQA requirement that when data are based on medical record review, those documents must be signed and dated by a physician.
"Our records show that first trimester visits occurred in 90 percent of cases. But we could not count them all because dates were not exactly as required — maybe they said '1/97,' as opposed to '1/1/97' — or because a doctor forgot to sign the record. The rate and what occurred differ because in reporting data, we did what NCQA required."
Others believe NCQA's efforts are a good start, but more is needed. Gerri Dallek, health policy director for the consumer group Families USA, argues that "we need to look at how members with chronic needs fare in managed care. It's easy to provide good care to healthy people. We need to measure outcomes for people with chronic, disabling conditions."
Teri Ferguson, manager of health plans at Danbury, Conn.-based Union Carbide, looks for information about utilization, preventive care and case management when evaluating plans. To that end, she believes, Quality Compass can be useful to her. But she's also very interested in outcomes data. For that, she will have to continue to rely on internal efforts.
"That information is hard to come by. We have a team reviewing outcomes data that we do have, while also trying to identify other data we can use to measure plan outcomes," says Ferguson.
Like HEDIS itself, Quality Compass will be fine-tuned. While no single tool can meet every purchaser's needs, Quality Compass appears well on its way to becoming a standard for evaluating plans.
Report card unveils good, bad, ugly
The State of Managed Care Quality, NCQA's summary of Quality Compass data, reveals marked differences among managed care plans. When it comes to performance on seven HEDIS preventive measures, some plans report 100-percent compliance, while others' scores fall below rates for fee-for-service medicine. Fee-for-service calculations are based on published clinical studies. Figures are rounded to the nearest whole number.
In prevention, New England leads the nation
Managed care plans in New England, on average, score consistently high when rated on HEDIS clinical measures. In fact, New England boasts the highest average regional score for six of the seven preventive measures reported in The State of Managed Care Quality. At the other end of the spectrum, the South Central United States had the lowest average for six of the seven. Figures below are a composite score of all plans in a region; local culture and public policy can influence individual plan performance.
In this chart, the bars indicate the percentage by which each region exceeds or falls below the national average.
Who's happy with plans?
Overall plan member satisfaction was highest in New England. NCQA examined factors including choice, service and access.