HOW SURVEYS ANSWER A KEY QUESTION: Are Consumers Satisfied With Managed Care?


The recent profusion of health care public opinion surveys hasn’t quieted debate over the public’s essential level of satisfaction with managed care. Different headlines trumpet different conclusions.

As a health care survey professional, I’d like to suggest some ways to make sense of opinion surveys that appear to conflict, and to argue that they in fact convey certain messages in common. Bear in mind that I’m not a disinterested witness. Of six major studies that have recently sought to compare patient satisfaction in managed care and fee-for-service medicine, I worked on one of them myself: a survey by the Harvard School of Public Health and Louis Harris and Associates for the Robert Wood Johnson Foundation in Princeton, N.J.

Our survey, which I’ll call number 1, was one of three that reported, in some categories, a significantly higher level of problems with managed care than with fee-for-service medicine. The others in this “negative” group (oversimplifying for categorization’s sake) were a study done in Boston, Los Angeles and Miami by Louis Harris and Associates for the Commonwealth Fund (2) and a survey of Californians made by the Los Angeles Times (3).

There was better news for managed care organizations in three other studies: a survey of Minnesotans by Maritz Marketing Research for the Minnesota Health Data Institute (4); a review of 40 opinion polls conducted by six survey organizations for the Coalition for Medicare Choices (5); and an analysis of data from the National Research Corp. published by the Group Health Association of America (6). These three “positive” reports appeared to indicate that on the whole, managed care patients were at least as satisfied as fee-for-service patients. (For a detailed breakdown of these six studies, see the table below.)

The easiest conclusion to draw in looking at nearly any survey of patients in the United States about their medical care is that the vast majority are satisfied with the care they get. And overall satisfaction doesn’t change much with the type of coverage people have: fee-for-service, managed care, Medicare or even no insurance at all.

Though useful in some ways, an overall measure of satisfaction (“In general, how satisfied are you with the medical services used by your family, or with your health plan, in the past year?”) is for other purposes too broad a brush. That’s why it’s probably a good idea to think of satisfaction with medical care as something that is made up of several component parts, from ease of making an appointment to comfort level with providers to availability of tests and treatments. It’s not hard, for example, to imagine a physician’s office where someone gets along fine with the doctor but thinks the secretary or nurse doesn’t listen.

It is in such details that, not surprisingly, we start to see more variation in people’s attitudes about their health care and health insurance. The major differences among our six studies stem from the different ways researchers approach the issue of patient satisfaction. Just as the radiologist and the pathologist know that another view or another cross-sectional slice of tissue might reveal a problem unseen in a first view, so people who do surveys know that a different set of questions might yield some new answers.

On detailed investigation, however, what is striking is not the differences among these six surveys, but the fairly consistent picture they paint together. Even with different study methods and approaches, three things are clear. First, most Americans are well satisfied with many aspects of their health care, regardless of the type of coverage they have. Second, several studies have observed a difference in satisfaction with a few aspects of care between people who are generally well and those who have a major illness or disability. Third, where this difference is observed, it tends to favor fee-for-service medicine over managed care plans. The difference is relatively small, but still statistically significant. Complaints tend to involve waiting times for primary and specialty care, access to specialists and other services, and choice of physicians.

In studies by the Harvard School of Public Health in 1994, only about 35 percent of Americans said they knew what the term “managed care” meant, while 66 percent reported knowing what an HMO was. This limited knowledge exists in an environment where 65 percent of working people are estimated to have insurance coverage with some managed care features or arrangements. In a time when distinctions between types of insurance plans are blurring even for experts, it’s hard to be sure that the average person knows what type of coverage he or she has.

The complexities of the health insurance market make it difficult to do surveys comparing the views of people in different specified types of health insurance arrangements. On this point the Coalition for Medicare Choices faults the Harvard/Johnson survey (1), as does GHAA President Karen M. Ignagni. “Contrasting the views of people in ‘fee-for-service’ health plans with those in what is broadly defined as ‘managed care’ plans,” says Ignagni, “is quite like thinking you know the public’s views on the effectiveness of trains, planes, roller blades and tricycles merely by asking users’ views on transportation.”

She is correct that the Harvard study relies on a broad definition of managed care. That definition comes from a survey question that asks, “Do you have a fee-for-service plan that allows you to go to almost any doctor or hospital and then reimburses you for all or part of the cost or do you have an HMO or PPO or other type of plan that significantly limits your choice of doctors and hospitals?” Sixteen percent did not know.

The Commonwealth Fund study (2) used very similar questions and also reports results for all people in “managed care plans.” In the Harvard study, we combined all managed care respondents for two reasons: There were not enough respondents who were nonelderly and sick to do an analysis by specific type of plan, and we did not observe large differences between plan types in the general population. The Commonwealth Fund study checked respondents’ classification of plans by following up with a question asking respondents to name their plan and found that respondents did not do a very good job of distinguishing between HMOs and PPOs. The Los Angeles Times poll (3) felt secure enough about definitions to publish findings for HMO members, other managed care, and traditional insurance. It had sufficient respondents–more than 3,000–to do these analyses, and did find some differences by type of plan.

By contrast, the Minnesota (4) and GHAA (6) studies took the approach of surveying people whose names had been taken from lists of plan members. This method has the advantage of making it easier to define what type of plan someone is in, but it is a costly process that would be nearly impossible to accomplish on a national scale, given the number of insurance plans.

Who’s being studied

Just as the definition of the type of insurance makes a difference, so does the selection of people to be surveyed. In surveys of satisfaction with insurance, there are several options: Employers who buy coverage, employees of firms, enrollees in plans, users of services or some general population of a city, state or region. One or another type might be more helpful, depending on what kinds of questions you want to ask and who is most knowledgeable to answer them.

Of course, one of the problems with surveys of specific plans’ enrollees–such as 4 and 6–is that it is reasonable to expect that some people who were unhappy or dissatisfied with a plan have switched coverage and are no longer on the list to be interviewed. A survey of employees might be more useful in this regard if an employer offers multiple coverage options and enrollees and disenrollees from different plans can be compared.

Another problem with surveys of enrollees is that these are not necessarily the users of services. These surveys almost never include children under 18, for example, although some, like the Commonwealth Fund’s (2), ask parents to report on getting care for their children. The enrollee’s name on a list, or the employee’s name in a survey of employees, might not identify the person who actually makes health care arrangements for people in the household.

One way to make sure you are surveying people familiar with services, of course, is to poll users of those services. This way, responses of those who have not been to a doctor or facility are filtered out. Survey 1, having been criticized by the Coalition for Medicare Choices and by GHAA for being too broad in its definition of managed care, gets low marks from the coalition for being too specific in its choice of respondents: “This survey is of a very particular subset of the population,” the group complains.

Why target for analysis a group of high users of care? On one hand, it might be expected that people who use health services more often are likely to say favorable things about their care. They know their providers better, get used to negotiating their offices, laboratories and other facilities, and are generally grateful for the help they receive. On the other hand, they may be harsher critics because they are looking more closely at their care. They confront obstacles in insurance coverage and access to care when they don’t feel well or are in crisis. They test many aspects of the health system’s capacity. As high-cost cases, they are also more likely to be the target of case management and other cost-containment initiatives.

One caution about surveying users of care. Some patient satisfaction surveys are done only of users of a service at the point of service or soon after. While these surveys can provide valuable information about service performance, they will not yield useful information about access to services, since one has to get a service to be in the study. If you are interested in measuring whether people get the care they need, you can’t focus only on users of services, but need to include those who might have tried to use the service.

Several of the studies discussed here can also be distinguished by their regional scope. Given the wide variation in insurance markets, local and regional studies can be very useful. But a good deal of policy in health care is made at the national level, and in that arena, national data, though broad-brush, are frequently requested and cited by lawmakers in Washington.

What’s being asked

Another difference among these six surveys has to do with the exact questions posed, and the way they were formulated. Many studies offer a series of items, asking people to say that a particular aspect of care was “excellent, very good, good, fair or poor” or that they were “extremely, very, somewhat, not very, not at all (or some variation) satisfied.” Others require a simple yes or no: “Did you wait a long time or not?” “Was the care appropriate or not?” There are long academic discussions about the best form of these questions, especially given that most people cluster near the “excellent” end of the scale no matter how you ask. It is useful to know the whole range of answer options given so you can evaluate whether the question was balanced.

Since I’ve suggested some of the ways the “positive” surveys achieve their favorable reports on managed care, it is only fair to point out a possible bias on the other side, too. In the Harvard and Commonwealth Fund studies (1 and 2), a few questions ask if something was “a major problem, a minor problem, or not a problem at all.” The phrasing of these response choices makes them somewhat presumptive of a problem. Be aware that it is easy for researchers like me to combine major and minor and report simply “a problem.”

The lesson here is not to presume that someone is trying to deceive you. Just be aware that how the question gets asked can be a major factor in the answer that comes back. As a general rule, most survey professionals will tell you what the question was, and let you decide. If researchers don’t want to tell you the question, beware their interpretation of the answer.

Surveys 1, 2 and 3, using different methods, point to a higher level of reported problems in managed care than in fee-for-service medicine among the small minority of people with disabilities or chronic illness. In my judgment, surveys 4, 5 and 6 don’t really contradict this finding; they merely emphasize the larger population’s generally equal level of satisfaction with managed care and fee-for-service medicine.

It’s no secret that much of the money to fund surveys 4 through 6 came from managed care organizations with an interest in discovering and promoting a high level of patient satisfaction with managed care. Did these sponsors frame their research so as to encourage results favorable for managed care? Of course they did. But it does not follow that their arguments about managed care’s ability to satisfy the vast majority should be dismissed. Other studies have suggested vast overutilization in the U.S. health care sector at the same time that health care’s share of the gross national product grows ever larger. Patient satisfaction is a subjective thing, and much of what these surveys reflect is the expectations against which care is judged. We all want the world-renowned specialist and the umpteenth “second opinion”–hang the cost–if we’re the patients and the problem is dire.

Still, levels of dissatisfaction among sick people in managed care plans are comparable to those reported for patients in Canada, where care is explicitly rationed. That indicates to me that there are problems in managed care that need to be solved. I can’t accept the conclusion that the findings less favorable to managed care–surveys 1, 2 and 3–should be dismissed because these surveys pay disproportionate attention to sick people. After all, if a survey were to test the effectiveness of the fire department, interest would focus on the opinions of those whose homes had been on fire–even though they’re only a small minority of the population. We need to ask how health care works when people need it most.

The author is a senior research associate and instructor in the Department of Health Policy and Management at the Harvard School of Public Health in Boston, and deputy director of the Harvard Program on Public Opinion and Health Care.

Surveying the surveys

Is there a sacrifice in patient satisfaction when health care becomes “managed”? Different surveys suggest contrasting answers to that question, but survey professional Karen Donelan, Sc.D., of the Harvard School of Public Health argues that their results have more in common than you might think. Here’s a summary of the six surveys discussed in her article:

TITLE: Sick People in Managed Care Have Difficulty Getting Services and Treatment, New Survey Reports (Robert Wood Johnson Foundation press release, June 1995)

SPONSOR: The Robert Wood Johnson Foundation, self-described as “the nation’s largest health care philanthropy devoted exclusively to the health of and health care for all Americans”

AUTHORS: The Harvard School of Public Health (Robert J. Blendon, Sc.D. and Karen Donelan, Sc.D.) and Louis Harris and Associates, New York. Blendon, a professor of health policy, is a nationally known expert on public opinion about health issues.

RESEARCH METHOD: Telephone survey asking about aspects of primary care, specialty care, hospital care, and other particulars of insurance plan use

RESPONDENTS: 473 nonelderly U.S. adults ages 18 to 65 who said they were in fair or poor health, had a disability, chronic disease or handicap that limited daily activities or had been hospitalized in the prior year. There were 219 in fee-for-service plans and 254 in “managed care,” defined for respondents as plans such as HMOs, PPOs or others that “significantly limited their choice of doctors and hospitals.” This group is a subset of a larger group of 2,374 adults surveyed.

BOTTOM LINE: Negative for managed care. Of 53 items reported, 16 showed statistically significant differences between managed care and fee-for-service plans. All but two were negative for managed care. Managed care enrollees complained about waiting times and access to tests and specialty care.

TITLE: Choice Matters: Enrollees’ Views of their Health Plans (published in Health Affairs, Summer 1995)

SPONSOR: The Commonwealth Fund, the fourth oldest foundation in the United States, whose mission is to “look for new opportunities to help Americans live healthy and productive lives, and to assist specific groups with serious and neglected problems.”

AUTHORS: Karen Davis, Ph.D.; Karen Scott Collins, M.D., M.P.H. and colleagues, Louis Harris and Associates. Davis, an economist, is president of the Commonwealth Fund, a former professor and chairman of the Department of Health Policy and Management at Johns Hopkins School of Public Health and president of the Association for Health Services Research.

RESEARCH METHOD: Telephone survey asking a series of questions about satisfaction with, choice of and stability of coverage, cost of and access to services

RESPONDENTS: 3,348 nonelderly (ages 18–64) citizens in Boston, Los Angeles and Miami who had employment-based insurance. Not included were people in fee-for-service plans whose employers offered only that option, as the focus was on people in managed care plans who did and did not have the option of choosing a fee-for-service plan.

BOTTOM LINE: Negative for managed care. Managed care enrollees were more likely than fee-for-service enrollees to rate their overall plan, quality of services and doctors adversely. They also complained about access to specialists and emergency services, as well as waiting times for appointments.

TITLE: Health Care in California

SPONSOR/AUTHORS: Los Angeles Times Poll. Many media outlets conduct polls and surveys, but usually by contracting the work out to a survey firm. The Los Angeles Times is one of a few media organizations to maintain its own polling unit.

RESEARCH METHOD: Telephone survey

RESPONDENTS: 3,297 California residents, 2,750 with health insurance. Of the insured, 2,012 were in some form of managed care plan, 738 in indemnity plans. Of the managed care enrollees, 1,381 were in HMOs, the remainder in other plans.

BOTTOM LINE: Negative for managed care. On overall satisfaction measures, HMOs fared well compared with fee-for-service plans. However, people in HMOs and other managed care plans who reported themselves to be in poor health were more likely to cite problems than people in fee-for-service or indemnity plans.

TITLE: You and Your Health Plan: 1995 Statewide Survey of Minnesota Consumers (Minnesota Health Data Institute report)

SPONSOR: Minnesota Health Data Institute, a partnership of public agencies and private companies with a 20-member board of directors mandated by state law to represent principal “stakeholders” in health care in Minnesota

RESEARCHERS/AUTHORS: Minnesota Health Data Institute and Maritz Marketing Research, Minneapolis

RESEARCH METHOD: Survey (mode of interview unspecified)

RESPONDENTS: 17,500 Minnesotans, 400 from each of 46 plans in the state

BOTTOM LINE: Positive for managed care. Medicare HMO members were significantly more likely to say they were “very” or “extremely” satisfied with their plans than people with Medicare indemnity or basic Medicare. In the nonelderly commercial market, HMO members were significantly more likely to say that they were “very” or “extremely” satisfied with their plan than indemnity members, with closed-panel HMOs faring better than point-of-service plans.

TITLE: American Attitudes Toward Managed Care: A Review (report)

SPONSOR: The Coalition for Medicare Choices, described in the report as an “ad hoc organization formed to promote managed care as a coverage option for Medicare beneficiaries.” Founding members include the Group Health Association of America, Blue Cross Blue Shield Association, the Health Insurance Association of America and the Alliance for Managed Care.

AUTHORS: The Mellman Group, American Viewpoint, Peter D. Hart Research Associates, Luntz Research Companies and Public Opinion Strategies. All are research, polling, and/or political analysis and strategy groups that have worked for one or more of the founding members of the coalition. Although working for a common purpose on this issue, these organizations and those they represent have at times been at opposite ends of the political spectrum in their client bases and their institutional objectives.

RESEARCH METHOD: Review of 40 public opinion polls on managed care issues, 1993 to the present

RESPONDENTS: Some studies are national random samples of adults, some of managed care enrollees, some HMO enrollees.

BOTTOM LINE: Positive for managed care. In the authors’ words, “The bulk of the credible evidence suggests that managed care participants are, in the main, as satisfied with their personal health care arrangements as those who are in traditional fee-for-service.”

TITLE: Regardless of Health Status, HMO and FFS Patients Report Similar Levels of Satisfaction (published in GHAA News, July 1995)

SPONSOR/AUTHORS: Group Health Association of America, the trade organization for HMOs, with 380 member organizations covering about 40 million people in the U.S. Karen Ignagni, president, was formerly with the AFL-CIO and is a highly respected health care analyst and advocate.

RESEARCH METHOD: Secondary analysis of Healthcare Market Guide V Survey conducted by National Research Corp.

RESPONDENTS: 132,014 nonelderly (18–64) respondents; 14,695 elderly (over 65) respondents

BOTTOM LINE: Positive for managed care. On a measure of general satisfaction with their health care, HMO members and fee-for-service patients over and under 65 rate their plans equally regardless of their health status. The survey also finds that HMO and fee-for-service plans care for similar proportions of patients in fair or poor health or with chronic health conditions.

What patients don’t like about managed care

Apparently conflicting surveys of patients’ attitudes toward managed care are actually more consistent than one might think, contends Karen Donelan, Sc.D., of the Harvard School of Public Health. Especially among the sick, she says, they point to three main areas of patient unhappiness:

  • Waiting times for primary and specialty care
  • Access to specialists and other services
  • Choice of physicians

How to do a patient satisfaction survey

The same principles that govern opinion sampling by large polling firms also apply to managed care organizations that want to collect patient satisfaction data to demonstrate their commitment to quality service. If you’re considering a patient satisfaction survey, here are nine questions to ask yourself before you start:

1. How do you plan to use the survey data?

Be as specific as you can about what you want to get out of this effort. Patient surveys can tell you what patients think about everything from the quality of care to the insurance plans offered in your area. Be creative.

2. Whose opinions are you looking for?

Generally, patient satisfaction surveys are done with patients who have seen a physician in some specific time period, such as a month or a year. Perhaps your organization put a policy change in effect six months ago and you want to see what people think. Or maybe you’re only interested in people who haven’t seen their physicians lately.

3. Should you direct your survey to a sample or a census?

This depends on the degree of precision you want in your results. A sample, or subset, in which each person in the group you are interested in has a known chance of being selected, can give you confidence that your respondents represent your patient population. A “convenience” sample (whoever picks up the questionnaire in the waiting room) is not representative, but it may pick up some important individual concerns. A census (a survey of all people in the group) might be necessary if you have a relatively small population of interest, but can be costly or overwhelming if you have a large patient base.

4. How many respondents will you need?

In a sample survey, the number of respondents determines the margin of error on their answers. If you have 50 respondents, you have a margin of error of up to ±14 points; a sample of 400 reduces the error to ±3–5 points. You’ll need more respondents if you want to analyze subgroups of your surveyed group by different characteristics. With 400 respondents, if you decide to look at women (50 percent of the group) who were hospitalized last year (10 percent of that 50 percent), you are left with 20 responses, and the margin of error is so large as to make the data of little use.
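The figures above follow from the standard margin-of-error formula for a simple random sample. This quick sketch (my illustration, not a calculation from any of the surveys discussed here; it assumes the conservative proportion p = 0.5 and a 95 percent confidence level) shows how quickly precision erodes as the group shrinks:

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Worst-case margin of error, in percentage points, for a simple
    random sample of n respondents. Uses p = 0.5 (the most conservative
    proportion) and z = 1.96 (a 95 percent confidence level)."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

for n in (50, 400, 20):
    print(f"n = {n:3d}: about ±{margin_of_error(n):.0f} points")
```

For 50 respondents this works out to roughly ±14 points and for 400 to about ±5 points, matching the ranges above; for the 20-person subgroup in the example, it balloons to about ±22 points, which is why such small subgroups are of little analytic use.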

5. What response rate should you expect?

Strictly speaking, the response rate is the number of completed surveys divided by the number of attempted contacts. Patient satisfaction surveys vary widely in response rates, from 20 percent up to 90 percent. You can increase the rate with a nice cover letter sent in advance, ample assurance of confidentiality, up-to-date patients’ mailing addresses and phone numbers and multiple mailings or phone calls. Feel great about a response rate over 70 percent, good in the 55–70 percent range, cautious in the 35–55 percent range and really insecure about anything lower.
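The arithmetic is simple; this minimal sketch (with made-up numbers, not figures from any survey discussed here) shows the calculation:

```python
def response_rate(completed, attempted):
    """Response rate as a percentage: completed surveys
    divided by attempted contacts."""
    return 100 * completed / attempted

# A hypothetical mailing: 500 questionnaires sent, 290 returned.
rate = response_rate(290, 500)
print(f"{rate:.0f} percent")  # 58 percent: good, but not great
```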

6. What format do you want results in?

You may just want to ask a few questions and read through the answers yourself. Or you may want things tabulated by patients’ age, gender, length of time as your patient or other variables. If you want the latter, your data should be processed for analysis in a database management, spreadsheet or statistical software package.

7. What questions will you ask, and who will write them?

Don’t reinvent the wheel. Several organizations have good, publicly available questionnaires that make a good starting point. A fairly simple questionnaire is available from the AMA, and a good, more detailed one was developed for the Group Health Association of America. You can always add or subtract questions.

8. Are you following these basic rules?

Ask one question at a time. Don’t ask, “Are you satisfied with the way the nurse and the doctor treated you?” Patients may love the nurse and be lukewarm about the physician.

Structure questions so that all expected response options are included. This makes it easier to compare responses. “Would you say the quality of the coffee in the waiting room is excellent, good, fair or poor?”, not “How would you rate the coffee?”

Keep items evenly spaced and balanced. (“Very satisfied, somewhat satisfied, neutral, somewhat dissatisfied, very dissatisfied” is good balance. “Very satisfied, somewhat satisfied, quite dissatisfied” is not good balance.)

Try to use neutral language that does not presume an answer. Don’t ask, “How many cigarettes a day do you smoke?”, ask, “Do you currently smoke cigarettes?”, and if they answer “yes,” ask how many.

Keep vocabulary simple and sentence structure easy to understand. Remember that the average adult has a high school education.

9. Can you do the survey yourself or do you need help?

If you want data tabulated and, perhaps, summarized in a report, a survey firm can mail to several hundred people fairly inexpensively and process and tabulate the data for you. Unless you have the capacity to keep track of the mailings, the questionnaire production, and the data processing and analysis, you might want to consider this option. Make sure you write a cover letter reassuring patients that the survey is confidential and that you are interested in aggregate responses only. Then make sure the survey firm you use has clear procedures for protecting patient identities.

On the other hand, if you want to ask a few basic questions and read through the responses yourself, a mailing of 200–500 people may not be hard to accomplish. A few cautions if you do it yourself:

a) If you decide on a phone survey, don’t have physicians’ staff members make the calls.
b) Always send a cover letter.
c) Be prepared for good and bad surprises.
d) Don’t allow questionnaires to be sent with bills.
e) It is always more work than you think it will be.

–Karen Donelan, Sc.D.