Ron Hays
18 followers
Ron's posts

https://www.ncbi.nlm.nih.gov/pubmed/24006034

It is striking that an imprecise study like this one merits publication in a top medical journal. The authors of the article say that the primary outcome was change in patient satisfaction, but the study does not assess patient satisfaction with care. Instead, the study uses several items from the Consumer Assessment of Healthcare Providers and Systems (CAHPS) Clinician and Group Survey. CAHPS measures patient experiences with care and never asks patients whether they are satisfied with care (see message from the CAHPS Project Officer appended below).

In addition, Table 3 says that it reports the percentage of patients who agree/disagree that services are adequate before and after program implementation. None of the C-G CAHPS questions asks patients to agree or disagree about their care experiences.

Finally, the paper says that the survey used was "a shortened and modified version of the Consumer Assessment of Healthcare Providers and Systems Clinician and Group Survey....The survey was modified after a pilot instrument suggested that patients had difficulty completing the full survey" (p. 1696). It is amazing that no evidence for this assertion was required by this prestigious journal. In fact, the investigators bastardized the CAHPS survey, violating the scientific principles upon which the survey is based. For example, they used "not applicable" response options rather than asking questions only of the appropriate subset of patients. This is analogous to the well-known problem of asking husbands a question such as "How often do you beat your wife?" In addition, they dropped the well-tested CAHPS 0-10 overall rating of care in favor of a response scale (excellent, good, fair, poor) that was rejected after extensive testing by the CAHPS consortium.

------------------------------------------------------------------------------
From: Ginsberg, Caren (AHRQ/CQuIPS) [mailto:Caren.Ginsberg@ahrq.hhs.gov]
Sent: Friday, February 10, 2017 10:21 AM
Subject: Use of the terms "patient experience" and "patient satisfaction"

Recently we've seen a trend in journal articles and commentaries using the terms "patient experience" and "patient satisfaction" interchangeably. Given that these are distinct concepts, we'd like to clarify their respective meanings and ask your journal to consider adopting an editorial policy regarding the accurate use of these terms, if one doesn't currently exist.

Since 1995, the Agency for Healthcare Research and Quality (AHRQ) has led the development and maintenance of the Consumer Assessment of Healthcare Providers and Systems (CAHPS®) surveys. These patient experience surveys gather patients' reports about whether or how often something occurred during and subsequent to an episode of care. The surveys ask patients to report about objective features of care, such as how often doctors explained things in a way that was easy to understand and whether they received discharge instructions when leaving the hospital. CAHPS surveys ultimately shed light on the degree to which care is patient-centered, a key facet of overall care quality, and provide feedback that is useful for designing quality improvement efforts. Patient experience of care has a demonstrated relationship to healthcare quality. Evidence shows that the experiences measured by CAHPS and similar surveys are associated with adherence to recommended prevention and treatment processes, clinical outcomes, patient safety, and health care utilization.

In contrast, patient satisfaction surveys ask patients to evaluate the degree to which their interactions with the healthcare system met their expectations. Rather than asking whether or how often something happened, patient satisfaction surveys ask about the degree to which patients are satisfied with their care. Patient satisfaction surveys have their roots in customer satisfaction research, which is designed to understand customer loyalty and return business. They use a marketing or service-oriented approach to understand what makes patients satisfied with the care that they receive. Unlike reports of patient experience, expressions of satisfaction have ambiguous implications for care quality because they primarily reflect patient expectations and feelings.

Conflating the terms "patient experience" and "patient satisfaction" contributes to the misperception that CAHPS and other patient experience surveys assess patients' opinions about their care, information that has no clear relevance to health care quality. It may also lead to a misunderstanding of patient-centered care and what it takes to achieve this pillar of high-quality care.

We hope you will consider this important distinction in terminology in your editorial review policies. Please do not hesitate to contact me if you have any questions or would like further clarification. I am happy to address this in more detail at your convenience.

With best regards,

Caren Ginsberg, Ph.D.
Director, CAHPS Division
Center for Quality Improvement & Patient Safety
Agency for Healthcare Research & Quality
Caren.Ginsberg@ahrq.hhs.gov
301-427-1894


Quigley, D. D., Predmore, Z. S., Chen, A., & Hays, R. D. (2017). Implementation and sequencing of practice transformation in urban practices with underserved patients. Quality Management in Healthcare, 26(1), 7-14.
Background: Patient-centered medical home (PCMH) has gained momentum as a model for primary-care health services reform. Methods: We conducted interviews at 14 primary care practices undergoing PCMH transformation in a large urban federally qualified health center in California and used grounded theory to identify common themes and patterns. Results: We found clinics pursued a common sequence of changes in PCMH transformation: Clinics began with National Committee for Quality Assurance (NCQA) level 3 recognition, adding care coordination staff, reorganizing data flow among teams, and integrating with a centralized quality improvement and accountability infrastructure. Next, they realigned to support continuity of care. Then, clinics improved access by adding urgent care, patient portals, or extending hours. Most then improved planning and management of patient visits. Only a handful worked explicitly on improving access with same day slots, scheduling processes, and test result communication. The clinics' changes align with specific NCQA PCMH standards but also include adding physicians and services, culture changes, and improved communication with patients. Conclusions: NCQA PCMH level 3 recognition is only the beginning of a continuous improvement process to become patient centered. Full PCMH transformation took time and effort and relied on a sequential approach, with an early focus on foundational changes that included use of a robust quality improvement strategy before changes to delivery of and access to care.

Paz, S. H., Jones, L., Calderón, J., & Hays, R. D. (2016 epub). Readability of the Geriatric Depression Scale and the PROMIS® physical function items and comprehensibility by older African Americans and Latinos. The Patient: Patient-Centered Outcomes Research.

Abstract
BACKGROUND:

Depression and physical function are particularly important health domains for the elderly. The Geriatric Depression Scale (GDS) and the Patient-Reported Outcomes Measurement Information System (PROMIS®) physical function item bank are two surveys commonly used to measure these domains. It is unclear if these two instruments adequately measure these aspects of health in minority elderly.
OBJECTIVE:

The aim of this study was to estimate the readability of the GDS and PROMIS® physical function items and to assess their comprehensibility using a sample of African American and Latino elderly.
METHODS:

Readability was estimated using the Flesch-Kincaid and Flesch Reading Ease (FRE) formulae for English versions, and a Spanish adaptation of the FRE formula for the Spanish versions. Comprehension of the GDS and PROMIS® items by minority elderly was evaluated with 30 cognitive interviews.
RESULTS:

Readability estimates for a number of the English and Spanish GDS and PROMIS® physical function items exceeded the U.S. recommended 5th-grade threshold for vulnerable populations, or were rated as 'fairly difficult', 'difficult', or 'very difficult' to read. Cognitive interviews revealed that many participants felt that more than the two (yes/no) GDS response options were needed to answer the questions. The wording of several PROMIS® items was considered confusing, and interpreting responses was problematic because they were based on using physical aids.
CONCLUSIONS:

Problems with item wording and response options of the GDS and PROMIS® physical function items may reduce reliability and validity of measurement when used with minority elderly.

PMID: 27599978
DOI: 10.1007/s40271-016-0191-y
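
The Flesch-Kincaid and Flesch Reading Ease formulas named in the Methods above are simple functions of average sentence length and average syllables per word. The sketch below is an illustration only (the standard English formulas, a crude syllable heuristic, and not the Spanish FRE adaptation the study used); it is not the authors' code.

```python
import re

def count_syllables(word: str) -> int:
    """Very rough English syllable count: runs of vowels, minimum of one."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_scores(text: str) -> tuple[float, float]:
    """Return (Flesch Reading Ease, Flesch-Kincaid grade level) for a text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    words_per_sentence = len(words) / len(sentences)
    syllables_per_word = syllables / len(words)
    fre = 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word
    fk_grade = 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59
    return fre, fk_grade

# Hypothetical survey item used only to show the calculation
fre, grade = flesch_scores("During the past week, have you felt downhearted and blue?")
print(f"Reading ease = {fre:.1f}, grade level = {grade:.1f}")
```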

J Evid Based Complementary Altern Med. 2016 Aug 18.
Change in Health-Related Quality-of-Life at Group and Individual Levels Over Time in Patients Treated for Chronic Myofascial Neck Pain.
Brodsky M, Spritzer K, Hays RD, Hui KK.

BACKGROUND:
This study evaluated change in health-related quality of life at the group and individual levels in a consecutive series of patients with chronic myofascial neck pain.
METHODS:
Fifty patients with chronic neck pain self-administered the Short Form-36 Version 2 (SF-36 v2) before treatment and 6 weeks later. Internal consistency reliability was estimated for the 8 scale scores, and Mosier's formula was used to estimate the reliability of the physical and mental health composite scores. Significance of group-level change was estimated using within-group t statistics. Significance of individual change was evaluated with the reliable change index.
RESULTS:
Statistically significant (P < .05) group mean improvement over time was found for all SF-36 scores. At the individual level, 20% of the possible changes were statistically significant (17% improvement, 3% decline).
CONCLUSIONS:
Estimating the significance of individual change in health-related quality of life adds important information in comparing different treatment modalities for chronic myofascial neck pain.
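
A reliable change index of the kind described in the Methods above compares each patient's observed change to the standard error of the difference implied by the measure's reliability. The sketch below is a minimal illustration with made-up numbers (Jacobson-Truax style), not the authors' code.

```python
import math

def reliable_change_index(score_pre: float, score_post: float,
                          sd_pre: float, reliability: float) -> float:
    """Observed change divided by the standard error of the difference."""
    se_measurement = sd_pre * math.sqrt(1.0 - reliability)
    s_diff = math.sqrt(2.0) * se_measurement
    return (score_post - score_pre) / s_diff

# Made-up illustration values, not data from the study
rci = reliable_change_index(score_pre=40.0, score_post=52.0,
                            sd_pre=10.0, reliability=0.85)
print(f"RCI = {rci:.2f}; reliable improvement at p < .05: {abs(rci) > 1.96}")
```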

Hays, R. D., Chawla, N., Kent, E. E., & Arora, N. K. (in press).  Measurement equivalence of the Consumer Assessment of Healthcare Providers and Systems (CAHPS®) Medicare Survey Items between Non-Hispanic Whites and Asians.  Quality of Life Research.

Abstract
Purpose:  Asians report worse experiences with care than Whites. This could be due to true differences in care received, expectations about care, or survey response styles.  We examine  responses to the Consumer Assessment of Healthcare Providers and Systems (CAHPS®) Medicare survey items by Whites and Asians, controlling for underlying level on the CAHPS constructs. 

Methods: We conducted multiple group analyses to evaluate measurement equivalence of CAHPS Medicare survey data between White and Asian Medicare beneficiaries for CAHPS reporting composites (communication with personal doctor, access to care, plan customer service) and global ratings of care using pooled data from 2007-2011. Responses were obtained from 1,326,410 non-Hispanic Whites and 40,672 non-Hispanic Asians (hereafter referred to as Whites and Asians).  The median age for Whites was 70, with 24% 80 or older, and 70 for Asians, with 23% 80 or older.  Fifty-eight percent of Whites and 56% of Asians were female. 

Results:  A model without group-specific estimates fit the data as well as a model that included 12 group-specific estimates (7 factor loadings, 3 measured variable errors, and 2 item intercepts): Comparative Fit Index = 0.947 and 0.948; Root Mean Square Error of Approximation = 0.052 and 0.052, respectively. Differences in latent CAHPS score means between Whites and Asians estimated from the two models were similar, differing by 0.053 SD or less.

Conclusions:  This study provides support for measurement equivalence in response to the CAHPS Medicare survey composites (communication, access, customer service) and global ratings between White and Asian respondents, supporting comparisons of care experiences between the two groups.
 

Response to "What is Important to Patients? Quantity or Quality of Life?" by Allen Nissenson, MD, posted in Quality (http://blogs.davita.com/allen/index.php/2012/05/08/what-is-important-to-patients-quantity-or-quality/).
Nissenson's blog says the following: “As shown recently by Tracy Mayne, an international authority in this area and a member of DaVita Clinical Research® (DCR®). KDQOL cannot be validated as truly predictive of QOL with current ESRD patients.”

My response (comment) to this statement was submitted to Nissenson's blog on June 26, 2016 @ 2:10 pm (it does not appear that Nissenson has accepted it as a comment yet). The response is reproduced below.

Not sure what you are referring to by this vague comment, but this presentation of results from 32,926 KDQOL-36 surveys reveals a superficial understanding of psychometric methods: http://www.davitaclinicalresearch.com/wp-content/pdfs/ASN_2010/SA-PO2616-ASN2010_KDQOL_Valid_8Nov10DVW.pdf

The poster summarizes results of a factor analysis, item-scale correlations, and internal consistency reliability. The summary of results states that the data “do not confirm 2-factor solution” for the SF-12 in dialysis patients.

The methodology is described as a confirmatory factor analysis, but apparently this was really an exploratory principal components analysis. Despite the claim that a 2-factor solution was not confirmed, the scree plot for the SF-12 in Figure 1 can be interpreted as showing 2 principal component eigenvalues prior to the "scree." The third principal component eigenvalue is 1.03, barely larger than the eigenvalue > 1.0 criterion for the possible number of factors (i.e., Guttman’s weakest lower bound rule of thumb).

Similarly, the second principal component eigenvalue for the burden scale is 1.09, again not much larger than Guttman’s weakest lower bound, and the scree plot supports a single underlying dimension.

In addition, it is stated that a Varimax (uncorrelated) factor rotation was conducted despite the fact that the SF-12 physical and mental health scores are known to be associated significantly and substantially (correlations of 0.40-0.60).
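
To make the eigenvalue-greater-than-one rule of thumb concrete, the sketch below (using a hypothetical correlation matrix, not the KDQOL-36 data) extracts the principal component eigenvalues and counts how many exceed 1.0; a scree plot would graph these same eigenvalues against component number and look for the elbow.

```python
import numpy as np

# Hypothetical 6-item correlation matrix with two clusters of items
R = np.array([
    [1.00, 0.55, 0.50, 0.15, 0.10, 0.12],
    [0.55, 1.00, 0.48, 0.12, 0.14, 0.10],
    [0.50, 0.48, 1.00, 0.10, 0.12, 0.15],
    [0.15, 0.12, 0.10, 1.00, 0.52, 0.49],
    [0.10, 0.14, 0.12, 0.52, 1.00, 0.51],
    [0.12, 0.10, 0.15, 0.49, 0.51, 1.00],
])

eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]   # descending order
n_components = int(np.sum(eigenvalues > 1.0))        # Guttman's weakest lower bound

print("Eigenvalues:", np.round(eigenvalues, 2))
print(f"Components with eigenvalue > 1.0: {n_components}")
```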

The statement that MCS explains only 12% of the scale variance is inaccurate. The second principal component eigenvalue accounts for that proportion of variance but the second component is not the same as the MCS.

Cronbach’s alphas are reported for the SF-12 physical and mental health summary scores, but coefficient alpha is not appropriate for a weighted combination of items. The fact that negative item-total correlations are reported suggests a potential problem with how the data were scored.
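
The appropriate alternative to coefficient alpha for a weighted combination of scores is a composite reliability formula such as Mosier's (the formula cited in the Brodsky et al. abstract above). A minimal sketch follows, using made-up weights, covariances, and reliabilities rather than the actual SF-12 scoring constants.

```python
import numpy as np

def mosier_composite_reliability(weights, cov, reliabilities):
    """Reliability of a weighted composite: 1 minus the ratio of the
    weighted error variance to the variance of the composite."""
    w = np.asarray(weights, dtype=float)
    cov = np.asarray(cov, dtype=float)
    rel = np.asarray(reliabilities, dtype=float)
    composite_variance = w @ cov @ w
    error_variance = np.sum(w**2 * np.diag(cov) * (1.0 - rel))
    return 1.0 - error_variance / composite_variance

# Hypothetical two-component example (illustration only)
weights = [0.6, 0.4]
cov = [[100.0, 45.0],
       [45.0, 90.0]]
reliabilities = [0.85, 0.80]
print(f"Composite reliability = "
      f"{mosier_composite_reliability(weights, cov, reliabilities):.3f}")
```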

There is a good distribution of scores and ceiling effects do not appear to be excessive, despite the note about “significant” floor and/or ceiling effects.

Alpha is misspelled as "alfa" a few times. The term "Internal Validity" in the title does not make sense.

This study does not provide information on validity (whether the KDQOL-36 measures what it is supposed to assess).

Ron Hays

Martino, S. C., Elliott, M. N., Hambarsoomian, K., Weech-Maldonado, R., Gaillot, S., Haffer, S. C., & Hays, R. D. (2016 epub). Racial/ethnic disparities in Medicare beneficiaries’ care coordination experiences. Medical Care.

Background: Little is known about racial/ethnic differences in the experience of care coordination. To the extent that they exist, such differences may exacerbate health disparities given the higher prevalence of some chronic conditions among minorities.
Objective: To investigate the extent to which racial/ethnic disparities exist in the receipt of coordinated care by Medicare beneficiaries.
Subjects: A total of 260,974 beneficiaries who responded to the 2013 Medicare Consumer Assessment of Healthcare Providers and Systems (CAHPS) survey.
Methods: We fit a series of linear, case-mix adjusted models predicting Medicare CAHPS measures of care coordination from race/ethnicity.
Results: Hispanic, black, and Asian/Pacific Islander (API) beneficiaries reported that their personal doctor had medical records and other relevant information about their care significantly less often than did non-Hispanic white beneficiaries (-2 points for Hispanics, -1 point for blacks, and -4 points for APIs on a 100-point scale). These 3 groups also reported significantly greater difficulty getting timely follow-up on test results than non-Hispanic white beneficiaries (-9 points for Hispanics, -1 point for blacks, -5 points for APIs). Hispanic and black beneficiaries reported that help was provided in managing their care significantly less often than did non-Hispanic white beneficiaries (-2 points for Hispanics, -3 points for blacks). API beneficiaries reported that their personal doctor discussed their medications and had up-to-date information on care from specialists significantly less often than did non-Hispanic white beneficiaries (-2 and -4 points, respectively).
Discussion: These results suggest a need for efforts to address racial/ethnic disparities in care coordination to help ensure high quality care for all patients. Public reporting of plan-level performance data by race/ethnicity may also be helpful to Medicare beneficiaries and their advocates.
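
A minimal sketch of the general analytic approach described in the Methods above (a case-mix-adjusted linear model with race/ethnicity indicators), using synthetic data and hypothetical variable names rather than the Medicare CAHPS data or the study's actual case-mix adjusters:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "race_ethnicity": rng.choice(["White", "Black", "Hispanic", "API"], size=n),
    "age": rng.integers(65, 95, size=n),
    "self_rated_health": rng.integers(1, 6, size=n),  # 1 = poor ... 5 = excellent
})
# Synthetic 0-100 care-coordination score with a small built-in disparity
df["care_coordination"] = (
    80.0 - 2.0 * (df["race_ethnicity"] != "White") + 5.0 * rng.standard_normal(n)
)

model = smf.ols(
    "care_coordination ~ C(race_ethnicity, Treatment(reference='White'))"
    " + age + self_rated_health",
    data=df,
).fit()
# Race/ethnicity coefficients are adjusted differences from white beneficiaries
print(model.params)
```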


Objective: Self-reports of ‘hearing handicap’ are available, but a comprehensive measure of health-related quality of life (HRQOL) for individuals with adult-onset hearing loss (AOHL) does not exist. Our objective was to develop and evaluate a multidimensional HRQOL instrument for individuals with AOHL. Design: The Impact of Hearing Loss Inventory Tool (IHEAR-IT) was developed using results of focus groups, a literature review, advisory expert panel input, and cognitive interviews. Study sample: The 73-item field-test instrument was completed by 409 adults (22–91 years old) with varying degrees of AOHL and from different areas of the USA. Results: Multitrait scaling analysis supported four multi-item scales and five individual items. Internal consistency reliabilities ranged from 0.93 to 0.96 for the scales. Construct validity was supported by correlations between the IHEAR-IT scales and scores on the 36-item Short Form Health Survey, version 2.0 (SF-36v2) mental composite summary (r = 0.32–0.64) and the Hearing Handicap Inventory for the Elderly/Adults (HHIE/HHIA) (r ≥ 0.70). Conclusions: The field test provides initial support for the reliability and construct validity of the IHEAR-IT for evaluating HRQOL of individuals with AOHL. Further research is needed to evaluate the responsiveness to change of the IHEAR-IT scales and identify items for a short form.

Stika, C. J., & Hays, R. D. (2016). Development and psychometric evaluation of a health-related quality of life instrument for individuals with adult-onset hearing loss. International Journal of Audiology. DOI: 10.3109/14992027.2016.1166397
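
The multitrait scaling analysis mentioned in the abstract above checks, for each item, that its correlation with its own scale (corrected for overlap by removing the item from the scale total) exceeds its correlations with the other scales. A minimal sketch with synthetic data and a hypothetical two-scale structure, not the IHEAR-IT field-test data:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 300
f1, f2 = rng.standard_normal(n), rng.standard_normal(n)  # two hypothetical traits
items = pd.DataFrame({
    "a1": f1 + rng.standard_normal(n), "a2": f1 + rng.standard_normal(n),
    "a3": f1 + rng.standard_normal(n),
    "b1": f2 + rng.standard_normal(n), "b2": f2 + rng.standard_normal(n),
    "b3": f2 + rng.standard_normal(n),
})
scales = {"A": ["a1", "a2", "a3"], "B": ["b1", "b2", "b3"]}

for scale, members in scales.items():
    for item in members:
        # Own-scale correlation corrected for overlap (item excluded from its scale total)
        own = items[item].corr(items[members].drop(columns=item).sum(axis=1))
        others = [items[item].corr(items[other_items].sum(axis=1))
                  for name, other_items in scales.items() if name != scale]
        print(f"{item}: own-scale r (corrected) = {own:.2f}, "
              f"highest other-scale r = {max(others):.2f}")
```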

J Clin Epidemiol. 2016 Mar 9. pii: S0895-4356(16)30012-9. doi: 10.1016/j.jclinepi.2015.08.039. [Epub ahead of print]
Validity of PROMIS® Physical Function Measures in Diverse Clinical Samples.
Schalet BD, Hays RD, Jensen SE, Beaumont JL, Fries JF, Cella D.
Abstract
OBJECTIVE:
To evaluate the validity of the PROMIS® Physical Function measures using longitudinal data collected in six chronic health conditions.
STUDY DESIGN AND SETTING:
Individuals with rheumatoid arthritis (RA), major depressive disorder (MDD), back pain, chronic obstructive pulmonary disease (COPD), chronic heart failure (CHF), and cancer completed the PROMIS Physical Function computerized adaptive test (CAT) or fixed-length short form (SF) at baseline and at the end of clinically-relevant follow-up intervals. Anchor items were also administered to assess change in physical function and general health. Linear mixed effects models and standardized response means were estimated at baseline and follow-up.
RESULTS:
1415 individuals participated (COPD n = 121; CHF n = 57; back pain n = 218; MDD n = 196; RA n = 521; cancer n = 302). The PROMIS Physical Function scores improved significantly for patients treated for CHF and back pain, but not for patients with MDD or COPD. Most of the patient subsamples that reported improvement or worsening on the anchors showed a corresponding positive or negative change in PROMIS Physical Function.
CONCLUSION:
This study provides evidence that the PROMIS Physical Function measures are sensitive to change in intervention studies where physical function is expected to change and able to distinguish among different clinical samples. The results inform the estimation of meaningful change, enabling comparative effectiveness research.
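
For readers unfamiliar with the standardized response mean named in the Methods above as the effect-size metric for change, it is simply the mean change score divided by the standard deviation of the change scores. A minimal sketch with made-up scores:

```python
import numpy as np

baseline = np.array([42.0, 38.5, 45.1, 40.2, 39.8, 44.0])   # made-up scores
followup = np.array([46.0, 41.0, 47.5, 45.8, 40.2, 48.1])

change = followup - baseline
srm = change.mean() / change.std(ddof=1)   # standardized response mean
print(f"Standardized response mean = {srm:.2f}")
```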

John Peipert, Ron D. Hays, and Dave Cella
 
We agree with Porter et al. (The New England Journal of Medicine, February 11, 2016, “Standardizing Patient Outcomes Measurement”) that the use of patient-reported outcomes (PROs) in assessing health care value needs to be expanded and standardized. However, in their call for standardization of PROs across fields, they did not comment on the significant progress toward this end already made by the Patient-Reported Outcomes Measurement Information System (PROMIS®) (www.nihpromis.org). PROMIS brought together experts in outcome measure development and multiple substantive clinical areas to develop a “best-of-the-best” set of general (not condition-specific) health-related quality of life (HRQOL) measures. These measures cover a comprehensive set of HRQOL concepts (e.g., physical functioning, depressive symptoms, satisfaction with participation in social roles), can be compared on a standard T-score metric to the U.S. general population, and can be administered using computer adaptive testing to yield reliable scores efficiently. Multiple language translations are available and more are underway. In short, to advance the goal advocated by Porter et al., PROMIS provides a standard set of psychometrically sound HRQOL measures that can be used to measure health care value from the patient’s perspective.
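
As a point of reference for the T-score metric mentioned above, PROMIS scores are scaled so that 50 is the mean and 10 is the standard deviation in the U.S. general population. A minimal sketch of the conversion from an IRT theta estimate (mean 0, SD 1 in that reference population):

```python
def theta_to_t_score(theta: float) -> float:
    """Convert an IRT theta estimate to the PROMIS T-score metric (mean 50, SD 10)."""
    return 50.0 + 10.0 * theta

print(theta_to_t_score(0.0))   # 50.0: at the general-population mean
print(theta_to_t_score(-1.5))  # 35.0: 1.5 SD below the general-population mean
```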