Profile

Ron Hays
Works at UCLA
Attended UC Riverside
Lives in Cerritos
17 followers | 29,276 views

Stream

Ron Hays

Shared publicly  - 
 
Response to "What is Important to Patients? Quantity or Quality of Life?
By Allen Nissenson, MD posted in Quality" (http://blogs.davita.com/allen/index.php/2012/05/08/what-is-important-to-patients-quantity-or-quality/
Nissenson's blog says the following: “As shown recently by Tracy Mayne, an international authority in this area and a member of DaVita Clinical Research® (DCR®). KDQOL cannot be validated as truly predictive of QOL with current ESRD patients.”

My response (comment) to this statement was submitted to Nissenson's blog on June 26, 2016, at 2:10 pm (it does not appear to have been accepted as a post by Nissenson yet). The response is reproduced below.

Not sure what you are referring to by this vague comment, but this presentation of results from 32,926 KDQOL-36 surveys reveals a superficial understanding of psychometric methods: http://www.davitaclinicalresearch.com/wp-content/pdfs/ASN_2010/SA-PO2616-ASN2010_KDQOL_Valid_8Nov10DVW.pdf

The poster summarizes results of a factor analysis, item-scale correlations, and internal consistency reliability. The summary of results states that the data “do not confirm 2-factor solution” for the SF-12 in dialysis patients.

The methodology is described as a confirmatory factor analysis, but apparently this was really an exploratory principal components analysis. Despite the claim that a 2-factor solution was not confirmed, the scree plot for the SF-12 in Figure 1 can be interpreted as showing 2 principal component eigenvalues prior to the "scree." The third principal component eigenvalue is 1.03, barely larger than the eigenvalue > 1.0 criterion for the possible number of factors (i.e., Guttman's weakest lower bound rule of thumb).

Similarly, the second principal component eigenvalue for the burden scale is 1.09, again not much larger than Guttman's weakest lower bound, and the scree plot supports a single underlying dimension.

In addition, it is stated that a Varimax (uncorrelated) factor rotation was conducted, despite the fact that the SF-12 physical and mental health scores are known to be associated significantly and substantially (correlations of 0.40-0.60).
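For readers unfamiliar with the eigenvalue rule invoked above, here is a minimal Python sketch (simulated data, not the KDQOL-36/SF-12 data from the poster) of how principal component eigenvalues, the eigenvalue > 1.0 count, and the scree values are obtained from an item correlation matrix.

# Minimal sketch of the eigenvalue > 1.0 (Guttman/Kaiser) rule and scree inspection.
# The data are simulated for illustration; they are not the poster's data.
import numpy as np

rng = np.random.default_rng(0)

# Simulate 12 items driven by two correlated underlying factors ("physical", "mental").
n = 1000
factors = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=n)
loadings = np.zeros((12, 2))
loadings[:6, 0] = 0.7   # first six items load on the first factor
loadings[6:, 1] = 0.7   # last six items load on the second factor
items = factors @ loadings.T + rng.normal(scale=0.7, size=(n, 12))

# Principal component eigenvalues are the eigenvalues of the item correlation matrix.
eigvals = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))[::-1]
print("Eigenvalues (largest first):", np.round(eigvals, 2))

# Guttman's weakest lower bound / Kaiser rule: count eigenvalues above 1.0.
print("Components with eigenvalue > 1.0:", int(np.sum(eigvals > 1.0)))
# A scree plot is simply these eigenvalues plotted against component number;
# the "elbow" is read off visually rather than from the 1.0 cutoff alone.

The Guttman/Kaiser count and the visual scree "elbow" can disagree, which is exactly the ambiguity at issue in the poster.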

The statement that the MCS explains only 12% of the scale variance is inaccurate. The second principal component eigenvalue accounts for that proportion of variance, but the second component is not the same as the MCS.

Cronbach’s alphas are reported for the SF-12 physical and mental health summary scores, but coefficient alpha is not appropriate for a weighted combination of items. The fact that negative item-total correlations are reported suggests a potential problem with how the data were scored.
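To make the point about alpha concrete, here is a minimal Python sketch (hypothetical item responses, not the poster's data) of coefficient alpha and corrected item-total correlations; both presuppose an unweighted sum of items, which the SF-12 PCS and MCS are not.

# Coefficient alpha and corrected item-total correlations for a simple-summated scale.
# Illustrative only: 'items' is a hypothetical respondents-by-items score matrix.
import numpy as np

def coefficient_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an unweighted sum of the columns of `items`."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the sum of the remaining items."""
    total = items.sum(axis=1)
    return np.array([
        np.corrcoef(items[:, j], total - items[:, j])[0, 1]
        for j in range(items.shape[1])
    ])

rng = np.random.default_rng(1)
true_score = rng.normal(size=500)
items = true_score[:, None] + rng.normal(scale=1.0, size=(500, 5))

print("alpha:", round(coefficient_alpha(items), 2))
print("corrected item-total r:", np.round(corrected_item_total(items), 2))
# Note: alpha is defined for an equal-weight composite. The SF-12 PCS and MCS are
# weighted combinations of items (some weights negative), so alpha computed on those
# summary scores is not meaningful, and miskeyed or mis-scored items tend to show up
# as negative item-total correlations.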

There is a good distribution of scores and ceiling effects do not appear to be excessive, despite the note about “significant” floor and/or ceiling effects.
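Floor and ceiling effects are simply the percentages of respondents at the lowest and highest possible scores; a minimal Python sketch with hypothetical 0-100 scale scores:

# Percentage of respondents at the floor (minimum) and ceiling (maximum) of a scale.
# 'scores' is hypothetical; substitute the observed 0-100 scale scores.
import numpy as np

def floor_ceiling(scores: np.ndarray, lo: float = 0.0, hi: float = 100.0):
    floor_pct = 100.0 * np.mean(scores == lo)
    ceiling_pct = 100.0 * np.mean(scores == hi)
    return floor_pct, ceiling_pct

rng = np.random.default_rng(2)
scores = np.clip(rng.normal(loc=65, scale=20, size=2000).round(-1), 0, 100)
floor_pct, ceiling_pct = floor_ceiling(scores)
print(f"floor: {floor_pct:.1f}%  ceiling: {ceiling_pct:.1f}%")
# A common (arbitrary) rule of thumb flags floor/ceiling percentages above roughly 15%.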

Alpha is misspelled as "alfa" a few times. The term "Internal Validity" in the title does not make sense.

This study does not provide information on validity (whether the KDQOL-36 measures what it is supposed to assess).

Ron Hays

Ron Hays

Shared publicly  - 
 
J Clin Epidemiol. 2016 Mar 9. pii: S0895-4356(16)30012-9. doi: 10.1016/j.jclinepi.2015.08.039. [Epub ahead of print]
Validity of PROMIS® Physical Function Measures in Diverse Clinical Samples.
Schalet BD, Hays RD, Jensen SE, Beaumont JL, Fries JF, Cella D.
Abstract
OBJECTIVE:
To evaluate the validity of the PROMIS® Physical Function measures using longitudinal data collected in six chronic health conditions.
STUDY DESIGN AND SETTING:
Individuals with rheumatoid arthritis (RA), major depressive disorder (MDD), back pain, chronic obstructive pulmonary disease (COPD), chronic heart failure (CHF), and cancer completed the PROMIS Physical Function computerized adaptive test (CAT) or fixed-length short form (SF) at baseline and at the end of clinically-relevant follow-up intervals. Anchor items were also administered to assess change in physical function and general health. Linear mixed effects models and standardized response means were estimated at baseline and follow-up.
RESULTS:
1415 individuals participated (COPD n = 121; CHF n = 57; back pain n = 218; MDD n = 196, RA n = 521; cancer n = 302). The PROMIS Physical Function scores improved significantly for treatment of CHF and back pain patients, but not for patients with MDD or COPD. Most of the patient subsamples that reported improvement or worsening on the anchors showed a corresponding positive or negative change in PROMIS Physical Function.
CONCLUSION:
This study provides evidence that the PROMIS Physical Function measures are sensitive to change in intervention studies where physical function is expected to change and able to distinguish among different clinical samples. The results inform the estimation of meaningful change, enabling comparative effectiveness research.
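For context, the standardized response means mentioned in the abstract are mean change divided by the standard deviation of the change scores; a minimal Python sketch with hypothetical baseline and follow-up T-scores (not the study data):

# Standardized response mean (SRM): mean change / SD of change scores.
# Hypothetical baseline and follow-up PROMIS T-scores, for illustration only.
import numpy as np

def standardized_response_mean(baseline: np.ndarray, followup: np.ndarray) -> float:
    change = followup - baseline
    return change.mean() / change.std(ddof=1)

rng = np.random.default_rng(3)
baseline = rng.normal(loc=40, scale=8, size=200)             # T-score metric (mean 50, SD 10)
followup = baseline + rng.normal(loc=3, scale=6, size=200)   # average 3-point improvement
print("SRM:", round(standardized_response_mean(baseline, followup), 2))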

Ron Hays

Shared publicly  - 
 
McLeod, L. D., Cappelleri, J. C., & Hays, R. D. (2016, epub). Best (but oft forgotten) practices: Expressing and interpreting associations and effect sizes in clinical outcome assessments. American Journal of Clinical Nutrition.

Ron Hays

Shared publicly  - 
 
Introducing the New CAHPS Clinician & Group Survey 3.0
Thursday, September 17
1 – 2:30 p.m. ET

This free Webcast from the Agency for Healthcare Research and Quality (AHRQ) will highlight the 3.0 version of the CAHPS Clinician & Group (CG-CAHPS) Survey.  Members of the CAHPS Consortium will discuss the rationale for updating the survey, review the changes, present findings from a comparison of the 6-month and 12-month reference periods, and suggest directions for future research.
Speakers include:
Julie Brown, RAND Survey Research Group, Santa Monica, CA
Lee Hargraves, PhD, American Institutes for Research, Waltham, MA
Ron D. Hays, PhD, RAND Corporation; University of California, Los Angeles
Dale Shaller, CAHPS Database; Shaller Consulting Group, Stillwater, MN (Moderator)

For more information about the CG-CAHPS Survey 3.0, go to https://www.cahps.ahrq.gov/surveys-guidance/cg/index.html.
If you have any questions or comments, please contact the CAHPS User Network at cahps1@westat.com or 1-800-492-9261.
About CAHPS
Consumer Assessment of Healthcare Providers and Systems (CAHPS®) surveys ask consumers about their experiences with health care. The CAHPS program at the U.S. Agency for Healthcare Research and Quality (AHRQ) supports the development and promotion of CAHPS surveys, instructional materials, and comparative databases, and provides technical assistance to users. Learn more about AHRQ’s CAHPS program at: www.cahps.ahrq.gov.
AHRQ’s CAHPS Database receives data voluntarily submitted by users that have administered either the CAHPS Health Plan Survey or the CAHPS Clinician & Group Survey. The CAHPS Database aggregates the data to facilitate comparisons of CAHPS survey results by users, researchers, and other interested organizations. Learn more about AHRQ’s CAHPS Database at: www.cahpsdatabase.ahrq.gov.

Ron Hays

Shared publicly  - 
 

Mayer, L. A., Elliott, M. N., Haas, A., Hays, R. D., & Weinick, R. M. (in press). Less use of extreme responses by Asians to standardized care scenarios may explain some differences in CAHPS scores. Medical Care.


Background: Asian Americans (hereafter "Asians") generally report worse experiences with care than non-Latino whites (hereafter "whites"), which may reflect differential use of response scales. Past studies indicate that Asians exhibit a lower Extreme Response Tendency (ERT): they use responses at the extreme ends of the scale less frequently than whites do.
Objective: To explore whether lower ERT is observed for Asians than whites in response to standardized vignettes depicting patient experiences of care and whether ERT might in part explain Asians reporting worse care than whites.
Procedure: A representative U.S. sample (n=575 Asian; n=505 white) was presented with five written vignettes describing doctor-patient encounters with differing levels of physician responsiveness. Respondents evaluated the encounters using modified CAHPS communication questions.
Results: Case-mix-adjusted repeated-measures multivariate models show that Asians provided more positive responses than whites to several vignettes with less-responsive physicians but less positive responses than whites for the vignette with the most physician responsiveness (p<0.01 for each). While all respondents provided more positive ratings for vignettes with greater physician responsiveness, the increase was 15% less for Asian than white respondents.
Conclusions: Asians exhibit lower ERT than whites in response to standardized scenarios. Because CAHPS responses are predominantly near the positive end of the scale and the most responsive scenario is most typical of the scores observed in real-world settings, lower ERT among Asians may partially explain the lower mean CAHPS scores observed for Asians in real-world settings. Case-mix adjustment for Asian race/ethnicity or its correlates may improve quality of care measurement.
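To illustrate the mechanism described in the conclusions, here is a minimal Python simulation (hypothetical response thresholds, not the study's model) showing how a lower extreme response tendency depresses observed means on a 1-4 scale even when underlying experiences are identical:

# Illustration of how a lower extreme response tendency (ERT) can depress observed
# means near the top of a 1-4 CAHPS-style response scale, holding true experience fixed.
# The thresholds below are hypothetical and chosen only to illustrate the mechanism.
import numpy as np

rng = np.random.default_rng(4)
true_experience = rng.normal(loc=1.0, scale=1.0, size=100_000)  # most experiences are good

def categorize(latent, top_threshold):
    """Map a latent experience to a 1-4 response; a higher top threshold means a
    respondent needs a better experience before using the extreme category '4'."""
    cuts = np.array([-1.0, 0.0, top_threshold])
    return 1 + np.searchsorted(cuts, latent)

higher_ert = categorize(true_experience, top_threshold=0.5)  # uses '4' readily
lower_ert = categorize(true_experience, top_threshold=1.2)   # avoids the extreme '4'
print("mean response, higher ERT:", round(higher_ert.mean(), 2))
print("mean response, lower ERT:", round(lower_ert.mean(), 2))
# Same underlying experiences, lower observed mean for the low-ERT group; this is
# the pattern that case-mix adjustment for response tendency is meant to address.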

Ron Hays

Shared publicly  - 
 
Hays, R. D., Liu, H., & Kapteyn, A.  (in press).  Use of internet panels to conduct surveys.  Behavior Research Methods.

Abstract: Use of internet panels to collect survey data is increasing because it is cost-effective, enables quick access to large and diverse samples, takes less time than traditional methods to get data back for analysis, and standardizes the data collection process, making studies easier to replicate. A variety of probability-based panels have been created, including Telepanel/CentERpanel, Knowledge Networks (now GfK KnowledgePanel®), the American Life Panel, the LISS Panel, and the Understanding America Study panel. Despite the advantage of having a known denominator (sampling frame), probability-based internet panels often have low recruitment participation rates, and some have argued that there is little practical difference between opting out of a probability sample and opting into a non-probability (convenience) internet panel. This paper provides an overview of both probability-based and convenience panels, discusses the potential benefits of and cautions for each method, and summarizes approaches used to weight panel respondents to better represent the underlying population. Challenges in using internet panel data, such as false answers, answering too fast, giving the same answer repeatedly, getting multiple surveys from the same respondent, and panelists belonging to multiple panels, are also discussed. There is more to be learned about internet panels and web-based data collection generally, along with opportunities to evaluate data collected using mobile devices and social media platforms.
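One common weighting approach alluded to in the abstract is post-stratification of panel respondents to known population totals; a minimal Python sketch with hypothetical cells and benchmark proportions (raking extends the same idea to multiple margins):

# Minimal post-stratification sketch: weight panel respondents so the weighted
# distribution over cells (e.g., age group x education) matches known population
# proportions. Cell labels and proportions here are hypothetical.
import numpy as np

rng = np.random.default_rng(5)
# Each respondent's cell membership, coded as an index 0..3; the panel is skewed.
respondent_cells = rng.choice(4, size=1000, p=[0.10, 0.20, 0.30, 0.40])

population_props = np.array([0.25, 0.25, 0.25, 0.25])  # assumed benchmark shares
sample_props = np.bincount(respondent_cells, minlength=4) / respondent_cells.size

# Post-stratification weight = population share / sample share for the respondent's cell.
weights = (population_props / sample_props)[respondent_cells]
weights *= weights.size / weights.sum()  # normalize to mean 1

print("cell weights:", np.round(population_props / sample_props, 2))
print("weighted cell shares:", np.round(
    np.bincount(respondent_cells, weights=weights, minlength=4) / weights.sum(), 2))
# Raking iteratively matches several margins (age, sex, education, ...) when the
# full joint population distribution is not available.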

Ron Hays

Shared publicly  - 
 
Martino, S. C., Elliott, M. N., Hambarsoomian, K., Weech-Maldonado, R., Gaillot, S., Haffer, S. C., & Hays, R. D. (2016, epub). Racial/ethnic disparities in Medicare beneficiaries' care coordination experiences. Medical Care.

Background: Little is known about racial/ethnic differences in the experience of care coordination. To the extent that they exist, such differences may exacerbate health disparities given the higher prevalence of some chronic conditions among minorities.
Objective: To investigate the extent to which racial/ethnic disparities exist in the receipt of coordinated care by Medicare beneficiaries.
Subjects: A total of 260,974 beneficiaries who responded to the 2013 Medicare Consumer Assessment of Healthcare Providers and Systems (CAHPS) survey.
Methods: We fit a series of linear, case-mix adjusted models predicting Medicare CAHPS measures of care coordination from race/ethnicity.
Results: Hispanic, black, and Asian/Pacific Islander (API) beneficiaries reported that their personal doctor had medical records and other relevant information about their care significantly less often than did non-Hispanic white beneficiaries (-2 points for Hispanics, -1 point for blacks, and -4 points for APIs on a 100-point scale). These 3 groups also reported significantly greater difficulty getting timely follow-up on test results than non-Hispanic white beneficiaries (-9 points for Hispanics, -1 point for blacks, -5 points for APIs). Hispanic and black beneficiaries reported that help was provided in managing their care significantly less often than did non-Hispanic white beneficiaries (-2 points for Hispanics, -3 points for blacks). API beneficiaries reported that their personal doctor discussed their medications and had up-to-date information on care from specialists significantly less often than did non-Hispanic white beneficiaries (-2 and -4 points, respectively).
Discussion: These results suggest a need for efforts to address racial/ethnic disparities in care coordination to help ensure high quality care for all patients. Public reporting of plan-level performance data by race/ethnicity may also be helpful to Medicare beneficiaries and their advocates.
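For readers interested in the modeling approach, here is a minimal Python sketch (simulated data; the case-mix variables named here are placeholders, not the actual Medicare CAHPS adjustors) of a case-mix adjusted linear model predicting a 0-100 care coordination score from race/ethnicity:

# Sketch of a case-mix adjusted linear model for a 0-100 CAHPS-style composite.
# Data are simulated; 'age', 'education', and 'self_rated_health' are stand-ins
# for the case-mix adjustors actually used in Medicare CAHPS.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 5000
df = pd.DataFrame({
    "race": rng.choice(["white", "black", "hispanic", "api"], size=n,
                       p=[0.7, 0.1, 0.12, 0.08]),
    "age": rng.integers(65, 95, size=n),
    "education": rng.integers(1, 6, size=n),
    "self_rated_health": rng.integers(1, 6, size=n),
})
# Simulated outcome with a small deficit for some groups, plus case-mix effects.
group_effect = df["race"].map({"white": 0, "black": -1, "hispanic": -2, "api": -4})
df["care_coordination"] = (80 + group_effect + 0.05 * (df["age"] - 75)
                           + 2 * (df["self_rated_health"] - 3)
                           + rng.normal(scale=15, size=n)).clip(0, 100)

model = smf.ols(
    "care_coordination ~ C(race, Treatment(reference='white')) "
    "+ age + education + self_rated_health",
    data=df,
).fit()
print(model.params.filter(like="race").round(2))  # adjusted point differences vs. whites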


Ron Hays

Shared publicly  - 
 
Objective: Self-reports of 'hearing handicap' are available, but a comprehensive measure of health-related quality of life (HRQOL) for individuals with adult-onset hearing loss (AOHL) does not exist. Our objective was to develop and evaluate a multidimensional HRQOL instrument for individuals with AOHL.
Design: The Impact of Hearing Loss Inventory Tool (IHEAR-IT) was developed using results of focus groups, a literature review, advisory expert panel input, and cognitive interviews.
Study sample: The 73-item field-test instrument was completed by 409 adults (22–91 years old) with varying degrees of AOHL and from different areas of the USA.
Results: Multitrait scaling analysis supported four multi-item scales and five individual items. Internal consistency reliabilities ranged from 0.93 to 0.96 for the scales. Construct validity was supported by correlations between the IHEAR-IT scales and scores on the 36-item Short Form Health Survey, version 2.0 (SF-36v2) mental composite summary (r = 0.32–0.64) and the Hearing Handicap Inventory for the Elderly/Adults (HHIE/HHIA) (r ≥ 0.70).
Conclusions: The field test provides initial support for the reliability and construct validity of the IHEAR-IT for evaluating the HRQOL of individuals with AOHL. Further research is needed to evaluate the responsiveness to change of the IHEAR-IT scales and to identify items for a short form.

Stika, C. J., & Hays, R. D. (2016). Development and psychometric evaluation of a health-related quality of life instrument for individuals with adult-onset hearing loss. International Journal of Audiology. DOI: 10.3109/14992027.2016.1166397
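Multitrait scaling analysis of the kind reported above checks whether each item correlates more strongly with its own scale (corrected for overlap) than with competing scales; a minimal Python sketch with hypothetical items assigned to two scales:

# Minimal multitrait scaling sketch: corrected item-scale correlations.
# An item "scales" if it correlates more highly with its own scale (item removed)
# than with competing scales. Data and scale assignments here are hypothetical.
import numpy as np

rng = np.random.default_rng(7)
n = 400
factors = rng.multivariate_normal([0, 0], [[1, 0.4], [0.4, 1]], size=n)
items = np.hstack([
    factors[:, [0]] + rng.normal(scale=0.8, size=(n, 4)),  # items 0-3 -> scale A
    factors[:, [1]] + rng.normal(scale=0.8, size=(n, 4)),  # items 4-7 -> scale B
])
assignment = np.array([0, 0, 0, 0, 1, 1, 1, 1])

scale_sums = np.column_stack([items[:, assignment == s].sum(axis=1) for s in (0, 1)])
for j in range(items.shape[1]):
    own = assignment[j]
    own_corrected = np.corrcoef(items[:, j], scale_sums[:, own] - items[:, j])[0, 1]
    other = np.corrcoef(items[:, j], scale_sums[:, 1 - own])[0, 1]
    print(f"item {j}: own-scale r = {own_corrected:.2f}, other-scale r = {other:.2f}")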

Ron Hays

Shared publicly  - 
 
John Peipert, Ron D. Hays, and Dave Cella
 
We agree with Porter et al. (The New England Journal of Medicine, February 11, 2016, "Standardizing Patient Outcomes Measurement") that the use of patient-reported outcomes (PROs) in assessing health care value needs to be expanded and standardized. However, in their call for standardization of PROs across fields, they did not comment on the significant progress toward this end already made by the Patient Reported Outcomes Measurement Information System (PROMIS®) (www.nihpromis.org). PROMIS brought together experts in outcome measure development and multiple substantive clinical areas to develop a "best-of-the-best" set of general (not condition specific) health-related quality of life (HRQOL) measures. These measures cover a comprehensive set of HRQOL concepts (e.g., physical functioning, depressive symptoms, satisfaction with participation in social roles), can be compared on a standard T-score metric to the U.S. general population, and can be administered using computer adaptive testing to yield reliable scores efficiently. Multiple language translations are available and more are underway. In short, to advance the goal advocated by Porter et al., PROMIS provides a standard set of psychometrically sound HRQOL measures that can be used to measure health care value from the patient's perspective.

Ron Hays

Shared publicly  - 
 
Stucky, B. D., Hays, R. D., Edelen, M. O., Gurvey, J., & Brown, J. A.  (in press).  Possibilities for shortening the CAHPS clinician and group survey.  Medical Care.

Background: The Consumer Assessment of Healthcare Providers and Systems (CAHPS) Clinician & Group adult survey (CG-CAHPS) includes 34 items used to monitor the quality of ambulatory care from the patient's perspective. CG-CAHPS includes items assessing access to care, provider communication, and courtesy and respect of office staff. Stakeholders have expressed concerns about the length of the CG-CAHPS survey.
Objectives: This paper explores the impact on the reliability and validity of the CAHPS domain scores of reducing the number of items used to assess the three core CG-CAHPS domains (Provider Communication, Access to Care, and Courteous and Helpful Office Staff).
Research Design: The CG-CAHPS data reported here consist of 136,725 patients across four datasets, including ambulatory clinics, patient-centered medical homes, and accountable care organizations. Analyses are conducted in parallel across the four settings to allow evaluations across data sources.
Analyses: Multiple regression and ANOVA techniques were used to evaluate reliability for shorter sets of items. Site-level correlations with the overall rating of the provider were compared to evaluate the impact on validity. The change in practices' rank ordering as a function of domain revision is also reported.
Results: Findings suggest that the Provider Communication (6 items) and Access (5 items) domains can be reduced to as few as two items each, and Office Staff (2 items) can be reduced to a single item, without a substantial loss in reliability or content.
Conclusions: The performance of several of the reduced-length options for CG-CAHPS domains closely matches the full versions, and these options may be useful in healthcare settings where the full-length survey is impractical due to time or cost constraints.
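A convenient companion to these empirical analyses is the Spearman-Brown prophecy formula, which projects the reliability of a shortened scale; a minimal Python sketch with illustrative (not CG-CAHPS) numbers:

# Spearman-Brown prophecy: projected reliability when a scale is lengthened or
# shortened by a factor k (k = new length / original length). Values are illustrative.
def spearman_brown(reliability: float, k: float) -> float:
    return k * reliability / (1 + (k - 1) * reliability)

# e.g., a 6-item communication composite with reliability 0.90 reduced to 2 items:
original_items, new_items, rel = 6, 2, 0.90
print(round(spearman_brown(rel, new_items / original_items), 2))  # about 0.75

Note that this projects respondent-level reliability; site-level reliability also depends on the number of respondents per site, which is why the paper's site-level analyses matter.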

Ron Hays

Shared publicly  - 
 
http://medpac.gov/…/june-2015-report-to-the-congress-medica…
“If a link between patient-reported outcomes and clinical outcomes could be established and if the statistical and administrative concerns that the Commission raised in the context of the HOS could be mitigated, then a tool like the 10-item PROMIS Global Health Scale may have value as a population-based outcome measure to compare performance across FFS Medicare, accountable care organizations, and MA plans. Further research is needed before reaching conclusions about the use of HRQOL measures in Medicare” (p. 215)

Hays, R. D., Bjorner, J., Revicki, D. A., Spritzer, K., & Cella, D. (2009). Development of physical and mental health summary scores from the Patient-Reported Outcomes Measurement Information System (PROMIS) global items. Quality of Life Research, 18, 873-80.

Ron Hays

Shared publicly  - 
 
http://www.ncbi.nlm.nih.gov/pubmed/25832617
J Gen Intern Med. 2015 Apr 2. [Epub ahead of print]
U.S. General Population Estimate for "Excellent" to "Poor" Self-Rated Health Item.
Hays RD, Spritzer KL, Thompson WW, Cella D.
Abstract
BACKGROUND:
The most commonly used self-reported health question asks people to rate their general health from excellent to poor. This is one of the Patient-Reported Outcomes Measurement Information System (PROMIS) global health items. Four other items are used for scoring on the PROMIS global physical health scale. Because the single item is used on the majority of large national health surveys in the U.S., it is useful to construct scores that can be compared to U.S. general population norms.
OBJECTIVE:
To estimate the PROMIS global physical health scale score from the responses to the single excellent to poor self-rated health question for use in public health surveillance, research, and clinical assessment.
DESIGN:
A cross-sectional survey of 21,133 individuals, weighted to be representative of the U.S. general population.
PARTICIPANTS:
The PROMIS items were administered via a Web-based survey to 19,601 persons in a national panel and 1,532 subjects from PROMIS research sites. The average age of individuals in the sample was 53 years, 52% were female, 80% were non-Hispanic white, and 19% had a high school degree or lower level of education.
MAIN OUTCOME MEASURES:
PROMIS global physical health scale.
KEY RESULTS:
The product-moment correlation of the single item with the PROMIS global physical health scale score was 0.81. The estimated scale score based on responses to the single item ranged from 29 (poor self-rated health, 2.1 SDs worse than the general population mean) to 62 (excellent self-rated health, 1.2 SDs better than the general population mean) on a T-score metric (mean of 50).
CONCLUSIONS:
This item can be used to estimate scores for the PROMIS global physical health scale for use in monitoring population health and achieving public health objectives. The item may also be used for individual assessment, but its reliability (0.52) is lower than that of the PROMIS global health scale (0.81).
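Because PROMIS T-scores are scaled to a mean of 50 and an SD of 10 in the U.S. general population, the SD-unit figures in the key results follow directly; a minimal check in Python:

# T-scores have mean 50 and SD 10 in the U.S. general population, so the distance
# from the mean in SD units is simply (T - 50) / 10.
for label, t in [("poor self-rated health", 29), ("excellent self-rated health", 62)]:
    print(f"{label}: T = {t}, {(t - 50) / 10:+.1f} SD vs. the general population mean")
# poor: -2.1 SD; excellent: +1.2 SD, matching the abstract.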
People
Have him in circles (17 people):
Janel Hanmer
Mike Sonksen
Tom Potts
Richard Hector
ken kaneki
Jamie Durity
Mohir Ahmedov
ALLIANCE VOICE SOLUTIONS
Work
Occupation
Professor
Employment
  • UCLA
    Professor, present
  • RAND
Places
Currently
Cerritos
Previously
Long Beach
Story
Introduction

Ron Hays (PhD, University of California, Riverside, Psychology) is a Professor of Medicine at UCLA and a Senior Health Scientist at RAND. He is one of the Principal Investigators for CAHPS®, a project that has developed measures to assess consumer evaluations of hospitals, nursing homes, group practices, and individual physicians, as well as tools to report these assessments to health care providers and consumers. Dr. Hays has published 430 research articles and 36 book chapters. He is a member of the special methodology panel for the Journal of General Internal Medicine, a former editor-in-chief of Quality of Life Research, and a former deputy editor of Medical Care.

Education
  • UC Riverside
Basic Information
Gender
Male
Reviews

One problem was that the in-room heating system was set on automatic and woke me up when it came on in the wee hours of the morning. The front desk said the next day that maintenance could have been called to change it, but when you are tired and not dressed, the last thing you want is a maintenance visit at 2:30 am. Upon checkout the front desk staff gave me a complimentary breakfast voucher, but I had only 10 minutes before my shuttle to a meeting, and the wait staff couldn't even get me coffee that fast.
Quality: Very Good | Facilities: Good | Service: Very Good
Reviewed publicly, 3 years ago