Technical Issues in the National Healthcare Disparities Reports
(March 12, 2009; rev. May 22, 2011)
A great deal of material on this site is highly critical of the measurement approach of the Agency for Healthcare Research and Quality (AHRQ) in the National Healthcare Disparities Report. The main criticism involves the agency’s reliance on relative differences in outcome rates without recognizing the extent to which changes in relative differences are functions of the overall prevalence of an outcome – specifically, that the rarer an outcome, the greater tends to be the relative difference in experiencing it and the smaller tends to be the relative difference in avoiding it. The failure to recognize or address this tendency makes it impossible for the agency to distinguish a meaningful change in a disparity from one that is solely a function of overall changes in the prevalence of an outcome or otherwise to appraise the comparative size of different disparities.
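The tendency just described can be illustrated with a minimal sketch: two hypothetical groups whose underlying risk is normally distributed, with means half a standard deviation apart. These distributions, the cutoffs, and the group labels are illustrative assumptions, not figures from the reports. Lowering the cutoff makes the favorable outcome more common; the relative difference in the favorable outcome then shrinks while the relative difference in the adverse outcome grows.

```python
from statistics import NormalDist

# Hypothetical risk distributions (illustrative only): advantaged group
# centered at 0, disadvantaged group half a standard deviation lower.
advantaged = NormalDist(mu=0.0, sigma=1.0)
disadvantaged = NormalDist(mu=-0.5, sigma=1.0)

def relative_differences(cutoff):
    """Favorable-outcome rate for the advantaged group, plus the relative
    differences in the favorable and adverse outcomes at a given cutoff.
    Both ratios are oriented so they exceed 1, to make them comparable."""
    fav_a = 1 - advantaged.cdf(cutoff)      # advantaged favorable rate
    fav_d = 1 - disadvantaged.cdf(cutoff)   # disadvantaged favorable rate
    rel_fav = fav_a / fav_d                 # relative difference, favorable outcome
    rel_adv = (1 - fav_d) / (1 - fav_a)     # relative difference, adverse outcome
    return fav_a, rel_fav, rel_adv

# Lowering the cutoff makes the favorable outcome more common.  The
# relative difference in the favorable outcome falls as the relative
# difference in the adverse outcome rises.
for cutoff in (1.5, 0.5, -0.5, -1.5):
    fav_a, rel_fav, rel_adv = relative_differences(cutoff)
    print(f"advantaged favorable rate {fav_a:5.1%}  "
          f"favorable-outcome ratio {rel_fav:.2f}  "
          f"adverse-outcome ratio {rel_adv:.2f}")
```

Run across the four cutoffs, the favorable-outcome ratio falls steadily while the adverse-outcome ratio rises, without any change in the underlying difference between the two distributions.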
AHRQ relies on whichever relative difference (in the favorable or the adverse outcome) is larger, using the disadvantaged group as the numerator in the fraction to determine which relative difference is larger. In normal data, as a favorable outcome becomes more common, the decreasing relative difference in the favorable outcome tends to be larger than the increasing relative difference in the adverse outcome until the point where the advantaged group’s rate reaches 50%. Thus, in situations of improving outcomes, AHRQ will tend to find decreasing disparities until the point where the advantaged group’s favorable outcome rate reaches 50%, and thereafter will tend to find disparities to be increasing. Most things that AHRQ examines are in the latter range.
The National Center for Health Statistics (NCHS) always measures disparities in terms of relative differences in the adverse outcome and hence generally tends to find improvements in health associated with increasing health disparities. For most things that AHRQ studies, AHRQ will tend to reach the same conclusions as to the direction of changes in disparities that NCHS would reach. The Centers for Disease Control and Prevention (CDC) usually measures disparities in terms of absolute differences between rates (as in its January 14, 2011 report CDC Health Disparities and Inequalities Report – United States, 2011). As discussed in the introduction to the Scanlan’s Rule page, absolute differences between rates tend to change in the opposite direction of the larger of the two relative differences. Thus, AHRQ will tend to reach conclusions as to the direction of changes in disparities that are the opposite of those CDC would reach.
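The divergence between the absolute difference and the larger relative difference can also be sketched with two hypothetical normally distributed groups half a standard deviation apart (illustrative assumptions, not data from any of the reports). As the favorable outcome becomes nearly universal, the CDC-style absolute difference shrinks while the larger of the two relative differences grows:

```python
from statistics import NormalDist

# Hypothetical risk distributions (illustrative only).
advantaged = NormalDist(mu=0.0, sigma=1.0)
disadvantaged = NormalDist(mu=-0.5, sigma=1.0)

def measures(cutoff):
    """Absolute difference in favorable-outcome rates, and the larger of
    the two relative differences, at a given cutoff."""
    fav_a = 1 - advantaged.cdf(cutoff)
    fav_d = 1 - disadvantaged.cdf(cutoff)
    abs_diff = fav_a - fav_d                  # CDC-style absolute difference
    rel_fav = fav_a / fav_d                   # relative difference, favorable outcome
    rel_adv = (1 - fav_d) / (1 - fav_a)       # relative difference, adverse outcome
    return abs_diff, max(rel_fav, rel_adv)

# As the favorable outcome grows more common, the absolute difference
# declines while the larger relative difference (here, the one in the
# adverse outcome) increases -- opposite directions of change.
for cutoff in (-0.5, -1.0, -1.5, -2.0):
    abs_diff, larger_rel = measures(cutoff)
    print(f"absolute difference {abs_diff:.3f}  "
          f"larger relative difference {larger_rel:.2f}")
```

Under these assumptions, an agency watching the absolute difference would report a shrinking disparity over the same span in which an agency watching the larger relative difference would report a growing one.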
For further information on this issue, see Section A.4 of the Measuring Health Disparities page and Section A.6 of the Scanlan’s Rule page and the references mentioned there, in particular the 2007 APHA presentation and the March 2008 Addendum to that presentation.
Review of the 2006 and earlier reports for the referenced APHA presentation also led to the discovery of ways in which, apart from the measurement issues just described, the 2006 report was inaccurate or misleading. Such matters include a misdescription of the first disparity presented in the Highlights section of the report, where, by reporting that figures involved the receipt of care as soon as wanted when in fact they involved the failure to receive such care, the report gave the impression that a disparity was adverse to whites rather than to blacks. The matters also include a pervasive confusion of percentage point changes with percent or percentage changes. Thus, the report highlights as a 7.9% yearly decrease in the black-white disparity in new AIDS cases what was in fact less than a 1% yearly decrease. As a result of the same confusion, but with opposite implications, in the case of the above-mentioned misdescribed disparity concerning receipt of care as soon as wanted, what is termed a 9.8% yearly increase is actually a 65% yearly increase.[i]
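The distinction being confused can be shown with a minimal sketch using made-up figures (not numbers from the report): a measure that falls from 40.0% to 36.0% has dropped 4 percentage points, but 10 percent.

```python
# Hypothetical figures (not from the report): a rate falling from 40% to 36%.
old_rate = 0.40
new_rate = 0.36

# A percentage point change is the arithmetic difference between the rates.
percentage_point_change = (new_rate - old_rate) * 100   # -4.0 points

# A percent change is that difference relative to the starting rate.
percent_change = (new_rate - old_rate) / old_rate * 100  # -10.0 percent

print(f"{percentage_point_change:.1f} percentage point change")
print(f"{percent_change:.1f} percent change")
```

Reporting the 4-point drop as a "4% decrease" (or the 10% drop as a "10 percentage point decrease") is precisely the conflation at issue; the two figures diverge more sharply the smaller the starting rate.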
Prior to the issuance of the 2007 report, I brought these matters to the attention of AHRQ, suggesting that, as appropriate, the matters be addressed in the 2007 report and that an errata sheet be added to the online version of the 2006 report.
Only one of the matters may have been addressed in the 2007 report. And, though an errata sheet was at some point posted for the 2006 report, it does not address the errors in that report.
In order to maintain a publicly available record of these issues, I provide as items 1 and 2 below links to edited versions of emails to the staff of AHRQ regarding these matters. Because the 2006 report was to be the subject of a presentation at the 2007 conference of the American Public Health Association, I reviewed that report with some care. I have given only limited attention to the 2007 report, save to determine whether it addressed certain of the matters I brought to AHRQ’s attention regarding the 2006 report. The limited review of the document, however, did identify certain technical issues in the 2007 report of the same or similar nature to those addressed with AHRQ regarding the 2006 report. In an email of March 11, 2009, I then brought those issues to the attention of the same AHRQ staff member with whom I had addressed similar issues regarding the 2006 report. Since those issues involve matters of which users of the 2007 report should be made aware, the March 2009 email to AHRQ is made available by means of the third link below.[ii]
1. Email to AHRQ, October 29, 2007:
http://www.jpscanlan.com/images/10-29-07_note_to_AHRQ.pdf
2. Email to AHRQ, November 12, 2007:
http://www.jpscanlan.com/images/11-12-07_note_to_AHRQ.pdf
3. Email to AHRQ, March 11, 2009:
http://www.jpscanlan.com/images/03-12-09_note_to_AHRQ.pdf
[i] In Section A.1 of the November 12, 2007 email to AHRQ (and in the version of this document that was posted between March 12, 2009 and December 6, 2009), I described these changes as a 122% yearly increase and a 0.9% yearly decrease. In doing so I had simply used the methodology AHRQ describes in note xix (at 5) of the 2006 NHDR, where it was clearly discussing percentage point changes rather than percent changes (though it termed these changes percent changes in the text). The methodology is appropriate for translating an all-years percentage point change into yearly changes, since changes in the baseline from year to year are not relevant to percentage point changes. But those changes are relevant with regard to the relationship between an all-years percent change and the yearly percent changes underlying it. For example, a 20% yearly increase would translate into a 73% increase over three years; a 20% yearly decrease would translate into a 49% decrease over three years. And the larger the yearly change, the greater the degree to which yearly changes will differ from an all-years figure. In any case, the 122% figure was based on an all-years 363% increase in the relative difference divided by three years, but the correct figure is close to 65%. Because the percent yearly decrease in the relative difference in new AIDS cases was small, the properly calculated figure differed little from the figure derived by dividing an all-years 3.3% change by four years. See discussion of this issue in the Percentage Points sub-page of the Vignettes page of jpscanlan.com.
[ii] I have not reviewed the 2009 report carefully. But I did note that, in describing the 1% change that it would regard as meaningful (at page 28), AHRQ uses the same language that it uses in the National Healthcare Quality Report and thus describes a 1% change in an outcome rate. Presumably it means a 1% change in a relative disparity (though, in generally describing changes in disparities, it discusses percentage point changes while referring to them as percent changes).