James P. Scanlan, Attorney at Law


Technical Issues in the National Healthcare Disparities Reports

(March 12, 2009; rev. May 22, 2011)

 

A great deal of material on this site is highly critical of the measurement approach of the Agency for Healthcare Research and Quality (AHRQ) in the National Healthcare Disparities Report.  The main criticism involves the agency’s reliance on relative differences in outcome rates without recognizing the extent to which changes in relative differences are functions of the overall prevalence of an outcome – specifically, that the rarer an outcome, the greater tends to be the relative difference in experiencing it and the smaller tends to be the relative difference in avoiding it.  The failure to recognize or address this tendency makes it impossible for the agency to distinguish a meaningful change in a disparity from one that is solely a function of overall changes in the prevalence of an outcome or otherwise to appraise the comparative size of different disparities. 
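The tendency described above can be illustrated with a minimal sketch (all numbers hypothetical, not drawn from AHRQ data): two groups whose underlying risk scores are normally distributed with the same spread but different means. Lowering the cutoff makes the adverse outcome rarer, which tends to enlarge the relative difference in experiencing it while shrinking the relative difference in avoiding it.

```python
# Hypothetical sketch: adverse outcome = score below a cutoff, for two
# normally distributed groups (means 0 and 0.5, sd 1 -- invented values).
from math import erf, sqrt

def norm_cdf(x, mean=0.0, sd=1.0):
    """Normal cumulative distribution function via the error function."""
    return 0.5 * (1.0 + erf((x - mean) / (sd * sqrt(2.0))))

# Lower cutoff = rarer adverse outcome.
for cutoff in (1.0, 0.0, -1.0, -2.0):
    adv_fail = norm_cdf(cutoff, mean=0.5)  # advantaged group's adverse rate
    dis_fail = norm_cdf(cutoff, mean=0.0)  # disadvantaged group's adverse rate
    ratio_adverse = dis_fail / adv_fail                 # experiencing the outcome
    ratio_favorable = (1 - adv_fail) / (1 - dis_fail)   # avoiding the outcome
    print(f"cutoff {cutoff:+.1f}: adverse ratio {ratio_adverse:.2f}, "
          f"favorable ratio {ratio_favorable:.3f}")
```

As the cutoff drops and the adverse outcome grows rarer, the adverse-outcome ratio rises (from about 1.2 toward 3.7 in this parameterization) while the favorable-outcome ratio falls toward 1.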

 

AHRQ relies on whichever relative difference (in the favorable or the adverse outcome) is larger, using the disadvantaged group as the numerator in the fraction to determine which relative difference is larger.  In normal data, as a favorable outcome becomes more common, the decreasing relative difference in the favorable outcome tends to be larger than the increasing relative difference in the adverse outcome until the point where the advantaged group’s rate reaches 50%.   Thus, in situations of improving outcomes, AHRQ will tend to find decreasing disparities until the point where the advantaged group’s favorable outcome rate reaches 50% and thereafter will tend to find disparities to be increasing.  Most things that AHRQ examines are in the latter range.
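The opposite movements of the two relative differences can be sketched with the same hypothetical normal model (means 0 and 0.5, sd 1 are invented values, not AHRQ figures): as the favorable outcome becomes more common, the favorable-outcome ratio shrinks while the adverse-outcome ratio grows, so which relative difference is larger flips at an intermediate prevalence.

```python
# Hypothetical two-group model; sweep the advantaged group's favorable rate.
from statistics import NormalDist

adv = NormalDist(mu=0.5, sigma=1.0)  # advantaged group (invented parameters)
dis = NormalDist(mu=0.0, sigma=1.0)  # disadvantaged group (invented parameters)

for adv_fav in (0.3, 0.5, 0.7, 0.9):         # advantaged group's favorable rate
    cutoff = adv.inv_cdf(1.0 - adv_fav)      # threshold yielding that rate
    dis_fav = 1.0 - dis.cdf(cutoff)          # disadvantaged favorable rate
    fav_ratio = adv_fav / dis_fav                   # relative diff, favorable
    adverse_ratio = (1.0 - dis_fav) / (1.0 - adv_fav)  # relative diff, adverse
    print(f"advantaged favorable {adv_fav:.0%}: "
          f"favorable ratio {fav_ratio:.2f}, adverse ratio {adverse_ratio:.2f}")
```

In this run the favorable-outcome ratio declines steadily while the adverse-outcome ratio rises, so a measure keyed to whichever ratio is larger switches direction as outcomes improve.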

 

The National Center for Health Statistics (NCHS) always measures disparities in terms of relative differences in the adverse outcome and hence generally tends to find improvements in health associated with increasing health disparities.  For most things that AHRQ studies, AHRQ will tend to reach the same conclusions as to the direction of changes in disparities that NCHS would reach.  The Centers for Disease Control and Prevention (CDC) usually measures disparities in terms of absolute differences between rates (as in its January 14, 2011 report CDC Health Disparities and Inequalities Report – United States, 2011).  As discussed in the introduction to the Scanlan’s Rule page, absolute differences between rates tend to change in the opposite direction of the larger of the two relative differences.  Thus, AHRQ will tend to reach conclusions as to the direction of changes in disparities that are the opposite of those CDC would reach. 
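The divergence between the absolute difference and the larger relative difference can also be sketched with the hypothetical normal model used above (invented parameters, not data from any of the reports): in the range where favorable rates are high, improving outcomes shrink the absolute difference even as the adverse-outcome ratio grows.

```python
# Hypothetical two-group model in the high-favorable-rate range.
from statistics import NormalDist

adv = NormalDist(mu=0.5, sigma=1.0)  # advantaged group (invented parameters)
dis = NormalDist(mu=0.0, sigma=1.0)  # disadvantaged group (invented parameters)

for adv_fav in (0.70, 0.80, 0.90, 0.95):
    cutoff = adv.inv_cdf(1.0 - adv_fav)
    dis_fav = 1.0 - dis.cdf(cutoff)
    abs_diff = adv_fav - dis_fav                      # absolute difference
    adverse_ratio = (1 - dis_fav) / (1 - adv_fav)     # larger relative diff here
    print(f"advantaged favorable {adv_fav:.0%}: "
          f"absolute diff {abs_diff:.3f}, adverse ratio {adverse_ratio:.2f}")
```

As rates improve in this range the absolute difference falls (roughly 0.19 down to 0.08) while the adverse-outcome ratio climbs (roughly 1.6 up to 2.5), so measures based on the two will report opposite directions of change.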

 

For further information on this issue, see Section A.4 of the Measuring Health Disparities page and Section A.6 of the Scanlan’s Rule page and the references mentioned there, in particular the 2007 APHA presentation and the March 2008 Addendum to that presentation.

 

Review of the 2006 and earlier reports for the referenced APHA presentation also led to the discovery of ways in which, apart from the measurement issues just described, the 2006 report was inaccurate or misleading.  Such matters include a misdescription of the first disparity presented in the Highlights section of the report, where, by reporting that figures involved the receipt of care as soon as wanted when in fact they involved the failure to receive such care, the report gave the impression that a disparity was adverse to whites rather than to blacks.  The matters also include a pervasive confusion of percentage point changes with percent or percentage changes.  Thus, the report highlights as a 7.9% yearly decrease in the black-white disparity in new AIDS cases what was in fact less than a 1% yearly decrease.  As a result of the same confusion, but with opposite implications, in the case of the above-mentioned misdescribed disparity concerning receipt of care as soon as wanted, what the report terms a 9.8% yearly increase is actually a 65% yearly increase.[i] 
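The percentage-point/percent distinction at issue can be shown in a few lines (the figures below are invented for illustration, not taken from the NHDR): subtracting two percentages yields a change in percentage points, while a percent change divides that difference by the baseline value.

```python
# Hypothetical disparity measured in percent at two points in time.
old_disparity = 40.0   # baseline disparity, in percent (invented)
new_disparity = 36.0   # disparity a year later, in percent (invented)

point_change = new_disparity - old_disparity             # percentage points
percent_change = 100.0 * point_change / old_disparity    # percent of baseline

print(f"change: {point_change:+.1f} percentage points, "
      f"i.e. {percent_change:+.1f}% of the baseline disparity")
# -> change: -4.0 percentage points, i.e. -10.0% of the baseline disparity
```

A 4-point drop from a 40% baseline is a 10% decrease; reporting the one as the other understates or overstates the change depending on the size of the baseline.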

 

Prior to the issuance of the 2007 report, I brought these matters to the attention of AHRQ, suggesting that, as appropriate, the matters be addressed in the 2007 report and that an errata sheet be added to the online version of the 2006 report. 

 

Only one of the matters may have been addressed in the 2007 report.  And, though an errata sheet was at some point posted for the 2006 report, it does not address the errors in that report.

 

In order to maintain a publicly available record of these issues, I provide as items 1 and 2 below links to edited versions of emails to the staff of AHRQ regarding these matters.  Because the 2006 report was to be the subject of a presentation at the 2007 conference of the American Public Health Association, I reviewed that report with some care.  I have given only limited attention to the 2007 report, save to determine whether it addressed certain of the matters I brought to AHRQ’s attention regarding the 2006 report.  The limited review of the document, however, did identify certain technical issues in the 2007 report of the same or similar nature to those addressed with AHRQ regarding the 2006 report.  In an email of March 11, 2009, I brought those issues to the attention of the same AHRQ staff member with whom I addressed similar issues regarding the 2006 report.  Since those issues involve matters of which users of the 2007 report should be made aware, the March 2009 email to AHRQ is made available by means of the third link below.[ii] 

 

1.  Email to AHRQ, October 29, 2007:

http://www.jpscanlan.com/images/10-29-07_note_to_AHRQ.pdf

 

2.  Email to AHRQ, November 12, 2007:

http://www.jpscanlan.com/images/11-12-07_note_to_AHRQ.pdf

 

3.  Email to AHRQ, March 11, 2009:

http://www.jpscanlan.com/images/03-12-09_note_to_AHRQ.pdf



[i]  In Section A.1 of the November 12, 2007 email to AHRQ (and in the version of this document that was posted between March 12, 2009 and December 6, 2009), I described these changes as a 122% yearly increase and a 0.9% yearly decrease.  In doing so I had simply used the methodology AHRQ describes in note xix (at 5) of the 2006 NHDR, where it was clearly discussing percentage point changes rather than percent changes (though it termed these changes percent changes in the text).  That methodology is appropriate for translating an all-years percentage point change into yearly changes, since changes in the baseline from year to year are not relevant.  But those changes are relevant with regard to the relationship between an all-years percent change and the yearly changes underlying it.  For example, a 20% yearly increase would translate into a 73% increase over three years; a 20% yearly decrease would translate into a 49% decrease over three years.  And the larger the yearly change, the greater the degree to which yearly changes will differ from an all-years figure.  In any case, the 122% figure was based on an all-years 363% increase in the relative difference divided by three years, but the correct figure is close to 65%.  Because the percent yearly decrease in the relative difference in new AIDS cases was small, the properly calculated figure differed little from the figure derived by dividing an all-years 3.3% change by four years.  See the discussion of this issue on the Percentage Points sub-page of the Vignettes page of jpscanlan.com.
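The compounding arithmetic in this footnote can be sketched directly: an all-years percent change translates into a yearly percent change geometrically, not by simple division (the 363% figure is from the footnote; the exact result of the formula may differ slightly from the footnote's rounded 65%).

```python
# Yearly percent change implied by a compounded all-years percent change.
def yearly_from_total(total_change, years):
    """Invert compounding: (1 + yearly)**years = 1 + total."""
    return (1.0 + total_change) ** (1.0 / years) - 1.0

# A 20% yearly increase compounds to about a 73% increase over three years:
total = (1.0 + 0.20) ** 3 - 1.0           # ≈ 0.728
# Dividing an all-years 363% increase by three years gives 121% per year,
# but the compounded yearly rate is only about two thirds of 100%:
naive = 3.63 / 3                           # = 1.21
compounded = yearly_from_total(3.63, 3)    # ≈ 0.67
print(f"{total:.3f} {naive:.2f} {compounded:.3f}")
```

The naive division roughly doubles the true yearly rate here, and the gap between the two widens as yearly changes grow.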

 

[ii]  I have not reviewed the 2009 report carefully.  But I did note that, in describing the 1% change that it would regard as meaningful (at page 28), AHRQ uses the same language that it uses in the National Healthcare Quality Report and thus describes a 1% change in an outcome rate.  Presumably it means a 1% change in a relative disparity (though in generally describing changes in disparities it discusses percentage point changes while referring to them as percent changes).