Prefatory note: This page serves as a repository of links to formal correspondence with institutions whose missions are compromised by a failure to understand the patterns by which measures of differences between outcome rates (proportions) tend to be affected by the prevalence of an outcome (as discussed, among other places, on the Measuring Health Disparities, Scanlan’s Rule, Mortality and Survival, Lending Disparities, and Discipline Disparities pages of this site). Because this page was made a sub-page of the Measuring Health Disparities page, the institutions were originally limited to those involved with health and healthcare disparities research issues. In April 2012, however, the page was expanded to include letters to other types of institutions.
The page serves two purposes. First, it makes available electronic copies of the items of correspondence with links to the materials they reference, thus facilitating recipients’ review of those materials. Second, it creates a record of the correspondence that may ultimately be useful in addressing the willingness and ability of institutions to respond to information indicating that some of the things they do in pursuit of their missions are deeply flawed. For example, the Department of Education and the Department of Justice have for some time been leading the public to believe that large racial disparities in discipline rates result from stringent discipline policies, the exact opposite of the case. So this page may one day address how those institutions reacted when confronted with the fact of their misperceptions. See the Duncan/Ali Letter sub-page of the Discipline Disparities page and the Holder/Perez Letter sub-page of the Lending Disparities page.
To date, most institutions have done nothing significant in response to this correspondence (or other correspondence, whether in hard copy or email form). One notable exception involves the National Center for Health Statistics (NCHS). But its response was by no means a useful one, as illustrated by the following circumstances. Among the health disparities issues included in the Race and Health Initiative instituted by the Department of Health and Human Services (HHS) in February 1998 was the racial disparity in immunization rates. In an October 26, 1998 Progress Review: Black Americans, HHS reported a substantial decrease in racial differences in pneumococcal immunization rates among persons over 65. In consequence of my raising the measurement issues addressed on this site with HHS and NCHS beginning in 1998, NCHS, rather than addressing those issues and perhaps providing useful guidance to the increasing numbers of health and healthcare disparities researchers, simply determined, beginning in 2004, that disparities in things like immunization henceforth would always be measured in terms of relative differences in adverse outcomes. Thus, the substantial reduction in an immunization disparity identified by HHS in 1998 would now be deemed a substantial increase in the disparity. See Table 3 of the Harvard University Applied Statistics Workshop 2012. See also Table 4 of the workshop and the Comment on Morita (Pediatrics 2008) regarding a situation where the authors found dramatic decreases in immunization disparities in circumstances where NCHS would find dramatic increases in disparities. The Duncan/Ali Letter sub-page also discusses the way that, on recognizing the statistical patterns described on this site, patient.co.uk revised its guidance on calculating number-needed-to-treat in light of the points made on the Subgroup Effects and Illogical Premises sub-pages of the Scanlan’s Rule page.
Many thousands of institutions in the United States and around the world engage in activities that involve appraising differences between the rates at which two groups experience an outcome and evaluating the bearing of such appraisal on a range of issues in the law and the social and medical sciences. Such institutions include governmental entities, universities, research institutes, and a variety of scientific and other scholarly journals. With very minor exception, however, the manner in which these institutions appraise differences between outcome rates is fundamentally flawed as a result of the failure to recognize the way that standard measures of differences between outcome rates tend to be affected by the overall prevalence of an outcome, as discussed in the Measuring Health Disparities (MHD), Scanlan’s Rule (SR), and Mortality and Survival pages, among other pages, on this site and in the references made available by those pages, including “Can We Actually Measure Health Disparities?” (Chance, Spring 2006), “Race and Mortality” (Society, Jan/Feb 2000), “Divining Difference” (Chance, Fall 1994), “The Perils of Provocative Statistics” (Public Interest, Winter 1991), and “The Misinterpretation of Health Inequalities in the United Kingdom” (British Society for Population Studies, 2006) – and now addressed most comprehensively in the Harvard University Measurement Letter listed with the items at the end of the text on this page.
The most notable of the ways standard measures of differences between outcome rates are affected by the overall prevalence of an outcome is the pattern whereby the rarer an outcome the greater tends to be the relative difference in experiencing it and the smaller tends to be the relative difference in avoiding it. Thus, among many comparable examples:
·When test cutoffs are lowered (or test performance improves), relative differences in failure rates tend to increase while relative differences in pass rates tend to decrease.
·When poverty declines, relative differences in poverty rates tend to increase while relative differences in rates of avoiding poverty tend to decrease.
·When mortality declines, relative differences in mortality rates tend to increase while relative differences in survival rates tend to decrease.
·When overall rates of receiving beneficial health procedures or care (e.g., mammography, immunization, prenatal care, adequate hemodialysis, etc.) increase, relative differences in rates of receiving such procedures or care tend to decrease while relative differences in rates of failing to receive them tend to increase.
·Banks with relatively liberal lending policies tend to show larger relative differences in mortgage rejection rates but smaller relative differences in mortgage acceptance rates than banks with less liberal lending policies.
·Relative differences in adverse outcome rates tend to be large among comparatively advantaged subpopulations (e.g., the college-educated, British civil servants), where such outcomes tend to be rare, while relative differences in the opposite outcomes tend to be small among those subpopulations.
·More lenient school discipline policies will tend to yield larger relative differences in discipline rates, though smaller relative differences in rates of avoiding discipline, than more stringent policies.
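The pattern underlying all of the bulleted examples can be shown with a minimal numerical sketch. The figures below are my own illustration, not drawn from the referenced pages: they assume two groups whose underlying scores are normally distributed with equal spread and means half a standard deviation apart, with the cutoff progressively lowered so that the adverse outcome (failure) becomes rarer.

```python
from statistics import NormalDist

# Hypothetical score distributions: equal spread, means 0.5 SD apart.
advantaged = NormalDist(mu=0.5, sigma=1.0)
disadvantaged = NormalDist(mu=0.0, sigma=1.0)

for cutoff in (1.0, 0.0, -1.0):  # lowering the cutoff makes failure rarer
    fail_a = advantaged.cdf(cutoff)      # failure rate, advantaged group
    fail_d = disadvantaged.cdf(cutoff)   # failure rate, disadvantaged group
    rel_fail = fail_d / fail_a                # relative difference in failing
    rel_pass = (1 - fail_a) / (1 - fail_d)    # relative difference in passing
    print(f"cutoff={cutoff:+.1f}  fail ratio={rel_fail:.2f}  pass ratio={rel_pass:.2f}")
```

As the cutoff drops, the failure-rate ratio grows (roughly 1.22, then 1.62, then 2.38) while the pass-rate ratio shrinks (roughly 1.94, then 1.38, then 1.11) — the rarer the adverse outcome, the larger the relative difference in experiencing it and the smaller the relative difference in avoiding it.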
Absolute differences between rates and differences measured by odds ratios tend also to be affected by the overall prevalence of an outcome, though in a more complicated way than the two relative differences, as described most precisely in the introduction to SR. Roughly, as uncommon outcomes (those with rates of less than 50% for both groups) become more common, absolute differences between rates tend to increase; as common outcomes (those with rates of more than 50% for both groups) become even more common, absolute differences tend to decrease. Differences measured by odds ratios tend to change in the opposite direction of absolute differences.[i] Other common measures that are functions of dichotomies, and hence in some manner affected by overall prevalence, are discussed in various places – e.g., longevity (BSPS 2006), the Gini coefficient (Gini Coefficient sub-page of MHD), the concentration index (Concentration Index sub-page of MHD), the phi coefficient (Section A.13 of SR), Cohen’s Kappa Coefficient (Section A.13a of SR).[ii]
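The behavior of absolute differences and odds ratios can be sketched under the same hypothetical assumptions used above (two normal distributions with means half a standard deviation apart — my own illustration, not a figure from SR), letting the adverse outcome grow steadily more common:

```python
from statistics import NormalDist

adv = NormalDist(mu=0.5, sigma=1.0)
dis = NormalDist(mu=0.0, sigma=1.0)

for cutoff in (-1.0, 0.0, 1.0):  # raising the cutoff makes the adverse outcome more common
    p_a, p_d = adv.cdf(cutoff), dis.cdf(cutoff)   # adverse outcome rates
    abs_diff = p_d - p_a                          # absolute difference between rates
    odds_ratio = (p_d / (1 - p_d)) / (p_a / (1 - p_a))
    print(f"rates=({p_a:.3f}, {p_d:.3f})  abs diff={abs_diff:.3f}  OR={odds_ratio:.3f}")
```

The absolute difference rises while both rates remain below 50% and then falls once both exceed 50%, and the odds ratio moves in the opposite direction, reaching its minimum where the absolute difference peaks.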
One point of clarification is in order. When a study finds, for example, that a factor increases some outcome rate from 1% to 3%, whether one states that the factor increased the outcome by 200% or by 2 percentage points or states that the opposite outcome decreased by 2% (i.e., 99% reduced to 97%) or 2 percentage points, all such characterizations would be correct, and none would implicate the issues described in the prior paragraphs.[iii] But if one were to attempt to compare the size of the effect in the circumstance where an outcome rate increased from 1% to 3% with one where, say, a factor increases a rate of 2% to 5% – or if one were more abstractly to attempt to characterize the difference between 1% and 3% as a large one or a small one – the referenced issues are implicated. And none of the institutions whose activities involve the appraisal of differences in outcome rates recognizes these issues, much less knows how to address them.
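Simple arithmetic on the two rate pairs just mentioned shows why such comparisons implicate the measurement issues — each of the standard measures ranks the two effects differently:

```python
# The two hypothetical rate pairs from the text: 1% -> 3% and 2% -> 5%.
pairs = {"1% -> 3%": (0.01, 0.03), "2% -> 5%": (0.02, 0.05)}

for label, (before, after) in pairs.items():
    rel = after / before                   # relative difference in the outcome
    abs_pp = (after - before) * 100        # absolute difference, in percentage points
    rel_opp = (1 - before) / (1 - after)   # relative difference in the opposite outcome
    print(f"{label}: ratio={rel:.2f}, {abs_pp:.0f} points, opposite-outcome ratio={rel_opp:.4f}")
```

By the relative difference in the outcome, the first effect is larger (a tripling versus a 2.5-fold increase); by the absolute difference (2 versus 3 percentage points) and by the relative difference in the opposite outcome, the second effect is larger. Which factor had the "larger" effect thus turns entirely on the measure chosen.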
Most of the materials made available on this site involve the analysis of health and healthcare disparities, particularly with regard to whether race/ethnic or socioeconomic disparities are increasing or decreasing over time or otherwise are larger in one setting than another. But as suggested by the bulleted examples set out above, the same issues are involved in any interpretation of the size of differences between outcome rates.
As reflected in the discussion of the works of Carr-Hill and Chalmers-Dixon, Houweling et al., Eikemo et al., and Day et al. in Section E.7 of MHD, more thought has been given to these issues in Europe than in the United States. But with respect to the extent to which the overwhelming majority of work implicating these issues is fundamentally flawed, the situation in Europe is indistinguishable from that in the United States and elsewhere around the world.
The extent of the failure to recognize these issues is perhaps best illustrated by the many journal articles that, particularly with regard to disparities in cancer outcomes, discuss relative differences in survival and relative differences in mortality interchangeably without recognizing that the two relative differences tend to change systematically in opposite directions as cancer survival increases (as discussed on the Mortality and Survival page). The failure is also well illustrated by the way the Departments of Justice and Education have been encouraging or pressuring banks and public schools to relax lending standards and discipline policies, plainly believing that doing so should reduce relative differences in mortgage rejection rates and relative differences in discipline rates, when the exact opposite is the case (as discussed on the Lending Disparities and Discipline Disparities pages and in the recent “‘Disparate Impact’: Regulators Need a Lesson in Statistics” (American Banker, June 5, 2012) and “Racial Differences in School Discipline Rates” (The Recorder, June 22, 2012)). Indeed, though the Department of Justice has been pressing employers to lower test cutoffs for close to fifty years because lowering cutoffs reduces relative differences in pass rates, it is unclear whether anyone in the Department even knows that lowering test cutoffs increases relative differences in failure rates.
From time to time, I have contacted various researchers or institutions about these issues, in recent years usually by email, suggesting that they reevaluate the ways they or those in some manner affiliated with them (as in the case of editors of scientific journals and the authors who publish in those journals) appraise differences between outcome rates. But even when the emails have been read carefully enough for the recipient to recognize that there may be a serious problem with current methods, such communications have had limited effect. With the hope that formal letters would have greater effect, in 2009 I began to send such letters to some of the more influential institutions involved in activities implicating the issues described above. When sending hard copy letters, it is my practice to include links to referenced materials and to post electronic versions of the letters on this site in order to facilitate the recipients’ review of referenced materials. Thus, as letters are sent, links to the letters will be made available below.
Some of the eventual recipients of the letters are already discussed in various pages on this site, including the National Center for Health Statistics (NCHS) and the Agency for Healthcare Research and Quality (including, among many other places, Section E.4 of MHD, Section A.6 of SR, and the 2007 APHA presentation). (As noted in the introductory material, earlier contacts with NCHS, while causing their statisticians to recognize problems with existing practices, failed to yield useful results.) Other institutions may be mentioned only in passing, as in the case of the Health Care Policy Department of Harvard Medical School (see Pay for Performance sub-page of MHD), though the work of such entities may be frequently addressed in the comments collected under Section D of MHD. Those comments, by their critique of so much research in major medical and health policy journals published in the United States and Europe, also inferentially implicate the editorial practices of those journals. The Mortality and Survival page does that more directly with regard to journals that publish articles on disparities in cancer mortality and survival, as discussed above, typically without recognizing, for example, that increasing mortality disparities tend to be associated with decreasing survival disparities.
Whether a particular institution receives attention on this site or receives one of the letters to be listed below typically will have little to do with the institution’s level of understanding of these issues. For similar misunderstandings exist at essentially all institutions. Institutions (and researchers) that indicate a recognition that determinations as to the size of a disparity between outcome rates may turn on the measure chosen may seem to reflect a greater understanding of the matter.[iv] But unless such entities also recognize the way each measure is systematically affected by the overall prevalence of an outcome or the need to find a measure that is not so affected, their recognition that different measures may yield different results is of limited value.
Letters to institutions are listed below, with a link provided to each letter:
[ii] As discussed in the introduction to the Solutions sub-page of MHD and in the February 23, 2009 update to the Comment on Morita, a probit analysis yields the same results as the more mechanically derived estimated effect size (EES) described on the Solutions sub-page of MHD and thus is theoretically unaffected by the overall prevalence of an outcome. The points made on the Solutions page regarding the strengths and weaknesses of the EES apply to the probit analysis as well. See also the Truncation Issues sub-page of SR and the Cohort Considerations sub-page of MHD.
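As a rough sketch of the idea behind such a prevalence-unaffected measure — the rate pairs and the closed-form shortcut below are my own illustration, not the EES procedure described on the Solutions sub-page — one can estimate the difference, in standard-deviation units, between the means of two hypothetical underlying normal distributions that would produce a given pair of rates:

```python
from statistics import NormalDist

def underlying_mean_difference(rate_a, rate_b):
    """Difference (in SD units) between the means of two hypothetical
    normal distributions producing the observed outcome rates; a
    probit-style estimate that does not change merely because the
    cutoff defining the outcome moves."""
    z = NormalDist().inv_cdf
    return z(rate_b) - z(rate_a)

# Two hypothetical rate pairs that differ sharply by relative and
# absolute measures:
print(underlying_mean_difference(0.09, 0.20))  # 9% vs. 20%
print(underlying_mean_difference(0.23, 0.40))  # 23% vs. 40%
```

Both pairs, though quite different by relative and absolute measures, yield an underlying difference of roughly half a standard deviation, illustrating why such an estimate can remain stable as an outcome's overall prevalence changes.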
[iii] A statement that the second figure is two times greater or two times higher than the first figure would also be correct. As discussed on the Times Higher/Times Greater sub-page of the Vignettes page of this site, however, a statement that the second figure is three times greater or higher than the first (the predominant usage in most scientific journals) would be incorrect. As discussed on the Percentage Points sub-page of the Vignettes page, a statement that the second figure is 2% greater than the first, whether incorrect or not, should be discouraged. But these are different issues from those addressed on this page.
[iv] But see page 9 and Section D of the Harvard University Measurement Letter regarding reasons why those who express an awareness of the way various measures yield different results may betray a fundamental misunderstanding of the purpose of an inquiry into the forces causing the rates of two groups to differ.