James P. Scanlan, Attorney at Law

Algorithm Fairness

(sketch)

(Jan. 4, 2012)

This page is a placeholder (sketch) for a page that will discuss various issues concerning so-called algorithm fairness or unfairness.  Perceptions about algorithm fairness or unfairness involve the pattern whereby, when one group (Group A) has a higher likelihood of experiencing outcome X than another group (Group B), an imperfect predictor of outcome X will tend to underpredict outcome X for Group A and overpredict it for Group B, with the consequence that (a) among persons who experience outcome X after being evaluated by the predictor, a higher proportion of Group B than of Group A will have been identified by the predictor as unlikely to experience outcome X (so-called false negatives) and (b) among persons who do not experience outcome X after being evaluated by the predictor, a higher proportion of Group A than of Group B will have been identified by the predictor as likely to experience the outcome (so-called false positives).  The issue, which is currently much discussed with regard to the fairness of algorithms used to make decisions about arrested or convicted persons, is the same as that discussed with regard to employment tests in Ability Testing: Uses, Consequences and Controversies, Part I, National Academies Press (1982) (Committee on Ability Testing, Assembly of Behavioral and Social Sciences, National Research Council, Alexandra K. Wigdor & Wendell R. Garner (eds.)), and in various other places in the 1980s and early 1990s.
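
The following is a minimal simulation sketch (in Python, with hypothetical parameters chosen only for illustration, not data from any study) of the pattern just described: two groups differ in their underlying likelihood of outcome X, an imperfect score is used to predict the outcome with a single cutoff, and the higher-likelihood group shows the higher false positive rate among those who do not experience the outcome and the lower false negative rate among those who do.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # persons per group (hypothetical)

def simulate(group_mean):
    """Simulate one group evaluated by an imperfect predictor of outcome X."""
    risk = rng.normal(group_mean, 1.0, N)               # underlying propensity toward outcome X
    score = risk + rng.normal(0.0, 1.0, N)              # imperfect predictor: risk plus noise
    outcome = rng.random(N) < 1 / (1 + np.exp(-risk))   # outcome X occurs with probability rising in risk
    predicted = score > 0.5                              # same cutoff applied to both groups
    return outcome, predicted

for name, mean in [("Group A (higher likelihood of X)", 0.5),
                   ("Group B (lower likelihood of X)", -0.5)]:
    outcome, predicted = simulate(mean)
    false_positive_rate = predicted[~outcome].mean()     # flagged, among those who did not experience X
    false_negative_rate = (~predicted[outcome]).mean()   # not flagged, among those who did experience X
    print(f"{name}: false positive rate {false_positive_rate:.2f}, "
          f"false negative rate {false_negative_rate:.2f}")
```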

Whether outcome X is the favorable outcome or the corresponding adverse outcome in a particular context, as well as whether particular cases are deemed false positives or false negatives, is entirely arbitrary.  In the case of employment tests where whites have higher scores than blacks, successful performance of the job is commonly regarded as outcome X.  Thus, perceived test unfairness will commonly be described in terms of higher false negative rates for blacks (i.e., higher rates of test failure among blacks than among whites who performed well on the job) and higher false positive rates for whites (i.e., higher pass rates among whites than among blacks who did not perform well on the job).

In the case of algorithms used to predict recidivism, recidivism is commonly treated as outcome X.  Thus, in situations where black defendants have higher recidivism risk scores than white defendants, perceived algorithm unfairness will commonly be cast in terms of higher false positive rates for blacks and higher false negative rates for whites.  Commonly the racial differences in false positives and false negatives will be cast in relative terms.

With regard to both employment tests and recidivism algorithms, the characterization of the issue is often misleading.  In the case of employment tests, the matter has at times been characterized in terms of underprediction of successful performance for blacks and overprediction for whites, when in fact the opposite is the case.

In the case of recidivism algorithms, a May 23, 2016 ProPublica article titled “How We Analyzed the COMPAS Recidivism Algorithm,” which is the subject of the Recidivism Illustration page, stated that “black defendants were far more likely than white defendants to be incorrectly judged to be at a higher risk of recidivism, while white defendants were more likely than black defendants to be incorrectly flagged as low risk.”  Such statements, even if semantically accurate, must be interpreted with an understanding that, among defendants the algorithm identified as likely to recidivate, black defendants were in fact less likely than white defendants to have been incorrectly so identified, and, among defendants the algorithm identified as unlikely to recidivate, black defendants were more likely than white defendants to have been incorrectly so identified.  It is only among defendants who did not recidivate that blacks were more likely than whites to have been incorrectly identified as highly likely to recidivate, and only among defendants who did recidivate that whites were more likely than blacks to have been incorrectly identified as highly unlikely to recidivate.
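
A hypothetical two-by-two illustration (invented round numbers, not the COMPAS data) shows how the same set of classifications can be described in opposite ways depending on whether one conditions on the actual outcome, as in the quoted statement, or on the algorithm's prediction:

```python
# Hypothetical counts for two groups of defendants (illustrative only, not the COMPAS data).
groups = {
    "Group A (higher recidivism rate)": {("high", "recid"): 450, ("high", "none"): 160,
                                         ("low", "recid"): 150,  ("low", "none"): 240},
    "Group B (lower recidivism rate)":  {("high", "recid"): 240, ("high", "none"): 150,
                                         ("low", "recid"): 160,  ("low", "none"): 450},
}

for name, t in groups.items():
    # Conditioning on the actual outcome (the quoted framing):
    fpr = t[("high", "none")] / (t[("high", "none")] + t[("low", "none")])     # among non-recidivists
    fnr = t[("low", "recid")] / (t[("low", "recid")] + t[("high", "recid")])   # among recidivists
    # Conditioning on the prediction:
    wrong_among_high = t[("high", "none")] / (t[("high", "none")] + t[("high", "recid")])
    wrong_among_low = t[("low", "recid")] / (t[("low", "recid")] + t[("low", "none")])
    print(f"{name}: FPR among non-recidivists {fpr:.0%}, FNR among recidivists {fnr:.0%}; "
          f"incorrect among flagged high risk {wrong_among_high:.0%}, "
          f"incorrect among flagged low risk {wrong_among_low:.0%}")
```

Under these hypothetical figures Group A shows the higher false positive rate among non-recidivists and the lower false negative rate among recidivists, yet among persons flagged high risk Group A members are less often incorrectly flagged, and among persons flagged low risk Group A members are more often incorrectly flagged.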

This page will eventually address several issues regarding perceptions about algorithm fairness.  One will involve the failure to understand the way cutoffs affect the size of relative differences in false positives and relative differences in false negatives.  Observers speak about reducing the unfairness of a predictor.  But just as such observers universally fail to understand the way altering a cutoff will tend to affect relative differences in favorable and adverse outcomes pursuant to the predictor (e.g., altering the cutoff in a way that increases favorable outcomes tends to reduce relative differences in favorable outcomes while increasing relative differences in the corresponding adverse outcome), they fail to understand the way cutoffs affect the size of relative differences in false positives and false negatives.
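
A brief sketch of the favorable/adverse pattern mentioned parenthetically above, using hypothetical normally distributed scores for two groups (the particular means and cutoffs are assumptions chosen only to display the pattern):

```python
from statistics import NormalDist

# Hypothetical score distributions; Group A is the lower-scoring group.
group_a = NormalDist(mu=-0.5, sigma=1.0)
group_b = NormalDist(mu=0.5, sigma=1.0)

print("cutoff | ratio of pass rates (B/A) | ratio of fail rates (A/B)")
for cutoff in (-1.0, -0.5, 0.0, 0.5, 1.0):
    pass_a = 1 - group_a.cdf(cutoff)     # favorable outcome rate for Group A
    pass_b = 1 - group_b.cdf(cutoff)     # favorable outcome rate for Group B
    fail_a, fail_b = 1 - pass_a, 1 - pass_b
    print(f"{cutoff:6.1f} | {pass_b / pass_a:25.2f} | {fail_a / fail_b:25.2f}")
```

Lowering the cutoff (making the favorable outcome more common) shrinks the relative difference in pass rates while enlarging the relative difference in fail rates.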

Another may involve the way the validity of the predictor affects relative differences in false positives and relative differences in false negatives.  That is, the more valid the predictor, the smaller will be the numbers of false positives and false negatives.  But the effect on relative differences in false positives and false negatives is another matter.
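
A sketch along the lines of the simulation above (again with hypothetical parameters) that varies the amount of noise in the predictor, so that one can observe both the shrinking false positive and false negative rates as validity improves and, separately, how the relative differences between the groups behave:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000  # persons per group (hypothetical)

def error_rates(group_mean, noise_sd, cutoff=0.5):
    """False positive and false negative rates for one group, given predictor noise."""
    risk = rng.normal(group_mean, 1.0, N)
    score = risk + rng.normal(0.0, noise_sd, N)          # smaller noise_sd = more valid predictor
    outcome = rng.random(N) < 1 / (1 + np.exp(-risk))
    flagged = score > cutoff
    fpr = flagged[~outcome].mean()     # among those who did not experience the outcome
    fnr = (~flagged[outcome]).mean()   # among those who did experience the outcome
    return fpr, fnr

for noise_sd in (2.0, 1.0, 0.5):
    fpr_a, fnr_a = error_rates(0.5, noise_sd)    # higher-likelihood group
    fpr_b, fnr_b = error_rates(-0.5, noise_sd)   # lower-likelihood group
    print(f"noise {noise_sd}: FPR A {fpr_a:.3f} vs B {fpr_b:.3f} (ratio {fpr_a / fpr_b:.2f}); "
          f"FNR A {fnr_a:.3f} vs B {fnr_b:.3f} (ratio {fnr_b / fnr_a:.2f})")
```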