Council of Parent Attorneys and Advocates, Inc. v. DeVos
(June 2, 2019; rev. July 18, 2019)
This page discusses the innumeracy of the Departments of Education and Justice reflected in actions taken to delay implementation of a regulation requiring that “significant [racial/ethnic] disproportionality” in the identification of children with disabilities and particular types of disabilities, as well as in disciplinary actions among children with disabilities, be measured in terms of risk ratios, and in the defense of the delay decision against a challenge in Council of Parent Attorneys and Advocates, Inc. v. DeVos. At issue is a situation where, rather than justifying the delay on a basis that would have insulated it from a successful challenge and likely obviated any challenge at all, the agencies continued to promote their mistaken understanding that generally reducing an outcome will tend to reduce, rather than increase, relative differences in rates of experiencing the outcome.
A. The Department of Education’s December 2016 Disproportionality Regulation
One area where racial disparity monitoring consumes great resources of state and local governments involves the Individuals with Disabilities Education Act (IDEA). IDEA requires recipients of federal education funds to identify “significant [racial/ethnic] disproportionality” in the identification of children with disabilities and particular types of disabilities, as well as in disciplinary actions among children with disabilities. When significant disproportionality is found, educational authorities are required to take certain actions. These actions commonly involve adding circumspection to decision-making processes in a way that reduces the frequency of identification of children with disabilities and the frequency of disciplinary actions among children with disabilities. For reasons I have explained in many places, including “Race and Mortality Revisited,” Society (July/Aug. 2014), and “Compliance Nightmare Looms for Baltimore Police Department,” Federalist Society Blog (Feb. 8, 2017), such actions will tend to increase relative racial/ethnic differences in rates of experiencing these outcomes. See “Getting it Straight When Statistics Can Lie,” Legal Times (June 23, 1993), regarding, among other things, a study reflecting the mistaken belief that an organization with many safeguards against arbitrary treatment of employees would tend to have comparatively small, rather than comparatively large, relative racial differences in termination rates.[i]
In the case of determinations of significant racial/ethnic disproportionality in identification of children with disabilities, IDEA also requires education authorities to set aside a certain proportion of federal IDEA funds “to provide comprehensive coordinated early intervening services to serve children in the local educational agency, particularly children in those groups that were significantly overidentified.” Effective programs of this nature (i.e., those that reduce the numbers of children requiring special education services) that are provided equally to children of all racial/ethnic groups, by reducing the total number of identifications, will tend to increase relative racial/ethnic differences between identification rates of advantaged and disadvantaged groups. Effective programs of this nature that are provided solely to disadvantaged groups will tend to reduce all measures of differences between identification rates of advantaged and disadvantaged groups. The effect on relative differences of programs that are “particularly” focused on disadvantaged groups will turn on how “particularly” is interpreted and on a range of other factors that will likely vary from setting to setting.
In December 2016, just before the change in administrations, leadership of the Department of Education issued a regulation (Disproportionality Regulation) requiring educational authorities covered by IDEA to measure significant disproportionality in terms of the relative difference between a particular group’s rate and the rate for all other persons, as reflected in the ratio of those two rates (termed the Risk Ratio in the regulation). The regulation left it to the states to determine the size of the Risk Ratio that would constitute significant disproportionality and certain other things that would affect the frequency of determinations of significant disproportionality. The regulation also specified, among the things to be measured with regard to racial/ethnic differences in the disciplining of students with disabilities, two categories of in-school suspensions (ten days or less, and more than ten days) and two like categories of out-of-school suspensions. States were to comply with the requirements of the regulation by July 2018.
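For concreteness, the following minimal sketch computes the Risk Ratio as the regulation defines it, that is, a group’s rate divided by the rate for all other children in the local educational agency. The counts are purely hypothetical.

```python
# Minimal sketch (hypothetical counts): the Risk Ratio as defined in the
# Disproportionality Regulation, i.e., a group's identification rate divided by
# the rate for all other children in the local educational agency.

def risk_ratio(group_identified, group_total, others_identified, others_total):
    """Ratio of the group's rate to the rate for all other children."""
    return (group_identified / group_total) / (others_identified / others_total)

# Hypothetical district: 200 of 1,000 children in the group are identified (20%),
# versus 370 of 3,700 among all other children (10%).
print(risk_ratio(200, 1000, 370, 3700))  # 2.0
```

Whether a ratio of that size constitutes significant disproportionality would, under the regulation, be for the state to decide.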
B. The Department of Education’s July 2018 Delay Regulation
Current leadership of the Department of Education, believing that there has been too much required monitoring of racial/ethnic differences under IDEA where there is little reason to believe that bias underlies those differences, and that pressures to reduce significant disproportionality may have been causing the underidentification of disabilities among racial/ethnic minorities, wanted to reconsider this regulation. Therefore, in July 2018, the agency issued a regulation (Delay Regulation) postponing implementation of the December 2016 regulation until July 2020 so that the agency could give further thought to the matter.
Whatever else it might wish to say to justify the delay, an agency with a sound understanding of the measurement of demographic differences would sensibly have pointed out the irrationality of measuring such differences, especially in this context, by means of relative differences between rates. In fact, the agency could simply have done the following based on the Disproportionality Regulation itself.
It could have quoted the following language from the Preamble to the Disproportionality Regulation (Federal Register, Vol. 81, No. 243, at 92407 (Dec. 19, 2016)):
“Similarly, one commenter argued that the risk ratio is an illogical measure of the association between two groups; for example, a risk ratio of 1.85 for outcome rates of 37 percent and 20 percent means the same thing as a risk ratio of 2.60 for rates of 13 percent and 5 percent[.]”
This statement was a somewhat garbled compacting of a statement in my Comment on the Proposed Regulation.[ii] But it nevertheless raised the problematic feature of the risk ratio as a measure of association in that it treats situations that are the same as if they were different.[iii]
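The figures in the quoted comment can be reproduced with the following minimal sketch, which assumes two groups whose underlying susceptibility distributions are normal with equal spread and means half a standard deviation apart, an assumption consistent with Table 1 of “Race and Mortality Revisited,” from which the figures were drawn; the cutoff placements are otherwise illustrative.

```python
# Minimal sketch (assumptions: two normal distributions with equal standard deviation
# and means 0.5 SD apart; cutoffs are placed so that the advantaged group's
# adverse-outcome rate is 20% or 5%).
from scipy.stats import norm

GAP = 0.5  # separation between the two groups' means, in standard deviations

for advantaged_target in (0.20, 0.05):
    cutoff = norm.ppf(advantaged_target)
    adv = int(round(norm.cdf(cutoff) * 100))        # advantaged group's adverse rate, in %
    dis = int(round(norm.cdf(cutoff + GAP) * 100))  # disadvantaged group's adverse rate, in %
    print(f"rates {dis}% and {adv}%: risk ratio {dis / adv:.2f}")

# Prints "rates 37% and 20%: risk ratio 1.85" and "rates 13% and 5%: risk ratio 2.60".
# The same half-standard-deviation difference underlies both pairs of rates; only the
# overall frequency of the outcome differs.
```

A measure that assigns different values to these two situations is treating situations that are the same as if they were different.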
The agency then could have pointed out that, although the Preamble devoted almost 400 words to a purported discussion of the statement and another point, nothing said in that discussion in any way bore on the illogic of treating two situations differently when they in fact are the same. That is, for example, the discussion did not challenge the point that the situations were the same, nor did it argue that, even though the two situations involved the same level of association between group membership and outcome rates, there somehow existed a rational basis for treating them differently.
The agency could also have pointed out that, apart from the illogic of the relative difference as a measure of association, in the context at issue, other things being equal, educational authorities that are more circumspect about the identification of children with disabilities or the imposition of discipline (and those that adopt programs to generally reduce discipline rates) will tend to show larger relative differences in these outcomes than other educational authorities. Fulfillment of the obligations arising from determinations of significant disproportionality (probably including the provision of comprehensive coordinated early intervening services) will tend to increase disproportionality as measured by Risk Ratios still further. In pointing these things out, the Department of Education would also have sensibly clarified that the matter had been complicated by the agency’s own promotion of the mistaken belief that generally reducing an outcome tends to reduce relative differences in rates of experiencing it when in fact the opposite is the case. Explaining to the public and school administrators that the agency had been mistakenly leading them to believe that certain policies would reduce relative differences when those policies in fact tended to increase such differences is something the agency could fairly be regarded as having an obligation to do in any case. Regarding this obligation, see my July 17, 2017 letter to the U.S. Departments of Education, Health and Human Services, and Justice, my “Innumeracy at the Department of Education and the Congressional Committees Overseeing It,” Federalist Society Blog (Aug. 24, 2017), and “United States Exports Its Most Profound Ignorance About Racial Disparities to the United Kingdom,” Federalist Society Blog (Nov. 2, 2017), as well as my December 8, 2017 testimony to the U.S. Commission on Civil Rights and the handout for a March 22, 2018 meeting with Department of Education staff.[iv]
The Department of Education could also have simply pointed out that, at the time the 2016 regulation was issued, the agency, like most other federal agencies and most of the social science community, believed that reducing an outcome tended to reduce relative differences in rates of experiencing it, and that, now understanding that the opposite is the case, the agency needs time to consider the implications of the fact that its understanding was incorrect.
To be comprehensive, an agency that fully understood all pertinent measurement issues would also have pointed out problems with measuring a demographic difference by comparing a group’s rate with the rate for all other persons. That is, when black and white rates for an outcome are, say, 15 percent and 5 percent, all measures of difference involving those rates will differ between a situation where the student body is composed solely of blacks and whites and a situation where the student body also includes other groups. For example, if a student body is one-third black, one-third white, and one-third Hispanic, and, say, the Hispanic rate is 10 percent, the black 15 percent rate would be compared with a 7.5 percent rate (the combined rate for whites and Hispanics) rather than a 5 percent rate. Further, in that situation, no measure would find disproportionality as to Hispanics, since the Hispanic rate would be the same as the rate for all other students combined. See my IDEA Data Center Disproportionality Guide page and slides 98 to 106 of my October 10, 2014 University of Maryland workshop.
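The composition point can be illustrated with the following minimal sketch, using the rates just mentioned; the equal group shares are, again, hypothetical.

```python
# Minimal sketch (hypothetical equal group shares; rates taken from the example above):
# how the "all other students" comparison group affects the Risk Ratio.

def rr_vs_all_others(rates, shares, group):
    """A group's rate divided by the combined rate of all other students."""
    others = [g for g in rates if g != group]
    others_rate = sum(rates[g] * shares[g] for g in others) / sum(shares[g] for g in others)
    return rates[group] / others_rate

rates = {"black": 0.15, "white": 0.05, "hispanic": 0.10}
shares = {"black": 1 / 3, "white": 1 / 3, "hispanic": 1 / 3}

print(round(rr_vs_all_others(rates, shares, "black"), 2))     # 2.0 (15% vs. 7.5% for all others)
print(round(rr_vs_all_others(rates, shares, "hispanic"), 2))  # 1.0 (10% vs. 10% for all others)
# In a student body composed of blacks and whites alone, the black Risk Ratio would
# instead be 0.15 / 0.05 = 3.0.
```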
An agency that fully understood measurement issues would also have pointed out that it is impossible to rationally analyze differences in rates of experiencing either category of in-school suspensions or the category of out-of-school suspensions of ten days or less. These are what may be termed intermediate outcome categories. Differences between two groups reflected by their rates of falling into such categories can never be quantified in a rational manner, just as differences in rates of having fair health cannot be quantified in a rational manner (though differences between rates of less-than-good health, or between the corresponding rates of good-or-better health, can be rationally quantified).[v]
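The instability of intermediate categories can be seen in the following minimal sketch, which assumes a latent severity scale on which the two groups’ distributions are normal with equal spread and means half a standard deviation apart; the cutoff placements are purely hypothetical.

```python
# Minimal sketch (assumptions: normally distributed latent severity, equal spread,
# group means 0.5 SD apart; cutoff placements are hypothetical).  The ratio of the
# groups' rates of falling into an intermediate band can sit above or below 1
# depending on where the cutoffs happen to fall, while the ratio for reaching at
# least the lower cutoff remains above 1.
from scipy.stats import norm

GAP = 0.5  # the disadvantaged group's mean severity exceeds the advantaged group's by 0.5 SD

def band_rate(mean, lower, upper):
    """Probability of falling between the two cutoffs (the intermediate category)."""
    return norm.cdf(upper, loc=mean) - norm.cdf(lower, loc=mean)

for lower, upper in ((1.0, 2.0), (-1.0, 0.0)):  # two hypothetical cutoff placements
    band = band_rate(GAP, lower, upper) / band_rate(0.0, lower, upper)
    at_least = (1 - norm.cdf(lower, loc=GAP)) / (1 - norm.cdf(lower))
    print(f"cutoffs ({lower}, {upper}): intermediate-band ratio {band:.2f}, "
          f"at-least-lower-cutoff ratio {at_least:.2f}")

# With cutoffs at (1.0, 2.0) the disadvantaged group's intermediate-band rate is about
# 1.8 times the advantaged group's; with cutoffs at (-1.0, 0.0) it is about 0.7 times,
# even though the same underlying disadvantage is present in both cases.
```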
If the Department of Education had simply explained the most basic problem with the Risk Ratio for quantifying demographic differences in this (or any) context, in addition to educating the public, educational authorities, Congress, and the disparities research community regarding a matter they all woefully misunderstood, the agency would have insulated the Delay Regulation from successful challenge under the Administrative Procedure Act (assuming the courts proved to be educable on the matter).[vi] Such an explanation would also have obviated reasons for any entity concerned about effective monitoring of demographic differences to challenge the Delay Regulation.
Failing to understand these issues, however, the Department of Education justified the Delay Regulation simply by relying on concerns that the rule would incentivize educational authorities to reduce Risk Ratios (a) by improperly reducing the total number of students identified as having disabilities and (b) by improperly reducing the number of students in particular racial/ethnic groups identified as having disabilities. In making these arguments, the agency cited a Texas law limiting the proportion of students in an educational authority identified as having disabilities to 8.5 percent.
It warrants note that, in arguing that the regulation would incentivize education authorities to reduce the total number of identifications in order to reduce the Risk Ratio, the agency was once again leading the public and educational authorities to believe incorrectly that reducing an outcome would tend to reduce, rather than increase, relative differences between rates of experiencing the outcome. Nevertheless, there is a certain perverse validity to the argument. The only entities actually incentivized to reduce Risk Ratios by limiting the total number of identifications are those that, like the Department of Education itself, mistakenly believe that limiting the total number of identifications will tend to reduce, rather than increase, Risk Ratios. But, since virtually everyone involved shares that mistaken belief, the incentive would exist, and it appears to have influenced Texas.
There is also a certain perverse validity to the argument that using the Risk Ratio to measure significant disproportionality would incentivize education authorities to take race/ethnic-conscious action to reduce Risk Ratios, though not in the way the Delay Regulation regards the matter. Attaching adverse consequences to a finding of racial/ethnic disproportionality will certainly incentivize authorities to take race/ethnic-conscious action to avoid such consequences, but it will do so regardless of the measure used to identify racial/ethnic disproportionality. In this and varied other settings, however, the incentive is especially strong when the sanctioned measure is the Risk Ratio, because the main actions commonly taken with the aim of reducing such ratios that are not race/ethnic-conscious in fact tend to increase the ratios, thus providing additional incentives to take race/ethnic-conscious action to reduce them. Texas, for example, would have especially strong incentives to reduce the identification of particular racial/ethnic groups precisely because its limitation on the total number of identifications will tend to cause its educational authorities to have comparatively large Risk Ratios, at least where groups that commonly have comparatively high identification rates make up comparatively large proportions of the student population and the 8.5 percent limitation thus in fact reduces the total number of identifications.
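The mechanism can be shown with the following minimal sketch, which assumes normally distributed identification propensity (equal spread, group means half a standard deviation apart) and hypothetical group shares; only the 8.5 percent figure comes from the Texas law mentioned above.

```python
# Minimal sketch (assumptions: normally distributed identification propensity, equal
# spread, group means 0.5 SD apart; group shares are hypothetical; 8.5% is the Texas
# cap discussed above).  A tighter cap on total identifications forces a higher
# threshold, which in turn raises the Risk Ratio between the groups.
from scipy.optimize import brentq
from scipy.stats import norm

GAP = 0.5  # the higher-rate group's mean propensity exceeds the other group's by 0.5 SD

def risk_ratio_under_cap(cap, high_rate_share):
    """Find the threshold holding total identifications to `cap`, then return the
    Risk Ratio (higher-rate group's identification rate over the other group's)."""
    def excess(threshold):
        high = 1 - norm.cdf(threshold, loc=GAP)
        low = 1 - norm.cdf(threshold)
        return high_rate_share * high + (1 - high_rate_share) * low - cap

    threshold = brentq(excess, -5, 5)
    return (1 - norm.cdf(threshold, loc=GAP)) / (1 - norm.cdf(threshold))

print(round(risk_ratio_under_cap(0.130, 0.3), 2))  # about 2.2 at an uncapped 13% overall rate
print(round(risk_ratio_under_cap(0.085, 0.3), 2))  # about 2.4 under the 8.5% cap
print(round(risk_ratio_under_cap(0.085, 0.6), 2))  # about 2.6 when the higher-rate group
                                                   # is a larger share of the student body
```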
C. COPAA v. DeVos
Nine days after the Department of Education issued the Delay Regulation, an advocacy group brought Council of Parent Attorneys and Advocates, Inc. v. DeVos et al. to challenge the regulation as arbitrary and capricious. In cases like this, federal agencies are usually represented by the Department of Justice, which shares all the misunderstandings regarding the measurement of demographic differences that the Department of Education has. In fact, the Department of Justice has promoted the belief that generally reducing an outcome will tend to reduce relative differences in rates of experiencing it in far more situations than the Department of Education has. The Department of Justice defended the case using the arguments in the Delay Regulation itself, while revealing no understanding that generally reducing identification rates tends to increase, not reduce, racial/ethnic Risk Ratios. In doing so, the agency at least impliedly led the court to believe (as both the Departments of Justice and Education believe) not only that states and educational authorities would believe that generally reducing identification rates would tend to reduce Risk Ratios, but that generally reducing identification rates would in fact tend to do so. By a ruling of March 7, 2019, the United States District Court for the District of Columbia rejected those arguments and vacated the Delay Regulation. The government appealed the ruling on May 6, 2019, and the government’s brief as appellant is due on August 19, 2019.
Whether the appeal process will cause either agency or the courts to finally understand that generally reducing an outcome tends to increase, not reduce, relative differences in rates of experiencing it remains to be seen. If the plaintiff Council of Parent Attorneys and Advocates, Inc. were to come to understand the issue, it would sensibly withdraw the suit. Such an organization has no interest in causing the implementation of a nonsensical regulation. On the other hand, it has a strong interest in causing the Department of Education to rethink the regulation and to rethink everything else the agency has done with regard to the quantification of demographic differences in educational outcomes. See the above-mentioned “Innumeracy at the Department of Education and the Congressional Committees Overseeing It,” Federalist Society Blog (Aug. 24, 2017).
Other situations of continuing innumeracy on the part of federal agencies are reflected in my letters of April 12, 2018, and April 17, 2018, to the Government Accountability Office and of December 13, 2018, to the Department of Housing and Urban Development (HUD). The last item involves a situation with elements that are in some respects comparable to elements of the situation addressed on this page. HUD is reconsidering its regulation, upheld in Texas Department of Housing and Community Affairs, et al. v. The Inclusive Communities Project, Inc., that applied the disparate impact doctrine to the Fair Housing Act. Withdrawing the regulation would be easier if HUD understood the problematic nature of the doctrine and its less discriminatory alternative requirement, given that relaxing a lending standard, while tending to reduce relative differences in meeting the standard, tends to increase relative differences in failing to meet it. See my amicus curiae brief in the Inclusive Communities case and my “Is the Disparate Impact Doctrine Unconstitutionally Vague?” Federalist Society Blog (May 6, 2016). But even though income and credit score data make it absolutely clear that lowering income and credit score requirements tends to increase relative racial differences in failing to meet the requirements, HUD, like the Department of Justice and other agencies enforcing fair lending laws, continues to believe that lowering standards tends to reduce those differences.
[i] IDEA also requires that educational authorities identify “significant discrepancies” between long-term suspension and expulsion rates of children with and without disabilities, and requires that, when such discrepancies are found, educational authorities consider certain actions (including, as specifically identified in the statute, “positive behavioral interventions and supports”) of a type that commonly increases relative differences in rates of suspensions and expulsions. See “Race and Mortality Revisited” at 343 and my Disabilities – Public Law 108-446 page. The Department of Education regulation discussed on this page does not apply to determinations of significant discrepancies.
[ii] The figures are from Table 1 of “Race and Mortality Revisited,” which shows how lowering a test cutoff, while reducing relative differences in pass rates, tends to increase relative differences in failure rates. The quotation in the Preamble is the closest the Department of Education has come to recognizing that lowering a test cutoff tends to increase relative differences in failure rates. Possibly every person in the Department of Education involved in the analysis of demographic differences believes that lowering a test cutoff tends to reduce relative differences in failure rates.
[iii] The problematic nature of the risk ratio as a measure of association is, of course, also reflected by the following. Even though the forces causing the adverse outcome rates of advantaged and disadvantaged groups to differ are the same as the forces causing the groups’ favorable outcome rates to differ, according to the numbers presented in the example, risk ratios for the adverse outcome indicate that the forces are stronger in the second situation than in the first, while risk ratios for the favorable outcome indicate that the forces are stronger in the first situation than in the second. The problematic nature of the risk ratio as a measure of association is also reflected by the fact that anytime a risk ratio says that the strength of an association is the same in situations involving different pairs of rates – as, for example, where rates of 20 percent and 10 percent and rates of 10 percent and 5 percent both yield a risk ratio of 2.0 – the risk ratios for the opposite outcomes will necessarily say that the strength of association is different in the two situations. In fact, the principal thing one can divine with some assurance from the fact that a risk ratio is the same in two situations involving different pairs of advantaged and disadvantaged group rates for either a favorable or an adverse outcome is that the strength of the forces causing the groups’ rates to differ is not the same in the two situations.
[iv] The handout (at 11) discusses the Disproportionality Regulation among materials produced or sponsored by the agency that it should withdraw.
[v] This issue applies to comparisons of rates at which different groups receive single suspensions (which cannot be effectively analyzed) as distinguished from rates of one-or-more suspensions (which can be effectively analyzed), as explained in my July 17, 2017 letter to the Departments of Education, Health and Human Services, and Justice (at 7).
[vi] To my knowledge, while courts have occasionally discussed that different measures of demographic differences yield differing values in ways that may be pertinent to a legal issue, none has ever recognized that it is possible for two measures to change in opposite directions as the prevalence of an outcome changes. Like the government, all courts seem to take for granted that reducing an adverse outcome will tend to reduce, rather than increase, relative differences in rates of experiencing it. See my “Getting it Straight When Statistics Can Lie,” Legal Times (June 23, 1993).