communities of color spans decades, new developments like facial recognition technologies (FRT) and other AI technologies represent a new source of stress. For instance, facial analysis technologies have higher overall error rates for people of color. That disparate effect has led some experts to fear that the new technology may further divide police and the communities they serve.19
Not only has the enlarged scope of AI technologies raised concern, but research findings suggest these technologies have inherited biases and misinformation that influence policing decisions as well. After studying facial recognition technologies in several large cities, Harvard researcher Alex Najibi concluded: "In a criminal justice setting, face recognition technologies that are inherently biased in their accuracy can misidentify suspects, incarcerating innocent Black Americans."20 Research shows that algorithms trained on law enforcement data display both gender and racial biases, particularly against Black men.21 The point is that new AI technology intended to help make society safer may instead hurt the communities the police claim to protect. Similar concerns surround the use of AI technologies in the court system.
The explosion of AI-based legal data products in the court system has brought these concerns to the fore in several venues. The American Association of Law Libraries' 2019 Diversity & Inclusion Symposium focused on bias in the court system. "Algorithms and taxonomies are ultimately created by human beings with our own prejudices and stereotypes," the symposium reported. "Artificial intelligence technologies and platforms could be replicating or even aggravating the same issues when it comes to discrimination of information regarding specific groups or just ignoring highly important data crucial to a case or your patron."
In response to concerns about AI bias in the legal system, the American Bar Association passed a resolution that "urges courts and lawyers to address the emerging ethical and legal issues related to the usage of artificial intelligence in the practice of law including: (1) bias, explainability and transparency of automated decisions made by AI; (2) ethical and beneficial usage of AI; and (3) controls and oversight of AI and the vendors that provide AI." Moreover, attorneys and law firms are urged not to assume the effectiveness of AI products.22
The correctional system's use of AI technologies may be the most egregious example of data and algorithm bias in the criminal justice system. A study of the underlying accuracy of the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) recidivism algorithm found that it was biased against Black people: it overestimated the likelihood of Black offenders committing further crimes after completing their sentences while underestimating the likelihood for white offenders. As the authors explain, COMPAS ". . . correctly predicts recidivism 61% of the time. But Blacks are almost twice as likely as whites to be labeled a higher risk but not actually re-offend. It makes the opposite mistake among whites. They are much more likely than Blacks to be labeled lower risk but go on to commit other crimes."23 Various studies corroborate the findings of bias in the COMPAS recidivism algorithm.24 Moreover, people of color are both underrepresented and misrepresented in the data and AI algorithms impacting their lives.25
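The pattern the study describes can be counterintuitive: a risk tool can be equally accurate for two groups overall while making different kinds of mistakes for each. The short Python sketch below is a minimal illustration using invented confusion-matrix counts, not the actual COMPAS figures; the numbers are chosen only to mirror the reported pattern of higher false positive rates for Black defendants and higher false negative rates for white defendants.

```python
# Illustrative only: invented counts, not real COMPAS data.
# Shows how two groups can share the same overall accuracy while the
# types of errors (false positives vs. false negatives) diverge.

def error_rates(tp, fp, tn, fn):
    """Return (accuracy, false positive rate, false negative rate)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    fpr = fp / (fp + tn)  # labeled higher risk but did not re-offend
    fnr = fn / (fn + tp)  # labeled lower risk but did re-offend
    return accuracy, fpr, fnr

# Hypothetical counts per group: (true pos, false pos, true neg, false neg)
groups = {
    "Black defendants": (400, 270, 210, 120),
    "White defendants": (210, 120, 400, 270),
}

for name, counts in groups.items():
    acc, fpr, fnr = error_rates(*counts)
    print(f"{name}: accuracy={acc:.0%}, "
          f"false positives={fpr:.0%}, false negatives={fnr:.0%}")
```

With these invented counts, both groups show the same 61% overall accuracy, yet one group's false positive rate is more than double the other's, which is precisely the kind of disparity the study documents.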
Artificial intelligence-driven transformation is reconfiguring society and the economy. The capacity of AI technologies to liberate humans from routinized, labor-intensive and dangerous activities represents promising possibilities for enhancing human life. Realizing those possibilities in a society and economy where human labor is becoming less significant will require forethought, appropriate planning, policy and leadership to ensure that forecasted job losses and disparate impacts are, if not avoided, at least minimized. Redressing the biased data and misinformation currently embedded in AI algorithms must be a high priority. Furthermore, involving people of color and other underrepresented groups in generating, organizing, analyzing and programming their data is necessary if AI technologies are to become an instrument for enhancing all human life.
NOTES
1. Manyika, J., Lund, S., Chui, M., Bughin, J., Woetzel, J., Batra, P., Ko, R. & Sanghvi, S. Jobs Lost, Jobs Gained: What the Future of Work Will Mean for Jobs, Skills and Wages (November 28, 2017), https://www.mckinsey.com/featured-insights/future-of-work/jobs-lost-jobs-gained-what-the-future-of-work-will-mean-for-jobs-skills-and-wages
2. https://www.weforum.org/agenda/2020/10/dont-fear-ai-it-will-lead-to-long-term-job-growth/
3. https://www.bls.gov/opub/reports/race-and-ethnicity/2022/home.htm
4. https://www.forbes.com/sites/forbestechcouncil/2023/07/06/20-new-and-enhanced-roles-ai-could-create/?sh=6c599f56f047
5. https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier (June 14, 2023)
6. https://ncses.nsf.gov/pubs/nsf23315/report/science-and-engineering-degrees-earned
7. https://www.whitehouse.gov/ostp/ai-bill-of-rights/; https://www.whitehouse.gov/briefing-room/statements-releases/2023/02/16/fact-sheet-president-biden-signs-executive-order-to-strengthen-racial-equity-and-support-for-underserved-communities-across-the-federal-government
8. Larrazabal, A. L., Nieto, N. & Peterson, V. Gender imbalance in medical imaging datasets produces biased classifiers for computer-aided diagnosis (May 26, 2020), https://www.pnas.org/doi/10.1073/pnas.1919012117
9. https://jamanetwork.com/journals/jama/fullarticle/2803797
10. Adewole, A. S. & Smith, A. (2018).