suggests a low state of readiness, particularly among African Americans and Hispanics. For example, findings from a 2019 Computing Research Association survey indicate that of students pursuing advanced graduate degrees in STEM fields, only 3.2% were Hispanic and 2.4% were African American, while 45% were white and 22.4% were Asian. African Americans are also underrepresented in academic robotics. Among students studying robotics engineering, 62% are white. Comparatively, 17.9% are Asian, 10.7% are Hispanic or Latino, and only 4.3% are African American. A similar trend exists at associate degree levels.6
How AI Bias Perpetuates, Deepens Inequities
Although concerning inferences may be drawn about job loss and job preparedness, information surrounding artificial intelligence reflects projected rather than actual outcomes. As such, this information is highly contested. There is agreement, however, on the need for proactive efforts to prepare industry and labor for an inevitable transformation of the global economy. The underlying premise is that with appropriate planning, policy and leadership, pessimistic forecasts of job loss and disparate impact can be, if not avoided, at least minimized. The recent Biden-Harris Administration Blueprint for an AI Bill of Rights and Executive Order directing agencies to combat algorithmic discrimination are responses to these pessimistic forecasts.7 These initiatives indicate that artificial intelligence concerns are far more substantial than potential job losses and preparedness. Numerous concerns exist about the quality of data used to generate artificial intelligence technologies and their impact on the equitable delivery of public and private goods and services.
AI technologies can be observed amplifying societal inequities across various platforms. Findings of a study published in a National Academy of Sciences journal reveal gender imbalances in the training data sets of computer-aided diagnosis (CAD) systems, which led to these systems displaying lower accuracy for the underrepresented group.8 A study in the Journal of the American Medical Association identified similar disparities in skin cancer diagnosis across people of different skin colors.9 Researchers found that the algorithms used to identify skin cancer or potentially cancerous spots are primarily trained with data from light-skinned subjects, meaning they are less likely to accurately identify skin cancer in dark-skinned patients.10
Although there is concern that the increasing use of AI technologies perpetuates implicit and contextual biases and causes incorrect diagnoses and care disparities, artificial intelligence is gaining more traction in healthcare. The recent growth estimate of AI technologies in healthcare is approximately 167%.11 AI systems analyzing medical images (x-rays or MRI scans) are one of the rapidly expanding areas of concern; as the Proceedings of the National Academy of Sciences findings noted above illustrate, imbalances in the training data can leave such systems less accurate for the underrepresented group. Moreover, AI technologies often take on the implicit biases in the underpinning data and those of their trainers, thus continuing and solidifying those biases.
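The mechanism is simple enough to demonstrate. The following is a minimal, purely illustrative sketch, not drawn from the article or the cited studies: a model is trained on synthetic data in which one group supplies 95% of the examples, and its accuracy is then measured separately for each group. The feature values, group labels and proportions are invented for illustration only.

# Illustrative sketch (synthetic data, not from the cited studies): a model
# trained on group-imbalanced data tends to be less accurate for the
# underrepresented group.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_patients(n, group):
    # Synthetic "imaging" features whose relationship to the diagnosis
    # differs somewhat by group (a stand-in for real-world differences).
    x = rng.normal(size=(n, 5))
    weights = (np.array([1.0, -1.0, 0.5, 0.0, 0.0]) if group == "A"
               else np.array([1.0, 0.0, -0.5, 1.0, 0.5]))
    y = (x @ weights + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return x, y

# Training data: 95% group A, 5% group B -- the imbalance described above.
xa, ya = make_patients(1900, "A")
xb, yb = make_patients(100, "B")
model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# Evaluate on balanced, held-out samples from each group.
for group in ("A", "B"):
    x_test, y_test = make_patients(1000, group)
    print(group, round(accuracy_score(y_test, model.predict(x_test)), 3))
# Typically, accuracy is noticeably lower for group B, the group that was
# underrepresented in training.

Because the model's parameters are dominated by the majority group, its errors concentrate in the minority group, which is the pattern the diagnostic studies describe.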
Even if healthcare providers are unaware of the misinformation or disinformation underpinning AI algorithms, they may inadvertently perpetuate health disparities. As Janice Huckaby, chief medical officer of Maternal Health at Optum, explains, “The scary thing about implicit bias is that oftentimes people are unaware that it’s shaping some of their reactions.”12 Implicit biases in AI algorithms, however, are not limited to healthcare. They appear in the insurance, finance and criminal justice systems, among other sectors.
Biased Data in Insurance and Finance
In the insurance and finance sectors, AI algorithms reportedly use biased data that encourages unequal pricing, inadequate coverage, and a lack of inclusivity in decisions on policies and mortgages provided to people of color. Remnants of historically biased insurance and mortgage practices, although illegal, are said to still present themselves in the values and behaviors of the data scientists who generate, organize and analyze the data and develop the algorithms that aid in loan and policy decisions. Values surrounding redlining, restrictive covenants, race-based insurance premiums and mortgage rates, and other subtle proxies for discrimination once acceptable as business practices may still be manifesting themselves.13 When these values are allowed to permeate AI algorithms, intentionally or unintentionally, the resulting misinformation or disinformation will likely result in higher premiums or rates for specific demographic groups. Moreover, if a specific racial or ethnic group historically had higher mortality rates, the AI algorithms might unfairly assign higher premium rates to a client from that group even if the individual’s risk factors differ.14
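A small, purely illustrative sketch can show how this happens. Every feature name, dollar figure and coefficient below is invented; the point is only that a pricing model fitted to historically biased claims data can quote different premiums to two applicants whose individual risk factors are identical, simply because a proxy variable (here, a hypothetical ZIP-code flag standing in for a redlined neighborhood) carries the legacy bias.

# Illustrative sketch (invented data and figures, not from the article): a
# pricing model fitted to historically biased claims data quotes different
# premiums to identical applicants who differ only in a proxy feature.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 5000

# Individual risk factors: age and a generic health score.
age = rng.uniform(25, 65, n)
health = rng.normal(0, 1, n)

# A proxy feature (a ZIP-code indicator) that tracks a historically
# redlined neighborhood rather than individual risk.
zip_flag = rng.integers(0, 2, n)

# Historical claims were inflated for the flagged neighborhood -- the legacy
# bias in the data, independent of age or health.
claims = 200 + 5 * age - 50 * health + 80 * zip_flag + rng.normal(0, 20, n)

model = LinearRegression().fit(np.column_stack([age, health, zip_flag]), claims)

# Two applicants identical in every individual respect, differing only in ZIP.
print("Premium A:", round(model.predict([[40, 0.0, 0]])[0], 2))
print("Premium B:", round(model.predict([[40, 0.0, 1]])[0], 2))
# Applicant B is quoted roughly $80 more purely because of the proxy feature,
# even though the individual risk factors are identical.

Nothing in the model refers to race directly; the disparity flows entirely from a proxy variable that encodes the historical practice, which is why such bias can persist even when the protected characteristic is excluded from the data.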
In recognition of implicit bias in historical data, the Casualty Actuarial Society has acknowledged “the potential impact of systemic racism on underwriting, rating and claims practices” in four research papers.15 These papers also include examples of financial institutions using biased information from AI algorithmic technologies to determine credit, insurance and mortgage lending. Analyses of algorithmically generated credit-based insurance scores reveal that insurance premium determination increased the average