Speech

Assistant Attorney General Kristen Clarke Delivers Keynote on AI and Civil Rights for the Department of Commerce’s National Telecommunications and Information Administration’s Virtual Listening Session

Location

Washington, DC
United States

Remarks as Prepared

Thank you, Dr. Hall. I’m honored to be here today. We appreciate the leadership demonstrated by the National Telecommunications and Information Administration, or NTIA, in hosting these listening sessions, and bringing together subject-matter experts across academia, think tanks, advocacy organizations, government and industry with members of the public for these critical discussions on privacy, equity and civil rights.

As Assistant Attorney General, I have the great privilege of leading the Justice Department’s federal civil rights enforcement efforts. The Civil Rights Division works to uphold the civil and constitutional rights of all persons in the United States – particularly, some of the most vulnerable members of our society.

I have focused my entire career on fighting for racial justice and civil rights. The civil rights and privacy implications of the use of artificial intelligence (AI) that are at stake in today’s listening session could not be more timely, or more urgent, for communities of color and other vulnerable communities who have been subjected to historical and ongoing discrimination. I am grateful for today’s opportunity to join the NTIA to advance our shared mission of justice and equality, as we turn to the subject at hand: discrimination and other harm to marginalized and historically excluded communities stemming from the commercial collection and use of personal information.

As we know, personal data originates from a multitude of sources, including public records, web browsing activity, emails, banking activity, social media, and, more recently, app usage on smart devices. Consumer reporting agencies, data brokers, internet platforms, digital advertising companies and other businesses are making millions of dollars compiling and selling consumers’ digital profiles. We also know that much of this personal data is being fed into specialized algorithms that are used to make critical decisions affecting many aspects of our daily lives.

Algorithmic decision-making is increasingly influencing how employers hire, how banks decide who gets a loan, how police departments monitor individuals and communities, how courts determine risk, how government agencies make benefits determinations, and how businesses target advertisements to consumers. And as entities increasingly rely on algorithms to make decisions, their decisions become increasingly difficult to challenge because the process is so opaque. This lack of transparency and accountability raises serious concerns about inaccuracy, and also opens the door to potential discrimination. At the Civil Rights Division, we are particularly concerned about how the use of algorithms may perpetuate past discriminatory practices by incorporating, and then replicating or “baking in,” historical patterns of inequality.

I want to talk with you today about where we at the Civil Rights Division see issues that lie at the intersection of AI and civil rights, and how we are working to address them.

Congress established the Civil Rights Division through the Civil Rights Act of 1957, focusing primarily on the need to enforce voting rights laws. Sixty-four years later, the division has grown dramatically in both size and scope and has played a central role in many of the nation’s most significant civil rights battles. In addition to continuing our critical work to protect the fundamental right to vote, our mission has expanded to embrace federal statutes prohibiting discrimination on the basis of race, color, sex, disability, religion, familial status, national origin and citizenship status across the work of 11 sections. Through lawsuits, investigations, public education, regulation, policy coordination, technical support and more, we strive to promote the core principles that animate our democracy: equal opportunity, racial justice and fairness for all.

The Civil Rights Division and Justice Department leverage powerful tools to combat discrimination, including discrimination resulting from algorithmic bias. These laws include the Fair Housing Act, the Equal Credit Opportunity Act, Title VII of the Civil Rights Act, the Americans with Disabilities Act, Title VI of the Civil Rights Act and a host of other federal civil rights statutes. Civil rights laws traditionally recognize two theories of discrimination: (1) disparate treatment, which is intentional discrimination; and (2) disparate impact, which addresses facially neutral practices that disproportionately and adversely affect protected groups.
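
To make the disparate impact theory concrete, the sketch below applies the EEOC’s long-standing “four-fifths rule” as a screening benchmark: a selection rate for one group that falls below 80% of the highest group’s rate is commonly treated as evidence of adverse impact. The group names and counts are hypothetical, and the rule is an illustrative threshold, not a definitive legal test.

```python
# Hypothetical illustration of the EEOC "four-fifths rule" for screening
# disparate impact. All group names and counts below are made up.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

# Hypothetical outcomes from an automated resume screen.
rates = {
    "group_a": selection_rate(selected=60, applicants=100),  # 0.60
    "group_b": selection_rate(selected=30, applicants=100),  # 0.30
}

highest = max(rates.values())
for group, rate in rates.items():
    # A selection rate below 80% of the highest group's rate is commonly
    # treated as evidence of adverse impact under the four-fifths rule.
    ratio = rate / highest
    flag = "potential adverse impact" if ratio < 0.8 else "no flag"
    print(f"{group}: rate={rate:.2f}, impact ratio={ratio:.2f} -> {flag}")
```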

AI and data use issues intersect with our civil rights work in numerous ways. For example, in the fair housing and fair lending contexts, we know that financial institutions collect and use large amounts of consumer data to make predictions and decisions about underwriting, pricing and loan requirements, as well as advertising, for all types of loans – including home loans, car loans and student loans. Academic studies and recent news reports tell us that purportedly neutral algorithms can end up amplifying or reinforcing unlawful biases that have long existed around race, homeownership and access to credit in the United States. We need to ensure that the use of data and algorithmic modeling is more transparent so that we can address these problems and avoid discriminatory outcomes.

AI and data privacy also affect hiring and employment. We are seeing reports of increased use of AI by employers to solicit job candidates and screen applicants. In addition, employers may be using predictive analytics to target job advertisements to shape the applicant pool in the first place. These tools can certainly expedite the hiring process, but whom an employer targets and which criteria it uses to weed out candidates must be analyzed to understand whether these decisions further discrimination, benefiting one class of people over another.

The employment context provides a good example of the principle that AI is only as unbiased as the programmers writing the algorithm and the data they rely on. If a programmer inputs only resumes of people whom the company has previously hired, and the previous hiring team harbored biases and preferences, then the newly created algorithm inherits those biases and preferences in screening applicants. This is the problem of “baking in” discrimination that I mentioned earlier.
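
To illustrate the mechanics of that inheritance, here is a minimal, hypothetical sketch: a toy nearest-neighbor screener trained only on a prior team’s decisions simply copies that team’s preferences. The features, data and helper functions are invented for illustration.

```python
# A deliberately simplified screener that copies the prior hiring team's
# decisions. Features, labels and data are entirely hypothetical.

def distance(a, b):
    """Squared distance between two candidate feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def screen(candidate, history):
    """Return the decision made for the most similar past applicant."""
    _, decision = min(history, key=lambda record: distance(record[0], candidate))
    return decision

# Past decisions: (attended_school_x, years_experience) -> hired?
# The prior team hired from school X regardless of experience.
history = [
    ((1, 1), True), ((1, 2), True), ((1, 3), True),
    ((0, 6), False), ((0, 8), False), ((0, 9), False),
]

# A ten-year veteran not from school X is rejected, while an
# inexperienced school X candidate is accepted: the model has
# inherited the old preference rather than learned to assess merit.
print(screen((0, 10), history))  # False
print(screen((1, 0), history))   # True
```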

AI technologies also have serious implications for the rights of people with disabilities. For example, some employers have used video interview and assessment tools that rely on facial and voice recognition software to analyze body language, tone and other qualities to determine whether candidates exhibit preferred traits. If these tools are not implemented properly, they can limit access and discriminate against people whose disabilities affect their facial expressions or voice, such as people who are deaf or blind or who have speech disorders.

AI concerns also arise in the context of education. Algorithmic decision-making is already in wide use in our educational systems. For example, it can be helpfully employed to match students and mentors for support, like tutoring. But AI can also reproduce inequities through AI-driven approaches to admissions, student sorting, school discipline, and educational redlining. Furthermore, relying on historical data may affect student loans in much the same way as it may affect credit and housing, with some research suggesting that minority students are more likely to pay higher rates on their educational loans.

Finally, the criminal justice system employs AI tools, including predictive policing, facial recognition software, and risk assessment tools. We must be vigilant to ensure that these tools do not operate in ways that violate civil rights laws.

These are only some examples of how AI issues come up in the Civil Rights Division’s work. The division is committed to using its enforcement tools to ensure that entities do not discriminate through their use of AI systems.

To address these significant issues and challenges, and to evaluate the potential impact of AI on civil rights, equal opportunity and civil liberties, I am proud that the Civil Rights Division is taking a holistic approach and marshaling its resources in four main areas: enforcement; education and outreach; interagency coordination; and policy.

First, enforcement. We are undertaking a number of steps to strengthen our capacity for enforcement in the AI arena. We are analyzing developments in case law pertaining to AI and civil rights. We are also working to identify fact patterns that could arise under our enforcement authorities, including in employment, housing, credit, public benefits, education, disability, voting and the criminal justice system. Finally, we are identifying specific AI/civil rights enforcement opportunities, including options for submitting statements of interest in ongoing litigation.

Next, education and outreach. We recognize that the intersection of AI and civil rights features a rapidly evolving policy and legislative landscape, and that it is critically important to engage with AI experts at think tanks, academic and research institutes, and advocacy organizations. For that reason, the division launched a speaker series featuring stakeholders and experts on algorithmic tools and their ramifications for civil rights.   

Next, interagency coordination. To ensure that a civil rights lens is applied across federal programs and activities involving AI, the Civil Rights Division is connecting with a number of efforts across multiple federal agencies and organizations working to develop AI ethical frameworks and guidelines. Through this process, we have heard from important thought leaders across the federal government on AI issues that intersect with civil rights, civil liberties and equal opportunity.

Finally, policy. We are studying whether policy or legislative solutions may also offer effective approaches for addressing AI’s potential for discrimination. We are also reviewing whether guidance on algorithmic fairness and the use of AI may be necessary and effective.

This multi-track approach is necessary because, while the division’s enforcement efforts can remedy discrimination with respect to the specific facts or issues at play in particular investigations or cases, early intervention in the design and development process is key to fully and more meaningfully addressing discrimination for all protected classes.

We know that many of those participating in today’s listening session represent organizations that have long been on the front lines combating discrimination. You have mounted resource-intensive litigation to fight discrimination; you have driven legislative and policy changes at every level of government; and you have launched public outreach campaigns that are critical to raising awareness. You have also continued to provide important voices supporting investment in pilot programs regarding non-discrimination.

As we consider the civil rights implications of the use of AI, our thinking is very much influenced and informed by your input and feedback.

Thank you for your efforts, and we look forward to learning so much from all of you over the next several days and beyond, as we work together to advance justice and equality in AI.

