
Deputy Attorney General Lisa O. Monaco Delivers Remarks at the University of Oxford on the Promise and Peril of AI

Location: Oxford, United Kingdom

Remarks as Prepared for Delivery

Thank you for that very kind introduction, Sam. And thank you to Professor Sir Charles Godfray, the Oxford Martin School, and Professor Robert Trager — who couldn’t join today — for hosting this event.

It’s my first visit to this magnificent campus. My nephew is a proud graduate of Magdalen College, so I’ve heard lots about Oxford over the years. But it’s entirely different — dare I say better — to be here in person.

Oxford has an unmatched reputation for making world-shifting contributions in our most critical moments. From testing and then supplying penicillin during World War II — to pioneering research on the digital world’s impact on society through the Oxford Internet Institute — Oxford’s legacy of harnessing breakthroughs for public benefit has touched every corner of the world.

This is a special place, and I’m honored to be here today on behalf of the United States Department of Justice.

The mission of the Department of Justice is to uphold the rule of law, to keep our communities safe, and to protect civil rights.

With over 115,000 employees across America and stationed around the globe, the Department plays a unique role. It implements Administration policy priorities, but it also serves as an independent investigator with four major law enforcement components, including the FBI — and it is the sole agency empowered to bring federal prosecutions and represent the United States in court.

I’ve spent most of my professional life in the Justice Department — joining fresh from law school to work for Attorney General Janet Reno, the first woman Attorney General (it took us more than 200 years to hit that milestone) — and later serving as a federal prosecutor in Washington, D.C. Having served through Democratic and Republican Administrations over multiple decades and inflection points, I’ve seen the Department adapt to new threats and technological changes. In the wake of 9/11, at the FBI, I helped to transform the Bureau from one focused on investigating crimes after the fact to a national security organization focused on preventing the next terrorist attack. And later, as the head of our National Security Division — as cyber emerged as an increasing risk — we again reoriented to confront that threat.

Today, as the Chief Operating Officer and the Number 2 person in the Justice Department, I — along with Attorney General Garland — am laser-focused on what may well be the most transformational technology we’ve confronted yet: artificial intelligence, and what it portends for our core mission.

Every new technology is a double-edged sword, but AI may be the sharpest blade yet. It has the potential to be an indispensable tool to help identify, disrupt, and deter criminals, terrorists, and hostile nation-states from doing us harm.

So far, we’ve just scratched the surface of how AI can strengthen the Justice Department’s work. But we’ve already deployed AI:

  • To classify and trace the source of opioids and other drugs.
  • To help us triage and understand the more than one million tips submitted to the FBI by the public every year.
  • And to synthesize huge volumes of evidence collected in some of our most significant cases, including the January 6 investigation.

Yet for all the promise it offers, AI is also accelerating risks to our collective security.

We know it has the potential to amplify existing biases and discriminatory practices.

It can expedite the creation of harmful content, including child sexual abuse material.

It can arm nation-states with tools to pursue digital authoritarianism, accelerating the spread of disinformation and repression.

And we’ve already seen that AI can lower the barriers to entry for criminals and embolden our adversaries. It’s changing how crimes are committed and who commits them — creating new opportunities for would-be hackers and supercharging the threat posed by the most sophisticated cybercriminals.

Election security is an area where I’m particularly focused on the potential risks posed by AI.

This year, over half the world’s population (more than four billion people) will have the chance to vote in an election. That includes some of the world’s largest democracies, from the United States to Indonesia and India, and from Brazil to here in Britain.

The upcoming elections crack open a window for foreign adversaries and bad actors to divide and mislead voters.

  • They can radicalize users on social media with incendiary content created with generative AI — accelerating online harassment, hate, and disinformation.
  • They can misinform voters by impersonating trusted sources and spreading deepfakes — easy to create, and often hard to rapidly detect.
  • And with chatbots, fake images, and even cloned voices spreading falsehoods about elections, they can deny people the most fundamental of rights — to have their voices heard as voters.

We’ve already seen the misuse of AI play out in elections from Chicago and New Hampshire to Slovakia. And I fear it’s just the start.

Left without guardrails, AI poses immense challenges for democracies around the world.

So, we’re at an inflection point with AI. We have to move quickly to identify, leverage, and govern its positive uses while taking measures to minimize its risks.

That’s why last October, President Biden announced a historic executive order on Safe, Secure, and Trustworthy AI to ensure we are seizing the promise and managing the risks of artificial intelligence. Among other things, it establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, and promotes innovation.

It also charges the Justice Department to anticipate the impact of AI on our criminal justice system, on competition, and on our national security.

To seize the potential and avoid the dangers of advanced AI systems, the order summons the full power of the U.S. government. It directs agencies to use their existing authorities in new ways. Among other things, it requires that certain AI developers test the safety of their products — then share those results with the government — before the products go to market.

Meanwhile, our government is expanding this effort worldwide. Through a new initiative called the Hiroshima AI Process, we’re working with allies to internationalize responsible codes of conduct for advanced AI systems. The United Kingdom — a key partner across so many issues — has been a leader in AI safety, using last year’s AI Safety Summit to launch its AI Safety Institute — a sister organization to our own AI Safety Institute at the U.S. Commerce Department.

Our global work at the Department of Justice also includes efforts to stop criminals and rogue nation-states who want to use AI to undermine our collective security.

Last year, right here in the U.K., I announced an initiative called the Disruptive Technology Strike Force that enforces export control laws to strike back against adversaries trying to siphon off America’s most advanced technology and use it against us.

To neutralize these adversaries, we need to zero in on AI to make sure it’s not used to threaten U.S. national security. So going forward, that Strike Force will place AI at the very top of its enforcement priority list. After all, AI is the ultimate disruptive technology.

As we harness industry expertise and take international action, we have a responsibility to look in the mirror and examine our own uses of AI. We cannot be exempt from AI governance.

Right now, the Department of Justice is undertaking a major effort with our fellow federal agencies to create guidance to govern our own use of AI.

These new rules will ensure the Department, along with the whole U.S. government, applies effective guardrails for AI uses that impact rights and safety.

So, hypothetically, if the Department wanted to use a new AI system to — say — assist in identifying a criminal suspect or support a sentencing decision, we would first have to rigorously stress test that application and assess its fairness, accuracy, and safety.

And I want to be clear. These guardrails will be critical for the Department to do its job and deliver on its mission. The rule of law, the safety of our country, and the rights of Americans depend on it.

Yet for all the new ways we’re addressing risks in this rapidly changing landscape, we’re not starting from a blank page. We’re applying existing and enduring legal tools to their fullest extent — and looking to build on them where new ones may be needed.

In the early days of the internet, some argued that we needed a whole new legal regime to deal with the nascent challenges of the digital world.

As scholar Yuval Levin has observed, new technologies don’t necessarily demand new structures. He recalls a 1996 presentation by the University of Chicago professor and federal judge Frank Easterbrook, who rejected the notion that the internet demanded a separate legal structure. He likened such an effort to developing the “Law of the Horse.”

Judge Easterbrook argued that we have lots of cases and precedent that deal with horses: the sales of horses; the racing of horses; even what happens when people get kicked by horses.

But he cautioned that collecting these strands into a single “Law of the Horse” would be a mistake.

I’d like to think that, at the Justice Department, we heeded Judge Easterbrook’s advice. We have evolved with the threat, applied and adjusted existing legal tools, and added needed technology and expertise — for instance, incorporating cyber into every category of our work.

The same applies to AI today.

As it did with cyber, the law governing AI will develop. But our existing laws offer a firm foundation. We must remember that.

Discrimination using AI is still discrimination.

Price fixing using AI is still price fixing.

Identity theft using AI is still identity theft.

You get the picture. Our laws will always apply.

And — our enforcement must be robust.

The U.S. criminal justice system has long applied increased penalties to crimes committed with a firearm. Guns enhance danger, so when they’re used to commit crimes, sentences are more severe.  

Like a firearm, AI can also enhance the danger of a crime.

Going forward, where Department of Justice prosecutors can seek stiffer sentences for offenses made significantly more dangerous by the misuse of AI — they will.  And if we determine that existing sentencing enhancements don’t adequately address the harms caused by misuse of AI, we will seek reforms to those enhancements to close that gap.

This approach will deepen accountability and exert deterrence. And it reflects the principle that our laws can and must remain responsive to the moment.

Finally, for the Department to help chart the course toward a safe and secure AI future, we need to build trust.

Our mission, our values, and our work must be regarded as legitimate by those who place trust in their government and whose rights the Department is tasked with protecting.

We have a responsibility to lead when it comes to our own use and governance of AI in the Justice Department. But we also have to look and listen beyond our own walls.

One way we’re doing this is by bringing together the Justice Department’s law enforcement and civil rights teams, and other experts, to form an Emerging Technology Board that will advise the Attorney General and me on the responsible and ethical uses of AI by the Justice Department.

Last month, the Department of Justice appointed our first Chief AI Officer to help spearhead this work.

But we can’t limit perspectives to our own building. To fulfill our mission while harnessing AI’s potential, we need a range of perspectives.

So today, I’m proud to announce Justice AI. Over the next six months, we will convene individuals from across civil society, academia, science, and industry to draw on their varied perspectives, to understand and prepare for how AI will affect the Department’s mission, and to ensure we accelerate AI’s potential for good while guarding against its risks.

These discussions will include foreign counterparts grappling with many of the same questions. And this work will inform a report to President Biden at the end of the year on the use of AI in the criminal justice system.

Technological advancements have always posed fundamental challenges to the Department’s mission, and they always will. Because at its core, technology impacts how we protect people and how we ensure equal treatment under the law.

Our work at the Department of Justice is to make sure that whatever comes now or next adheres to the law and is consistent with our values.

Our responses today — and the responsibilities we take on, from our work at the Department of Justice to your work right here at Oxford — will shape the role AI plays in our lives now and for decades to come.

Thank you.

