Responsible ways to use AI for government efficiency

The Washington Post reported earlier this month that representatives of DOGE — the Department of Government Efficiency — gained access to sensitive data at the Department of Education and fed it into AI software.
This has raised red flags over whether it violates federal privacy law. We reached out to DOGE for comment, but didn’t hear back.
But there are ways to use AI to improve efficiency without raising privacy concerns. Marketplace’s Stephanie Hughes spoke with Kevin Frazier, contributing editor at the publication Lawfare, about how the government has used AI in the past and how it could use it more responsibly in the future.
The following is an edited transcript of their conversation.
Kevin Frazier: The federal government’s use of AI really spans decades, if we’re going to be honest, because how you define AI is a whole other hour-long conversation, if not a two-hour-long one. But here we can just look back to the end of 2024, when an inventory was done of the federal government’s AI use cases. What we saw is that across 37 agencies, there were more than 1,700 different uses of AI, ranging from the Army Corps of Engineers using AI to predict flooding to, of course, the Department of Defense using AI to bolster its cybersecurity defenses.
Stephanie Hughes: Tell me a little bit more about, you know, what the goal is with incorporating AI into the federal government, like, what’s the hope?
Frazier: Yeah, so there are tons of hopes. I think the biggest advantages of relying on AI systems are a couple of things. Number one, AI is really adept at spotting patterns that would otherwise elude human staffers, so AI deployed in a federal government setting can really assist with efficiency when it comes to identifying waste and forecasting new trends, whether those are market trends or weather trends. A lot of this just goes to doing really large, difficult tasks in a more streamlined and reliable fashion. One thing I want to point out is that AI operates the same way in any given context. We can see what its function is. We can know it’s going to run in a certain way. Now, I’m not trying to say that AI is perfect, far from it. We know that it can be susceptible to bias and other issues, but it does have that capacity to operate in a more predictable fashion and to take on tasks that humans just aren’t well suited for.
Hughes: Going big picture, the use of AI in many aspects of life, including government, seems inevitable. What’s the best way to maximize the benefits of AI while still maintaining public trust?
Frazier: First is AI literacy. We really haven’t seen a concentrated effort across the country to educate Americans about the risks, benefits and technical background of AI, and we need a lot more folks in the federal government who have deep knowledge of and experience with AI to help make sure these systems are running in a responsible fashion that aligns with federal law. Number two is transparency. From a trust perspective, it’s really important that Americans know when AI is going to be used to achieve certain ends. I think a lot of Americans want to know whether they’re interacting with an AI system, or whether they’re helping inform an AI system or not. A third step I really want to see is experimentation, because in many ways the use of AI is going to improve government services. For example, just to highlight one thing, the Social Security Administration has been using AI to proactively identify individuals who may be eligible for benefits. That’s an awesome use case, right? Finding Americans who should be receiving more benefits but aren’t, that’s really exciting. So I want that to keep happening. Let’s use AI on this project. Let’s see how it goes. Let’s report the results. Let’s show the American public how it’s working, what risks we identified and how we’re responding to those, and then keep scaling it up.
Hughes: You spent some time in the tech world. You had a stint at Google. You were at Cloudflare for a minute. You founded a tech non-profit. You’ve also spent a lot of time in the legal world. We now have all these people with tech mindsets coming into a world of politics and laws. Is there a happy medium between the “move fast and break things” approach of Silicon Valley and the way the federal government has traditionally worked, which is move cautiously and try not to break stuff?
Frazier: I really think there is, and I think the sweet spot comes with a general approach that’s actually already on display in Utah. Utah recently created the Utah Office of AI Policy. They’re creating what’s known as a regulatory sandbox, where the government and the private entity in question work together to come up with a bespoke, flexible regulatory scheme. You can imagine, for example, a new health care company moves to Utah. They want to use AI to identify health risks for the residents of Utah. Of course, you may have some folks who say, “Oh my gosh, AI dealing with my health information? That’s scary. AI may be making bad predictions or hallucinating about whether or not I have some serious disease.” Under a regulatory sandbox, though, we err on the side of experimentation, seeing whether those use cases align with our expectations around benefits, or whether instead we’re seeing some unknown risks that make us think the project isn’t worth continuing. So an adoption of a more experimental approach, one that isn’t forcing folks to surrender their whole lives to AI, but also isn’t the kind of scared approach that has usually typified governments confronting emerging technology, I think that’s a really happy medium.
DOGE was temporarily blocked from accessing student loan data after a student group in California sued to stop the disclosure. That order was lifted the following Monday after a federal judge found there wasn’t sufficient proof that irreparable harm had been done.