For banking customers, AI chatbots may have trust issues
The Consumer Financial Protection Bureau, a federal watchdog agency for the banking sector, recently warned the industry about the use of artificial intelligence chatbots. Previous iterations of chatbots, which operate like automated decision trees, have long been used in banks’ customer service operations.
But these new generative tools like ChatGPT are so good at imitating human communication that banks may be relying on them more than ever.
While they can incorporate huge amounts of data, AI chatbots are prone to “hallucinating,” or making things up. Also, they’re not equipped to handle complex questions that can be involved in banking services, according to Erie Meyer, chief technologist at the CFPB, who discussed these issues with Marketplace’s Meghan McCarty Carino. The following is an edited transcript of their conversation.
Erie Meyer: In our complaint database, there were a huge number of people begging for help when they were sort of trapped in these circuitous conversations with the robot. And they ranged from people trying to submit fraud alerts all the way to “A chatbot promised me that funds had been transferred. They ultimately were not, and I was charged a junk fee because of this misunderstanding.” So it sort of runs the gamut all the way up to material that makes a difference to a consumer in real dollars and cents immediately.
Meghan McCarty Carino: In the past, we’ve done some coverage on “neobanks” — kind of completely digital fintech banking services that don’t have physical branches, often use automated customer service and often serve people who have historically been underbanked. What are some of the implications for consumers if dealing with a complex inquiry in this kind of context?
Meyer: I’ll say that whether it’s a very large bank, a very new bank or a very old bank, firms that make a bet on cutting corners on customer service should also be ready for any legal liability that extends to their operations. And I think instead of picking on any particular segment of the market, what I’ll tell you is that what we don’t see in our complaint database is people who got a straight answer, ideally from a person, in a timely way. That is the same for every single kind of financial institution. If a neobank, just to use your term, is able to give a straight answer on an important question, that’s fantastic. If they have a poorly implemented chatbot that sends people into doom loops, that’s another question.
McCarty Carino: Explain this term that you just used, “doom loop.”
Meyer: Well, unfortunately, I think most of your listeners know exactly what I mean. This is when something goes wrong, you try to get a straight answer, and you’re sent either to another person or another department, or, in the case of a chatbot, you’re given an answer that doesn’t totally make sense. Working on chatbots and doom loops was the first time my extended, sometimes elderly, family immediately got what I was saying. They say, “I hate being stuck talking with a robot when they aren’t giving me the right answers.”
McCarty Carino: Can you detail some of the top consumer harms that you all are thinking about when it comes to this technology?
Meyer: I think there are current and well-known problems with the technology, including that it can make mistakes. I’d point you to not my words, but the words of Sam Altman, OpenAI’s CEO. When he testified in the Senate recently, he encouraged people to double-check and make sure that what was coming out was correct. I think when you’re a customer of a financial institution especially, you expect that the information you’re getting is correct. And in many cases, that’s actually the law. Another problem in the marketplace is when people get inept assistance from a chatbot. For many folks, the most meaningful part of their financial lives is when they’re getting counsel or having a direct conversation with their local banker. And for some people, that’s a person they walk into a building and know directly. To go from that sort of conversation about what a loan is, how to apply for a mortgage and what that means for your financial life, to something that is poorly designed and may not work, is really sort of a scary proposition for many folks. I also think that there are concerns around privacy and security.
When you think about a website that just sort of exists and you can get information from, that’s one thing. But when you have a conversation where a human, maybe a frustrated human, is sort of frantically putting information into a chatbox where they think maybe a person is going to answer and be able to help them sort through a bunch of confusing information — are the institutions protecting the information they’ve input? So making sure those records are organized, stored and protected in the way that they should be is an important obligation as well.
McCarty Carino: Given all of these concerns, and the need to maybe double-check or redo work that a chatbot does, does it seem like a cost-effective strategy at this point to rely on this kind of technology exclusively for customer service?
Meyer: I think it’s a question that firms are going to have to look at carefully. One of the things we’ve seen with emerging technology is that the costs seem really low at the start, but as the market changes and access to compute power changes, those costs may shift, especially when things go wrong. So folks at firms who are looking at this really carefully should definitely take those different costs into consideration.
McCarty Carino: Tell me more about some of the legal liabilities that come up here.
Meyer: So one of the things that’s really important to note is that all existing consumer finance law applies to any sort of advanced technology. There are laws like the Fair Credit Reporting Act, which has been around since the ’70s, and there’s not really a way around it no matter what tech you’re using. So I think the key thing for financial firms to understand is that there’s no exclusion in law for specific technologies when it comes to meeting their existing obligations.
McCarty Carino: As more and more industries try to integrate this kind of technology into lots of different platforms, including many different forms of customer service, what would the CFPB recommend companies do when integrating AI chatbots?
Meyer: I think when you’re integrating AI chatbots, or making any sort of changes to how you do business, you really have to start with the consumer. Are you meeting your obligations in the areas of law that say you have to be able to answer a question? Is there a chance that this new technology would give an answer that you would never let a bank teller or a customer service representative say out loud? Sometimes people characterize made-up statements as hallucinations. I think that’s giving a little too much humanity to a robot, but ask whether you would be comfortable with a teller just fully making something up to a member of the public; that tells you what you should be comfortable with your technology doing. Now, I don’t know many financial institutions, especially reputable banks, that are comfortable with that. So I just think it’s important that folks start with the consumer and really understand how they can meet their obligations, no matter the tech they’re using.
The CFPB estimates that about 37% of Americans reported interacting with a bank chatbot last year. Of course, our current AI chatbot era really didn’t kick into high gear until November, when ChatGPT became available to the public.
The American Bankers Association told Fox Business that their surveys show 95% of Americans are happy with the customer service they receive from their bank.
Research from the nonprofit Commonwealth organization last year posited that AI chatbots could help bridge the gap in access to financial services for communities that lack physical branches or have been historically excluded. That’s sort of the idea we noted with neobanks earlier too.
But Commonwealth surveys of low-to-moderate-income people found that 87% prefer talking to a human, suggesting this technology would be better deployed as a complement to human agents rather than a replacement. That was also the general message when we spoke with customer experience strategist Christina McAllister earlier this year about how she’s advising firms on integrating these tools.
She said, “You don’t want to end up being the company that is being used for the next 10 years as the example of a chatbot gone rogue.” But it kind of feels like we’re going to have plenty of examples of that in no time.