Dating apps fail to protect some users from predators, Mother Jones finds
Warning: This episode contains references to sexual abuse and violence.
Whether for a hookup or to find true love, 3 out of 10 American adults say they have used a dating app, according to the Pew Research Center. But an investigation out Wednesday from Mother Jones looks into how these apps can also incubate abuse, finding that companies like Grindr and Match Group have failed to protect some of their users from predators.
At the heart of this story is a question: Is protecting those users the companies’ responsibility? The tech industry has long argued the answer is no, thanks to the 26 words in Section 230 of the Communications Decency Act, which became law in 1996.
Abby Vesoulis is the author of the Mother Jones investigation. Her story begins with Matthew Herrick, whose ex-boyfriend created fake profiles of him on Grindr. The following is an edited transcript of Vesoulis’ conversation with Marketplace’s Lily Jamali.
Abby Vesoulis: (The profiles) said that he was into orgies and violent sex. And worst of all, they said that he had rape fantasies. And so over the course of 10 months, over 1,100 men showed up at Matthew’s house, they showed up at his workplace, they followed him into public restrooms. And all the while, Grindr was ignoring Matthew’s pleas for help. So Matthew took Grindr to court. And ultimately, Grindr was found not to be responsible for the fake profiles that the ex was making.
Lily Jamali: And it’s not that big of a surprise that the case didn’t go anywhere, given the way Section 230 works. Can you talk us through that?
Vesoulis: Yeah, so Section 230 was written in 1996, in the very nascent days of the internet, when it was really just rudimentary chat forums. Now the internet is used, in the case of dating apps, to link strangers together in real life. It’s just a very different level of danger compared to what the internet was originally being used for. But Section 230 means that the platforms aren’t legally responsible for what their users post.
Jamali: I wonder what indications you’re seeing that some lawyers and judges are starting to think differently about the responsibility that online platforms have for what users post on them?
Vesoulis: Yeah, increasingly, we’ve seen a couple of judges at the federal level, but also some at the state level, look at cases against Big Tech companies and say, “Perhaps we can look at these the way judges look at cases involving more tangible products.” So, for example, if a car seat manufacturer was taken to court because its product was faulty and a child ended up harmed, there would be an easy lawsuit there. So now some judges are starting to look at tech platforms’ product design in a similar way. One judge in the last couple of years looked at Omegle after an 11-year-old girl was linked up with a man who extracted pornographic imagery from her. The family sued Omegle, and Omegle tried to argue, “Section 230, we’re not responsible.” But the judge said, “Well, actually, this has more to do with the design of your product than it does with the fact that a man was doing bad things. You designed your product in a way where potentially that design played a role in this girl’s abuse.” And so increasingly, Section 230 is not being looked at as a get-out-of-jail-free card. If tech companies designed their products in a way that culminated in abuse, they could be held liable.
Jamali: Yeah, how significant would you say that shift in thinking is? I know it’s just one judge among many here, but it seems like a real sea change could potentially be underway because of that.
Vesoulis: So there’s at least two cases I can think of in which judges have been open to the product liability approach in cases. But beyond that, there’s also a growing number of lawyers who are willing to take up these cases against Big Tech companies. The lawyer that represented Matthew Herrick against Grindr a couple of years ago, she said that when she first took up that case, she felt really lonely, because there was nobody else who really seemed open to her line of thinking as applied to Big Tech. But now there’s a number of lawyers who are taking on these cases, and in some cases, getting some marginal wins.
Jamali: Can you talk about how the dating apps are responding? I mean, you write about how Match Group, for example, has embedded a third-party service for users to do background checks on potential dates. What kind of feedback are you hearing on that?
Vesoulis: There are a couple of limitations to that specific tool. If a user didn’t want to pay $3.25 to do this limited screening on their date, and then they were assaulted or harmed in some other way, you might look at the person who could have done that screening and say, “Well, you didn’t look this person up. So that’s on you.” So experts are a little worried about the possibility of victim blaming when it’s on somebody to investigate their date, rather than on the apps to require a more thorough screening of all users.
Jamali: Is it your sense that requiring verification systems might be one potentially safer avenue for dating apps? Does the lack of verification seem to be a real flaw in the way these apps are designed?
Vesoulis: I think it is a step that dating apps could take. But the dating apps that I talked to, for the most part, came back with a concern about user privacy. They were worried about the possible implications of storing things like users’ government IDs. Some talked about what it would mean to store a government ID for a trans person, for example, and how the name on it might not match the name they go by in real life. And so there might be a concern about privacy there. But I will say that lots of companies and services require IDs. You can’t really go to Costco without using a government ID, and you can’t book a hotel or rent a car without one. Those companies have managed to protect user privacy and come up with good data security methods. And in some ways, I think the stakes of meeting a stranger through online dating might be higher than the risks of going to Costco or renting a car. So it’s definitely something that Big Tech might want to consider moving forward.
Jamali: So it’s worth noting that you came in to report this story based on an incident in your own life on the dating app Hinge. Not to put you on the spot, but I wonder if you don’t mind sharing a little bit about that experience?
Vesoulis: Of course. The reason I first got interested in this story was that a couple of years ago, I, like many of my friends, was using online dating apps. And I met a man who said he was 31 and single. I, as a reporter, tried to do my due diligence. I was able to confirm that he worked where he said he worked, and I thought I was good, that this was safe. Then some things just started to not add up, and I later discovered that he had lied about his age by six years. But more than that, it was scary that it was so easy to misrepresent yourself online. And I just wondered whether other people had been on the other end of some harm, whether emotional or physical, because of how easy it is to mislead and misrepresent yourself online. And that’s how I found the stories of many of the people I include in my article.
Jamali: You quote a scholar in your article who basically says — I’m paraphrasing here — something along the lines of: Look, this activity has happened as long as people have been socializing with each other. This is just one of the pitfalls of social interaction, and apps cannot fix humans. As somebody who has encountered a slice of this kind of activity in your own life, what did you make of that comment from the scholar, who was saying something that is, unfortunately, unequivocally true about the darker side of how humans interact?
Vesoulis: Yeah, I mean, he made a good point, and it made me really reflect on what he was saying. For as long as bars have existed, people have gone into a bar and been drugged or had too much to drink, and then something bad happened to them as a result. And usually we don’t hold the bar legally responsible for what happened there. At the same time, though, bars have instituted steps to make such abuse less likely: They teach bartenders how not to overserve, they put out water stations, and some bars are required to serve food so that people don’t get as drunk as quickly. In my case, the person I was dating had lied about his age online because it was so easy to just plug in a fake birthdate. Had there been an ID check, he wouldn’t have been able to do that. So in my particular case, I think what I went through could have been prevented with just a slightly stricter safety check, and I think the same is true for several of the other characters in my story. You can’t completely wipe out abuse online, but certainly some bad things can be prevented with just a little more foresight.
Editor’s note: A Match Group spokesperson responded to Mother Jones with this statement:
“As a company, we are committed to helping users date more safely and are continuously investing in ways to enhance the safety and tools offered across Match Group’s portfolio. The safety of users is paramount and our brands’ work to build safer dating platforms is never-ending. Match Group remains engaged in conversations with elected officials and safety experts in order to work together to identify and take steps to help improve safety not only on the platforms within our portfolio, but beyond our industry as well.”
A Grindr spokesperson told Mother Jones that the dating app hasn’t done more to verify user identities because doing so may risk user privacy.
“The way we view the world is through the experience for our users. And our users need two things. They need privacy and they need safety. And those things are two sides of the same coin. But they come as perfectly diametric tradeoffs.”
In Washington, legislation introduced earlier this summer would create a carve-out in the industry’s immunity under Section 230. It would allow social media companies to be sued for spreading harmful material created with generative AI technology.
It’s a bipartisan bill brought by Democrat Sen. Richard Blumenthal of Connecticut and Republican Sen. Josh Hawley of Missouri, who generally don’t agree on a whole lot. They introduced the legislation after the Supreme Court struck down two cases that would have narrowed the scope of the tech industry’s immunity under Section 230.