Pinterest CEO wants to build a “more positive version of social media”
Pinterest. It’s the platform best known for its viral recipes, fashion trend forecasts, DIY crafts and ideas for just about any wedding or birthday party theme you could think of.
The 14-year-old digital bulletin board recently hit a milestone with 500 million monthly active users.
In a sea of outrage and division on social media, Pinterest CEO Bill Ready wants you to think of the platform as a sanctuary of positivity in the online universe.
Marketplace’s Lily Jamali recently sat down with Ready at the Milken Institute Global Conference and asked him about how Pinterest has changed since its launch.
The following is an edited transcript of their conversation.
Bill Ready: Pinterest has historically been a great place for visual inspiration, but it didn’t bring a lot of actionability. So, I joined approximately two years ago, and there were two major parts to what I was looking to do with Pinterest. One was making it more actionable. Pinterest had historically solved digital window shopping, but all the stores were closed. So, you’d find lots of great things on Pinterest, but then you couldn’t click to buy the things that you wanted. Now we’re opening those doors and making it so that when you find something on Pinterest and you say, “I really want that outfit,” you can click and easily purchase whatever it is that you were inspired by.
And then the second thing is bringing more positivity. You know, over the last two years we’ve really accelerated our growth. Gen Z is now our largest, fastest-growing demographic, and the actionability has been at the core of that, but also the positivity that we’re bringing to the platform, where we want to prove a different business model for social media, one built on positivity and actionability, and not on engagement via enragement. Gen Z actually cites that as one of the biggest reasons they come to Pinterest: they see it as an oasis away from the toxicity of much of social media.
Jamali: Now, you are a social media company at the end of it all. Where do you see Pinterest fitting into an industry that is getting quite a bit of criticism right now?
Ready: Yes. So, this was one of the biggest reasons I came to Pinterest. And Pinterest has always tried to be a very different type of social media. And so I saw that as a really great start that Pinterest always wanted to be a more positive corner of the internet. But in this world of AI-driven feeds, social media has really just become media, because it’s most of the content you consume.
Jamali: And you argue we should just call it media, not social media anymore.
Ready: I think that’s right. If you look at the data, it is where the majority of media consumption is occurring. So, it is just media. Now, obviously, the fact that it’s social changes the dynamic a lot. And as we’ve been building a more positive version of social media, one of the things we’ve been doing is looking at how do we tune AI for positivity? How do we tune for things that are going to leave you feeling better when you come off the platform? And so, things like inclusive AI, where we’re bringing diversity by default into our feeds, and where we’re bringing things like body-type ranges, so that you can see people who look more like you, and adding skin-tone filters, hair-pattern filters, all these things. This was a change that we made on the platform when I joined, where Pinterest, like many other platforms, was rapidly going down the path of short-form video feeds, where you just continuously scroll through a nonstop feed of 30-second videos. And we shifted from that to say, “Let’s not tune for view time, but actually tune for positivity and tune more for things where the user expresses some indication of what they want to see.” And the combination of those two things actually shifted the kind of content that was rising to the top. When the AI was tuned to maximize view time, you would see a lot of the same triggering content that you would find elsewhere.
When we switched to tuning for positivity and started to incorporate into that what users said they wanted to see again — not just what they looked at one time, which could be the triggering content, but the things they said they wanted to see again, which more often than not tend to be inspiring content — then we saw much more inspiring content rise to the top. Things like self-help, do-it-yourself content, all these things that let people invest in themselves and get to the outcomes they wanted. Those are things that have really changed the dynamic on the platform for the better.
And I’d say if you step all the way back from it, there’s another element of this, which is that so much of social media has become comparative and performative. One of the things we hear from our Gen Z users is that when they’re elsewhere on social media, they feel like they see what others want them to be, or what they feel like they have to be. And when they’re on Pinterest, they get to invest in themselves. Pinterest is about what they want to be, not what others want them to be. So, these are things that we are consciously designing into the platform to create a very different environment from the rest of social media.
Jamali: You have been very vocal about the potential toxicity of social media. And that seems to have been sparked at least in part by an NBC investigation that came out last year, which found that some adult men were using your platform to create collections of images of young girls. When that came out, you wrote an opinion piece that said, basically, that AI has a part to play in keeping children safe. How does that work exactly?
Ready: Yeah, so like many other platforms, we had seen adults taking what were otherwise innocuous images and making lewd comments or organizing them in ways that clearly had a different intent. And so, yes, we’ve been tuning AI for positivity to bring more positive content, but this was also a great example of where we could use AI to spot bad actors and get them off the platform. In addition to that, we started looking at how we can make different policy choices around how we provide better protection for teens on our platform. So, one of the things that we did that is still industry-leading is that we did age collection for all the users on our platform well ahead of any regulation requiring it. And we don’t think it’s safe for somebody under 16 to have a publicly discoverable profile or publicly discoverable messaging, so we made both profiles and messaging private-only for users under 16. It’s a combination of using AI to fight bad actors, but then also making policy decisions on our platform where we are putting the safety of our users first and foremost.
Jamali: So, you talk about using AI to fight bad actors. What do you say to someone who hears you say that and says, “You know, I don’t really trust AI with a task as important as protecting our kids”?
Ready: A couple of thoughts. The first would be that AI, like any technology, can be used for good or for bad. And so we’re really focused on how we use it for good, like the example I gave of inclusive AI, bringing more diversity by default into our feeds, tuning the AI to serve content that’s going to leave you feeling better, that’s going to be additive and not addictive. But then you can also use it to fight bad actors. And I’d flip that around and say, you know, the bad actors are going to use AI, so you need really good AI to combat that. It’s not about diminishing the importance of something by using AI for it. Actually, AI resources are really hard to come by. These are the hardest engineers to find. GPUs [graphics processing units], the core processors required to power these things, are really scarce resources. And we’re putting meaningful amounts of those scarce resources toward protecting our users. That is a significant resource allocation. And I think it is going to be necessary, because these AI tools are being put in the hands of everyone, good actors and bad actors alike. So, you want to make sure that you’re using these tools on your platform to provide safety for your users and to combat the bad actors who are also going to try to use them.
In our conversation, Bill Ready told me about the Inspired Internet Pledge that Pinterest signed last summer.
It was created by the Digital Wellness Lab at Boston Children’s Hospital in collaboration with Pinterest, and it’s described as a call to action for tech companies to unite and make the internet a “safer and healthier place for everyone.”
Signatories of the pledge commit to three basic principles for their products: Tune for well-being, listen and act, and commit to openness. So far, 10 companies have signed the pledge, one of which is TikTok.