Parents, educators are unaware of how students use generative AI, report finds
As soon as ChatGPT burst onto the scene in late 2022, it became clear that artificial intelligence was going to send massive shockwaves through education. And, as with any new technology, young people were likely to adopt it more quickly than adults.
Well, now we have some data about that phenomenon. A new report from the nonprofit Common Sense Media shows seven in 10 teenagers ages 13 to 18 are using generative AI in some way. And Jim Steyer, founder and CEO of Common Sense Media, told Marketplace’s Meghan McCarty Carino it’s not all about cheating. The following is an edited transcript of their conversation:
Jim Steyer: This research showed that young people are using AI in a number of different ways. One is just to help with research on homework questions. We also see, quite frankly, that they’re using it because they’re bored and they like to play around with AI and see what kind of answers and funny content it will give them. Because remember, AI is still not a perfect platform at all; these chatbots make a number of different mistakes. But they’re also just using it to explore different stuff, almost like a search tool. So we’re starting to see this new evolution of AI in schools, but students are leading the way more than teachers and parents.
Meghan McCarty Carino: Tell me about what you found in terms of how much parents know and understand about how their kids are using this technology.
Steyer: I think it’s fair to say that parents know very, very little, almost nothing, about how their teenagers are using generative AI. So there’s a huge disparity in the same family. That’s one of the most interesting things: we surveyed students and parents in the same family, and their knowledge bases were completely different, with young people being far more knowledgeable about the AI chatbots. So there’s a clear need to educate parents and teachers now so that they can get up to speed and help young people make good decisions around AI.
McCarty Carino: Does it look like schools are grappling with this challenge in any kind of official or systemic way, coming up with any formal policies around this?
Steyer: Right now, schools for the most part don’t have any policies around AI, and, quite frankly, there’s a tremendous need for professional development for teachers, administrators, principals, etc., in schools, because kids are using generative AI platforms. Now, seven out of 10 teenagers in the U.S. are using generative AI chatbots. But we have to educate the teachers about many different aspects. For example, it’s true that generative AI platforms often give wrong answers. They also provide biased information, or just inappropriate responses. And so generative AI is clearly not teen-proof or kid-proof. That said, most teachers have not received AI literacy training yet, and that’s going to be an incredibly important element of teacher training and professional development in the months and years ahead. The same is true for parents. They are even more clueless, if you will, than the teachers are. But the good news is we actually know how to teach this form of AI literacy, and I think you’re going to see, in the coming months, a major galvanizing effort to try to teach both educators and parents how AI works and how they can make sure that their teenagers are critical thinkers and using AI properly.
McCarty Carino: Yeah, what did your report show about how much teachers currently know about their students’ use of generative AI?
Steyer: This new research makes it clear that over 80% of parents in the United States say that their kids’ schools have not communicated with them or other families about generative AI in any way. So that also suggests that there aren’t policies in those schools, and, quite frankly, that there’s very little teacher training and professional development. So there’s a major gap that needs to be addressed. And the more adults talk about this, the better students will learn to think critically and to use AI supplementally. And that’s really where Common Sense Media comes in, because, having a digital literacy curriculum in over 80% of the schools in the United States, we’ve built an AI literacy curriculum on top of that digital literacy program that will be very, very helpful to teachers and parents all across the United States. We can clearly close this communication gap, because after all, AI is here to stay. It’s only going to become more important in schools, more important in young people’s lives, and, quite frankly, more important in all of our lives.
McCarty Carino: Young people were very aware, it sounds like from this research, of some really ugly ways this technology could be used, like deepfakes for bullying or to deceive parents or teachers. Tell me about what they shared on that.
Steyer: It was very disheartening to see that young people are already pretty sophisticated about AI and know that it can be used for really nefarious purposes, like very, very high-quality deepfakes, like unfortunate, inappropriate forms of cyberbullying that you weren’t able to do before AI, including using deepfakes to embarrass other people. And also, quite frankly, to cheat. So young people are pretty sophisticated, but the research shows they overall have a good moral compass, and they’re aware that it shouldn’t be used for those purposes. And the truth is, what we need to do is bring teachers and parents up to speed so they can teach the kids to think critically and use it properly. And one thing that we’ve recommended throughout the advent of AI is that parents take AI test drives together with their kids. Think of it like practicing driving a car before you get a license. So go on your kids’ AI chatbot, ChatGPT, Gemini, Bard, whatever it is, with the kids, and explore AI together to understand how it works and what it’s used for. Sit there with them in front of the computer. That’s a really good thing that teachers can do as well in a classroom. And I think that the more we bring adults into the picture with the students, the more proper and appropriate the practice of AI in schools will become.
McCarty Carino: Yeah, what does incorporating AI education into schools look like?
Steyer: Well, this is the brave new frontier of education, because AI can become a homework helper, it can become a research helper, it can help in virtually every area, whether it’s math, science, English, history, you name it. But the truth is, there have to be guardrails. There have to be policies and practices. There have to be guidelines, and schools have to put them into place, and parents need to be brought up to speed so they can understand them. So I think with AI, it’s critically important that we get out there with AI literacy courses right now for teachers and parents, and with test drives, and basically teach all of us to think critically in these AI digital spaces, because in the long run, this is going to be a major supplement to today’s education, and everyone is going to have to understand the positive uses of AI as well as the downsides, and the guardrails and policies we all need to make sure it’s done appropriately.
Some additional data highlighted in that Common Sense Media report show some notable emerging disparities when it comes to AI use and perceptions among different racial and ethnic groups. As Jim Steyer explained:
Steyer: Parents of African American teens were almost twice as likely to indicate that these platforms will have a positive impact on their teens’ learning in school, compared to parents of white teens. But it’s really important to also note that Black teens were more than twice as likely as white or Latino teenagers to say that teachers in their schools were flagging their schoolwork as being created by generative AI when it was not. So there’s an inherent bias that we see even in these early stages of AI in the classroom. And it’s interesting because Black parents were more excited than white parents about their kids using AI. So we’re going to have to get our arms around that one and try to make it something that we address as we continue to develop this broader area of AI in the classroom.
We’re going to be bringing you more on the implications of this data in the coming week. And we’ll talk about what our struggles with social media, and our lack of regulation for it, can teach us about how to protect young people from the potential harms of AI.