Privacy laws meant to protect young users often deter social media platforms from allowing children younger than 13 to have accounts. But often kids find ways to use them anyway.
TikTok, known to many kids as Musical.ly, is one such social media platform. The Federal Trade Commission recently fined the company more than $5 million for failing to protect children’s privacy.
Because of TikTok’s popularity among school-age children, researchers from the University of Central Florida and the University of Maryland teamed up to see what kids had to say about “stranger danger” and how it might be handled more effectively through design.
The children, ages 8 to 11, were asked to redesign TikTok in a way that would help keep them safe online while still being appealing to their age group.
The study found that young children often fail to recognize dangers when they are engaging with others on social media, such as encountering sexual predators or sharing so much personal information that they could be lured away, even into human trafficking. And when children do encounter a sticky situation, they would rather ask an artificial intelligence system for advice than give their parents direct control over their social media interactions.
While the children acknowledged that there are some bad people online, they were more concerned with privacy and independence. In both scenarios presented, each group opted for a design that offered AI assistance rather than one that gave parents the ability to outright ban or control a child’s activity.
These are among the findings from a study that gave children the opportunity to redesign TikTok as a way to gain insight into their online behavior.
“Kids did seem to have a sense that an online stranger wasn’t something that was physically harmful to them, so they didn’t seem to view strangers online as they do offline,” says Karla Badillo-Urquiola ’14 ’15MS, a doctoral scholar at the University of Central Florida who led the study. “It’s commonplace to interact with strangers online, so they don’t see this as abnormal. The lack of physicality lowers their sense of risk.”
“It’s not that they trust AI more, necessarily,” says Pamela Wisniewski, a UCF assistant professor of computer science who co-authored the study. “Rather, AI promises more privacy. Children don’t want to be constantly monitored by their parents, and AI could be an alternative that helps them balance their privacy and safety.”
The study will be published as part of the Association for Computing Machinery (ACM) Interaction Design and Children Conference, held June 12-15 in Boise, Idaho.
“We are definitely not suggesting that we remove parents from the equation,” Badillo-Urquiola says. “Rather, parents need to be more thoughtful about how they are monitoring technology use in ways that empower the child and teach them to use it effectively instead of doing it through surveillance, restriction and punishment. If the parent has all the control, there is no room for the child to learn on their own.”
Wisniewski has been researching teens’ online behavior for years. She has found that teens want more independence and privacy, while parental-control technologies promise safety but primarily deliver privacy-invasive levels of restriction and monitoring.
She suggests that just using technology to avoid bad situations isn’t the best approach. Rather, parents should find ways to help teach their teens how to navigate these situations to build resiliency.
The researchers wanted to examine what younger children thought about the balance between safety and autonomy, which led them to KidsTeam at the University of Maryland’s Human-Computer Interaction Lab (HCIL). KidsTeam co-designs technologies that support children’s learning and play. The group’s research examines how intergenerational design techniques shape technology use; by working with children, the resulting designs are more likely to be relevant to children’s interests and needs. All parents consented to their children’s participation in the study.
The researchers agree that some platforms aren’t appropriate for children. Both have young children of their own, so their familiarity with the topic is personal as well as academic, and their goal is to make the internet a safer place for their children. TikTok, for example, has been banned in several countries, and neither researcher allows her children to use it.
“However, we want to think about designing for child social media use more broadly, because designing to restrict and limit use isn’t designing for a positive online experience,” Wisniewski says.
Badillo-Urquiola earned her bachelor’s and master’s degrees at UCF and is pursuing her doctorate in modeling and simulation. She expects to graduate in 2021 and hopes one day to become a leading scholar in human-computer interaction and a tenured professor, a status held by only 4 percent of Hispanic women.
Wisniewski is an expert on adolescent online safety and the first computer scientist to receive the Scholars Award from the William T. Grant Foundation. She has also received multiple awards from the National Science Foundation for her work on promoting adolescent online safety.