TechFinancials
Opinion

Teens On Social Media Need Both Protection And Privacy – AI Could Help Get The Balance Right

The prevalence of risks for teens on social media is well established. These risks range from harassment and bullying to poor mental health and sexual exploitation.
By Afsaneh Razi · 2024-02-01 (Updated: 2024-04-02)

[Image: Lady with a smartphone. Photo: Pexels]

Meta announced on Jan. 9, 2024, that it will protect teen users by blocking them from viewing content on Instagram and Facebook that the company deems to be harmful, including content related to suicide and eating disorders. The move comes as federal and state governments have increased pressure on social media companies to provide safety measures for teens.

At the same time, teens turn to their peers on social media for support that they can’t get elsewhere. Efforts to protect teens could inadvertently make it harder for them to get that help.

Congress has held numerous hearings in recent years about social media and the risks to young people. The CEOs of Meta, X – formerly known as Twitter – TikTok, Snap and Discord are scheduled to testify before the Senate Judiciary Committee on Jan. 31, 2024, about their efforts to protect minors from sexual exploitation.

The tech companies “finally are being forced to acknowledge their failures when it comes to protecting kids,” according to a statement in advance of the hearing from the committee’s chair and ranking member, Senators Dick Durbin (D-Ill.) and Lindsey Graham (R-S.C.), respectively.

I’m a researcher who studies online safety. My colleagues and I have been studying teen social media interactions and the effectiveness of platforms’ efforts to protect users. Research shows that while teens do face danger on social media, they also find peer support, particularly via direct messaging. We have identified a set of steps that social media platforms could take to protect users while also protecting their privacy and autonomy online.

What kids are facing

The prevalence of risks for teens on social media is well established. These risks range from harassment and bullying to poor mental health and sexual exploitation. Investigations have shown that companies such as Meta have known that their platforms exacerbate mental health issues, helping make youth mental health one of the U.S. Surgeon General’s priorities.

[Video: Teens’ mental health has been deteriorating in the age of social media.]

Much of adolescent online safety research is from self-reported data such as surveys. There’s a need for more investigation of young people’s real-world private interactions and their perspectives on online risks. To address this need, my colleagues and I collected a large dataset of young people’s Instagram activity, including more than 7 million direct messages. We asked young people to annotate their own conversations and identify the messages that made them feel uncomfortable or unsafe.

Using this dataset, we found that direct interactions can be crucial for young people seeking support on issues ranging from daily life to mental health concerns. Our findings suggest that young people used these channels to discuss their public interactions in more depth. Because these settings are built on mutual trust, teens felt safe asking for help.

Research suggests that privacy of online discourse plays an important role in the online safety of young people, and at the same time a considerable amount of harmful interactions on these platforms comes in the form of private messages. Unsafe messages flagged by users in our dataset included harassment, sexual messages, sexual solicitation, nudity, pornography, hate speech and sale or promotion of illegal activities.

However, it has become more difficult for platforms to use automated technology to detect and prevent online risks for teens because the platforms have been pressured to protect user privacy. For example, Meta has implemented end-to-end encryption for all messages on its platforms to ensure message content is secure and only accessible by participants in conversations.

Also, the steps Meta has taken to block suicide and eating disorder content keep that content from public posts and search even if a teen’s friend has posted it. This means that the teen who shared that content would be left alone without their friends’ and peers’ support. In addition, Meta’s content strategy doesn’t address the unsafe interactions in private conversations teens have online.

Striking a balance

The challenge, then, is to protect younger users without invading their privacy. To that end, we conducted a study to find out how we can use the minimum data to detect unsafe messages. We wanted to understand how various features or metadata of risky conversations such as length of the conversation, average response time and the relationships of the participants in the conversation can contribute to machine learning programs detecting these risks. For example, previous research has shown that risky conversations tend to be short and one-sided, as when strangers make unwanted advances.

We found that our machine learning program was able to identify unsafe conversations 87% of the time using only metadata for the conversations. However, analyzing the text, images and videos of the conversations is the most effective approach to identify the type and severity of the risk.
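As an illustration of the kind of metadata-only signals described above, the sketch below computes features such as conversation length, one-sidedness and average response time from sender IDs and timestamps alone, never touching message content. This is not the study’s actual pipeline; the feature names and structure are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str        # sender ID only -- no message content is needed
    timestamp: float   # seconds since epoch

def metadata_features(messages: list[Message]) -> dict:
    """Compute privacy-preserving features from conversation metadata alone."""
    n = len(messages)
    senders = [m.sender for m in messages]
    # Share of messages from the most active participant: risky
    # conversations tend to be short and one-sided.
    one_sidedness = max(senders.count(s) for s in set(senders)) / n
    # Average gap between consecutive messages.
    gaps = [b.timestamp - a.timestamp for a, b in zip(messages, messages[1:])]
    avg_response_time = sum(gaps) / len(gaps) if gaps else 0.0
    return {
        "length": n,
        "one_sidedness": one_sidedness,
        "avg_response_time": avg_response_time,
    }
```

A machine learning classifier would then be trained on feature vectors like these rather than on the text, images or videos themselves, which is what makes the approach compatible with end-to-end encryption.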

These results highlight the significance of metadata for distinguishing unsafe conversations and could be used as a guideline for platforms to design artificial intelligence risk identification. The platforms could use high-level features such as metadata to block harmful content without scanning that content and thereby violating users’ privacy. For example, a persistent harasser who a young person wants to avoid would produce metadata – repeated, short, one-sided communications between unconnected users – that an AI system could use to block the harasser.
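The harasser scenario above could be expressed as a simple metadata rule: repeated, one-sided messages between users with no prior connection. The thresholds below are hypothetical, chosen only to make the pattern concrete, not derived from the study.

```python
def looks_like_harassment(msg_count: int, one_sidedness: float,
                          users_connected: bool,
                          min_msgs: int = 5,
                          sided_threshold: float = 0.9) -> bool:
    """Flag the metadata pattern described in the text: repeated,
    one-sided messages between unconnected users.
    Thresholds are illustrative, not empirically derived."""
    return (not users_connected
            and msg_count >= min_msgs
            and one_sidedness >= sided_threshold)
```

A real system would combine many such signals in a trained model, but even this toy rule shows how a block decision can be made without reading a single message.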

Ideally, young people and their caregivers would be given the option by design to turn on encryption, risk detection or both, so they can decide on the trade-offs between privacy and safety for themselves.

Afsaneh Razi, Assistant Professor of Information Science, Drexel University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Tags: AI, Instagram, Online Safety, Privacy, Social Media, Technology