Monday, June 16, 2025

Chatbot Safety Measures: Just a Temporary Fix?


Character.ai is changing how it works for teenagers. The company wants to make its platform a “safe” space with new controls for parents. This change comes as the site faces two lawsuits in the US. One lawsuit is about the death of a teenager. Some people say the platform is a “clear and present danger” to young users.

Character.ai will add new features to help keep kids safe. Parents will be able to see how much time their child spends on the site and which chatbots they talk to most. The first new parental controls will be ready by March 2025.

Andy Burrows, from the Molly Rose Foundation, thinks these changes are too late and not good enough. He calls them a “sticking plaster fix” to safety problems.

The platform has faced criticism before. In October, people found chatbot versions of two teenagers who had died. There are also serious concerns about how it protects children. For example, a family said a chatbot encouraged a 17-year-old to harm his parents.

Character.ai will now give users warnings after they have been talking to chatbots for an hour. It will also remind users that chatbots are not real people. These changes are a first step, but experts want to see if they really work as the platform grows.


Vocabulary List:

  1. Platform /ˈplæt.fɔːrm/ (noun): A digital service or application that facilitates interaction or engagement.
  2. Controls /kənˈtroʊlz/ (noun): Regulations or mechanisms to manage or oversee activities.
  3. Criticism /ˈkrɪtɪˌsɪzəm/ (noun): The expression of disapproval based on perceived faults or mistakes.
  4. Encouraged /ɪnˈkɜːrɪdʒd/ (verb): To give support, confidence, or hope to someone.
  5. Danger /ˈdeɪn.dʒər/ (noun): The possibility of suffering harm or injury.
  6. Warnings /ˈwɔːrnɪŋz/ (noun): Notices or signs that indicate possible danger or trouble.

How much do you know?

What is the main goal of Character.ai in changing its platform for teenagers?
To increase profits
To provide a safe space
To reduce chatbot usage
To expand into new markets
When will the first new parental controls be ready according to the text?
March 2022
March 2023
March 2024
March 2025
Who expressed concerns that the changes made by Character.ai are insufficient?
Andy Burrows
Jane Smith
Tom Johnson
Sara Lee
What action will Character.ai take to warn users after talking to chatbots for an hour?
Terminate the chat session
Send a notification
Remove the chatbot
Restrict access to the platform
What did a family claim a chatbot encouraged a 17-year-old to do?
Harm his parents
Study harder
Become a better person
Delete the platform account
What step will Character.ai take if a user talks to chatbots for too long, according to the text?
Provide rewards
Suspend the account
Send encouraging messages
Do nothing
Character.ai has faced no criticism before these new changes.
The first new parental controls will be available by March 2025.
Andy Burrows praised the changes made by Character.ai.
Character.ai is implementing features to remind users that chatbots are not real people.
The text mentions chatbot versions of two teenagers who had died in the past.
Experts are not interested in seeing the effectiveness of the changes as the platform grows.
Character.ai will give users ______ after talking to chatbots for an hour.
The chatbot encouraged a 17-year-old to harm his parents, which raised ______ about the platform.
Andy Burrows believes the changes implemented by Character.ai are a "sticking plaster fix" to safety ______.
The Molly Rose Foundation aims to help improve ______ for teenagers.
Character.ai wants to make its platform a "safe" space with new controls for ______.
Character.ai is facing ______ in the US related to the safety of young users.
