Character.ai is changing how it works for teenagers. The company wants to make its platform a “safe” space with new controls for parents. This change comes as the site faces two lawsuits in the US. One lawsuit is about the death of a teenager. Some people say the platform is a “clear and present danger” to young users.
Character.ai will add new features to help keep kids safe. Parents will be able to see how much time their child spends on the site and which chatbots they talk to most. The first new parental controls will be ready by March 2025.
Andy Burrows, from the Molly Rose Foundation, thinks these changes are too late and not good enough. He calls them a “sticking plaster fix” to safety problems.
The platform has faced criticism before. In October, people found chatbot versions of two teenagers who had died. There are also serious concerns about how it protects children. For example, a family said a chatbot encouraged a 17-year-old to harm his parents.
Character.ai will now warn users after they have talked to a chatbot for an hour. It will also remind users that chatbots are not real people. These changes are a first step, but experts want to see if they really work as the platform grows.
Vocabulary List:
- Platform /ˈplæt.fɔːrm/ (noun): A digital service or application that facilitates interaction or engagement.
- Controls /kənˈtroʊlz/ (noun): Regulations or mechanisms to manage or oversee activities.
- Criticism /ˈkrɪtɪˌsɪzəm/ (noun): The expression of disapproval based on perceived faults or mistakes.
- Encouraged /ɪnˈkɜːrɪdʒd/ (verb): Gave support, confidence, or hope to someone.
- Danger /ˈdeɪn.dʒər/ (noun): The possibility of suffering harm or injury.
- Warnings /ˈwɔːrnɪŋz/ (noun): Notices or signs that indicate possible danger or trouble.