It feels like generative AI tools are everywhere these days. While they’ve become essential for professionals across industries, they also offer plenty of entertainment value and are great for satisfying everyday curiosity. Kids are among their most eager users. But is the technology safe for children? Italy’s data protection watchdog has an answer to this question, and it’s a “no” with a side of a €15 million fine.
Why is Italy fining OpenAI?
Italy’s data protection authority, the Garante per la protezione dei dati personali (GPDP), has been scrutinising OpenAI since 2023, when it temporarily banned ChatGPT over data privacy concerns. This latest fine stems from findings that suggest OpenAI used personal data to train its AI models without an adequate legal basis, and that the company failed to notify the authority of a data breach. The GPDP also criticised OpenAI for not implementing effective age verification, which could expose kids under 13 to inappropriate content generated by the chatbot.
OpenAI says the fine is almost 20 times the revenue it earned in Italy during the relevant period, and it plans to appeal.
OpenAI has also been ordered to launch a six-month public awareness campaign in Italy. The company must educate users on how their data is used and how they can exercise their rights under GDPR.
A fix in the pipeline?
On Christmas Eve, OpenAI CEO Sam Altman posted on X: “What would you like OpenAI to build/fix in 2025?” While the thread was flooded with suggestions, one idea stood out.
User @P4LSEC replied, “Family accounts. Let me create accounts for my kids with guard rails. Let their curiosity take off, but within reasonable limits, as determined by the parent. Maybe we could even get insights about our kids from their usage!” Altman seemed to like the idea.
It’s a suggestion that feels overdue. Altman’s acknowledgement is promising, but good ideas take time to become real features.
A broader debate outside of Italy
It’s not just Italy. Regulators worldwide are grappling with how to ensure that AI systems comply with privacy laws and ethical standards. The EU’s AI Act, whose obligations phase in over the next few years, sets strict requirements for general-purpose systems like ChatGPT. U.S. regulators are keeping a close eye on AI companies too, although their approach has been slower and less coordinated.
But as a parent, can you trust AI tools to engage your kids responsibly? The lack of age verification or family-friendly settings does not inspire confidence in companies like OpenAI. While OpenAI insists it’s committed to privacy and user safety, the fine and the ongoing scrutiny suggest there’s still a long way to go.
What’s next?
The €15 million fine might be just the beginning. As AI becomes more integrated into education, entertainment, and even social interactions, companies will need to do more than just appeal fines and promise fixes. Regulators are signalling that transparency, accountability, and proactive safeguards are essential.
If OpenAI follows through on Altman’s nod to family-friendly accounts, it could be a step toward addressing these concerns. But until then, the onus is on parents to navigate these tools cautiously. After all, while ChatGPT can be a powerful educational resource, it’s clear that regulators in the EU (and beyond) don’t yet trust it to play safely with your kids.