"Italy's data protection authority has fined ChatGPT maker OpenAI a fine of €15 million ($15.66 million) over how the generative artificial intelligence application handles personal data.
The fine comes nearly a year after the authority, known as the Garante, found that ChatGPT processed users' information to train its service in violation of the European Union's General Data Protection Regulation (GDPR).
The authority said OpenAI did not notify it of a security breach that took place in March 2023, and that it processed the personal information of users to train ChatGPT without an adequate legal basis for doing so. It also accused the company of violating the principle of transparency and failing to meet related information obligations toward users.
"Furthermore, OpenAI has not provided for mechanisms for age verification, which could lead to the risk of exposing children under 13 to inappropriate responses with respect to their degree of development and self-awareness," the Garante said.
Besides levying the €15 million fine, the authority has ordered the company to carry out a six-month communication campaign on radio, television, newspapers, and the internet to promote public understanding of how ChatGPT works.
The campaign must specifically cover the nature of the data collected, both user and non-user information, for the purpose of training its models, and the rights that users can exercise to object to, rectify, or delete that data."
#AI #GenerativeAI #EU #Italy #OpenAI #DataProtection #Privacy #ChatGPT
"The utility of the activity data in risk mitigation and behavioural modification is questionable. For example, an actuary we interviewed, who has worked on risk pricing for behavioural Insurtech products, referred to programs built around fitness wearables for life/health insurance, such as Vitality, as ‘gimmicks’, or primarily branding tactics, without real-world proven applications in behavioural risk modification. The metrics some of the science is based on, such as the BMI or 10,000 steps requirement, despite being so widely associated with healthy lifestyles, have ‘limited scientific basis.’ Big issues the industry is facing are also the inconsistency of use of the activity trackers by policyholders, and the unreliability of the data collected. Another actuary at a major insurance company told us there was really nothing to stop people from falsifying their data to maintain their status (and rewards) in programs like Vitality. Insurers know that somebody could just strap a FitBit to a dog and let it run loose to ensure the person reaches their activity levels per day requirement. The general scepticism (if not broad failure) of products and programs like Vitality to capture data useful for pricing premiums or handling claims—let alone actually induce behavioural change in meaningful, measurable ways—is widely acknowledged in the industry, but not publicly discussed."
https://www.sciencedirect.com/science/article/pii/S0267364924001614