this post was submitted on 11 Nov 2024
585 points (99.2% liked)
Privacy
Their stated reasons are "to keep backups" and "academic and clinical research with de-identified datasets".
They actually seem to do a fairly good job anonymizing the research datasets, unlike most "anonymized research data". For the raw data stored on their servers, though, they don't seem to use encryption properly, and their security model amounts to "the cloud host wouldn't spy on the data, right?" (hint: the data is stored on American servers, so American authorities can subpoena Amazon Web Services directly, bypassing all the "privacy guarantees". The replacement for the EU-US Privacy Shield already seemed to be on shaky legal ground, and that was before the election).
Doubt.
De-identified data is an oxymoron. Basically any dataset that's in any way interesting is identifiable.
No, it's not. If you reduce the information in the datapoints until none of them are unique, then it is obviously impossible to uniquely identify someone from them. And when you have millions of users, the data can definitely still be kept interesting.
(Though there are pretty big pitfalls here, as their report seems to leave open the possibility of not doing it correctly.)
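The idea in the comment above, coarsening records until no combination of attributes is unique, is basically k-anonymity. A minimal sketch of the concept (the toy data, bucket sizes, and helper names here are my own illustration, not anything from the company's report):

```python
from collections import Counter

def generalize_age(age):
    # Coarsen an exact age into a 10-year bucket, e.g. 37 -> "30-39".
    lo = (age // 10) * 10
    return f"{lo}-{lo + 9}"

def is_k_anonymous(records, k):
    # k-anonymity: every combination of quasi-identifiers that appears
    # in the dataset must appear at least k times.
    counts = Counter(records)
    return all(c >= k for c in counts.values())

# Toy dataset of (age, zip code) quasi-identifiers.
raw = [(34, "10115"), (37, "10117"), (52, "10115"),
       (58, "10117"), (31, "10119"), (55, "10119")]

# With exact values every record is unique, so even k=2 fails.
print(is_k_anonymous(raw, 2))  # False

# Generalize: bucket the ages, truncate zip codes to three digits.
generalized = [(generalize_age(a), z[:3]) for a, z in raw]
print(is_k_anonymous(generalized, 3))  # True
```

The pitfall the parenthetical hints at is real: picking the buckets wrong, or ignoring correlated attributes, quietly breaks the guarantee even though the check above still passes for the attributes you remembered to include.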