this post was submitted on 17 Sep 2024

Futurology

top 3 comments
[–] sabreW4K3@lazysoci.al 12 points 1 month ago (2 children)

Because this is what AI is supposed to do: analyse vast swathes of data to surface the important information and let smart people make better-informed decisions.

[–] Lugh@futurology.today 9 points 1 month ago* (last edited 1 month ago)

I agree. To me, one of the most frustrating aspects of much online discussion of AI is that it focuses on trivial chatter and nonsense, in particular boring fanboyism over the likes of Musk or OpenAI. Meanwhile the truly Earth-shattering long-term developments are happening elsewhere, and this is one example of them. Halving unexpected deaths in hospital settings is a huge thing, and yet it goes barely reported compared to the brain-dead rah-rah Silicon Valley gossip that passes for most discussion about AI.

[–] Zexks@lemmy.world 6 points 1 month ago

I’m all for AI, but I want to point out a spot of contention that this comment misses. The people bitching about it are focused on hallucinations and other AI fuckups (while ignoring the exact same shit from humans, I’ll add). They simply will not trust whatever the AI outputs for those reasons, regardless of how relevant the data might be.