One of the first things drilled into me in journalism was "Smith thinks" should be recast to "Smith said he thinks."
The C-suite is likely well aware of the limitations, but shareholders like to hear about the hot new thing.
The thing is, the idea isn't wrong. Automating complex tasks is a bitch, but the repetitive tasks that turn any job into a grind are prime candidates. The larger issue is that, instead of letting employees spend the freed-up time on more fulfilling work, companies tend to respond to increased efficiency with layoffs.
The problem is, this varies from person to person. My team divvies up tasks (or did; I quit not too long ago) based on what different people enjoy doing, and no executive would have any clue which recurring tasks are repetitive in the derogatory sense and which are just us doing our job. I like doing network traffic analysis. My coworker likes container hardening. Both of those could be automated, but that would remove something each of us enjoys from our respective jobs.
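Just to make "could be automated" concrete, here's a minimal sketch of scripted traffic triage. The log format, file path, and threshold are all hypothetical, not anything my team actually ran:

```python
import csv
import statistics
from collections import defaultdict

# Hypothetical flow log with columns: src_ip,dst_ip,bytes_out
FLOW_LOG = "flows.csv"  # assumed path, purely illustrative

def flag_unusual_destinations(path, z_threshold=3.0):
    """Flag destinations whose total outbound volume is an outlier
    (more than z_threshold standard deviations above the mean)."""
    volume = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            volume[row["dst_ip"]] += int(row["bytes_out"])

    totals = list(volume.values())
    if len(totals) < 2:
        return []
    mean, stdev = statistics.mean(totals), statistics.stdev(totals)
    if stdev == 0:
        return []
    return [dst for dst, b in volume.items()
            if (b - mean) / stdev > z_threshold]

if __name__ == "__main__":
    for dst in flag_unusual_destinations(FLOW_LOG):
        print(f"unusual outbound volume to {dst}; needs a human look")
```

And that's sort of the point: the mechanical pass over the log is scriptable, but deciding what counts as "unusual" and what to do about it is exactly the part I'd want to keep.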
A big move in recent AI company rhetoric is that AI will "do analyses" and people will "make decisions", but how on earth are you going to keep up the technical understanding needed to make a decision without doing the analyses?
An AI saying, "I think this is malicious, what do you want to do?" isn't a real decision if the person answering can't verify or refute the analysis.
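Sketched in code (hypothetical names, obviously; just to show the shape of the problem): when the analyst can't re-derive the analysis, the "decision" function degenerates into echoing the model.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    label: str      # e.g. "malicious" or "benign"
    rationale: str  # model-generated explanation

def analyst_decision(verdict: Verdict, can_reanalyze: bool) -> str:
    """The 'human decision' the rhetoric promises."""
    if not can_reanalyze:
        # Without the skill (or time) to check the rationale, the only
        # available move is to trust the label: a rubber stamp.
        return verdict.label
    # A real decision means re-deriving the conclusion, which takes the
    # very analysis practice the model was supposed to take over.
    raise NotImplementedError("requires the analyst to do the analysis")

if __name__ == "__main__":
    v = Verdict("malicious", "beacon-like traffic to a rare domain")
    print(analyst_decision(v, can_reanalyze=False))  # "malicious", unexamined
```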