this post was submitted on 25 Jul 2024
993 points (97.4% liked)

Technology


The new global study, conducted in partnership with The Upwork Research Institute, surveyed 2,500 C-suite executives, full-time employees, and freelancers worldwide. The results show that optimistic expectations about AI's impact are not aligning with the reality many employees face: there is a disconnect between managers' high expectations and the actual experiences of employees using AI.

Despite 96% of C-suite executives expecting AI to boost productivity, the study reveals that 77% of employees using AI say it has added to their workload and created challenges in achieving the expected productivity gains. Not only is AI increasing the workloads of full-time employees, it's hampering productivity and contributing to employee burnout.

[–] FartsWithAnAccent@fedia.io 106 points 4 months ago* (last edited 4 months ago) (4 children)

They tried implementing AI in a few of our systems and the results were always fucking useless. What we call "AI" can be helpful in some ways, but I'd bet the vast majority of it is bullshit half-assed implementations so companies can claim they're using "AI".

[–] DragonTypeWyvern@midwest.social 32 points 4 months ago (1 children)

The one thing "AI" has improved in my life has been a banking app search function being slightly better.

Oh, and a porn game did okay with it as an art generator, but the creator was still strangely lazy about it. You're telling me you can make infinite free pictures of big tittied goth girls and you only included a few?

[–] MindTraveller@lemmy.ca 29 points 4 months ago (2 children)

Generating multiple pictures of the same character is actually pretty hard. For example, let's say you're making a visual novel with a bunch of anime girls. You spin up your generative AI, and it gives you a great picture of a girl with a good design in a neutral pose. We'll call her Alice. Well, now you need a happy Alice, a sad Alice, a horny Alice, an Alice with her face covered with cum, a nude Alice, and a hyper breast expansion Alice. Getting the AI to recreate Alice, who does not exist in the training data, is going to be very difficult even once.

And all of this is multiplied ten times over if you want granular changes to a character. Let's say you're making a fat fetish game and Alice is supposed to gain weight as the player feeds her. Now you need everything I described, at 10 different weights. You're going to need to be extremely specific with the AI and it's probably going to produce dozens of incorrect pictures for every time it gets it right. Getting it right might just plain be impossible if the AI doesn't understand the assignment well enough.

[–] TheBat@lemmy.world 6 points 4 months ago

> Generating multiple pictures of the same character is actually pretty hard.

Not from what I have seen on Civitai. You can train a model on a specific character or person. Same goes for facial expressions.

Of course, you need to generate hundreds of images to get the few you might consider acceptable.

[–] okwhateverdude@lemmy.world 4 points 4 months ago

This is a solvable problem. Just make a LoRA of the Alice character. For modifications to the character, you might also need to make more LoRAs, but again totally doable. Then at runtime, you are just shuffling LoRAs when you need to generate.
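In spirit, a LoRA is just a small low-rank weight delta swapped in on top of a frozen base model, which is why shuffling them at runtime is cheap. A toy numpy sketch of the idea (random matrices stand in for trained adapters; the adapter names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 8, 8, 2            # layer dims; rank r << d
W = rng.normal(size=(d, k))  # frozen base weight of one layer

def make_lora(rank=r, alpha=4.0):
    # A LoRA is a low-rank delta: W_eff = W + (alpha / rank) * B @ A
    A = rng.normal(size=(rank, k)) * 0.01
    B = rng.normal(size=(d, rank)) * 0.01
    return (alpha / rank) * (B @ A)

# One adapter per character/variant; in practice each would be trained
# on images of that variant, here they're random stand-ins.
adapters = {"alice": make_lora(), "alice_weight_10": make_lora()}

def effective_weight(name):
    # "Shuffling LoRAs" at runtime: the base model never changes,
    # you just add the selected delta before generating.
    return W + adapters[name]
```

The base weights stay untouched, so switching from one Alice variant to another is just picking a different delta, not reloading the model.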

You're correct that it will struggle to give you exactly what you want, because you need to have some "machine sympathy." If you think in smaller steps and get the machine to do those smaller, more doable steps, you can eventually accomplish the overall goal. It is the difference between asking a model to write a story and asking it to first generate characters, a scenario, and a plot, then using that as context to write just a small part of the story. The first story will be bland and incoherent after a while. The second, through better context control, will weave you a pretty consistent story.

These models are not magic (even though it feels like it). That they follow instructions at all is amazing, but they simply will not get the nuance of the overall picture and be able to accomplish it un-aided. If you think of them as natural language processors capable of simple, mechanical tasks and drive them mechanistically, you'll get much better results.
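The stepwise approach above can be sketched as a simple pipeline where each step's output is fed back as context for the next (`call_model` here is a hypothetical stub standing in for any real LLM API):

```python
def call_model(prompt: str) -> str:
    # Stub: a real implementation would call an LLM API here.
    return f"<output for: {prompt[:40]}...>"

def staged_story():
    """Drive the model mechanistically: small tasks, accumulated context."""
    context = []
    for step in ("list the main characters",
                 "describe the scenario",
                 "outline the plot",
                 "write scene 1 only"):
        # Every prompt carries all previous results, so each step
        # stays consistent with what came before.
        prompt = "\n".join(context + [f"Task: {step}"])
        context.append(f"{step}: {call_model(prompt)}")
    return context

stages = staged_story()
```

The point is the shape of the loop, not the stub: by the last step the model is writing one scene with the characters, scenario, and plot already pinned down in context.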

[–] speeding_slug@feddit.nl 9 points 4 months ago

And they don't even consider the consequences of deploying systems that may farm your company data in order to train their models "to better serve you". Like, what the hell, guys?

[–] Hackworth@lemmy.world 8 points 4 months ago* (last edited 4 months ago) (1 children)

What were they trying to accomplish?

[–] FartsWithAnAccent@fedia.io 37 points 4 months ago (4 children)

Looking like they were doing something with AI, no joke.

One example was "Freddy", an AI for a ticketing system called Freshdesk: It would try to suggest other tickets it thought were related or helpful but they were, not one fucking time, related or helpful.

[–] Hackworth@lemmy.world 16 points 4 months ago (1 children)

Ahh, those things - I've seen half a dozen platforms implement some version of that, and they're always garbage. It's such a weird choice, too, since we already have semi-useful recommendation systems that run on traditional algorithms.

[–] FartsWithAnAccent@fedia.io 11 points 4 months ago

It's all about being able to say, "Look, we have AI!"

[–] MentallyExhausted@reddthat.com 8 points 4 months ago (2 children)

That's pretty funny, since manually searching some keywords can usually surface helpful data. It should be pretty straightforward to automate even without an LLM.

[–] FartsWithAnAccent@fedia.io 6 points 4 months ago

Yep, we already wrote out all the documentation for everything too, so it's doubly useless lol. It sucked at pulling relevant KB articles too, even though there are fields for everything. A scripted version would have been trivial to build if they'd wanted something helpful, but they really just wanted to get on that AI hype train regardless of usefulness.

[–] Static_Rocket@lemmy.world 1 points 4 months ago

TF-IDF and some light rules should work well and be significantly faster.
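For the curious, a minimal stdlib sketch of that TF-IDF approach (the ticket texts are hypothetical; a real system would add stemming, stop words, and field weighting):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF vectors: no LLM needed to suggest related tickets."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    df = Counter(t for doc in tokenized for t in set(doc))
    idf = {t: math.log(n / df[t]) for t in df}  # common terms score 0
    return [{t: c / len(doc) * idf[t] for t, c in Counter(doc).items()}
            for doc in tokenized]

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

tickets = [  # hypothetical ticket texts
    "printer jams when printing duplex",
    "cannot connect to office vpn",
    "vpn connection drops every hour",
    "duplex printing produces blank pages",
]
vecs = tfidf_vectors(tickets)

def related(i):
    # Most similar other ticket to ticket i.
    scores = [(cosine(vecs[i], v), j) for j, v in enumerate(vecs) if j != i]
    return max(scores)[1]
```

With these toy tickets, `related(1)` picks the other VPN ticket and `related(0)` picks the other duplex-printing one, purely from shared rare terms.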

[–] dgriffith@aussie.zone 8 points 4 months ago* (last edited 4 months ago) (1 children)

As an Australian I find the name Freddy quite apt then.

There is an old saying in Aus that runs along the lines of, "even Blind Freddy could see that....", indicating that the solution is so obvious that even a blind person could see it.

Having your Freddy be Blind Freddy makes its useless answers completely expected. Maybe that was the devs' internal name for it and it escaped to marketing haha.

[–] FartsWithAnAccent@fedia.io 4 points 4 months ago* (last edited 4 months ago)

I actually ended up becoming blind to Freddy myself because of how profoundly useless it was: I permanently blocked the page elements that showed it from my browser lol. I think Fresh has since given up on it.

Don't get me wrong, the rest of the service is actually pretty great and I'd recommend Fresh to anyone in search of a decent ticketing system. Freddy sucks though.

[–] rottingleaf@lemmy.world 1 points 4 months ago

It's bloody amazing: here I am, having spent my childhood reading about the 80/20 rule, critical points, Guderian's Schwerpunkt, the Tao Te Ching, Sun Tzu, all that stuff about key decisions made by a human mind being of absolutely overriding importance over anything tools can do.

These morons are sticking "AI" exactly where a human mind is superior to anything else at any realistic scale, and a human mind, of course (were it applied instead of their butts), could have identified that the task at hand has nothing to do with what "AI" can do.

I mean, half of humanity's philosophy is about garbage thinking being of negative worth and non-garbage thinking being precious, in any task. These people are desperately trying to produce garbage thinking with computers, as if there weren't enough of it already.

[–] menemen@lemmy.world 4 points 4 months ago

It is great for pattern recognition (we use it to recognize damage in pipes) and probably pattern reproduction (never used it for that). Haven't really seen much other real-life value.