this post was submitted on 23 Sep 2024
173 points (94.8% liked)

Technology

59612 readers
3270 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
top 22 comments
sorted by: hot top controversial new old
[–] Max_P@lemmy.max-p.me 89 points 2 months ago

OpenAI: Here's a new model that can think in steps and reason about things!

User: How did you conclude this is the correct answer?

OpenAI: No! Not like that! banhammer

[–] Endmaker@ani.social 86 points 2 months ago (3 children)
[–] glitchdx@lemmy.world 25 points 2 months ago (1 children)

did anyone ever actually assume that "open" wasn't a lie?

[–] Eril 37 points 2 months ago (1 children)

When I heard about it first, I thought it was some open source project, because of the name. :(

[–] Womble@lemmy.world 12 points 2 months ago

It was, originally. GPT-2 was eventually released after some push back from openAI and the models prior to that were fully released immediately. Its been apparent for quite a while that OpenAI have been transitioning from a non-profit org interested in pushing technology forward to a VC backed monopoly-seeking company. The big Altman putsch/counter putsch was just the solidfying of that.

[–] TheBat@lemmy.world 15 points 2 months ago

Open, not like a library, but like Sandworm's mouth.

[–] T00l_shed@lemmy.world 12 points 2 months ago
[–] spacecadet@lemm.ee 75 points 2 months ago (2 children)

Please be applicable to business accounts, Please be applicable to business accounts, Please be applicable to business accounts, Please be applicable to business accounts,

I want to get rid of this shit so bad, of another junior dev submits a shit MR they can’t explain because they had chatGPT write it I’m going to explode. Also, the number of AI executives we have in charge of our manufacturing company is somehow more than we have in charge of manufacturing, and guess what?! They are all MBAs who haven’t written a god damn line of code in their life but have become professional “prompt engineers”.

[–] yemmly@lemmy.world 42 points 2 months ago (1 children)

Every time I hear someone talking up prompt engineering, I feel like I should say something. But I don’t.

[–] elrik@lemmy.world 28 points 2 months ago

"Prompt engineering" must be the easiest job to replace with AI. You can simply ask an LLM to generate and refine prompts.

[–] Reverendender@sh.itjust.works 9 points 2 months ago (3 children)

Do they not test them before submission?

[–] vrighter@discuss.tchncs.de 18 points 2 months ago

I've met someone employed as a dev, who not only didn't know that the compiler generates an executable file, but actually spent a month trying to change the code, not noticing that 0 of their code changes were having any effect whatsoever (because they kept running an old build of mine)

[–] SlopppyEngineer@lemmy.world 13 points 2 months ago (1 children)

They probably tested in ideal circumstances and their stuff breaks down when even coming close to an edge case.

[–] Reverendender@sh.itjust.works 10 points 2 months ago (1 children)

I would be really interested in learning a language. The AI assistance method actually meshes very well with my learning style. I would never submit anything to anyone that I was not certain was good working code though. My brain wouldn’t let me do it. Now i just need to choose a language.

[–] Failx@sh.itjust.works 15 points 2 months ago (1 children)

I applaud your ethics. But you don't know how close you are to falling from grace.

Just yesterday I had to remove perfectly tested, sensible and non-ai code from our production system, not because that it did not do what the author intended, but because what the author intended was flawed. And this is exactly what ai also cannot teach you right now: Taking a step back to realize that your code might be right, but your intentions are not.

Definetly keep at it. But be aware you will do the wrong things even with perfectly working code.

[–] SlopppyEngineer@lemmy.world 4 points 2 months ago

Yeah, the code can work flawlessly in test, but after a few months of production there are a lot more records or files and the code starts to have issues.

[–] RecluseRamble@lemmy.dbzer0.com 4 points 2 months ago

Probably don't know how to get it to run.

[–] Chozo@fedia.io 11 points 2 months ago (2 children)

I don't understand why it's so hard to sandbox an LLM's configuration data from it's training data.

[–] MoondropLight@thelemmy.club 10 points 2 months ago

Because its all one thing. The promise of AI is that you can basically throw anything at it, and you don't need to understand exactly how/why it makes the connections it does; you just adjust the weights until it kinda looks alright.

There are many structural hacks used to give it better results (and in this case some form of reasoning) but ultimately they're mostly relying on connecting multiple nets together and retrying queries and such. There's no human understandable settings. Neural networks are basically one input and one output (unless you're training it).

[–] WalnutLum@lemmy.ml 1 points 2 months ago (1 children)

What do you mean by "configuration data?"

[–] Chozo@fedia.io 2 points 2 months ago (1 children)

The data used to configure it.

[–] WalnutLum@lemmy.ml 1 points 2 months ago

Do you mean finetune data?

A model's configuration data is training data.