this post was submitted on 08 Dec 2024
455 points (94.5% liked)

The GPT Era Is Already Ending (www.theatlantic.com)
submitted 2 weeks ago* (last edited 2 weeks ago) by cyrano@lemmy.dbzer0.com to c/technology@lemmy.world
 

If this is the way to superintelligence, it remains a bizarre one. “This is back to a million monkeys typing for a million years generating the works of Shakespeare,” Emily Bender told me. But OpenAI’s technology effectively crunches those years down to seconds. A company blog boasts that an o1 model scored better than most humans on a recent coding test that allowed participants to submit 50 possible solutions to each problem—but only when o1 was allowed 10,000 submissions instead. No human could come up with that many possibilities in a reasonable length of time, which is exactly the point. To OpenAI, unlimited time and resources are an advantage that its hardware-grounded models have over biology. Not even two weeks after the launch of the o1 preview, the start-up presented plans to build data centers that would each require the power generated by approximately five large nuclear reactors, enough for almost 3 million homes.

https://archive.is/xUJMG
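The quoted passage's point about 10,000 submissions versus 50 can be made concrete with a back-of-the-envelope calculation. Assuming each attempt independently solves a problem with some small probability p (the 1% figure below is purely illustrative, not from the article), the chance that at least one of k attempts succeeds grows rapidly with k:

```python
def pass_at_k(p: float, k: int) -> float:
    """Probability that at least one of k independent attempts,
    each succeeding with probability p, solves the problem."""
    return 1 - (1 - p) ** k

# With a 1% per-attempt success rate (illustrative):
print(round(pass_at_k(0.01, 50), 3))      # 50 attempts: well under half
print(round(pass_at_k(0.01, 10_000), 3))  # 10,000 attempts: near certainty
```

Under that toy model, 50 attempts succeed about 40% of the time while 10,000 attempts succeed essentially always, which is why unlimited submissions flatter the benchmark numbers.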

[–] floofloof@lemmy.ca 28 points 1 week ago (1 children)

Yesterday, alongside the release of the full o1, OpenAI announced a new premium tier of subscription to ChatGPT that enables users, for $200 a month (10 times the price of the current paid tier), to access a version of o1 that consumes even more computing power—money buys intelligence.

We poors are going to have to organize and make best use of our human intelligence to form an effective resistance against corporate rule. Or we can see where this is going.

[–] astronaut_sloth@mander.xyz 32 points 1 week ago (1 children)

The thing I'm heartened by is that there is a fundamental misunderstanding of LLMs among the MBA/"leadership" group. They actually think these models are intelligent. I've heard people say, "Well, just ask the AI," meaning ChatGPT. Anyone who actually does that and thinks it gives them a leg up is kidding themselves. If they outsource their thinking and coding to an LLM, they might start getting ahead quickly, but they will fall behind just as quickly because the quality will be middling at best. They don't understand how to best use the technology, and they will end up hanging themselves with it.

At the end of the day, all AI is just stupid number tricks. They're very fancy, impressive number tricks, but it's still a number trick that happens to be useful. Solely relying on AI will lead to the downfall of an organization.

[–] taladar@sh.itjust.works 13 points 1 week ago (3 children)

If they outsource their thinking and coding to an LLM, they might start getting ahead quickly

As a programmer, I have yet to see evidence that LLMs can even achieve that. So far, everything they produce is a mess that needs significant effort to fix before it even does what was originally asked of the LLM, unless we are talking about programs that have literally been written thousands of times already (like Hello World or Fibonacci generators).

[–] hark@lemmy.world 4 points 1 week ago

I've seen a junior developer use it to get a quicker start on things like boilerplate code, configuration, or a starting point for implementing an algorithm. It's kind of like a souped-up version of piecing together Stack Overflow code snippets. Just like with SO, the output needs tweaking, and someone who relies too much on either SO or AI will never develop the skills to do that tweaking themselves.

[–] uranibaba@lemmy.world 4 points 1 week ago (1 children)

I find LLMs great for creating shorter snippets of code. They can also be a great starting point when getting into something you are not familiar with.

[–] taladar@sh.itjust.works 5 points 1 week ago

Even asking for an example of how to use a specific API has failed about 50% of the time; it tends to hallucinate entire parts of the API, or even entire libraries, that don't exist.
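As a hypothetical illustration of the kind of hallucination described (not taken from any specific LLM output): a model will sometimes suggest a function name that exists in one language's ecosystem but not in the one you asked about, such as JavaScript's `JSON.parse` offered as a Python call:

```python
import json

# The real Python API for parsing JSON:
data = json.loads('{"a": 1}')
print(data["a"])

# A plausible-looking hallucination: json.parse() is JavaScript, not Python.
# The attribute simply does not exist on Python's json module.
print(hasattr(json, "parse"))  # False
```

The suggestion looks reasonable at a glance, which is exactly why it wastes time: you only find out it's fiction when the code fails to run.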

[–] driving_crooner@lemmy.eco.br 3 points 1 week ago

I'm not a programmer, more like a data scientist, and I use LLMs all day. I write my shitty but pretty specific code, check that it works, and then pass it to the LLM asking for refactoring and optimization. Sometimes its version saves me 2 seconds on a 30-second script; other times it saves me 35 minutes on a 36-minute script. It's also pretty good at helping you make graphics.
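The kind of win described above, same output but much less runtime, usually comes from the LLM swapping an inefficient pattern for an idiomatic one. A minimal, hypothetical sketch (not the commenter's actual code): building a string by repeated `+=` in a loop is quadratic, while `str.join` is linear.

```python
# Naive version: each += copies the whole string, O(n^2) overall.
def slow_concat(n: int) -> str:
    s = ""
    for i in range(n):
        s += str(i)
    return s

# Refactor an LLM might suggest: one pass with str.join, O(n).
def fast_concat(n: int) -> str:
    return "".join(str(i) for i in range(n))

# Same result, so the refactor is safe to adopt after checking it.
assert slow_concat(1000) == fast_concat(1000)
```

Verifying equivalence like this before adopting the refactored version is the "check that it works" step the commenter describes.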