this post was submitted on 27 Dec 2024
344 points (95.0% liked)

Technology

[–] FlyingSquid@lemmy.world 43 points 1 day ago* (last edited 1 day ago)

"It's at a human-level equivalent of intelligence when it makes enough profits" is certainly an interesting definition and, in the case of the C-suiters, possibly not entirely wrong.

[–] Free_Opinions@feddit.uk 51 points 1 day ago (8 children)

We've had a definition for AGI for decades. It's a system that can do any cognitive task as well as a human can, or better. Humans are "generally intelligent"; replicate the same thing artificially and you've got AGI.

[–] Toribor@corndog.social 3 points 2 hours ago

Oh yeah!? If I'm so dang smart, why am I not generating 100 billion dollars in value?

[–] rational_lib@lemmy.world 1 points 5 hours ago

So then how do we define natural general intelligence? I'd argue it's when something can do better than chance at solving a task without prior training data particular to that task. If a person plays Tetris for the first time, maybe they don't do very well, but they probably do better than a random set of button inputs.

Likewise with AGI: say you feed an LLM text about the rules of Tetris but no button presses or actual game data, and then hook it up to play the game. Will it do significantly better than chance? My guess is no, but it would be interesting to try.
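
(A rough sketch of how that "better than chance" comparison could be scored: assuming we had per-game scores from the LLM-driven player and from a random-button baseline, a simple permutation test estimates how often the observed gap would appear by chance. The score lists below are made up purely for illustration.)

```python
import random
from statistics import mean

def permutation_test(agent_scores, baseline_scores, n_permutations=10_000, seed=0):
    """Estimate how likely the observed score gap would be if the agent
    were actually no better than the random-input baseline."""
    rng = random.Random(seed)
    observed_gap = mean(agent_scores) - mean(baseline_scores)
    pooled = list(agent_scores) + list(baseline_scores)
    n_agent = len(agent_scores)
    at_least_as_big = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        gap = mean(pooled[:n_agent]) - mean(pooled[n_agent:])
        if gap >= observed_gap:
            at_least_as_big += 1
    return observed_gap, at_least_as_big / n_permutations

# Made-up lines-cleared scores per game; real numbers would have to come
# from actually hooking the LLM (and a random button-masher) up to Tetris.
llm_scores = [3, 1, 4, 2, 5, 3, 2, 4, 3, 2]
random_scores = [1, 0, 2, 1, 0, 1, 2, 0, 1, 1]

gap, p_value = permutation_test(llm_scores, random_scores)
print(f"mean score gap: {gap:.2f}, permutation p-value: {p_value:.3f}")
```

A small p-value here would only say the LLM beats random inputs on this task, not that it plays well.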

[–] IndustryStandard@lemmy.world 1 points 8 hours ago (1 children)
[–] Free_Opinions@feddit.uk 2 points 5 hours ago

It should be able to perform any cognitive task a human can. We already have AI systems that are better at individual tasks.

[–] LifeInMultipleChoice@lemmy.ml 15 points 1 day ago (2 children)

So if you give a human and a system 10 tasks, and the human completes 3 correctly, 5 incorrectly, and fails to complete 2 altogether... and then you give those 10 tasks to the software and it does 9 correctly and fails to complete 1, what does that mean? In general I'd say the tasks need to be defined, as I can give very many tasks to people right now that language models can solve and the people can't, but language models aren't "AGI" in my opinion.

[–] Don_alForno 3 points 10 hours ago (1 children)

Any cognitive task. Not "9 out of the 10 you were able to think of right now."

[–] notfromhere@lemmy.ml 4 points 6 hours ago

"Any" is very hard to benchmark, and is also not how humans are tested.

[–] hendrik@palaver.p3x.de 7 points 1 day ago (2 children)

Agree. And these tasks can't be tailored to the AI in order for it to have a chance. It needs to drive to work, fix the computers/plumbing/whatever there, earn a decent salary and return with some groceries and cook dinner. Or at least do something comparable to a human. Just wording emails and writing boilerplate computer code isn't enough in my eyes. Especially since it even struggles to do that. It's the "general" that is missing.

[–] NeverNudeNo13@lemmings.world 2 points 5 hours ago (1 children)

On the same hand... "Fluently translate this email into 10 random and distinct languages" is a task that 99.999% of humans would fail but that a language model should be able to hit.

[–] hendrik@palaver.p3x.de 2 points 4 hours ago* (last edited 4 hours ago)

Agree. That's a super useful thing LLMs can do. I'm still waiting for Mozilla to integrate Japanese and a few other (distant to me) languages into my browser. And it's a huge step up from Google Translate. It can handle (to a degree) proverbs, nuance, tone... There are a few things AI or machine learning can do very well, and outperform any human by a decent margin.

On the other hand, we're talking about general intelligence here. And translating is just one niche task. By definition that's narrow intelligence. But indeed very useful to have, and I hope this will connect people and broaden their (and my) horizon.

[–] Free_Opinions@feddit.uk 4 points 15 hours ago (1 children)

It needs to drive to work, fix the computers/plumbing/whatever there, earn a decent salary and return with some groceries and cook dinner.

This is more about robotics than AGI. A system can be generally intelligent without having a physical body.

[–] hendrik@palaver.p3x.de 1 points 9 hours ago* (last edited 8 hours ago)

You're - of course - right. Though I'm always a bit unsure about exactly that. We also don't attribute intelligence to books. For example an encyclopedia, or Wikipedia... That has a lot of knowledge stored, yet it is not intelligent. That makes me believe being intelligent has something to do with being able to apply knowledge, and do something with it. And outputting text is just one very limited form of interacting with the world.

And since we're using humans as a benchmark for the "general" part in AGI... Humans have several senses, they're able to interact with their environment in lots of ways, and 90% of that isn't drawing and communicating with words. That makes me wonder: Where exactly is the boundary between an encyclopedia and an intelligent entity... Is intelligence a useful metric if we exclude being able to do anything useful with it? And how much do we exclude by not factoring in parts of the environment/world?

And is there a difference between being book-smart and intelligent? Because LLMs certainly get all of their information second-hand and filtered in some way. They can't really see the world itself, smell it, touch it and manipulate something and observe the consequences... They only get a textual description of what someone did and put into words in some book or text on the internet. Is that a minor or major limitation, and do we know for sure this doesn't matter?

(Plus, I think we need to get "hallucinations" under control. That's also not 100% "intelligence", but it also cuts into actual use if that intelligence isn't reliably there.)

[–] zeca@lemmy.eco.br 7 points 1 day ago* (last edited 1 day ago) (3 children)

It's a definition, but not an effective one in the sense that we can test and recognize it. Can we list all cognitive tasks a human can do? To avoid testing a probably infinite list, we should instead understand what the basic cognitive abilities of humans are that compose all other cognitive abilities we have, if that's even possible. Like the equivalent of a Turing machine, but for human cognition. The Turing machine is based on a finite list of mechanisms and is considered the ultimate computer (in the classical sense of computing, but with potentially infinite memory). But we know too little about whether the limits of the Turing machine are also limits of human cognition.
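
(To make the "finite list of mechanisms" point concrete, here is a minimal, illustrative Turing machine simulator; the transition table, which just increments a binary number, is a toy example and not meant to say anything about cognition itself.)

```python
def run_turing_machine(tape, transitions, state="start", blank="_", max_steps=1000):
    """Simulate a single-tape Turing machine.

    transitions maps (state, symbol) -> (new_state, write_symbol, move),
    where move is -1 (left) or +1 (right). The machine halts when no
    transition applies to the current (state, symbol) pair.
    """
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in transitions:
            break
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank), state

# Example: increment a binary number (head starts at the leftmost bit):
# scan right to the end of the number, then carry 1s to 0 moving left.
increment = {
    ("start", "0"): ("start", "0", +1),
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("carry", "_", -1),
    ("carry", "1"): ("carry", "0", -1),
    ("carry", "0"): ("done", "1", -1),
    ("carry", "_"): ("done", "1", -1),
}

print(run_turing_machine("1011", increment))  # ('1100', 'done')
```

Everything the machine ever does is captured by that one small transition table plus the read/write/move loop, which is what makes it such a clean reference point for "classical" computation.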

[–] barsoap@lemm.ee 1 points 7 hours ago* (last edited 7 hours ago) (1 children)

But we know too little about whether the limits of the Turing machine are also limits of human cognition.

Erm, no. Humans can manually step interpreters of Turing-complete languages, so we're TC ourselves. There is no more powerful class of computation; we can compute any computable function, and our silicon computers can do it as well (given infinite time and scratch space, yada yada theoretical wibbles).

The question isn't "whether"; the answer to that is "yes, of course". The question is first and foremost "what" and then "how", as in "is it fast and efficient enough".

[–] zeca@lemmy.eco.br 1 points 2 hours ago* (last edited 2 hours ago) (1 children)

No, you misread what I said. Of course humans are at least as powerful as a Turing machine; I'm not questioning that. What is unknown is whether Turing machines are as powerful as human cognition. Who says every brain operation is computable (in the classical sense)? Who is to say the brain doesn't take advantage of some weird physical phenomenon that isn't classically computable?

[–] barsoap@lemm.ee 1 points 1 hour ago* (last edited 1 hour ago)

Who is to say the brain doesn't take advantage of some weird physical phenomenon that isn't classically computable?

Logic, from which follows the incompleteness theorem, reified in material reality as cause and effect. Instead of completeness you could throw out soundness (that is, throw out cause and effect) but now the physicists are after you because you made them fend off even more Boltzmann brains. There is theory on hypercomputation but all it really boils down to is "if incomputable inputs are allowed, then we can compute the incomputable". It should be called reasoning modulo oracles.

Or, put bluntly: Claiming that brains are legit hypercomputers amounts to saying that humanity is supernatural, as in aphysical. Even if that were the case, what would hinder an AI from harnessing the same supernatural phenomenon? The gods?

[–] Free_Opinions@feddit.uk 1 points 15 hours ago* (last edited 15 hours ago)

As with many things, it’s hard to pinpoint the exact moment when narrow AI or pre-AGI transitions into true AGI. However, the definition is clear enough that we can confidently look at something like ChatGPT and say it’s not AGI - nor is it anywhere close. There’s likely a gray area between narrow AI and true AGI where it’s difficult to judge whether what we have qualifies, but once we truly reach AGI, I think it will be undeniable.

I doubt it will remain at "human level" for long. Even if it were no more intelligent than humans, it would still process information millions of times faster, possess near-infinite memory, and have access to all existing information. A system like this would almost certainly be so obviously superintelligent that there would be no question about whether it qualifies as AGI.

I think this is similar to the discussion about when a fetus becomes a person. It may not be possible to pinpoint a specific moment, but we can still look at an embryo and confidently say that it’s not a person, just as we can look at a newborn baby and say that it definitely is. In this analogy, the embryo is ChatGPT, and the baby is AGI.

[–] Mikina@programming.dev 170 points 1 day ago (55 children)

Lol. We're as far away from getting to AGI as we were before the whole LLM craze. It's just glorified statistical text prediction: no matter how much data you throw at it, it will still just guess the next most likely letter/token based on what came before it, and it can't even get its facts straight without bullshitting.

If we ever get it, it won't be through LLMs.

I hope someone will finally mathematically prove that it's impossible with current algorithms, so we can finally be done with this bullshitting.

[–] 7rokhym@lemmy.ca 8 points 20 hours ago (1 children)

Roger Penrose wrote a whole book on the topic in 1989. https://www.goodreads.com/book/show/179744.The_Emperor_s_New_Mind

His points are well thought out and argued, but my essential takeaway is that a series of switches is not ever going to create a sentient being. The idea is absurd to me. But for the people who disagree? They have no proof, just a religious fervor, a fanaticism. Simply stated, they want to believe.

All this AI of today is the AI of the 1980s, just with more transistors than we could fathom back then, but the ideas are the same. After the massive surge from our technology finally catching up with 40- to 60-year-old concepts and algorithms, most everything since has just been adding much more data, generalizing models, and other tweaks.

What is a problem is the complete lack of scalability and the massive energy consumption. Are we supposed to dry our clothes at a specific hour of the night, join smart grids to reduce peak air conditioning, and scorn Bitcoin because it uses too much electricity, but for an AI that generates images of people with 6 fingers and other mangled appendages, and that bullshits anything it doesn't know, for that we need to build nuclear power plants everywhere? It's sickening, really.

So no AGI anytime soon, but I am sure Altman has defined it as anything that can make his net worth $1 billion or more, no matter what he has to say or do.

[–] RoidingOldMan@lemmy.world 2 points 41 minutes ago

a series of switches is not ever going to create a sentient being

Is the goal to create a sentient being, or to create something that seems sentient? How would you even tell the difference (assuming it could pass any test a normal human could)?

[–] GamingChairModel@lemmy.world 24 points 1 day ago

I hope someone will finally mathematically prove that it's impossible with current algorithms, so we can finally be done with this bullshitting.

They did! Here's a paper that proves basically that:

van Rooij, I., Guest, O., Adolfi, F. et al. Reclaiming AI as a Theoretical Tool for Cognitive Science. Comput Brain Behav 7, 616–636 (2024). https://doi.org/10.1007/s42113-024-00217-5

Basically, it formalizes the proof that the problem of training a black-box algorithm on a finite universe of human outputs to prompts, such that it can take in any finite input and put out an output that seems plausibly human-like, is NP-hard. And NP-hard problems of that scale are intractable; they can't be solved using the resources available in the universe, even with perfect/idealized algorithms that haven't yet been invented.

This isn't a proof that AI is impossible, just that the method to develop an AI will need more than just inferential learning from training data.
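
(Not the paper's actual reduction, just a back-of-the-envelope illustration of the scale problem: even with toy, made-up numbers for distinct prompts and responses, the space of possible prompt-to-response behaviours a brute-force learner would have to sift through dwarfs any physical resource.)

```python
# Toy numbers, not anything from the paper: just to show how quickly the
# space of possible prompt-to-response behaviours outgrows physical limits.
prompts = 100        # hypothetical number of distinct prompts
responses = 50       # hypothetical number of distinct responses per prompt

behaviours = responses ** prompts            # every mapping of prompts to responses
atoms_in_observable_universe = 10 ** 80      # commonly cited rough estimate

print(f"possible behaviours: {behaviours:.3e}")
print(f"behaviours per atom in the observable universe: "
      f"{behaviours / atoms_in_observable_universe:.3e}")
```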

[–] SlopppyEngineer@lemmy.world 36 points 1 day ago

There are already a few papers about diminishing returns in LLMs.

[–] TheFriar@lemm.ee 16 points 1 day ago (2 children)

The only text predictor I want in my life is T9

[–] rottingleaf@lemmy.world 6 points 1 day ago* (last edited 1 day ago)

I mean, human intelligence is ultimately also "just" something.

And 10 years ago people would often refer to the "Turing test" and imitation games when discussing what is artificial intelligence and what is not.

My complaint about what's now called AI is that it's as similar to intelligence as skin cells grown in the form of a d*ck are similar to a real d*ck, with all its complexity. Or as a real-size toy building is similar to a real building.

But I disagree that this technology will not be present in a real AGI if it's achieved. I think that it will be.

[–] suy@programming.dev 8 points 1 day ago

Lol. We're as far away from getting to AGI as we were before the whole LLM craze. It's just glorified statistical text prediction: no matter how much data you throw at it, it will still just guess the next most likely letter/token based on what came before it, and it can't even get its facts straight without bullshitting.

This is correct, and I don't think many serious people disagree with it.

If we ever get it, it won’t be through LLMs.

Well... it depends. LLMs alone, no, but the researchers who are working on solving the ARC AGI challenge are using LLMs as a basis. The one which won this year is open source (all entries must be, to be eligible for the prize, and they need to run on the private data set), and was based on Mixtral. The "trick" is that they do more than that. All the attempts do extra compute at test time, so they can try to go beyond what their training data alone allows them to do well. The key for generality is trying to learn after you've been trained, to try to solve something that you've not been prepared for.

Even OpenAI's o1 and o3 do that, and so does the one that Google has released recently. They still rely heavily on an LLM, but they do more.

I hope someone will finally mathematically prove that it's impossible with current algorithms, so we can finally be done with this bullshitting.

I'm not sure if it's already proven or provable, but I think this is generally agreed: deep learning alone will be able to fit a very complex curve/manifold/etc., but nothing more. It can't go beyond what it was trained on. But the approaches for generalizing all seem to do more than that: doing search, or program synthesis, or whatever.
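
(A loose sketch of that "search at test time, verify against the task's own examples" idea, not any particular ARC entry: hand-written candidate transforms stand in here for model-proposed programs, and a real solver would generate and score far more candidates.)

```python
from typing import Callable, List, Optional, Tuple

Grid = List[List[int]]

def solve_by_test_time_search(
    demos: List[Tuple[Grid, Grid]],          # (input, output) demonstration pairs
    test_input: Grid,
    candidates: List[Callable[[Grid], Grid]],
) -> Optional[Grid]:
    """Try each candidate program; keep the first one that reproduces
    every demonstration pair, then apply it to the test input."""
    for program in candidates:
        if all(program(inp) == out for inp, out in demos):
            return program(test_input)
    return None  # no candidate explained the demonstrations

# Hand-written stand-ins for model-proposed programs.
candidates = [
    lambda g: [row[::-1] for row in g],             # mirror left-right
    lambda g: g[::-1],                              # flip top-bottom
    lambda g: [[v * 2 for v in row] for row in g],  # double every value
]

# Toy task: the hidden rule is "mirror left-right".
demos = [([[1, 2], [3, 4]], [[2, 1], [4, 3]]),
         ([[5, 0], [0, 7]], [[0, 5], [7, 0]])]

print(solve_by_test_time_search(demos, [[1, 0], [0, 9]], candidates))
# expected: [[0, 1], [9, 0]]
```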

[–] feedum_sneedson@lemmy.world 14 points 1 day ago (1 children)

I just tried Google Gemini and it would not stop making shit up, it was really disappointing.

[–] zerozaku@lemmy.world 2 points 17 hours ago

Gemini is really far behind. For me it's ChatGPT > Llama >> Gemini. I haven't tried Claude since they require a mobile number to use it.

[–] adarza@lemmy.ca 303 points 1 day ago (7 children)

AGI (artificial general intelligence) will be achieved once OpenAI has developed an AI system that can generate at least $100 billion in profits

nothing to do with actual capabilities... just the ability to make piles and piles of money.

[–] NotSteve_@lemmy.ca 23 points 1 day ago

That's an Onion level of capitalism

[–] floofloof@lemmy.ca 94 points 1 day ago

The same way these capitalists evaluate human beings.

[–] LostXOR@fedia.io 45 points 1 day ago (19 children)

Guess we're never getting AGI, then; there's no way they end up with that much profit before this whole AI bubble collapses and their value plummets.

[–] frezik@midwest.social 72 points 1 day ago (3 children)

We taught sand to do math

And now we're teaching it to dream

All the stupid fucks can think to do with it

Is sell more cars

Cars, and snake oil, and propaganda

[–] ChowJeeBai@lemmy.world 44 points 1 day ago (2 children)

This is just so they can announce at some point in the future that they've achieved AGI to the tune of billions in the stock market.

Except that it isn't AGI.
