this post was submitted on 08 Oct 2024
224 points (93.1% liked)

Technology

top 41 comments
[–] lambda_notation@lemmy.ml 57 points 3 days ago (2 children)

"They used physics to do it" is just a laughably pathetic motivation. Nobel hated "abstract wankery" or "intellectual masturbation" and wanted to promote results which benefitted the common man and society directly. This is incidentally also why there doesn't exist a Nobel prize in economics. The nobel prize comitte has since long abandoned Nobel's will in this matter and it is anyones guess what the order of magnitude of spin Nobel's corpse has accumulated.

> it is anyone's guess what order of magnitude of spin Nobel's corpse has accumulated.

I'm guessing it's nearing the theoretical limits of "abstract wankery."

[–] prole@sh.itjust.works -3 points 2 days ago* (last edited 2 days ago) (1 children)

> The Nobel Prize committee long ago abandoned Nobel's will in this matter

How long ago? Wouldn't this just suggest that the criteria are simply different at this point? Complex electronic devices didn't exist back then, so one can really only guess at what he would think, because he's been dead for a very long time...

The prize is named after the dude, he doesn't get to decide the rules of its award in perpetuity.

> The prize is named after the dude, he doesn't get to decide the rules of its award in perpetuity.

Yes he does. It's the law.

Nobel's last will specified that his fortune be used to create a series of prizes for those who confer the "greatest benefit on mankind".

[–] expatriado@lemmy.world 66 points 3 days ago (4 children)

The physics Nobel Prize awarded for a computer science achievement; actual physics is having a dry spell, I guess.

[–] kamenlady@lemmy.world 38 points 3 days ago (1 children)

> Beyond recognizing the laureates' inspirations from condensed-matter physics and statistical mechanics, the prize celebrates interdisciplinarity. At its core, this prize is about how elements of physics have driven the development of computational algorithms to mimic biological learning, impacting how we make discoveries today across STEM.

They explain the flex at least

[–] sugar_in_your_tea@sh.itjust.works 12 points 3 days ago (1 children)

Seems like a pretty extreme flex, I'm worried it'll snap.

[–] catloaf@lemm.ee 5 points 3 days ago

If they award a Nobel for materials science, this should win.

[–] demesisx@infosec.pub 6 points 3 days ago

Narrator: It isn’t.

[–] CarbonIceDragon@pawb.social 3 points 3 days ago (1 children)

To be fair, regardless of one's stance on the utility of current AI or the wisdom of developing it, it is an extremely difficult and potentially world-changing technical achievement, and given there isn't a computer science prize, physics is probably the most relevant category for it.

[–] vrighter@discuss.tchncs.de 9 points 3 days ago (1 children)

Not really. A lot of the techniques have been known for decades; what we didn't have back then was insane compute power.

And there's the Turing Award for computer science.

[–] PixelProf@lemmy.ca 7 points 2 days ago* (last edited 2 days ago)

Insane compute wasn't everything. Hinton helped develop the techniques that allowed more data to be processed in more layers of a network without totally losing coherence. It was more of a toy before then because there were caps on how much data could be used, how many layers of a network could be trained, and, I believe, even on whether GPUs could be used efficiently for ANNs, though I could be wrong on that last one.

Either way, after Hinton's research in ~2010-2012, problems that had seemed extremely difficult to solve (e.g., classifying images and identifying objects in them) became borderline trivial, and in under a decade ANNs went from an almost fringe technology that many researchers saw as a toy, useful for a few problems, to dominating essentially all AI research and CS funding. In almost no time, every university suddenly needed machine learning specialists on payroll, and now, about 10 years later, we are pumping out papers and tech that once seemed many decades away... every year... across a very broad range of problems.

The 580 and CUDA made a big impact, but Hinton's work was absolutely pivotal in being able to utilize that and to even make ANNs seem feasible at all, and it was an overnight thing. Research very rarely explodes this fast.

Edit: I guess also worth clarifying, Hinton was also one of the few researching these techniques in the 80s and has continued being a force in the field, so these big leaps are the culmination of a lot of old, but also very recent work.
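The "capping out" of trainable depth described above has one well-known concrete face: the vanishing-gradient effect. This is a toy numerical sketch of that effect only, under made-up assumptions (random layers, arbitrary sizes), not a reconstruction of the laureates' actual work:

```python
import numpy as np

def sigmoid_grad(z):
    """Derivative of the logistic sigmoid; never exceeds 0.25."""
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1 - s)

def relu_grad(z):
    """Derivative of ReLU; exactly 0 or 1."""
    return (z > 0).astype(float)

def backprop_norm(activation_grad, depth=20, width=32, seed=0):
    """Push a gradient vector back through `depth` random layers and
    return its final norm (chain rule: multiply per-layer factors)."""
    rng = np.random.default_rng(seed)
    grad = np.ones(width)
    for _ in range(depth):
        W = rng.normal(0, 1 / np.sqrt(width), (width, width))
        pre = rng.normal(size=width)  # stand-in pre-activation values
        grad = (W.T @ grad) * activation_grad(pre)
    return float(np.linalg.norm(grad))

print(backprop_norm(sigmoid_grad))  # collapses towards zero
print(backprop_norm(relu_grad))     # stays orders of magnitude larger
```

Because the sigmoid's slope caps at 0.25, twenty layers multiply the error signal by at most 0.25 per layer, which is one reason deep sigmoid networks trained so poorly before the techniques mentioned above.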

[–] skillissuer@discuss.tchncs.de 1 points 2 days ago (1 children)

i'm here to remind you that for the last 20ish years, half the time the chemistry nobel goes to biologists, and now they've doubled down on ai wankery by giving it to alphafold

[–] AnarchistArtificer@slrpnk.net 6 points 2 days ago

To be fair, AlphaFold is pretty incredible. I remember when it was first revealed (before they open-sourced parts of it): the scientific community was shocked by how effective it was and had assumed it would be technologically far more complex than it ended up being. Systems biologist Mohammed AlQuraishi captures this quite well in this blog post.

I'm a biochemist who has more interest in the computery side of structural biology than many of my peers, so I often have people asking me stuff like "is AlphaFold actually as impressive as they say, or is it just more overhyped AI nonsense?". My answer is "Yes."

[–] msantossilva@sh.itjust.works 27 points 3 days ago (2 children)

I guess some people are genuinely concerned about AI wiping out humanity. Do not worry, that will never happen. We are already doing a fine job fostering our own extinction. If we keep going down our current path, those soulless robots will never even get the chance.

Now, in truth, I do not know what will kill us first, but I reckon it is important to stay positive.

[–] scarabic@lemmy.world 4 points 2 days ago

What’s laughable are the “terminator” scenarios where it suddenly comes to life in an instant and in that moment already has the power to wipe us out, and then does so.

A more likely scenario is that we come to rely heavily on AI more and more as time goes by, until it truly does have a grip on resource supply chains, manufacturing facilities, energy plants, etc. And I don’t just mean that machine learning gets used in all of those contexts because we are already there. I’m talking about custodial authority. We’ve ceded those duties to it in large part - can’t do those jobs without AI.

Then a malicious AI could put a real squeeze on humanity. It wouldn’t need to be a global war. Just enough disruption that we starve and begin to war among ourselves. Has anyone ever noticed how many of us there are now? Our population would absolutely fall apart without our massive industrial and agricultural complexes running full time.

[–] slaacaa@lemmy.world 9 points 3 days ago (1 children)

I mean, it's definitely helping, but not in the way I imagined. It is becoming a major driver of CO2 emissions due to the large computational power it needs, which will only increase in the future. The planet is boiling, and they will keep building more server farms for the next LLM upgrade, giving up on stopping/controlling climate change.

[–] mindaika@lemmy.dbzer0.com 6 points 3 days ago

Wouldn’t that be something: we choke to death trying to create a supercomputer to tell us to stop doing exactly that

True irony

[–] Letstakealook@lemm.ee 16 points 3 days ago (5 children)

Predictive text algorithms will not wipe out humanity. 🙄
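For context on what "predictive text" means at its crudest, here is a toy bigram model; the corpus and names are made up for illustration, and LLMs are vastly more elaborate, though they do share the next-token objective:

```python
from collections import defaultdict

# Count how often each word follows each other word in a tiny corpus,
# then predict the most frequent follower. Purely illustrative.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(word):
    """Return the most frequently observed follower of `word`, or None."""
    followers = counts[word]
    return max(followers, key=followers.get) if followers else None

print(predict("the"))  # "cat" (seen twice, vs. "mat" and "fish" once each)
print(predict("cat"))  # "sat" ("sat" and "ate" tie; max keeps the first seen)
```

Whether scaling this objective up with learned representations and long contexts amounts to more than "predictive text" is exactly what the rest of the thread argues about.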

[–] keropoktasen@lemmy.world 8 points 3 days ago (1 children)

The Artificial Intelligence field is much broader than that limited definition of yours.

[–] Letstakealook@lemm.ee 3 points 2 days ago (1 children)

And yet the only tech any company is interested in using is LLMs, which are about to fall flat on their face. What tech in the field is close to being able to think for itself and truly act autonomously?

[–] RatherBeMTB@sh.itjust.works -2 points 2 days ago* (last edited 2 days ago) (2 children)

I can tell you don't use AI. It's frightening how good it is. Edited "good"😂

[–] Letstakealook@lemm.ee 5 points 2 days ago

That's just true believer talk. It really is trash.

[–] dubyakay@lemmy.ca 4 points 2 days ago (1 children)

> It's frightening how God it is.

Intentional?

[–] phdepressed@sh.itjust.works 2 points 2 days ago

Written by AI?

[–] SkyNTP@lemmy.ml 14 points 3 days ago

The problem isn't the technology. The problem is the people losing their minds about it.

[–] sunbeam60@lemmy.one 0 points 1 day ago (1 children)

Such a lame hot take. Do you understand how language models work? To claim there’s no higher order understanding is frankly laughable.

[–] Letstakealook@lemm.ee 2 points 1 day ago (1 children)

If you legitimately believe LLMs "understand" anything at all, I really don't believe there's anything to discuss with you. That is a completely absurd notion at this stage.

[–] sunbeam60@lemmy.one 1 points 1 day ago

Well, why don’t you argue with the guy who spearheaded the backpropagation algorithm, spends his whole day thinking about it, and won the Nobel Prize in Physics, rather than with me? I’m not advancing some fanciful notion unsupported by evidence. If they just predict text, how can they solve riddles they’ve never encountered in their training materials? Are you claiming the logical solution is just text statistics?
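For reference, the backpropagation algorithm invoked above is compact enough to sketch from scratch: run a forward pass, then push the error gradient backwards through the chain rule. This tiny 2-4-1 network learning XOR is illustrative only; the sizes, seed, and learning rate are arbitrary choices, not anything from Hinton's work:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_xor(steps=10000, lr=0.5, seed=42):
    """Train a 2-4-1 sigmoid network on XOR with hand-written backprop."""
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    rng = np.random.default_rng(seed)
    W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
    W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)
    for _ in range(steps):
        # forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # backward pass: chain rule, output layer first, then hidden
        d_out = (out - y) * out * (1 - out)  # squared-error gradient at layer 2
        d_h = (d_out @ W2.T) * h * (1 - h)   # gradient pushed back to layer 1
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)
    # final forward pass with the trained weights
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    return out, float(np.mean((out - y) ** 2))

preds, loss = train_xor()
print(preds.ravel(), loss)  # predictions should drift toward [0, 1, 1, 0]
```

The backward pass is the whole trick: each layer's gradient is computed from the one after it, so the same two lines scale to arbitrarily deep networks.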

[–] EtherWhack@lemmy.world 5 points 3 days ago (1 children)
[–] Letstakealook@lemm.ee 5 points 3 days ago

That doesn't really count, lol. The reality is, we've already killed ourselves; we just won't admit it yet. The climate effects we're seeing today aren't even from recent emissions. Mr. Bones' Wild Ride has only just begun, and there's no getting off.

[–] ParetoOptimalDev@lemmy.today 2 points 3 days ago (1 children)

Maybe the Nobel should have gone to you.

[–] Letstakealook@lemm.ee 2 points 3 days ago

The prize has nothing to do with these claims. Furthermore, past accomplishments do not make a person infallible. Nice ad hominem, though.

[–] vrighter@discuss.tchncs.de 10 points 3 days ago

And physicists use tools from math, so Fields Medals should be awarded to physicists.

[–] Etterra@lemmy.world 5 points 3 days ago (1 children)

I mean we do kind of deserve it. But at least we've had a good run.

[–] sunbeam60@lemmy.one 0 points 1 day ago

Was our run really that good? We killed a bunch of species, drained our planet of resources and belched pollution into the air. I wouldn’t be surprised if the AIs manage to steward our planet better.

[–] zlatiah@lemmy.world 6 points 3 days ago

So it was the physics Nobel... I see why the Nature News coverage called it "scooped" by machine learning pioneers

Since the news tried to be sensational about it... I tried to see what Hinton meant by fearing the consequences. I believe he is genuinely trying to prevent AI development from proceeding without proper regulation. This is a policy paper he was involved in (https://managing-ai-risks.com/). It mentions some genuine concerns. Quoting them:

"AI systems threaten to amplify social injustice, erode social stability, and weaken our shared understanding of reality that is foundational to society. They could also enable large-scale criminal or terrorist activities. Especially in the hands of a few powerful actors, AI could cement or exacerbate global inequities, or facilitate automated warfare, customized mass manipulation, and pervasive surveillance"

like bruh people already lost jobs because of ChatGPT, which can't even do math properly on its own...

Also, there's quite some irony in the preprint's line "Climate change has taken decades to be acknowledged and confronted; for AI, decades could be too long", considering that a serious risk of AI development is its climate impact.

[–] mindaika@lemmy.dbzer0.com 4 points 3 days ago* (last edited 3 days ago)

It’s probably easier to righteously quit your job after a decade of collecting senior executive salary

Also: physics?

[–] Pieresqi@lemmy.world 5 points 3 days ago

Yeesh, everyone now jumps on the AI hype train.