[–] rf_@lemmy.world 1 points 4 months ago (1 children)

The company is called "Safe Superintelligence". I'm not a fan of names like this; it's kind of like a company calling itself "Safe Airplanes". Something about it makes me think it won't live up to the name.

Not sure how they plan on raising money when so many other AI companies are promising commercialization. A company prioritizing safety will be beaten by one prioritizing profit. A company like this could have flourished in the time before OpenAI, but right now there's so much demand for GPUs and talent that it's very challenging to catch up, even more so when less scrupulous companies offer engineers more money. They'd have to hire from a smaller, more limited pool of applicants who believe in the mission.

[–] UnderpantsWeevil@lemmy.world 0 points 4 months ago

A big part of the AI hype cycle has been "AIs are potentially too powerful for us to control, but also too much of a national security threat to ignore". So you get these media hacks insisting we need a super-intelligent artificial mind that is firmly within the grip of its creator.

As a consequence of the hype outstripping any real utility from these machines, you've got some top board members of these firms spinning out their own boutique branches of the industry, insisting that prior iterations are too dangerous or too constrained to fulfill their intended role as techno-utopian machine gods.

> Not sure how they plan on raising money when so many other AI companies are promising commercialization.

The sensationalist bullshit is how they plan to make money. "Don't trust Alice's AI, it's too dangerous! I'm the Safe AI" versus "Don't trust Bob's AI, it's too limited. I'm the Ambitious AI". Then Wall Street investment giants, who don't know shit from shoelaces, throw gobs of money at both while believing they've hedged their bets. And a few years after that, when these firms don't produce anything remotely as fantastical as they promised, we get a giant speculative bubble collapse that takes out half the energy or agricultural sector as collateral damage.

In twenty years, we'll be reading books titled "How AI Destroyed The Orange", describing the convoluted chain of events that tied fertilizer prices to debt-swaps on machine learning centers and resulted in almost all of Florida's biggest cash crop being lost to a hiccup in the NASDAQ between 2026 and 2029.