this post was submitted on 22 Dec 2024
301 points (95.7% liked)

[–] LenielJerron@lemmy.world 101 points 14 hours ago* (last edited 14 hours ago) (2 children)

A big issue that a lot of these tech companies seem to have is that they don't understand what people want; they come up with an idea and then shove it into everything. There are services that I have actively stopped using because they started cramming AI into things; for example I stopped dual-booting with Windows and became Linux-only.

AI is legitimately interesting technology which definitely has specialized use-cases, e.g. sorting large amounts of data, or optimizing strategies within highly constrained circumstances (like chess or Go). However, as a member of the general public, 99% of what people are pushing as AI these days just seems like garbage: bad art, bad translations, and incorrect answers to questions.

I do not understand all the hype around AI. I can understand the danger: people who don't see that it's bad are using it in place of people who know how to do things. But in my teaching, for example, I've never had any issues with students cheating using ChatGPT; I semi-regularly run the problems I assign through ChatGPT and it gets enough of them wrong that I can't imagine any student would be inclined to use ChatGPT to cheat multiple times after their first grade comes in. (In this sense, it's actually impressive technology - we've had computers that can do advanced math highly accurately for a while, but we've finally developed one that's worse at math than the average undergrad in a gen-ed class!)

[–] Brodysseus@lemmy.dbzer0.com 5 points 6 hours ago

I've run some college hw through 4o just to see, and it's remarkably good at generating proofs for math and algorithms. Sometimes it's not quite right, but it's usually on the right track, enough to get started.

In some of the busier classes I'm almost certain students do this because my hw grades would be lower than the mean and my exam grades would be well above the mean.

[–] Voroxpete@sh.itjust.works 38 points 12 hours ago (1 children)

The answer is that it's all about "growth". The fetishization of shareholders has reached its logical conclusion, and now the only value companies have is in growth. Not profit, not stability, not a reliable customer base or a product people will want. The only thing that matters is if you can make your share price increase faster than the interest on a bond (which is pretty high right now).

To make share price go up like that, you have to do one of two things: show that you're bringing in new customers, or show that you can make your existing customers pay more.

For the big tech companies, there are no new customers left. The whole planet is online. Everyone who wants to use their services is using their services. So they have to find new things to sell instead.

And that's what "AI" looked like it was going to be. LLMs burst onto the scene promising to replace entire industries, entire workforces. Huge new opportunities for growth. Lacking anything else, big tech went in HARD on this, throwing untold billions at partnerships, acquisitions, and infrastructure.

And now they have to show investors that it was worth it. Which means they have to produce metrics that show people are paying for, or might pay for, AI-flavoured products. That's why they're shoving it into everything they can. If they put AI in Notepad, then they can claim that every time you open Notepad you're "engaging" with one of their AI products. If they put Recall on your PC, every Windows user becomes an AI user. Google can now claim that every search is an AI interaction because of the bad summary that no one reads. The point is to show "engagement" and "interest", which they can then use to promise that, down the line, huge piles of money will fall out of this piñata.

The hype is all artificial. They need to hype these products so that people will pay attention to them, because they need to keep pretending that their massive investments got them in on the ground floor of a trillion dollar industry, and weren't just them setting huge piles of money on fire.

[–] MagicShel@lemmy.zip 6 points 10 hours ago* (last edited 10 hours ago) (4 children)

I know I'm an enthusiast, but can I just say I'm excited about NotebookLM? I think it will be great for documenting application development. Having a shared notebook that knows the environment, configuration, architecture, and standards for an application, and can answer specific questions about it, could be really useful.

"AI Notepad" is really underselling it. I'm trying to load up massive Markdown documents to feed into NotebookLLM to try it out. I don't know if it'll work as well as I'm hoping because it takes time to put together enough information to be worthwhile in a format the AI can easily digest. But I'm hopeful.

That's not to take away from your point: the average person probably has little use for this, and wouldn't want to put in the effort to make it worthwhile. But spending way too much time obsessing about nerd things is my calling.

[–] Voroxpete@sh.itjust.works 11 points 7 hours ago (1 children)

From a nerdy perspective, LLMs are actually very cool. The problem is that they're grotesquely inefficient. That means that, practically speaking, whatever cool use you come up with for them has to work in one of two ways: either a user runs it themselves, typically very slowly or on a pretty powerful computer, or it runs as a cloud service, in which case that cloud service has to figure out how to be profitable.

Right now we're not being exposed to the true cost of these models. Everyone is in the "give it out cheap / free to get people hooked" stage. Once the bill comes due, very few of these projects will be cool enough to justify their costs.

Like, would you pay $50/month for NotebookLM? However good it is, I'm guessing it's probably not that good. Maybe it is. Maybe that's a reasonable price to you. It's probably not a reasonable price to enough people to sustain serious development on it.

That's the problem. LLMs are cool, but mostly in a "hey, this is kind of neat" way. They do things that are useful but not essential, and they do so at an operating cost that only works for things that are essential. You can't run them on fun money, but you can't make a convincing case for selling them at serious money.

[–] MagicShel@lemmy.zip 7 points 7 hours ago

Totally agree. It comes down to how often this thing is actually worth it to me if I pay the true cost. At work, yes, it would save over $50/mo if it works well. At home it would be difficult to justify that cost, but I'd also use it less, so the cost could be lower. I currently pay $50/mo between ChatGPT and NovelAI (and the latter doesn't operate at a loss), so it's worth a bit to me just to nerd out over it. It certainly doesn't save me money, except in the sense that it's time and money I don't spend on some other endeavor.

My old video card is painfully slow for local LLMs, but I dream of spending on a big card that runs closer to cloud speeds, even if the quality is lower, for easier tasks.

[–] FarceOfWill@infosec.pub -1 points 9 hours ago (1 children)

You're using the wrong tool.

Hell, Notepad is the wrong tool for every use case; it exists in case you've broken things so thoroughly on Windows that you need to edit a file to fix it. It's the text editor of last resort, a dumb, simple file editor that's always there when you need it.

Adding any feature (except possibly a hex editor) makes it worse at its only job.

[–] MagicShel@lemmy.zip 4 points 8 hours ago* (last edited 8 hours ago) (1 children)

... I don't use Notepad. For anything. Hell, I don't even use Windows.

Not sure where the wires got crossed here.

[–] tja@sh.itjust.works -1 points 7 hours ago (1 children)

Then either your first post was a reply to the wrong comment, or you misread "Windows putting AI into Notepad" as NotebookLM? Because if not, there's nothing obvious connecting your post to the parent.

[–] MagicShel@lemmy.zip 1 points 7 hours ago* (last edited 7 hours ago) (1 children)

I don't think anyone is putting AI into Notepad. It reads to me like a response to NotebookLM, but maybe I was wrong.

I did at least explain what my vision is and why I want it, which... doesn't sound anything like Notepad, I think.

[–] tja@sh.itjust.works 4 points 6 hours ago (1 children)

I don't think [...]

Well, you think wrong: https://blogs.windows.com/windows-insider/2024/11/06/new-ai-experiences-for-paint-and-notepad-begin-rolling-out-to-windows-insiders/

I did at least explain what my vision is and why I want it, which... doesn't sound anything like Notepad, I think.

Might be, but the person you responded to wrote about Windows putting AI into Notepad, so everyone assumed you were responding to that and not writing about something that wasn't even mentioned.

[–] MagicShel@lemmy.zip 4 points 6 hours ago

I stand corrected. Thank you. I hadn't heard about that. Notepad has always been no frills, and I can't see integrating AI into it over just using AI directly, but they are, and it seems silly, I agree.

[–] einlander@lemmy.world 25 points 15 hours ago (1 children)
[–] SlopppyEngineer@lemmy.world 7 points 14 hours ago (2 children)

The article does mention that when the AI bubble deflates, the big players will take the defunct AI infrastructure and fold it into their cloud business to grab more of that market and, in the end, make the line go up.

[–] Voroxpete@sh.itjust.works 8 points 11 hours ago* (last edited 7 hours ago)

That's not what the article says.

They're arguing that AI hype is being used as a way of driving customers towards cloud infrastructure over on-prem. Once a company makes that choice, it's very hard to get them to go back.

They're not saying that AI infrastructure specifically can be repurposed, just that in general these companies will get some extra cloud business out of the situation.

AI infrastructure is highly specialized, and much like ASICs for the blockchain nonsense, will be somewhere between "very hard" and "impossible" to repurpose.

[–] Alphane_Moon@lemmy.world 3 points 13 hours ago (1 children)

Assuming a large decline in demand for AI compute, what would be the use cases for renting out older AI compute hardware on the cloud? Where would the demand come from? Prices would also go down with a decrease in demand.

[–] SlopppyEngineer@lemmy.world 7 points 13 hours ago (1 children)
[–] Alphane_Moon@lemmy.world 2 points 12 hours ago

Haha. I believe the AMD Instinct / Nvidia Datacentre GPUs aren't that great for gaming.

[–] walter_wiggles@lemmy.nz 8 points 13 hours ago

Big tech is out of ideas and needs AI to work in order to drive growth.

[–] Novamdomum@fedia.io -1 points 11 hours ago (2 children)

"Today’s hype will have lasting effects that constrain tomorrow’s possibilities."

Nope. No it won't. I'd love to have the patience to be more diplomatic but they're just wrong... and dumb.

I'm getting so sick of these anti AI cultists who seem to be made up of grumpy tech nerds behaving like "I was using AI before it was cool" hipsters and panicking artists and writers. Everyone needs to calm their tits right down. AI isn't going anywhere. It's giving creative and executive options to millions of people that just weren't there before.

We're in an adjustment phase right now and boundaries are being re-drawn around what constitutes creativity. My leading theory at the moment is that we'll all mostly eventually settle down to the idea that AI is just a tool. Once we're used to it and less starry-eyed about its output, then individual creativity, possibly supported by AI tools, will flourish again. It's going to come down to the question of whether you prefer reading something cogitated, written, drawn or motion-rendered by AI, or you enjoy the perspective of a human being more. Both will be true in different scenarios, I expect.

Honestly, I've had to nope out of quite a few forums and servers permanently now because all they do in there is circlejerk about the death of AI. Like this one theory that keeps popping up that image generating AI specifically is inevitably going to collapse in on itself and stop producing quality images. The reverse is so obviously true but they just don't want to see it. Otherwise smart people are just being so stubborn with this and it's, quite frankly, depressing to see.

Also, the tech nerds arguing that AI is just a fancy word- and pixel-regurgitating engine and that we'll never have an AGI are probably the same people that were really hoping Data would be classified as a sentient lifeform when Bruce Maddox wanted to disassemble him in "The Measure of a Man".

How's that for whiplash?

[–] sudneo@lemm.ee 20 points 9 hours ago (2 children)

Models are not improving, companies are still largely (massively) unprofitable, the tech has a very high environmental impact (and demand), and no solid business case has been found so far (despite very large investments) after 2 years.

That AI isn't going anywhere is possible, but LLM-based tools might also simply follow crypto, VR, metaverses and the other tech "revolutions" that were just hyped and went nowhere. I can't say it will go one way or another, but I disagree with you about an "adjustment period". I think generative AI is cool and fun, but it's a toy. If companies don't make money with it, they will eventually stop investing in it.

Also

Today’s hype will have lasting effects that constrain tomorrow’s possibilities

Is absolutely true. Wasting capital (human and economic) on something means that it won't be used for something else instead. This is especially true now that it's so hard to get investment for any other business. If all the money right now goes into AI, and IF this turns out to be just hype, we will have collectively lost 2, 4, 10 years of research and investment in other areas (for example, environmental protection). I am really curious what makes you think that that sentence is false and stupid.

[–] GhiLA@sh.itjust.works 7 points 11 hours ago (1 children)

It's fucking fantastic news, tbh.

Here's my take, let them dismiss it.

Let em! Remember Bitcoin at $15k after 2019?

Let em! And it's justified! If AI isn't important right now, then why should its price be inflated to oblivion? Let it fall. Good! Lower prices for those of us that do see the value down the road.

That's how speculative investment works. In no way is this bad. Are sales bad? Sit back and enjoy the show.

[–] Voroxpete@sh.itjust.works 4 points 8 hours ago

Are sales bad?

Of AI products? By all available metrics, yes, sales for AI driven products are atrocious.

Even the biggest name in AI is desperately unprofitable. OpenAI has only succeeded in converting 3% of their free users to paid users. To put that in perspective, 40% of regular Spotify users are on premium plans.

And those paid plans don't even cover what it costs to run the service for those users. Currently OpenAI are intending to double their subscription costs over the next five years, and that still won't be enough to make their service profitable. And that's assuming that they don't lose subscribers over those increased costs. When their conversion rate at their current price is only 3%, there's not exactly an obvious appetite to pay more for the same thing.

And that's the headline name. The key driver of the industry. And the numbers are just as bad everywhere else you look, either terrible, or deliberately obfuscated (remember, these companies sank billions of capex into this; if sales were good they'd be talking very openly and clearly about just how good they are).

[–] UraniumBlazer@lemm.ee -3 points 12 hours ago (6 children)

I have no idea how people can consider this to be a hype bubble especially after the o3 release. It smashed the ARC AGI benchmark on the performance front. It ranks as the 175th best competitive coder in the world on Codeforces' leaderboard.

o3 proved that it is possible to have at least an expert AGI, if not a virtuoso AGI (according to DeepMind's definition of AGI). Sure, it's not economical yet. But it will get there very soon (just like how the earlier GPTs were a lot dumber and took a lot more energy than the newer, smaller-parameter models).

Please remember - fight to seize the means of production. Do not fight the means of production themselves.

[–] r4venw@sh.itjust.works 3 points 7 hours ago

Where, in that position piece, do they mention o3? Who "proved" this?

Additionally, I'm pretty sure that this "ARC AGI" benchmark is not using the same definition of AGI as the DeepMind one you linked to. Conflating them is misleading. There is already so much misinformation out there about "AI"; don't add to it.

Lastly, I struggle to take at face value essays written by for-profit companies claiming they have AGI (that DeepMind paper links to OpenAI essays). They only stand to gain monetarily by claiming that their AI is an AGI (to be clear, this is an opinion; I do not have evidence to suggest that OpenAI is being disingenuous).

[–] Voroxpete@sh.itjust.works 18 points 11 hours ago

It's a bubble because OpenAI spend $2.35 for every $1.00 they make. Yes, you're mathing right, that is a net loss.

It's a bubble because all of the big players in AI development agree that future models will cost exponentially more money to train, for incremental gains. That means there is no path forward that doesn't intensely amplify the unprofitability of an already deeply unprofitable industry.

It's a bubble because newer models with better capabilities only cost more and more to run.

It's a bubble because as far as anyone knows there will never be a solution to the hallucination problem.

It's a bubble because despite investments treating it as a trillion dollar industry, no one has yet figured out a trillion dollar problem that AI can solve.

You're trying on a new top-of-the-line VR headset and saying "Wow, this is incredible, how can anyone say this is a bubble?" It's not about how cool the tech is in isolation, it's about its potential to effect widespread change. Facebook went in hard on VR, imagining a future where everyone worked from home while wearing VR headsets. But what they got was an expensive toy that only had niche uses.

AI performs so well on certain coding tasks because a lot of the individual problems that make up a particular piece of software have already been solved. It's standard practice to design programs as individual units, each of which performs the smallest task possible, and which can then be assembled to complete more complex tasks. This fits very well into the LLM model of assembling pieces into their most likely expected configurations. But it cannot create truly novel code, except by a kind of trial-and-error mutation process. It cannot problem-solve. It cannot identify a user's needs and come up with ideal solutions to them. It cannot innovate.

This means that, at best, genAI in the software world becomes a tool for producing individual code elements, guided and shepherded by experienced programmers. It does not replace the software industry, merely augments it, and it does so at a cost that many companies simply may not feel is worth paying.

And that's its best case scenario. In every other industry AI has been a spectacular failure. But it's being invested in as if it will be a technological reckoning for every form of intellectual labour on earth. That is the absolute definition of a bubble.

[–] Omega_Jimes@lemmy.ca 12 points 10 hours ago (3 children)

o3 made the high score on ARC through brute force, not by being good. Raising the score from 75% to 87% required 175 times more computing power, which is not exactly stunning returns.
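
For what it's worth, here's a quick back-of-the-envelope sketch using the figures quoted in this comment (thread numbers, not official ones), which makes the diminishing returns pretty stark:

```python
# Rough back-of-the-envelope on the figures quoted above:
# 75% -> 87% on ARC-AGI at roughly 175x the compute (thread numbers, not official ones).
low_score, high_score = 0.75, 0.87
compute_multiplier = 175

absolute_gain = high_score - low_score                        # 0.12, i.e. 12 percentage points
compute_per_point = compute_multiplier / (absolute_gain * 100)  # ~14.6x compute per extra point

print(f"{compute_multiplier}x compute bought {absolute_gain:.0%} more accuracy")
print(f"that's roughly {compute_per_point:.1f}x compute per extra percentage point")
```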

[–] dustyData@lemmy.world 5 points 10 hours ago (1 children)

Unless we invent cold fusion within the next 5 years, they will never be economical. They are the most energy-inefficient thing ever invented by humanity, and all prediction models state that it will cost more energy, not less, to keep making them better. They will never be energy efficient nor economical in their current state, and most companies are out of ideas on how to shake it up. Even the people who created generative models agree that they have just been brute-forcing it by making the models larger, with more energy consumption. When you try to make them smaller or more energy efficient, they fall off the performance cliff and only produce garbage. I'm sure there are researchers doing cool stuff, but it is neither economical nor efficient.

[–] ricdeh@lemmy.world 2 points 9 hours ago

Untrue. There are small models that produce better output than the previous "flagships" like GPT-2. Also, you can achieve much more than we currently do with far less energy by working on novel, specialised hardware (neuromorphic computing).

[–] queermunist@lemmy.ml 5 points 11 hours ago

Your example is strange because, as far as I know, GPTs aren't economical either.

[–] SupraMario@lemmy.world 1 points 9 hours ago (1 children)

Why is it getting an AGI stamp now? I was under the impression humanity has not delivered a sentient AI? Which is what the AGI title was supposed to be used for...has that been pulled back again?

[–] communist@lemmy.frozeninferno.xyz 5 points 7 hours ago* (last edited 7 hours ago) (1 children)

AGI has nothing to do with sentience, which cannot be measured. OpenAI defines it, I think validly, as a system that can do all intellectual labor.

[–] SupraMario@lemmy.world 1 points 6 hours ago (1 children)

So it's now, can it do anything a human can do?...sans emotional traits.

[–] communist@lemmy.frozeninferno.xyz 4 points 6 hours ago* (last edited 6 hours ago)

It was never about sentience; sentience is a meaningless, unmeasurable term.

It's a question of if it can replace humans in the workforce.

Artificial general intelligence means it's able to generalize its intelligence; it's not about sentience at all.
