ShakingMyHead

joined 4 months ago
[–] ShakingMyHead@awful.systems 6 points 3 months ago

Also doxxing them when giving their refund.

[–] ShakingMyHead@awful.systems 16 points 3 months ago (1 children)

A mouse that lasts forever... until, y'know, it breaks, because it's a piece of hardware that actively gets worn out.

[–] ShakingMyHead@awful.systems 15 points 3 months ago

Bosses are urging employees to increase their output with the help of AI tools (37 percent), to expand their skill sets (35 percent), take on a wide range of responsibilities (30 percent), return to the office (27 percent), work more efficiently (26 percent), and work more hours (20 percent).

Stop working from home because AI.

[–] ShakingMyHead@awful.systems 7 points 3 months ago

If you look at the bottom right, that's exactly the case. They didn't just type a prompt into Sora and call it a day.

[–] ShakingMyHead@awful.systems 8 points 4 months ago

I'm so certain that ASI is so soon that I'm going to go hiking in the woods in the dead of night with no supplies and not tell anyone where I am.

[–] ShakingMyHead@awful.systems 6 points 4 months ago (1 children)

Unfortunately, "extremely expensive" and "high-end" aren't really synonyms, thanks to, y'know, bitcoin. Of course, I don't disagree with your argument that having to buy a GPU just to ensure your webmail does what it's advertised to do is, well, dumb.

What I don't know is what the LLM even is. Did they just tack on Llama to their webmail app and call it a day? Did they train a model? Was it trained on emails? If so, whose emails? What an advertisement that would be: "Use Protonmail to encrypt your emails so that companies like Protonmail can't use them to train an LLM."

[–] ShakingMyHead@awful.systems -4 points 4 months ago* (last edited 4 months ago) (3 children)

Not to downplay what Protonmail is doing, but they're saying that you can run this locally with a 2-core, 4-thread CPU from 2017 (the i3-7100, which is a 7000-series processor) and an RTX 2060, a GPU that was never considered high-end. Perhaps they changed the requirements while you weren't looking. Or am I reading this wrong?

[–] ShakingMyHead@awful.systems 8 points 4 months ago

I'm sure they will thank us once we explain that the alternative was GPT-5.

[–] ShakingMyHead@awful.systems 8 points 4 months ago

60% of the time, it works 100% of the time.

[–] ShakingMyHead@awful.systems 9 points 4 months ago (3 children)

Probably would have been easier when the context window wasn't 128k.

Though what the point would be, should someone actually achieve that, eludes me a bit.
