this post was submitted on 23 Jun 2024
2 points (100.0% liked)

TechTakes

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

Need to make a primal scream without gathering footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh facts of Awful you'll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

top 50 comments
[–] sc_griffith@awful.systems 10 points 4 months ago (1 children)

thinking about how I was inoculated against part of ai hype bc a big part of my social circle in undergrad consisted of natural language processing people. they wanted to work at places with names like "OpenAI" and "google deepmind," their program was more or less a cognitive science program, but I never once heard any of them express even the slightest suspicion that LLMs of all things were progressing toward intelligence. it would have been a non sequitur.

also from their pov the statistical approach to machine learning was defined by abandoning the attempt to externalize the meaning of text. the cliche they used to refer to this was "the meaning of a word is the context in which it occurs."
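
(For anyone who hasn't seen that cliché cashed out concretely, here is a minimal, made-up sketch of the distributional idea: word "vectors" built from nothing but co-occurrence counts, compared by cosine similarity. The corpus, window size, and word choices are all invented for illustration; real systems use far larger corpora and learned embeddings.)

```python
# Minimal sketch of "the meaning of a word is the context in which it occurs":
# represent each word purely by counts of the words that appear near it.
from collections import Counter, defaultdict
from math import sqrt

corpus = "the cat sat on the mat while the dog sat on the rug".split()
window = 2  # how many positions on each side count as "context"

# For each word, count every other word appearing within the window.
contexts = defaultdict(Counter)
for i, word in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if j != i:
            contexts[word][corpus[j]] += 1

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# "cat" and "dog" come out similar because their contexts overlap,
# with no attempt anywhere to represent what either word refers to.
print(cosine(contexts["cat"], contexts["dog"]))
```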

finding out that some prestigious ai researchers are all about being pilled on immanentizing agi was such a swerve for me. it's like if you were to find out that michio kaku has just won his fourth consecutive nobel prize in physics

[–] o7___o7@awful.systems 3 points 4 months ago* (last edited 4 months ago)

it’s like if you were to find out that michio kaku has just won his fourth consecutive nobel prize in physics

hell of a stinger

[–] gerikson@awful.systems 2 points 4 months ago (17 children)

The Death of the Junior Developer

Steve Yegge goes hard into critihype: there's no need for any junior people anymore, all you need is a senior prompt engineer. No word on what happens when the seniors retire or die off; guess we'll have AGI by then and it'll all work out. Also no word on how the legal profession will survive when all the senior prompt engineer's time is spent rewriting increasingly meaningless LLM responses as the training corpus inevitably degenerates from slurm contamination.

[–] sailor_sega_saturn@awful.systems 2 points 4 months ago* (last edited 4 months ago) (5 children)

Microsoft's AI leader claimed that copyright on the internet can be ignored: https://www.windowscentral.com/software-apps/ever-put-content-on-the-web-microsoft-says-that-its-okay-for-them-to-steal-it-because-its-freeware

With respect to content that is already on the open web, the social contract of that content since the 90s has been that it is fair use. Anyone can copy it, recreate with it, reproduce with it. That has been freeware, if you like. That's been the understanding, there's a separate category where a website or a publisher or a news organization had explicitly said, 'do not scrape or crawl me for any other reason than indexing me so that other people can find that content.' That's a gray area and I think that's going to work its way through the courts.

Watch the entire interview if you're bored because he is in deep. Microsoft probably just hired the most AI-enthused person they could find.

[–] Soyweiser@awful.systems 1 points 4 months ago

He isn't totally wrong re the unspoken rule, but he forgets the second unspoken rule: the first rule only applies to human beings doing entertainment, not to corporations trying to make money.

[–] o7___o7@awful.systems 1 points 4 months ago* (last edited 4 months ago)

Never thought I'd see Microsoft suggest downloading a car, but I should have seen it coming.

[–] Eiim@lemmy.blahaj.zone 1 points 4 months ago

I think it's even wilder that he treats content which has explicitly been labeled "do not scrape except for search engine indexing" as a "gray area" for AI scraping. Like, that's exactly what it says not to do!
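
(The "do not scrape except for search engine indexing" label being waved away here is most commonly expressed through robots.txt. A rough sketch of what that explicit refusal looks like in practice, with an invented bot name and URL, parsed with Python's standard library robotparser:)

```python
# Sketch of a robots.txt that allows a search-engine crawler to index the site
# but disallows everything else. Bot names and URL are illustrative only.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: Googlebot
Allow: /

User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Indexing crawler is explicitly allowed:
print(parser.can_fetch("Googlebot", "https://example.com/article"))          # True
# Anything else, including a hypothetical AI training scraper, is refused:
print(parser.can_fetch("SomeAITrainingBot", "https://example.com/article"))  # False
```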

[–] 200fifty@awful.systems 1 points 4 months ago (1 children)

Anyone can copy it, recreate with it, reproduce with it

Ew... stay away from my content, you creep!

[–] froztbyte@awful.systems 1 points 4 months ago

see it was wrong when those dirty pirate hippies tried to do it but it's totally fine when microsoft does it because microsoft can't be wrong, see? easy

[–] froztbyte@awful.systems 1 points 4 months ago (2 children)

okay, at this point I should probably make a whole-ass perplexity post, because this is the third time I'm featuring them in stubsack, but 404media found yet more dirt

... which included creating a series of fake accounts and AI-generated research proposals to scrape Twitter, as CEO Aravind Srinivas recently explained on the Lex Fridman podcast

According to Srinivas, all he and his cofounders Denis Yarats and Johnny Ho wanted to do was build cool products with large language models, back when it was unclear how that technology would create value

tell me again how lies and misrepresentation aren't foundational parts of the business model, I think I missed it

[–] sailor_sega_saturn@awful.systems 1 points 4 months ago* (last edited 4 months ago) (1 children)

How can someone implement that and not just be constantly thinking "I really really really do not want to be prosecuted under the CFAA, I should not be doing this"?

Ethics clearly don't really work in this profession, so schools should hammer home legal liability as well.

[–] froztbyte@awful.systems 1 points 4 months ago* (last edited 4 months ago) (1 children)

Ethics clearly don’t really work in this profession, so schools should hammer home legal liability as well.

I've thought about this a bunch in the past, and tbh the only answer I've come to over many forms of it is "fuck the fucking USA"

it's a place that is structurally built to allow for that kind of evasion and abuse to happen

[–] gerikson@awful.systems 1 points 4 months ago (1 children)

A couple of examples Srinivas gave on the podcast are “Who is Lex Fridman following that Elon Musk is also following,” or “what are the most recent tweets that were liked by both Elon Musk and Jeff Bezos.”

Questions asked by the terminally deranged.

[–] Soyweiser@awful.systems 1 points 4 months ago

Or somebody looking for a 'the worst posts online' cringe compilation. Musk's 'CEOs must be able to build their companies' products, not just be able to read spreadsheets' was a good example.

[–] Soyweiser@awful.systems 1 points 4 months ago (3 children)

I might have been wrong about the capabilities of ML; this is very impressive (twitter link). Was I wrong and Yud right?

[–] jax@awful.systems 1 points 4 months ago (2 children)

nsfw: nice to see thejuicemedia jumping in with a quality sneer
