this post was submitted on 20 Jun 2024
8 points (90.0% liked)

Technology


How stupid do you have to be to believe that only 8% of companies have seen failed AI projects? We can't manage this consistently with CRUD apps and people think that this number isn't laughable? Some companies have seen benefits during the LLM craze, but not 92% of them. 34% of companies report that generative AI specifically has been assisting with strategic decision making? What the actual fuck are you talking about?

....

I don't believe you. No one with a brain believes you, and if your board believes what you just wrote on the survey then they should fire you.

top 50 comments
[–] IHeartBadCode@kbin.run 4 points 2 months ago (8 children)

I had my fun with Copilot before I decided that it was making me stupider - it's impressive, but not actually suitable for anything more than churning out boilerplate.

This. Many of these tools are only good at incredibly basic boilerplate, barely a step beyond what a code-generation wizard could produce. But to hear some of these AI grifters talk, this stuff is going to render programmers obsolete.

There's a reality to these tools. That reality is they're helpful at times, but they are hardly transformative at the levels the grifters go on about.

[–] sugar_in_your_tea@sh.itjust.works 2 points 2 months ago (2 children)

I interviewed a candidate for a senior role, and they asked if they could use AI tools. I told them to use whatever they normally would, I only care that they get a working answer and that they can explain the code to me.

The problem was fairly basic, something like randomly generate two points and find the distance between them, and we had given them the details (e.g. distance is a straight line). They used AI, which went well until it generated the Manhattan distance instead of the Pythagorean theorem. They didn't correct it, so we pointed it out and gave them the equation (totally fine, most people forget it under pressure). Anyway, they refactored the code and used AI again to make the same mistake, didn't catch it, and we ended up pointing it out again.

Anyway, at the end of the challenge, we asked them how confident they felt about the code and what they'd need to do to feel more confident (nudge toward unit testing). They said their code was 100% correct and they'd be ready to ship it.

They didn't pass the interview.

And that's generally my opinion about AI in general, it's probably making you stupider.

[–] deweydecibel@lemmy.world 1 points 2 months ago* (last edited 2 months ago)

I've seen people defend using AI this way by comparing it to using a calculator in a math class, i.e. if the technology knows it, I don't need to.

And I feel like, for the kind of people whose grasp of technology, knowledge, and education is so juvenile that they would believe such a thing, AI isn't making them dumber. They were already dumb. What the AI does is make code they don't understand more accessible, which is to say, it's just enabling dumb people to be more dangerous while instilling them with an unearned confidence that only compounds the danger.

[–] Excrubulent@slrpnk.net 1 points 2 months ago* (last edited 2 months ago) (1 children)

Wait wait wait so... this person forgot the pythagorean theorem?

Like that is the most basic task. It's d = sqrt((x1 - x2)^2 + (y1 - y2)^2), right?

That was off the top of my head, this person didn't understand that? Do I get a job now?

I have seen a lot of programmers talk about how much time it saves them. It's entirely possible it makes them very fast at making garbage code. One thing I've known for a long time is that understanding code is much harder than writing it, and so asking an LLM to generate your code sounds like it's just creating harder work for you, unless you don't care about getting it right.

[–] sugar_in_your_tea@sh.itjust.works 1 points 2 months ago (1 children)

Yup, you're hired as whatever position you want. :)

Our instructions were basically:

  1. randomly place N coordinates on a 2D grid, and a random target point
  2. report the closest of those N coordinates to the target point

It was technically different (we phrased it as a top-down game, but same gist). The AI generated Manhattan distance (abs(x2 - x1) + abs(y2 - y1)), probably due to other clues in the text, but the instructions were clear. The candidate didn't notice what it was doing; we pointed it out, then they asked for the algorithm, which we provided.
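For reference, here's a minimal sketch of that exercise in Python (names and structure are my own, not the actual interview code): generate N random points, then report the one closest to a target using straight-line (Euclidean) distance, with the Manhattan mistake noted in a comment.

```python
import math
import random

def closest_point(points, target):
    """Return the point nearest to target by Euclidean (straight-line) distance."""
    tx, ty = target
    # Correct: Pythagorean theorem, sqrt(dx^2 + dy^2) via math.hypot.
    # The AI's mistake was Manhattan distance: abs(x - tx) + abs(y - ty).
    return min(points, key=lambda p: math.hypot(p[0] - tx, p[1] - ty))

# Randomly place N coordinates and a target on a 2D grid, per the prompt.
random.seed(0)  # seeded only so a demo run is repeatable
points = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(5)]
target = (random.uniform(0, 100), random.uniform(0, 100))
print(closest_point(points, target))
```

The two metrics genuinely disagree: from (0, 0), the point (2, 2) is Euclidean-closer than (0, 3) (2.83 vs 3), but Manhattan-farther (4 vs 3), which is exactly why the bug matters.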

Our better candidates remember the equation like you did. But we don't require it, since not all applicants finished college (this one did). We're more concerned about code structure, asking proper questions, and software design process, but math knowledge is cool too (we do a bit of that).

[–] frezik@midwest.social 1 points 2 months ago (1 children)

College? Pythagorean Theorem is mid-level high school math.

I did once talk to a high school math teacher about a graphics program I was hacking away on at the time, and she was surprised that I actually use the stuff she teaches. Which is to say that I wouldn't expect most programmers to know it exactly off the top of their head, but I would expect they've been exposed to it and can look it up if needed. I happen to have it pretty well ingrained in my brain.

[–] sugar_in_your_tea@sh.itjust.works 1 points 2 months ago (1 children)

Yes, you learn it in the context of finding the hypotenuse of a triangle, but:

  • a lot of people are "bad" at math (really just unconfident), but good with logic
  • geometry, trig, etc require a lot of memorization, so it's easy to forget things
  • interviews are stressful, and good applicants will space on basic things

So when I'm interviewing, I try to provide things like algorithms that they probably know but are likely to space on, and focus on the part I care about: can they reason their way through a problem and produce working code, and then turn around and review their code. Programming is mostly googling stuff (APIs, algorithms, etc), I want to know if they can google the right stuff.

And yeah, we let applicants look stuff up, we just short circuit the less important stuff so they have time to show us the important parts. We dedicate 20-30 min to coding (up to an hour if they rocked at questions and are struggling on code), and we expect a working solution and for them to ask questions about vague requirements. It's a software engineering test, not a math test.

[–] Excrubulent@slrpnk.net 2 points 2 months ago

Yeah, that's absolutely fair, and it's a bit snobby of me to get all up in arms about forgetting a formula - although it is high school level where I live. But to be handed the formula, informed that there's an issue and still not fix it is the really hard part to wrap my head around, given it's such a basic formula.

I guess I'm also remembering someone I knew who got a programming job off the back of someone else's portfolio, who absolutely couldn't program to save their life and revealed that to me in a glaring way when I was trying to help them out. It just makes me think of that study that was done that suggested that there might be a "programmer brain" that you either have or you don't. They ended up costing that company a lot to my knowledge.

[–] 0x0@programming.dev 1 points 2 months ago

I use them like wikipedia: it's a good starting point and that's it (and this comparison is a disservice to wikipedia).

[–] Zikeji@programming.dev 1 points 2 months ago

Copilot / LLM code completion feels like having a somewhat intelligent helper who can think faster than I can, but who has no understanding of how to actually code, just a knack for mimicry.

So it's helpful for saving time typing some stuff, and sometimes the absolutely weird suggestions make me think of other scenarios I should consider, but it's not going to do the job itself.

[–] KingThrillgore@lemmy.ml 3 points 2 months ago* (last edited 2 months ago) (2 children)

Hacker News was silencing this article outright. That's typically a sign that it's factual enough to strike a nerve with the potential CxO libertarian [slur removed] crowd.

If this is satire, I don't see it. Because i've seen enough of the GenAI crowd openly undermine society/the environment/the culture and be brazen about it; violence is a perfectly normal response.

[–] xavier666@lemm.ee 1 points 2 months ago (2 children)

What happened to HN? I've now heard of HN silencing certain posts multiple times. Is this enshittification?

[–] KingThrillgore@lemmy.ml 2 points 2 months ago (1 children)

HN is run by a VC firm, Y Combinator. One of its largest supporters is OpenAI CEO Sam Altman. Do the math.

[–] xavier666@lemm.ee 2 points 2 months ago (1 children)
[–] AnxiousOtter@lemmy.world 2 points 2 months ago

Sam Altman was actually president of Y Combinator from 2014 to 2019. An interesting connection when you realise HN has been actively removing news critical of the gen-AI bubble hype.

[–] elias_griffin@lemmy.world 1 points 2 months ago

I can confirm this. Hacker News is nothing like it used to be and is approaching the cliff of a "groupthink" narrative, the opposite of entrepreneurship.

[–] TheFriar@lemm.ee 0 points 2 months ago (1 children)

What. When did we start censoring shit on lemmy.

[–] KingThrillgore@lemmy.ml 1 points 1 month ago

[slur removed] was my choice.

[–] tron@midwest.social 1 points 2 months ago (1 children)

Oh my god this whole post is amazing, thought I'd share my favorite excerpt:

This entire class of person is, to put it simply, abhorrent to right-thinking people. They're an embarrassment to people that are actually making advances in the field, a disgrace to people that know how to sensibly use technology to improve the world, and are also a bunch of tedious know-nothing bastards that should be thrown into Thought Leader Jail until they've learned their lesson, a prison I'm fundraising for. Every morning, a figure in a dark hood, whose voice rasps like the etching of a tombstone, spends sixty minutes giving a TedX talk to the jailed managers about how the institution is revolutionizing corporal punishment, and then reveals that the innovation is, as it has been every day, kicking you in the stomach very hard.

Where the fuck do I donate???????

[–] WldFyre@lemm.ee 0 points 2 months ago (1 children)
[–] pyldriver@lemmy.world 0 points 2 months ago (1 children)

Right as in the actual definition of the word, nothing political:

Conforming with or conformable to justice, law, or morality.

In accordance with fact, reason, or truth; correct.

Fitting, proper, or appropriate.

[–] WldFyre@lemm.ee 0 points 2 months ago (1 children)

I get that, didn't think it was a political meaning. Just seems like an iffy word to me personally, hard to put my finger on it.

Maybe since the inverse would be "wrong-think"?

[–] Cryophilia@lemmy.world 1 points 2 months ago

Is English your second language? Phrases that seem common to natives may seem off to those who learned English later in life. 'Tis a silly language.

[–] sasquash@sopuli.xyz 1 points 2 months ago

Very interesting read, thanks for sharing. Glad someone who actually knows something about "AI" finally said it.

[–] Spesknight@lemmy.world 1 points 2 months ago

Hey, we can always say: how can you check if an AI is working when it doesn't come to the office? 🤔

[–] deweydecibel@lemmy.world 1 points 2 months ago* (last edited 2 months ago)

Another friend of mine was reviewing software intended for emergency services, and the salespeople were not expecting someone handling purchasing in emergency services to be a hardcore programmer. It was this false sense of security that led them to accidentally reveal that the service was ultimately just some dude in India. Listen, I would just be some random dude in India if I swapped places with some of my cousins, so I'm going to choose to take that personally and point out that using the word AI as some roundabout way to sell the labor of people that look like me to foreign governments is fucked up, you're an unethical monster, and that if you continue to try { thisBullshit(); } you are going to catch (theseHands)

This aspect of it isn't getting talked about enough. These companies are presenting these things as fully-formed AI, while completely neglecting the people behind the scenes constantly cleaning it up so it doesn't devolve into chaos. All of the shortcomings and failures of this technology are being masked by the fact that there's actual people working round the clock pruning and curating it.

You know, humans, with actual human intelligence, without which these miraculous "artificial intelligence" tools would not work as they seem to.

If the "AI" needs a human support team to keep it "intelligent", it's less AI and more a really fancy kind of puppet.

[–] BarbecueCowboy@lemmy.world 1 points 2 months ago (1 children)

It's consistently pretty good for writing items with low technical importance and minimal need for accuracy.

I'll never write a job description myself again, and my need to go through the communications team for mass correspondence is almost gone.

[–] UnderpantsWeevil@lemmy.world 0 points 2 months ago (1 children)

Good for writing things nobody will read and reading things nobody wrote.

[–] elias_griffin@lemmy.world -1 points 2 months ago

Masterful wordsmithing, I must find a place for this quote in my future writing. I'll save and credit.

[–] smaximov@lemmy.world 1 points 2 months ago

Is the AI boom the new Blockchain of scams?

[–] EnderMB@lemmy.world 1 points 2 months ago* (last edited 2 months ago)

I work in AI as a software engineer. Many of my peers have PhDs and have sunk a lot of research into their field. I know probably more than the average techie, but in the grand scheme of things I know fuck all. Hell, if you were to ask the scientists I work with whether they "know AI", they'd probably just say "yeah, a little".

Working in AI has exposed me to so much bullshit, whether it's job offers for obvious scams that'll never work, or for "visionaries" that work for consultancies that know as little about AI as the next person, but market themselves as AI experts. One guy had the fucking cheek to send me a message on LinkedIn to say "I see you work in AI, I'm hosting a webinar, maybe you'll learn something".

Don't get me wrong, there's a lot of cool stuff out there, and some companies are doing some legitimately cool stuff, but the actual use-cases for these tools where they won't just be productivity enhancers/tools is low at best. I fully support this guy's efforts to piledrive people, and will gladly lend him my sword.

[–] Spesknight@lemmy.world 0 points 2 months ago (1 children)

I don't fear Artificial Intelligence, I fear Administrative Idiocy. The managers are the problem.

[–] bionicjoey@lemmy.ca 0 points 2 months ago (1 children)

I know AI can't replace me. But my boss's boss's boss doesn't know that.

Fortunately, it's my job as your boss to convince my boss and boss's boss that AI can't replace you.

We had a candidate spectacularly fail an interview when they used AI and didn't catch the incredibly obvious errors it made. I keep a few examples of that handy to defend my peeps in case my boss or boss's boss decide AI is the way to go.

I hope your actual boss would do that for you.

[–] madsen@lemmy.world 0 points 2 months ago (12 children)

This is such a fun and insightful piece. Unfortunately, the people who really need to read it never will.

[–] jaaake@lemmy.world 0 points 2 months ago (1 children)

After reading that entire post, I wish I had used AI to summarize it.

I am not in the equally unserious camp that generative AI does not have the potential to drastically change the world. It clearly does. When I saw the early demos of GPT-2, while I was still at university, I was half-convinced that they were faked somehow. I remember being wrong about that, and that is why I'm no longer as confident that I know what's going on.

This pull quote feels like it’s antithetical to their entire argument and makes me feel like all they’re doing is whinging about the fact that people who don’t know what they’re talking about have loud voices. Which has always been true and has little to do with AI.

[–] Roflmasterbigpimp@lemmy.world 0 points 2 months ago (1 children)

TLDR; AI-Bad, I'm smart.

Why is this on here?

[–] aniki@lemm.ee -1 points 2 months ago (1 children)