this post was submitted on 20 Nov 2023
10 points (100.0% liked)

Programmer Humor


Post funny things about programming here! (Or just rant about your favourite programming language.)

[–] xmunk@sh.itjust.works 2 points 1 year ago (4 children)

ChatGPT is hilariously incompetent... but on a serious note, I still firmly reject tools like Copilot outside of demos and the like, because they drastically reduce code quality for short-term acceleration. That's a terrible trade-off in terms of cost.

[–] ToothlessFairy@lemmy.world 2 points 1 year ago

I enjoy using copilot, but it is not made to think for you. It's a better autocomplete, but don't ever let it do more than a line at once.

[–] PlexSheep@feddit.de 0 points 1 year ago* (last edited 11 months ago)

I'm still convinced that GitHub Copilot is actively violating copyleft licenses. If not in letter, then in spirit.

[–] stjobe@lemmy.world 0 points 1 year ago (1 children)

The biggest problem with it is that it lies with the exact same confidence with which it tells the truth. Or, put another way, it's confidently incorrect as often as it is confidently correct - and there's no way to tell the difference unless you already know the answer.

[–] Swedneck@discuss.tchncs.de 0 points 1 year ago (1 children)

It's kinda hilarious to me, because one of the FIRST things AI researchers did was get models to identify things and output answers together with the confidence of each potential ID, and now we've somehow regressed back from that point.

[–] tryptaminev@feddit.de 0 points 1 year ago

Did we really regress back from that, though?

I mean, giving a confidence for recognizing a certain object in a picture is relatively straightforward.

But LLMs put words together by how likely they are to belong together given your input (terribly oversimplified). The confidence behind that has no direct relation to how likely the statements are to actually be true. I remember an example where someone made ChatGPT say that 2+2 equals 5 because his wife said so. So ChatGPT was confident that something is right when the wife says it, simply because it thinks those words belong together.
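A minimal sketch of that distinction, with made-up labels, logits, and numbers: a classifier's softmax score is a confidence over a fixed label set, while an LLM-style next-token probability only measures how plausible a continuation is given the prompt, not whether it's true.

```python
import math

def softmax(scores):
    # Turn raw model scores into probabilities that sum to 1.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Image classifier: confidence is over a fixed set of labels.
labels = ["cat", "dog", "toaster"]
class_logits = [3.2, 1.1, -0.5]          # made-up model outputs
for label, p in zip(labels, softmax(class_logits)):
    print(f"{label}: {p:.2f}")           # e.g. "cat: 0.89" - confidence in the ID

# LLM-style next-token step: probability is over candidate words, and only
# says how plausible the continuation is, not whether the statement is true.
candidates = ["4", "5"]
token_logits = [2.0, 1.9]                # "...because his wife said so" can shift these
for tok, p in zip(candidates, softmax(token_logits)):
    print(f"2 + 2 = {tok}: {p:.2f}")     # high probability != correct arithmetic
```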

[–] TonyTonyChopper@mander.xyz 0 points 1 year ago (1 children)

they drastically reduce ... quality for short-term acceleration

Western society is built on this principle

[–] PetDinosaurs@lemmy.world 0 points 1 year ago (1 children)

Tell me about it...

I left my more mature company for a startup.

I feel like Tyler Durden sometimes.

[–] noobdoomguy8658@feddit.de 0 points 11 months ago (1 children)

How are you liking it? How many years have you aged in the months you've been working at your startup?

[–] PetDinosaurs@lemmy.world 0 points 11 months ago (1 children)

My hairline has started receding very rapidly. There are these fine hairs all over my desk, and every meeting, right before I turn on my camera, I see the photo I took when I joined.

[–] noobdoomguy8658@feddit.de 0 points 11 months ago

Doesn't sound good at all. I'm sorry to hear that, friend. I really hope there are enough upsides there for you compared to working at a more mature company.