mholiv

joined 1 year ago
[–] mholiv@lemmy.world 8 points 5 days ago* (last edited 5 days ago) (1 children)

Although I do see that the bot has a very slight right-wing bias, I like it. It prevents the normalization of literal propaganda outlets as news sources.

I have a suggestion that might be a good compromise.

The bot would only comment on posts from less factual news sources or from the extreme ends of the spectrum.

On a post from the AP the bot would just not comment.

On a post from Alex Jones or RT the bot would post a warning.

That way there is less “spam”, but people are made aware when misinformation or propaganda is being pushed.

Also, with such a system, smaller biases become less relevant.
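The policy I'm suggesting could be sketched roughly like this (the rating categories and thresholds are made up for illustration; a real bot would pull them from whatever source-rating database it already uses):

```python
# Sketch of the suggested comment policy: stay silent on reliable,
# non-extreme sources; warn only on low-factuality or extreme ones.
def should_warn(factuality: str, bias: str) -> bool:
    low_factual = factuality in {"low", "very-low", "conspiracy"}
    extreme = bias in {"extreme-left", "extreme-right"}
    return low_factual or extreme

# AP-style source: the bot says nothing.
quiet = should_warn("high", "center")
# RT / Alex Jones-style source: the bot posts a warning.
warn = should_warn("very-low", "extreme-right")
```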

[–] mholiv@lemmy.world 0 points 1 week ago

It’s a waste of everyone’s time for sure. It’s just good business sense to make your customers happy though.

As for typing speed, perhaps, ya lol. You could be faster. But I think the best approach here is using high-quality, locally run LLMs that don't produce slop. For me, I can count on one hand how many times I've had to correct things in the past month. It's a matter of understanding how LLMs work and fine-tuning. (Emphasis on the fine-tuning.)

[–] mholiv@lemmy.world 2 points 1 week ago

My main workstation runs Linux and I use llama.cpp. I use it with Mistral's latest large model, but I have used others in the past.
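For anyone curious, a minimal setup looks something like this (the model filename is a placeholder; use whatever GGUF quantization you actually run):

```shell
# Start llama.cpp's local OpenAI-compatible server with a GGUF model.
llama-server -m ./models/mistral-large-q4_k_m.gguf \
  --port 8080 \
  -c 8192   # context window big enough to hold an email thread
```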

I appreciate your thoughts here. Lemmy, I think, has an indiscriminate anti-LLM bias in general.

[–] mholiv@lemmy.world 0 points 1 week ago (2 children)

The LLM responses are more verbose, but not by a crazy amount. It's mostly adding polite social padding that some people appreciate.

As for time, totally. It's faster to write "can't go to meeting, suggest rescheduling it for Thursday" and proofread than to write a full boomer-style letter.

[–] mholiv@lemmy.world 6 points 1 week ago

In some cases literally yes. But at least for me I have to meet my customers where they are. If I try to force them to do things my way they just don’t use my services.

[–] mholiv@lemmy.world 16 points 1 week ago (2 children)

You're not wrong, but at least my emails will be taken seriously by some 60-year-old company exec who's still mad his secretary stopped printing his emails for him.

[–] mholiv@lemmy.world 1 points 1 week ago* (last edited 1 week ago) (2 children)

I can understand that. I don't actually use ChatGPT, to be fair. I use a locally run open-source LLM. All this being said, I do think it's important to fine-tune any LLM you use to match your writing style. Otherwise you end up with generic ChatGPT-style writing.

I would argue that not fine-tuning an LLM to match your tone and style counts as either misuse or hobbyist use.

[–] mholiv@lemmy.world 14 points 1 week ago* (last edited 1 week ago) (15 children)

Because in my experience some business clients feel offended or upset if you aren't formal with them. American businesses seem to care less, I've noticed, but outside the USA (particularly in Germany) formality serves better. Also, the LLM uses the thread history to add context. Stuff like "I know we agreed at our last meeting to meet on Tuesday, but unfortunately I can't do that…" matters to clients.

I don’t offload because I don’t remember. I offload because it saves me time. Of course I read what is written before I send it out.

[–] mholiv@lemmy.world 50 points 1 week ago (32 children)

I think it might be because AI (aka LLMs) is genuinely useful when used properly.

I use AI all the time to write emails. I give the LLM the email thread along with instructions like "I can't make it Tuesday; ask if they can do Wednesday at 2pm."

The AI will write out an email that’s polite and relevant in context. Totally worth it.
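That flow is simple enough to sketch. Assuming an OpenAI-compatible local endpoint like llama.cpp's (the prompt wording and function name here are illustrative, not my actual setup):

```python
def build_email_prompt(thread: str, instruction: str) -> list[dict]:
    """Assemble chat messages asking a local LLM to draft a reply.

    The system prompt is an illustrative example; tune it (or fine-tune
    the model) to match your own writing style.
    """
    system = (
        "You draft polite, formal business emails in the user's own "
        "writing style. Reply with the email body only."
    )
    user = f"Email thread so far:\n{thread}\n\nInstruction: {instruction}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_email_prompt(
    "From: Alice\nCan we meet Tuesday at 10?",
    "I can't make it Tuesday; ask if they can do Wednesday at 2pm.",
)
# `messages` can then be POSTed to a local /v1/chat/completions
# endpoint (llama.cpp's server, or any OpenAI-compatible one),
# and you proofread the draft before sending.
```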

I think the problem is people/companies trying to shove LLMs where they don’t make sense.

[–] mholiv@lemmy.world 26 points 1 week ago

Did you respond to the wrong message?

[–] mholiv@lemmy.world 3 points 1 week ago

English, German, a bit of Mandarin, and Toki Pona!
