this post was submitted on 26 Oct 2024
1155 points (99.2% liked)

Piracy: ꜱᴀɪʟ ᴛʜᴇ ʜɪɢʜ ꜱᴇᴀꜱ

 
Ashelyn@lemmy.blahaj.zone | 3 points | 2 months ago (last edited 2 months ago)

People developing local models generally have to know what they're doing on some level, and I'd hope they understand what their model is and isn't appropriate for by the time they have it up and running.

Don't get me wrong, I think LLMs can be useful in some scenarios, and they can be a worthwhile jumping-off point for someone who doesn't know where to start. My concern is with the cultural issues and the expectations/hype surrounding "AI". With how the tech is marketed, it's pretty clear the end goal is for people to use the product as a virtual assistant endpoint for as much information (and interaction) as can possibly be shoehorned through it.

Addendum: local models can help with this issue, since they run on one's own hardware, but they still need to be deployed and used with reasonable expectations: they are fallible aggregation tools, not to be taken as an authority in any way, shape, or form.