this post was submitted on 24 May 2024

Technology


archive.is

Should we trust an LLM to write legal definitions, of "deepfake" in this case? It seems the state rep. is unable to proofread the model's output, as he was "really struggling with the technical aspects of how to define what a deepfake was."

top 10 comments
[–] webghost0101@sopuli.xyz 1 points 3 months ago* (last edited 3 months ago) (1 children)

I understand the irony. But let's not pretend they blindly used the output, or even generated a full page. It was one specific section, providing a technical definition of "what is a deepfake."

“I was really struggling with the technical aspects of how to define what a deepfake was. So I thought to myself, ‘Well, why not ask the subject matter expert (I do not agree with that wording, lol), ChatGPT?’” Kolodin said.

The legislator from Maricopa County said he “uploaded the draft of the bill that I was working on and said, you know, please, please put a subparagraph in with that definition, and it spit out a subparagraph of that definition.”

“There’s also a robust process in the Legislature,” Kolodin continued. “If ChatGPT had effed up some of the language or did something that would have been harmful, I would have spotted it, one of the 10 stakeholder groups that worked on or looked at this bill, the ACLU would have spotted, the broadcasters association would have spotted it, it would have got brought out in committee testimony.”

But Kolodin said that portion of the bill fared better than other parts that were written by humans. “In fact, the portion of the bill that ChatGPT wrote was probably one of the least amended portions,” he said.

I do not agree with his statement that any mistake made by AI could also have been made by humans. The reasoning, and the errors in reasoning, are quite different in my experience. But the way ChatGPT was used here is absolutely fair.

[–] circuscritic@lemmy.ca -1 points 3 months ago* (last edited 3 months ago)

No kidding. When I read that, my first thought was, "He's clearly at least above the median intelligence of his fellow Arizona GOP reps, if not in the top 10% of their entire conference"

Anyone who read the article AND has experience with the Arizona GOP probably thought the same thing.

The Arizona GOP collects some of the dumbest people alive.

[–] slurpinderpin@lemmy.world 1 points 3 months ago* (last edited 3 months ago) (1 children)

These tasks are exactly what generative AI models are good for, as much as Internet people don't want to hear it.

Things that are massively repeatable and based on previous versions (like legislation, contracts, etc.) are pretty much perfect for it. These are just tools for already competent people. So in theory you have GenAI crank out the boring stuff and have an expert "fill in the blanks," so to speak.
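The "GenAI drafts, expert reviews" workflow described above can be sketched roughly like this. Everything here is a hypothetical illustration: the bill text, the prompt wording, and the `expert_review` helper are placeholders I made up, not anything from the article or the legislator's actual process.

```python
# Hypothetical sketch of the "model drafts a narrow section, humans review
# broadly" workflow. None of these names or texts come from the real bill.

DRAFT_BILL = """Section 1. Prohibited acts.
A person shall not knowingly distribute a [DEEPFAKE DEFINITION NEEDED]
depicting another person without that person's consent."""


def build_definition_prompt(bill_text: str) -> str:
    """Ask the model for one subparagraph only, not a rewritten bill."""
    return (
        "Here is a draft bill:\n\n"
        f"{bill_text}\n\n"
        "Please add a subparagraph giving a technical definition of "
        "'deepfake'. Return only that subparagraph."
    )


def expert_review(generated_text: str, reviewers: list[str]) -> bool:
    """Placeholder for the human step: every stakeholder group
    (legislators, the ACLU, broadcasters, etc.) must sign off before
    the model's language survives into the final bill."""
    return bool(generated_text) and all(bool(r) for r in reviewers)


# The model call itself is out of scope here; the point is that the
# prompt is scoped to one section and the output still faces review.
prompt = build_definition_prompt(DRAFT_BILL)
```

The design point is that the model never owns the document: it fills a clearly marked blank, and the same multi-party review that would catch a human drafter's error applies to its output.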

[–] helenslunch@feddit.nl 0 points 1 month ago (1 children)

Ideally it would be a generative AI trained specifically on legal textbooks.

I don't know why there seem to be no LLMs trained specifically on expert subject matter.

[–] slurpinderpin@lemmy.world 1 points 1 month ago (1 children)

There are, just not available publicly. Tons of enterprises (law firms included) are paying to have models trained on their data.

[–] helenslunch@feddit.nl 0 points 1 month ago

There are, just not available publicly.

I meant publicly available

[–] harsh3466@lemmy.ml 0 points 3 months ago (1 children)
[–] MonkderDritte@feddit.de 0 points 3 months ago

Yeah, my sides hurt. 🤣

[–] werefreeatlast@lemmy.world 0 points 2 months ago (1 children)

Someone should run all the law books through ChatGPT so we can have a free, open-source lawyer in our phones.

During a traffic stop: "Hold on officer, I gotta ask my lawyer. It says to shut the hell up."

Cop still shoots him in the head so he can learn his lesson. He pulled out his phone!

[–] helenslunch@feddit.nl 0 points 1 month ago* (last edited 1 month ago)

Honestly, I think this is the inevitable future. There are lots of jobs where what you're paying for is the knowledge. And while LLMs likely won't be as good as an actual expert, most "professionals" I've dealt with, in both personal and contracted "professional" work, are not even remotely experts, and a properly trained LLM will run circles around them.

You won't be able to buy them, because machines, for some reason, aren't allowed to be fallible the way humans are. But I can certainly see a scenario where someone takes an open-source LLM, trains it on professional materials (obtained both legally and illegally), and releases it for free, and it does a better job than 70% of "professionals."