jarfil

joined 1 year ago
[–] jarfil@beehaw.org 8 points 23 hours ago (1 children)

Not many years ago, I was told that rainbows are a symbol of being gay, and unicorns a symbol of child abusers.

Like... WTF.

[–] jarfil@beehaw.org 11 points 23 hours ago

Redundant, like the server staff who told Elon it would take 6 months to move the servers... so he decided to move them himself on a whim... and it took 6 months to finish making them operational again?

Or redundant like the content moderation staff, whose redundancy has turned X into an even bigger dumpster fire?

Moderating and serving the content of 300 million users, worldwide, in near real time and with no downtime, might seem like a simple task, but it really is not.

[–] jarfil@beehaw.org 2 points 1 day ago

Check out this one for a general overview:

https://youtu.be/OFS90-FX6pg

You may also want to check out an intro to neural networks; Q* is a somewhat newer concept. Other than that... "the internet". There are plenty of places with info, though I'm not sure there is a more centralized and structured one.

Learning to code with just ChatGPT is not the best idea. You need to join three areas:

  • general principles (data structures, algorithms, etc)
  • language rules (best described in a language reference)
  • business logic (computer science, software engineering, development patterns, etc)

ChatGPT's programming answers give you an intersection of all three, often with some quirks, with the nice (but only) benefit of explaining what it thinks it is doing. You still need some basic understanding of those areas to follow what ChatGPT is talking about, to double-check it, and to look for more info. It can be a great time-saver for generating drafts, though.

[–] jarfil@beehaw.org 3 points 1 day ago* (last edited 1 day ago) (2 children)

It's not a statistical method anymore. One of the breakthroughs of large neural network models has been that, during training, an emergent process assigns neurons to traits that are both relatively high-level and specific, and those neurons "cluster up" with other neurons assigned to related traits. Adding just a bit of randomness ("temperature") allows the AI to jump from activating one trait to a close one, but not to one too far away. Confidence becomes a measure of how close the output is to a consistent set of traits trained into the network. Interestingly, a temperature of 0 gives a confidence of 100%... but tends to produce repetitive, degenerate text.
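A toy sketch of what "temperature" does to the next-token distribution (the numbers and tokens are made up for the example; T = 0 is the limit where the top token always wins):

```python
import numpy as np

# Toy illustration of "temperature"; logits are the network's raw scores
# for three made-up candidate tokens.
logits = np.array([3.0, 1.5, 0.2])

def token_probabilities(logits, temperature):
    scaled = logits / temperature          # low T sharpens, high T flattens
    exps = np.exp(scaled - scaled.max())   # numerically stable softmax
    return exps / exps.sum()

print(token_probabilities(logits, 0.1))    # ~[1, 0, 0]: near-greedy, maximum "confidence"
print(token_probabilities(logits, 1.0))    # moderate randomness: close "traits" stay reachable
print(token_probabilities(logits, 10.0))   # near-uniform: jumps to unrelated traits, output degrades
```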

If its data contains a commonly held belief, that is incorrect

This is where things start to get weird. An AI system based on an LLM can iterate over its own answers looking for the optimal one (Q*), and even detect inconsistencies in them. What it does after that depends on whoever programmed it (see the sketch after this list):

  • Maybe it casts any doubt aside and outputs the first answer anyway (the original ChatGPT did that; it didn't even bother self-checking much)
  • Or it could ask an authoritative source (ChatGPT plugins work like that)
  • Or it could search the web for additional info (Copilot and Gemini do that)
  • Or it could alert the user to both the low confidence and the inconsistencies (...but people want omniscient AIs, not "err... I'm not sure, Dave" AIs)
  • ...or, sometime in the future (or present?), they could re-train themselves, maybe by generating a LoRA that would bring in corrected biases, or even additional concepts.
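To make that concrete, here is a hypothetical sketch of such an iterate-and-check loop; generate() and critique() are stand-ins for LLM calls, not any vendor's actual API:

```python
# Hypothetical iterate-and-self-check loop; not how any specific product works.

def generate(prompt: str) -> str:
    # Stand-in for an LLM call that drafts an answer.
    return "draft answer"

def critique(prompt: str, answer: str) -> list[str]:
    # Stand-in for an LLM call that lists inconsistencies in its own answer.
    return []

def answer_with_self_check(prompt: str, max_rounds: int = 3) -> str:
    answer = generate(prompt)
    for _ in range(max_rounds):
        problems = critique(prompt, answer)
        if not problems:          # consistent enough: call it confident
            return answer
        # This is the design choice from the list above: ask a source,
        # search the web, warn the user... here we just ask for a revision.
        answer = generate(f"{prompt}\nFix these issues: {problems}")
    return answer                 # out of rounds: low confidence, best effort

print(answer_with_self_check("Why is the sky blue?"))
```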

Over time, I think different AI systems will evolve to target accuracy, consistency, creativity, etc. Current systems are kind of rudimentary compared to what's yet to come, and too many are used in very rudimentary ways by anyone who can slap an "AI" label on something and sell it.

[–] jarfil@beehaw.org 1 points 1 day ago

No pictures of kids for example

Meaning, an AI blind to kids.

Keep in mind that training data is required for both recognition and generation. Legislating that, to an AI, kids are "It doesn't look like anything to me" leads to things like:

  • Cars that don't stop for "It doesn't look like anything to me"
  • Spam filters that don't stop porn, or gore, or both, of "It doesn't look like anything to me"
  • Photo storage that erases empty photos which "It doesn't look like anything to me"

For porn specific AIs, don't allow users to upload custom images

Not sure how you think AIs work, but anyone can train a LoRA on their own laptop; no "uploading" to anywhere required.
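For a sense of scale, setting one up takes a handful of lines with the Hugging Face peft library (a sketch; the base model and hyperparameters are placeholders, not recommendations):

```python
# Illustrative LoRA setup; model choice and hyperparameters are placeholders.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")    # small enough for a laptop
config = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"])
model = get_peft_model(model, config)                   # wraps the base model with adapters
model.print_trainable_parameters()                      # only the tiny adapter weights get trained
```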

Companies clearly can't be trusted to put in safeguards for themselves, so I guess it is time for legislation.

Cool, and I agree with that. I just think that example is horrific (for starters, it would make Lemmy's anti-CSAM filter illegal, since it's trained on pictures of kids).

Got any other proposals?

[–] jarfil@beehaw.org 3 points 2 days ago* (last edited 2 days ago) (4 children)

The current generation of AI chatbots assigns a "confidence level" to every piece of output. It signals perfectly well when and where they should look for more information... but humans have been pushing them to "output something, anything", instead of letting them excuse themselves for not knowing something, or run some additional process to look for the missing information.
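That "confidence" boils down to per-token probabilities, which anyone can inspect; a rough sketch with the Hugging Face transformers library (the model choice is illustrative):

```python
# Peek at the per-token probabilities behind a model's "confidence".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # illustrative small model
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]          # scores for the next token
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=3)
for p, idx in zip(top.values, top.indices):
    # A low top probability is exactly the "err... I'm not sure, Dave" signal.
    print(f"{tokenizer.decode(idx.item())!r}: {p:.2%}")
```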

As of this year, Copilot has been running web searches to compensate for its missing information, and Gemini runs both web searches and iterative self-checks of its own answers in order to refine them (see "drafts"). It also seems like Gemini might be learning from humanity's reactions to its wrong answers.

[–] jarfil@beehaw.org 1 points 2 days ago

Feel free to partake in the tediousness:

https://en.wikipedia.org/wiki/Kardashian_family

It was tedious trash back then, and 17 years of beating a dead horse hasn't made it any less tedious.

[–] jarfil@beehaw.org 2 points 2 days ago

"Porn made of me"? You mean, by paying me to sign an agreement, or by drugging and/or forcing me...? Just to be perfectly clear: I'm not a photo.

The video game doesn't produce anything.

Are we talking about the game's video capture, or the feeling of wanting to puke onto that piece of shit until it drowns?

What do you propose reduces... porn fakes?

Something like "teaching your brat". Porn fakes don't even become a problem until they get distributed to others. Adults can go to jail; that works on some.

My problem with machine learning porn is that it's artless generic template spam clogging up my feed

That... has more to do with tagging and filtering, rather than anything mentioned above.

It's also somewhat weird to diss the "template" of an AI output when porn videos have settled on a template script for about half a century already. If anything, I've seen more variety from people shoving their prompts into some AI than from porn producers all my life (Japanese "not-a-porn" ingenuity excluded).

[–] jarfil@beehaw.org 1 points 2 days ago

Was about to respond to your arguments, when...

This is not a good thing. And if this is the world you want, then there is no reasoning with you. You are truly lost.

"My way or the highway", huh? Ok, guess there is no reasoning with you. 🛣️

[–] jarfil@beehaw.org 7 points 3 days ago

Not exactly.

LLMs are predictive-associative token algorithms with a degree of randomness and some self-reflection. A key aspect is that anything can be a token: they can feed their own output back in, creating the basis for a thought cycle, and their output can act as control input for other algorithms. It remains to be seen whether the core of "(human) intelligence" is much more than that, and by how much.
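A toy sketch of that self-feeding loop, with a bigram lookup table standing in for the neural network (purely illustrative):

```python
import random

# A bigram table stands in for the network; a real LLM predicts the next
# token from the whole context, but the self-feeding loop is the same.
bigrams = {
    "the": ["cat", "dog"], "cat": ["sat", "ran"], "dog": ["ran", "sat"],
    "sat": ["on"], "ran": ["to"], "on": ["the"], "to": ["the"],
}

tokens = ["the"]
for _ in range(8):
    # Self-feed: the latest output token becomes the next input.
    tokens.append(random.choice(bigrams[tokens[-1]]))
print(" ".join(tokens))   # e.g. "the cat sat on the dog ran to the"
```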

Stable Diffusion is a random image generator that refines its output based on perceptual traits associated with a prompt. It's like a "lite" version of human dreaming, only with a super-human training set. Kind of an "uncanny valley" version of dreaming.
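The whole refine-from-noise loop fits in a few lines with the diffusers library (a hedged sketch; the checkpoint is just one public example, and a GPU makes it far more comfortable):

```python
# Prompt-guided denoising: start from random noise, iteratively refine
# toward an image matching the prompt. Checkpoint is one public example.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
image = pipe("a unicorn under a rainbow", num_inference_steps=25).images[0]
image.save("dream.png")
```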

It just so happens that both algorithms were showcased at about the same time, and it's the first time we can build a "set and forget" AI system that can both make decisions about its own next steps and emulate human creativity... which has driven the hype into overdrive.

I don't think we'll stop hearing about it, but I do think there is much more to be done. And it's pretty much impossible to feed any of these algorithms with human experience data without recording at least one full human learning cycle, as in many years from inside a humanoid robot.

[–] jarfil@beehaw.org 1 points 3 days ago* (last edited 3 days ago) (2 children)

So, what kind of legislation? All the problematic uses of AI already have legislation against them. I don't see any viable "anti-AI" legislation; just enforce the laws already in place. Meanwhile, strengthening prevention and responsibility rules would benefit all aspects of society, including the uses of AI.

1
Deleted posts (beehaw.org)
submitted 10 months ago* (last edited 10 months ago) by jarfil@beehaw.org to c/support@beehaw.org
 

It's unnerving to find an interesting post, with an interesting conversation, only to see it deleted (not even removed by a mod), with replies left hanging in the inbox and no way to reply back.

Is there any feature that would allow continuing those conversations, other than direct messages, which get "black holed" (no way to see one's own replies)? Could these conversations somehow be continued, either recovered in Lemmy, or maybe via Mastodon?
