markon

joined 1 year ago
[–] markon@lemmy.world 10 points 2 weeks ago (2 children)

You'd think they'd have moved on by now. Well. Oh well.

[–] markon@lemmy.world 5 points 3 weeks ago

Yep, they now get paid for the data we gave them. I have no sympathy lol. At least these models can't actually store it all losslessly by any stretch of the imagination. The compression factor would have to be 100-200x+ beyond anything we've ever been able to achieve. The numbers don't work out. The models do encode a lot though, and some of that will include actual full-text data, but it'll still be kinda fuzzy.
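A rough back-of-envelope version of that claim (all figures here are assumptions for illustration: a 70B-parameter fp16 model and a ~15T-token training set, not measured numbers):

```python
# Could an LLM losslessly store its training data in its weights?
# All constants below are illustrative assumptions, not measurements.

params = 70e9            # assumed model size: 70 billion parameters
bits_per_param = 16      # fp16 weights
model_bytes = params * bits_per_param / 8   # ~140 GB of weights

train_tokens = 15e12     # assumed training corpus: ~15 trillion tokens
bytes_per_token = 4      # rough average for English text
data_bytes = train_tokens * bytes_per_token  # ~60 TB of raw text

ratio = data_bytes / model_bytes
print(f"required lossless compression: ~{ratio:.0f}x")  # ~429x under these assumptions
```

For comparison, the best general-purpose text compressors manage single-digit ratios, so hundreds-to-one lossless storage isn't on the table; what the weights hold is lossy.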

I think we do need ALL OPEN SOURCE. Not just for AI, but I know on that point I'm preaching to the choir here lol

[–] markon@lemmy.world 3 points 3 weeks ago (2 children)

They should. Maybe all the angry people here would go bliss out. Lol

[–] markon@lemmy.world 0 points 3 weeks ago

What's AI? Fuck what AI? Which one? What kind?

[–] markon@lemmy.world 1 points 3 weeks ago

Lol this is a good one. I love my LLMs but this is it. The problem is most people don't even think at all anyway. Most of the time I don't either. If we're honest with ourselves, we're still just barely advanced apes.

I don't get marketing. The more that gets shoved at me, the farther I retreat and the harder I ignore it. I'll let ads run on YouTube sometimes just so the advertiser has to pay out a fraction of a penny on a wasted ad. Actually, how about we do this:

Let's build software that goes around watching ads constantly so it makes their numbers go all to hell.

[–] markon@lemmy.world 1 points 3 weeks ago

Cool, they should set up their own hidden services! 😂

[–] markon@lemmy.world -1 points 3 weeks ago

The funny thing is we hallucinate all our answers too. I don't know where these words are coming from and I am not reasoning about them other than construction of a grammatically correct sentence. Why did I type this? I don't have a fucking clue. 😂

We map our meanings onto whatever words we see fit. The number of times I've heard a Republican call Obama a Marxist still blows my mind.

Thank you for saying something too. Better than I could do. I've been thinking about AI since I was a little kid. I've watched it go from at best some heuristic pathfinding in video games all the way to what we have now. Most people just weren't ever paying attention. It's been incredible to see that any of this was even possible.

I've watched Two Minute Papers since back when he was mostly covering light transport simulation (ray tracing). It's incredible where we are, but baffling that people can't see the tech as separate from good old capitalism and the owner class. It just so happens that it takes a fuckton of money to build stuff like this, especially at first. This is super early.

[–] markon@lemmy.world 0 points 3 weeks ago

Just like us. Sometimes it's better to have bullshit predictions than none.

[–] markon@lemmy.world -3 points 3 weeks ago (1 children)

We should understand that 99.9% of what we say and think and believe is what feels good to us, which we then rationalize using very faulty reasoning, and that's only when really challenged! You know how I came up with these words? I hallucinated them. It's just a guided hallucination. People with certain mental illnesses are less guided by their senses. We aren't magic, and I don't get why it is so hard for humans to accept that any individual is nearly useless for figuring anything out. We have to work as agents too, so why do we expect an early-days LLM to be perfect? It's so odd to me. A logic machine trying to comprehend our made-up bullshit. It's amazing it even appears to understand anything at all.

[–] markon@lemmy.world 0 points 3 weeks ago (1 children)

Uhm. Have you ever talked to a human being?

[–] markon@lemmy.world 0 points 3 weeks ago

Asking the chat models to have a self-discussion and use/simulate metacognition really seems to help. Play around with it. Oftentimes I'm deep in a chat and I learn from its mistakes, and it kinda learns from my mistakes and feedback. It's all about working with, not against. Because at this time LLMs are just feed-forward neural networks trained on supercomputer clusters, we really don't know what they're fully capable of; it's so hard to quantify, especially when you don't really know what exactly has been learned.
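A minimal sketch of what that kind of self-discussion prompt could look like (the wording and the helper name here are my own illustration, not a tested recipe):

```python
# Hypothetical sketch: wrap a question in a prompt that asks the model to
# hold a short self-discussion before committing to an answer.

def metacognitive_prompt(question: str) -> str:
    """Build a prompt that asks the model to reflect before answering."""
    return (
        "Before answering, hold a brief internal discussion:\n"
        "1. Restate the question in your own words.\n"
        "2. List what you know and what you're unsure about.\n"
        "3. Critique your first-draft answer for mistakes.\n"
        "Then give your final answer.\n\n"
        f"Question: {question}"
    )

print(metacognitive_prompt("Why is the sky blue?"))
```

You'd paste the result into whatever chat interface you use; the point is just to nudge the model into critiquing itself before answering.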

Q-learning in language is also an interesting methodology I've been playing with. With an image generator, for example, you can just add "(Q-learning quality)" and you may get more interesting, higher-quality results. Which is itself very interesting to me.
