Yprum

joined 6 months ago
[–] Yprum@lemmy.world 2 points 15 hours ago

Well damn, thank you so much for the answer. That went well above and beyond what I'd have called a great answer.

First of all, I just wanted to acknowledge the time you put into it. I've just read it, and to give a meaningful reply for discussion I'll probably need to read your comment a couple more times, consider my own perspective on those topics, and also study a few of the bits of information you gave where, sincerely, you lost me :D (the neutral monism part, and Searle and such; I need to read up on that area). So I want to give you an adequate response as well, and I'll need some time for that, but before anything else, thanks for the conversation. I didn't want to wait to say that.

Also, it's worth mentioning that you did hit the nail on the head when you summed up all my rambling into one coherent question/topic. I keep debating with myself about how I can support creators while also appreciating the usefulness of a tool like LLMs that lets me create things I couldn't before. There has to be a balance somewhere there... (Fellow programmer brain here, trying to solve things as if I were debugging software; no doubt the wrong approach for such a complex context.)

UBI is definitely a goal worth achieving that could help in many ways, just as a major reform of copyright would be necessary to remove all the predators that already abuse creators by taking away their legal rights to the content they create.

The point you make about anthropomorphizing LLMs is absolutely key. In fact, I avoid mentioning "AI" whenever I can, because I believe it muddies the waters far more than it should (though it's a great way of selling the software). For me it actually goes the other way: I wonder how different we are from an LLM (oversimplifying, of course) in the methods we use to create something, and where the line is between being creative and depending on things we've previously experienced, basing our creations on them.

Anyway, that starts getting a bit too philosophical, which can be fun but less practical. Regarding your other comment, I do indeed follow Doctorow; it's fascinating how much he writes and how clearly he can lay out ideas. It's tough to keep up with him at times, there's so much content. I also got his books in the last Humble Bundle, so happy to buy books without DRM... I'll try to think a bit more about these topics over the next few days and see what I can come up with. I don't want to keep rambling like a madman without putting my own thoughts in order first. Anyway, thanks for the interesting conversation.

[–] Yprum@lemmy.world 2 points 2 days ago (2 children)

I would love to hear your opinion on something I keep thinking about. There's the whole idea that these LLMs are trained on "available" data from all over the internet, and then anyone can use the LLM to create something that could resemble someone else's work. Then there are the people calling it theft (in my opinion wrong from every possible angle) and those calling it fair use (I lean more toward this side). But then there's the question of compensation for authors, which would be great if some form of it could be found. Any person can learn an author's style and imitate it without actually copying the same piece of art. That person cannot be sued for stealing "style", and it feels like an LLM is basically in the same territory when it creates content. And authors have never been compensated for someone imitating them.

So... What would make the case of LLMs different? What are good arguments against it that don't end up falling back into the "stealing content" debate? How do we guarantee authors are compensated for their work? How can we guarantee that a company doesn't commission a book (or a reading with your voice in the case of voice actors, or pictures and drawings, ...) and then reproduces the same kind of content without having to pay you again? How can we distinguish a synthetic voice trained on thousands of voices, not including person A's, that happens to sound like A, from the case of a company "stealing" A's voice directly? I feel there are a lot of nuances here and I don't know how to cover all of it easily, and most discussions I read boil down to "theft vs fair use" only.

Can this only end properly with a full reform of copyright? It's not like authors are very well protected nowadays either. Publishers basically take their creations to be used and abused without the author having any say in it (like the relationship and payment agreements between Spotify and artists).

[–] Yprum@lemmy.world 3 points 2 days ago (1 children)

I just wanted to say, it's refreshing to read a well-argued comment such as this one. It's good to see that every once in a while there are still people thinking things through without falling into automatic hatred of either side of a discussion.

[–] Yprum@lemmy.world -1 points 1 week ago (2 children)

But the reason the planet is burning is how we generate energy, not the fact that we use energy. I'm not defending all these fucked up greedy corporations and their use of AI, machine learning, LLMs or whatever crap they're trying to get us to use whether we want it or not, but our real problem is energy generation, not consumption.

[–] Yprum@lemmy.world 1 points 1 week ago (1 children)

But is it the tool that has the negative impact, or the corporations that use the tool in a way that has a negative impact? I think it's an important distinction, even more so when this kind of blaming-the-AI talk sounds a lot like a distraction technique: "no, don't look at what has caused global warming for the last century, look at this tech that exploded over the last year and is consuming crazy amounts of energy". And having said that, I want to make it clear that this doesn't mean it shouldn't be handled, discussed or criticised (the use of AI, I mean), as long as we don't fall into irrational blaming of a tool that has no such issue in itself.

I didn't know about the mod stuff, but I'm also not sure why you mention it; am I going to find myself mod of some weird shit now? X)

[–] Yprum@lemmy.world 0 points 1 week ago (3 children)

But then the problem is how Google uses AI, not AI itself. I can run an LLM locally for my own purposes without it consuming crazy amounts of energy.

So blaming AI is absurd; we should blame OpenAI, Google, Amazon... This whole hatred for AI is misplaced when it's not the real source of the problem. We should concentrate on blaming, and ideally punishing, companies for this kind of use (abuse, more like) of energy. Energy usage isn't an issue in itself either, as long as we use adequate energy sources. If companies started deploying huge solar panel fields on top of their buildings and parking lots and whatnot to cover part of the energy use, we could all end up better off than before.

[–] Yprum@lemmy.world 4 points 1 week ago (1 children)

I like the idea of the keyboard being offline and the LLM stuff, but so far I can't see a way to do multi-language input. I'm guessing it's too early in the alpha stage for that, but I'll keep an eye open for it; it's a promising project. In the meantime I'll test the HeliBoard others were mentioning.

[–] Yprum@lemmy.world 41 points 1 week ago (3 children)

Who are you and how did you read my diary?

[–] Yprum@lemmy.world 4 points 3 weeks ago (1 children)

Ah indeed, you're right, somehow I missed the "unexpected" part. I guess it's because this applies to just about any meeting for me, not only the unexpected ones, so I just applied it generally x)

[–] Yprum@lemmy.world 2 points 3 weeks ago

Absolutely great write-up. Thanks for sharing it; I didn't know about it, and I'm saving it to share when applicable.

[–] Yprum@lemmy.world 17 points 3 weeks ago (4 children)

Actually, the right-hand graph needs a correction from my point of view. The decline in productivity doesn't happen sharply when the meeting starts. For me the decline starts slowly, 15 to 30 minutes before the meeting, because I can hardly concentrate while worrying that I might miss the start. And if I'm ever in hyper-concentration mode, I'll most likely miss the start of the meeting.

[–] Yprum@lemmy.world 1 points 3 months ago

It seems like a very reasonable policy.

Is there some way to designate a contact who can recover the account, in case someone passes away or becomes unable to access it before it is deleted?
