[–] Vince@lemmy.world 31 points 3 months ago (15 children)

Ok, dumb question time. I'm assuming no one has any significant issues, legal or otherwise, with a person studying all Van Gogh paintings, learning how to reproduce them, and using that knowledge to create new, derivative works and even selling them.

But when this is done with software, it seems wrong. I can't quite articulate why though. Is it because it takes much less effort? Anyone can press a button and do something that would presumably take the person from the example above years or decades to do? What if the person was somehow super talented and could do it in a week or a day?

[–] tyler@programming.dev 38 points 3 months ago
  1. Because it's not human. We set ourselves apart in everything; that's why we think we're special. The same applies to inventions, e.g. why a monkey can't hold a patent.
  2. Time. New "products", whether art, engineering, or science, all take time for humans to create. So value comes from time, because time creates scarcity and demand.
  3. Talent. Due to the time factor, talent and practice are desired traits of a human. You mention that a talented human can do something in just a few days that might take someone else years, but it might only take them a few days because they spent years learning.
  4. Perfection. Striving for perfection is a human experience. A robot doing something perfect isn't impressive; a human doing something perfect is amazing. Even the most amateur creator can strive for perfection.

Think about paintings vs prints. Paintings are much more valuable because they aren't created as quickly as the prints are. Even the most amateur artwork is more valuable as a physical creation than as a copy, like a child's crayon drawing.

This even applies to digital art: the first instance of something is the most difficult thing to create, and everything after that is just a copy. Yes, this does apply to some current gen-AI tech, but very soon that will no longer be the case.

This change, from humans asking for something and having other humans create it, to humans asking for something and having computers create it, is a loss of part of what makes us human.

[–] kibiz0r@midwest.social 20 points 3 months ago* (last edited 3 months ago)

If you're looking for a universally-applicable moral framework, join the thousands of years of philosophers striving for the same.

If you're just looking for an explanation that allows you to put one foot in front of the other...

Laws exist for us to spell out the kind of society we'd like to live in. Generally, we prefer that individuals be able to participate in cultural conversations and offer their own viewpoint. And generally, we prefer that groups of people don't accumulate massive amounts of power over other groups of people.

Dedicating your life to copying another artist's style is participating in a cultural conversation, and you won't be able to keep yourself from infusing your own lived experience into the work of copying them: the details you focus on getting exactly right, the slight mistakes that repeat themselves or morph over the course of your career, the pieces you prioritize replicating over and over again. It says something about who you are, and that's worth appreciating.

Now, if you're trying to pass those off as originals and not your own tributes, then you're deceiving people and that's a problem because you're damaging the cultural conversation by lying about the elements you're putting into it. Even so, sometimes that's an interesting artistic enterprise in itself. Such as when artists pretend to be someone else. Warhol was a fan of this. His whole career revolved around messing with concepts of authenticity in art.

As for power, you don't gain that much leverage over another artist by simply copying their work. And if you riff on it to upstage them, you're just inviting them to do the same to you in turn.

But if you can do that mechanically and quickly, so that for any creative twist they put out there to undermine your attempts to upstage them you have an instant response at little cost to yourself, now you're in a position of great power. The more the original artist produces, the stronger your advantage over them becomes. The more they try, the harder it is for them to win.

We don't generally like when someone has accumulated tons of power, especially when they subsequently use that power to prevent others from being able to compete.

Edit: I'd also caution against trying to make an objective test for whether a particular act of copying is "okay". This invites two things:

  1. Artists can't help but question what's acceptable and play around with it. They will deliberately transgress in order to make a point, and you'll be forced to admit that your objective test is worthless.

  2. Tech companies are relentlessly horny for this kind of objective legal framework, because they want to be able to algorithmically approach the line and fill its border to fractal levels of granularity without technically crossing the line. RealPage, DoorDash, Uber, Amazon, OpenAI all want "illegal" to be as precisely and quantitatively defined as possible, so that they can optimize for "barely legal".

[–] Eccitaze@yiffit.net 11 points 3 months ago

I actually had some thoughts about this and posted this in a similar thread:

First, that artist will only learn from a handful of artists, instead of from every artist's entire body of work all at the same time. They will also eventually develop their own unique style and voice--the art they make will reflect their own views in some fashion, instead of being a poor facsimile of someone else's work.

Second, mimicking the style of other artists is a generally poor way of learning how to draw. Just leaping straight into mimicry doesn't really teach you any of the fundamentals like perspective, color theory, shading, anatomy, etc. Mimicking an artist that draws lots of side profiles of animals in neutral lighting might teach you how to draw a side profile of a rabbit, but you'll be fucked the instant you try to draw that same rabbit from the front, or if you want to draw a rabbit at sunset. There's a reason why artists do so many drawings of random shit like cones casting a shadow, or a mannequin doll doing a ballet pose, and it ain't because they find the subject interesting.

Third, an artist spends anywhere from dozens to hundreds of hours practicing. Even if someone sets out expressly to mimic someone else's style and teaches themselves the fundamentals, it's still months and years of hard work and practice, and a constant cycle of self-improvement, critique, and study. This applies to every artist, regardless of how naturally talented or gifted they are.

Fourth, there's a sort of natural bottleneck in how much art that artist can produce. The quality of a given piece of art scales roughly linearly with the time the artist spends on it, and even artists that specialize in speed painting can only produce maybe a dozen pieces of art a day, and that kind of pace is simply not sustainable for any length of time. So even in the least charitable scenario, where a hypothetical person explicitly sets out to mimic a popular artist's style in order to leech off their success, it's extremely difficult for the mimic to produce enough output to truly threaten their victim's livelihood. In comparison, an AI can churn out dozens or hundreds of images in a day, easily drowning out the artist's output.

And one last, very important point: artists who trace other people's artwork and upload the traced art as their own are almost universally reviled in the art community. Getting caught tracing art is an almost guaranteed way to get yourself blacklisted from every art community and banned from every major art website I know of, especially if you're claiming it's your own original work. The only way it's even mildly acceptable is if the tracer explicitly says "this is traced artwork for practice, here's a link to the original piece, the artist gave full permission for me to post this." Every other creative community, writing and music included, takes a similarly dim view of plagiarism, though it's much harder to prove outright than with art. Given this, why should the art community treat someone differently just because they laundered their plagiarism with some vector multiplication?

[–] aStonedSanta@lemm.ee 11 points 3 months ago (2 children)

They are copying your intellectual property and digitizing its knowledge. It's a bit different because it's PERMANENT. With humans, knowledge can be lost, forgotten, or ignored. In these LLMs that's not an option. Also, the skill factor is a big issue imo. It's very easy to set up an LLM to make AI imagery nowadays.

[–] Melt@lemm.ee 1 points 3 months ago

Your first sentence is truth

[–] Dkarma@lemmy.world -4 points 3 months ago (2 children)

Your first sentence is false.

[–] ForgotAboutDre@lemmy.world 4 points 3 months ago

They are copying. These LLMs are a product of their input, and solely a product of their input. It's why they'll often directly output their training data. Training on more data reduces this effect, which is why all these companies are stealing data and getting aggressive about stopping others from stealing theirs.

[–] aStonedSanta@lemm.ee 2 points 3 months ago

Proof? I am fairly certain I am correct, but I will gladly admit fault. This whole LLM thing is indeed new to me also.

[–] MinFapper@startrek.website 9 points 3 months ago* (last edited 3 months ago)

So, before the invention of the camera, the most valuable and most popular creative skill was replicating people on canvas as realistically as possible. Yes, we remember famous exceptions like Picasso, but by sheer number of paintings the most common were portraits of rich people.

After cameras took that job away, prevailing art changed to become more abstract and "creative". But that still pissed off a lot of people who had spent a very long time honing a skill that was suddenly no longer in demand.

What we're seeing is a similar shift. I think future generations of artists will value color theory, composition, etc. over specific brush stroke techniques. AI will make art much more accessible once enough time has passed for AI assisted art to be considered art. Make no mistake: it will always be people that actually create the art - AI will just reduce/remove the grunt work so they can focus more on creativity.

Now, whether billion-dollar corporations deserve to exploit the labor of millions of people is a whole separate conversation, but tl;dr: they don't, but they're going to anyway because there is little to stop them under current economic/governance models.

[–] Cornelius_Wangenheim@lemmy.world 9 points 3 months ago* (last edited 3 months ago) (1 children)

Artists who rip off other great works are still developing their talent and skills, which they can then go on to use to make original works. The machine will never produce anything original. It is only capable of mixing together things it has seen in its training set.

There is a very real danger of AI eviscerating artists' ability to make a living, leaving very few people with the financial means to practice their craft day in and day out and resulting in a dearth of good original art.

[–] Dkarma@lemmy.world 5 points 3 months ago

The machine will never produce anything original. It is only capable of mixing together things it has seen in its training set.

This is patently false and shows you don't know a single thing about how AI works.

[–] FooBarrington@lemmy.world 6 points 3 months ago (1 children)

There's a simple argument: when a human studies Van Gogh and develops their own style based on it, it's only a single person with very limited output (they can only paint so much in a single day).

With AI you can train a model on Van Gogh and similar paintings, and infinitely replicate this knowledge. The output is almost unlimited.

This means that the skills of every single human artist are suddenly worth less, and the possessions of the rich are suddenly worth more. Wealth concentration is poison for a society, especially when we are still reliant on jobs for survival.

AI is problematic as long as it shifts power and wealth away from workers.

[–] saplyng@lemmy.world 1 points 3 months ago (1 children)

Just as an interesting "what if" scenario - a human making the effort to stylize Van Gogh is okay, and the problem with the AI model is that it can spit out endless results from endless sources.

What if I made a robot and put the Van Gogh painting AI in it, never releasing it elsewhere? The robot can visualize countless iterations of the piece it wants to make, but its only way to share it is to actually paint it, much the same way a human must.

Does this scenario devalue human effort? Is it an acceptable use of AI? If so, does that mean that the underlying issue with AI isn't that it exists in the first place, but that its distribution is what makes it devalue humanity?

*This isn't a "gotcha", I just want a little discussion!

[–] FooBarrington@lemmy.world 2 points 3 months ago

It's an interesting question! From my point of view, "devaluing human effort" (from an artistic perspective) doesn't really matter - humans will still be creating new and interesting art. I'm solely concerned about the shift in economic power/leverage, as this is what materially affects artists.

This means that if your robot creates paintings with an output rate comparable to a human artist, I don't really see anything wrong with it. The issue arises once you're surpassing the limits of the individual, as this is where the power starts to shift.

As an aside, I'm still incredibly fascinated by the capabilities and development of current AI systems. We've created almost universal approximators that exhibit complex behavior which was pretty much unthinkable 15-20 years ago (in the sense that it was expected to take much longer to achieve current results). Sadly, like any other invention, this incredible technology is being abused by capitalists and populists for profit and gain at the expense of everyone else.

[–] taaz@biglemmowski.win 4 points 3 months ago (1 children)

I'm guessing the closest counterargument would be about how close it comes to outright copying the original work?

[–] Vince@lemmy.world 1 points 3 months ago

I'm more trying to figure out why it's generally acceptable when a human does it vs when a machine does it.

I don't know for sure, but I think they would be able to adjust settings so that it looks nothing like any original work but still has the same style, as I've seen people do.

[–] wewbull@feddit.uk 2 points 3 months ago (1 children)

Dumb question: why do you feel you need to defend billion dollar companies getting even richer off somebody else's work?

Also, Van Gogh's works are in the public domain now.

[–] Vince@lemmy.world 4 points 3 months ago (2 children)

I'm not defending any companies, just thinking out loud, but I suppose I can see how it reads that way.

I was just asking myself why it feels wrong when a machine does it vs when a human does it. By your argument, would it be okay if some poor nobody had invented this technology and were using it, instead of a billion-dollar company? Is that why it feels wrong?

[–] tjsauce@lemmy.world 3 points 3 months ago (1 children)

The issue isn't the final, individual art pieces, it's the scale. An AI can produce sub-par art quickly enough to threaten the livelihood of artists, especially now that there is far too much art for anyone to consume and appreciate. AI art can win attention via spam, drowning out human artists.

[–] TheRealKuni@lemmy.world 1 points 3 months ago

The issue isn't the final, individual art pieces, it's the scale. An AI can produce sub-par art quickly enough to threaten the livelihood of artists, especially now that there is far too much art for anyone to consume and appreciate. AI art can win attention via spam, drowning out human artists.

This is literally what people said about photography.

And they were right: painting became less prolific as photography became available to the masses. People generally don't get their portrait painted.

But people also generally don’t go to photo studios to have their picture taken, either, and those used to be in every shopping mall. But now we all have camera phones that adjust lighting and color and focus for us, and we can send a sufficiently decent picture off to be printed and mailed back to us. For those who want it done professionally that option is available and will be higher quality, just like portrait painting is still available, but technology has shrunk those client pools.

Technology always changes job markets. Generative AI will, just as others have done. People will lose careers they thought were stable, and it will be awful, but this isn’t anything unique to generative AI.

The only constant is that things change.

[–] wewbull@feddit.uk 2 points 3 months ago

A generative AI's only purpose is to generate "works", so its only purpose in consuming "works" is to use them as reference. It exists to produce derivative works. Therefore the person feeding the original work into the machine is the one making the choice about how that work will be used.

A human can consume a "work" simply to admire it, to be entertained by it, to be educated by it, to have an emotion evoked, or finally to produce another work based on it. Here the consumer of the work is the one deciding how it will be used; they are the ones responsible.

[–] SkyNTP@lemmy.ml 2 points 3 months ago (1 children)

Generative AI is incapable of contributing new material, because generative AI does not sense the world through a unique perspective. So the comparison to creators who incorporate prior artists' work is a false one. Artists are allowed to incorporate other artists' work in the same way that scientists cite others' work without it being plagiarism.

In art, in science, we stand on the shoulders of giants. AI models do not stand on the shoulders of giants. AI models just replicate the giants. Society has been fooled to think otherwise.

[–] TheRealKuni@lemmy.world 1 points 3 months ago

Generative AI is a tool. It is neither a creator nor an artist, any more than paintbrushes or cameras are. The problem arises not with the tool itself but with how it is used. The creativity must come from the user, just like the way Procreate or GIMP or even photography works.

The skill factor is certainly lower than other forms of artistic expression, but that is true of photography vs painting as well.

I am not trying to say all uses of generative AI are art, any more than every photograph is art. But that doesn't mean it cannot be a tool to create art, part of the workflow as utilized by someone with a vision willing to take the time to get the end product they want.

Generative AI doesn’t stand on the shoulders of giants, but neither does a camera.

[–] Dkarma@lemmy.world 2 points 3 months ago

Easier than that:

Google has been doing this for years for their search engine and no one said a thing. Why do you care now that it's a different program scanning your media?

[–] Dark_Dragon@lemmy.dbzer0.com 1 points 3 months ago (1 children)

So try doing Disney-style animation with similar characters and a similar style of storyline, and start profiting from it. Let's see if Disney the corporation remains silent or sues you into oblivion.

[–] blazera@lemmy.world 0 points 3 months ago (1 children)

Damn you musta hated Don Bluth

[–] Dark_Dragon@lemmy.dbzer0.com 0 points 3 months ago (1 children)

I don't hate him. It's just that when a corporation steals an individual's idea or data, it's "for research and stuff." If it's the other way around, we as individuals have to face a lawsuit.

So I hope they sue Nvidia and the other big corporations that are harvesting our data for AI.

[–] blazera@lemmy.world 1 points 3 months ago

That's the thing: nothing's being stolen. Beauty and the Beast didn't up and disappear because Bluth and Fox Studios made Anastasia. There are style similarities, but it is undeniably its own work. Don't even get started on the style sharing going on in the thousands of anime out there.

[–] primrosepathspeedrun@lemmy.world 1 points 3 months ago

tl;dr: copyright law has always been nonsense designed to protect corporations and fuck over artists+consumers

but now corpo daddy and corpo mommy are fighting, and we need to take sides.

and it's revealing that copyright law never existed to protect artists, and will continue to not do that, only MUCH more obviously now, and all the cucks who whined about free culture violating laws are reaping what they fucking sowed.

[–] bluestribute@lemmy.world -1 points 3 months ago

If someone studies Van Gogh and reproduces images, they're still not making Van Gogh - they're making their art inspired by Van Gogh. It still has their quirks and qualms and history behind the brush making it unique. If a computer studies Van Gogh and reproduces those images, it's reproducing Van Gogh. It has no quirks or qualms or history. It's just making Van Gogh as if Van Gogh was making Van Gogh.

[–] FiskFisk33@startrek.website -4 points 3 months ago

Agreed.

What if Banksy sued anyone who shared or archived photos of his wall art? That wouldn't make sense.