this post was submitted on 01 Sep 2023
192 points (93.6% liked)

Technology

Visual artists fight back against AI companies for repurposing their work::Three visual artists are suing artificial intelligence image-generators to protect their copyrights and careers.

top 50 comments
[–] [email protected] 47 points 1 year ago (4 children)

It seems pretty obvious to me that the artists should win this, assuming their images weren't poorly licensed. Training AI is absolutely a commercial use.

These companies adopted a run-fast-and-don't-look-back legal strategy, and now they're going to enter the 'find out' phase.

[–] [email protected] 13 points 1 year ago (1 children)

This is a pretty old story; the EFF already weighed in on it back in April.

[–] [email protected] 8 points 1 year ago (1 children)

"The Stable Diffusion model makes four gigabytes of observations regarding more than five billion images. That means that its model contains less than one byte of information per image analyzed (a byte is just eight bits—a zero or a one)."

What a great article, it really lays it out well and concisely. I like the above point especially.
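For anyone who wants to sanity-check that number, the arithmetic is trivial. A quick sketch, taking the quote's own figures of roughly 4 GB and 5 billion images at face value:

```python
# Back-of-the-envelope check of the figure above, using the quote's
# stated numbers: ~4 GB of model weights, ~5 billion training images.
model_size_bytes = 4 * 10**9   # ~4 gigabytes
num_images = 5 * 10**9         # ~5 billion images analyzed

bytes_per_image = model_size_bytes / num_images
print(f"{bytes_per_image:.2f} bytes per image")  # ~0.80, i.e. less than one byte
```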

[–] [email protected] 7 points 1 year ago

Yeah, there's gold wherever you look. I like:

First, copyright law doesn’t prevent you from making factual observations about a work or copying the facts embodied in a work (this is called the “idea/expression distinction”). Rather, copyright forbids you from copying the work’s creative expression in a way that could substitute for the original, and from making “derivative works” when those works copy too much creative expression from the original.

[–] [email protected] 9 points 1 year ago (1 children)

I would like to agree with you, but I have doubts this lawsuit will stick because of how much influence corporations have in US law.

[–] [email protected] 16 points 1 year ago* (last edited 1 year ago) (14 children)

There's nothing in copyright law that covers this scenario, so anyone that says it's "absolutely" one way or the other is telling you an opinion, not a fact.

[–] [email protected] 6 points 1 year ago

It's like suing an artist because they learnt to paint based on your paintings. But it's also not quite that, because the company has acquired your art and fed it into an application.

It's a very tricky area.

load more comments (13 replies)
[–] [email protected] 9 points 1 year ago (1 children)

I don't think it's obvious at all. Both legally speaking - there is no consensus around this issue - and ethically speaking because AIs fundamentally function the same way humans do.

We take in input, some of which is bound to be copyrighted work, and we mesh it all together to create new things. This is essentially how art works. Someone's "style" cannot be copyrighted, only specific works.

The government recently announced an inquiry into the copyright questions surrounding AI. They are going to make recommendations to Congress about what legislation, if any, they think would be a good idea. I believe there's a period of public comment until mid-October, if anyone wants to write a comment.

[–] [email protected] 7 points 1 year ago* (last edited 1 year ago) (45 children)

I really hope you're wrong.

And I think there's a difference. Humans can draw stuff, build structures, and make tools in a way that improves upon the previous iteration. Each artist adds something, or combines things in a way that makes for something greater.

AI art literally cannot do anything without human training data. It can't take a previous result, be inspired by it, and make it better. There has to be actual human input; it can't train itself on its own data the way humans do. It absolutely does not "work the same way".

AI art has NEVER made me feel like it's greater than the sum of its parts. Unlike art made by humans, which makes me feel that way all the time.

If a human does art without input, you still get "something".

With an AI, you don't have that. Without the training data, you have nothing.

[–] [email protected] 6 points 1 year ago* (last edited 1 year ago) (1 children)

If a human does art without input, you still get “something”.

Ok, take a human being who has never interacted with another human and has never consumed any content created by humans. Give him finger paint and have him paint something on a blank canvas. I think it wouldn't look any different from a chimpanzee doing finger painting.

it can’t train itself on its own data

In theory, it could. You would just need a way to quantify the "fitness" of a drawing. Today's models do this by comparing against actual content, but you don't need actual content in some circumstances. For example, look at AlphaZero, DeepMind's chess-playing AI from a few years back. All the AI knew was the rules of the game. It did not have access to any database of games. No data. The way it learned was by playing millions of games against itself.

It trained itself on its own data. And that AI, at the time, beat the leading chess engine, which had access to databases and other pre-built algorithms.

With art this gets trickier because art is subjective. You can quantify clearly whether you won or lost a chess game. How do you quantify if something is a good piece of art? If we can somehow quantify this, you could in theory create AI that generates art with no input.
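A toy sketch of that last point, with everything invented purely for illustration (the grid size, and a "fitness" function that just rewards rows brightening left to right): once you can score an output, simple search can improve it without ever seeing a single training example.

```python
import random

# Toy "no training data" learner: the program is given only a fitness
# function, never an example image. The fitness here is made up; a real
# system would need a far better notion of what makes output "good".

WIDTH, HEIGHT = 16, 16

def random_image():
    return [[random.random() for _ in range(WIDTH)] for _ in range(HEIGHT)]

def fitness(img):
    # Count adjacent pixel pairs that increase left to right.
    return sum(1 for row in img for a, b in zip(row, row[1:]) if b >= a)

def mutate(img):
    # Copy the image and nudge one random pixel a little.
    new = [row[:] for row in img]
    y, x = random.randrange(HEIGHT), random.randrange(WIDTH)
    new[y][x] = min(1.0, max(0.0, new[y][x] + random.uniform(-0.2, 0.2)))
    return new

best = random_image()
for _ in range(20_000):                # plain hill climbing, no dataset involved
    candidate = mutate(best)
    if fitness(candidate) > fitness(best):
        best = candidate

print("final fitness:", fitness(best), "of a possible", HEIGHT * (WIDTH - 1))
```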

We're in the infancy stages of this technology.

Humans can draw stuff, build structures, and make tools in a way that improves upon the previous iteration. Each artist adds something, or combines things in a way that makes for something greater.

AI can do all of the same. I know it's scary, but it's here and it isn't going away. AI-designed systems are becoming more and more commonplace: solar panels, medical devices, computer hardware, aircraft wings, potential drug compounds, etc. Certain things AI can be really good at, and designing things and testing them in a million different simulations is something that AI can do a lot better than humans.

AI art has NEVER made me feel like it’s greater than the sum of its parts

What is art? If I make something that means nothing and you find a meaning in it, is it meaningful? AI is a cold calculated mathematical model that produces meaningless output. But humans love finding patterns in noise.

Trust me, you will eventually see some sort of AI art that makes an impact on you. Math doesn't lie. If statistics can turn art into data and find the hidden patterns that make something impactful, then it can recreate it in a way that is impactful.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

The randomness used by current machine learning to train neural networks will never be able to do what a human does when they are being creative.

I have no doubt AI art will be able to "say" things. But it won't be saying anything that hasn't already been said.

And yes, AI can brute force its way to solutions in ways humans cannot beat. But that only works when there is a solution. So AI works with science, engineering, chess.

Art does not have a "solution". Every answer is valid. Humans are able to create good art, because they understand the question. "What is it to be human?" "Why are we here?" "What is adulthood?" "Why do I feel this?" "What is innocence?"

AI does not understand anything. All it is doing is mimicking art already created by humans, and coincidentally sometimes getting it right.

[–] [email protected] 5 points 1 year ago (3 children)

AI can brute force its way to solutions in ways humans cannot beat

It's not brute force. It seems like brute force because trying something millions of times sounds impossible to us. But these models identify patterns and then use those patterns to create output. It's learning. It's why we call it "machine learning". The mechanics are different from how humans do it, but fundamentally it's the same.

The only reason you know what a tree looks like is because you've seen a million different trees: trees in person, trees in movies, trees in cartoons, trees in drawings, etc. Your brain has taken all of these different trees and merged them together to create an "ideal" of the tree, sort of like Plato's "world of forms".

AI can recognize a tree through the same process. It views millions of trees and creates an "ideal" tree. It can then compare any image it sees against this ideal and determine the probability that it is or isn't a tree. Combine this with something that randomly pumps out images and you can now compare these generated images with the internal model of a tree and all of a sudden you have an AI that can create novel images of trees.

It's fundamentally the same thing we do. It's creating pictures of trees that didn't exist before. The only difference is it happens in a statistical model and it happens at a larger and faster scale than humans are capable of.
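A rough sketch of that two-step picture, with the data, the "images" (just short vectors of numbers), and the distance measure all invented for illustration: average many examples into a prototype, then generate candidates at random and keep whichever lands closest to it. Real image models are far more sophisticated; this only mirrors the shape of the argument.

```python
import random

# 1) Form an "ideal" by averaging many noisy examples of a hidden pattern.
# 2) Randomly generate candidates and keep the one closest to that ideal.

DIM = 8  # pretend each "image" is just 8 numbers

true_tree = [0.1, 0.9, 0.8, 0.2, 0.5, 0.7, 0.3, 0.6]   # hidden "real" tree pattern

def noisy_example(prototype, noise=0.3):
    return [p + random.uniform(-noise, noise) for p in prototype]

examples = [noisy_example(true_tree) for _ in range(10_000)]
ideal = [sum(col) / len(col) for col in zip(*examples)]   # the averaged "ideal tree"

def tree_score(img):
    # Higher score = closer to the ideal (negative squared distance).
    return -sum((a - b) ** 2 for a, b in zip(img, ideal))

candidates = ([random.random() for _ in range(DIM)] for _ in range(5_000))
best = max(candidates, key=tree_score)
print("best random candidate's score:", round(tree_score(best), 3))
```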

This is why the question of AI models having to pay copyright for content it parses is not obvious at all.

Art does not have a “solution”. Every answer is valid.

If every answer is valid then you would be sitting here saying that AI art is just as valid as anything else.

load more comments (3 replies)
[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

I think it's a mistake to see the software as an independent entity. It's a tool, just like the paintbrush or Photoshop. So yes, there isn't any AI art without the human, but that's true for every single art form.

The best art is a mix of different techniques and skills. Many digital artists are incorporating AI into their workflow, and there is definitely depth to what they are making.

load more comments (43 replies)
[–] [email protected] 3 points 1 year ago (1 children)

This is a tough one, because they are not directly making money from the copyrighted material.

Isn't this a bit the same as using short samples of somebody's song in your own song, or somebody getting inspired by somebody else's artwork and creating something similar?

[–] [email protected] 6 points 1 year ago (2 children)

If you're sampling music you ought to be compensating the licence holder, unless it's public domain or your work falls under a fair use exception.

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago)

Sampling music is literally placing parts of that music in the final product. Gen AI is not placing pieces of other people's art in the final image; in fact, it doesn't store any image data at all. Using an image in the training data is akin to an artist including that image on their moodboard, except the AI's moodboard has way more images, and the odds of the output being too similar to any single particular image are lower than when a human does it.

[–] [email protected] 2 points 1 year ago

Are you speaking legally or morally when you say someone "ought" to do something?

[–] [email protected] 6 points 1 year ago

Resistance is futile

[–] [email protected] 3 points 1 year ago

This is the best summary I could come up with:


NEW YORK (AP) — Kelly McKernan’s acrylic and watercolor paintings are bold and vibrant, often featuring feminine figures rendered in bright greens, blues, pinks and purples.

The Nashville-based McKernan, 37, who creates both fine art and digital illustrations, soon learned that companies were feeding artwork into AI systems used to “train” image-generators — something that once sounded like a weird sci-fi movie but now threatens the livelihood of artists worldwide.

The lawsuit may serve as an early bellwether of how hard it will be for all kinds of creators — Hollywood actors, novelists, musicians and computer programmers — to stop AI developers from profiting off what humans have made.

The case was filed in January by McKernan and fellow artists Karla Ortiz and Sarah Andersen, on behalf of others like them, against Stability AI, the London-based maker of text-to-image generator Stable Diffusion.

The teacher, Christoph Schuhmann, said he has no regrets about the nonprofit project, which is not a defendant in the lawsuit and has largely escaped copyright challenges by creating an index of links to publicly accessible images without storing them.

The idea that such a development is inevitable — that it is, essentially, the future — was at the heart of a U.S. Senate hearing in July in which Ben Brooks, head of public policy for Stability AI, acknowledged that artists are not paid for their images.


The original article contains 1,215 words, the summary contains 229 words. Saved 81%. I'm a bot and I'm open source!
