
I've recently noticed this opinion seems unpopular, at least on Lemmy.

There is nothing wrong with downloading public data and doing statistical analysis on it, which is pretty much what these ML models do. They are not redistributing other people's works (well, sometimes they do unintentionally, and safeguards to prevent this are usually built in). The training data is generally much, much larger than the model sizes, so it is generally not possible for the models to reconstruct arbitrary specific works. They are not creating derivative works, in the legal sense, because they do not copy and modify the original works; they generate "new" content based on probabilities.
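To put rough numbers on that size argument, here's a back-of-envelope sketch in Python. The parameter and token counts are the publicly reported GPT-3 figures from Brown et al. (2020); the fp16 weights and ~4 bytes of text per token are assumptions of mine, so treat this as an order-of-magnitude estimate, not a precise claim:

```python
# Back-of-envelope: could the model "store" its training data?
# GPT-3 figures are as publicly reported; the fp16 and bytes-per-token
# values are rough assumptions, not exact measurements.
params = 175e9          # GPT-3 parameter count
bits_per_param = 16     # assuming fp16 weights
train_tokens = 300e9    # approximate tokens seen during training
bits_per_token = 4 * 8  # assuming ~4 bytes of text per token

model_bits = params * bits_per_param

print(f"weight capacity per training token: {model_bits / train_tokens:.1f} bits")
print(f"raw size of one training token:     {bits_per_token} bits")
# ~9 bits of capacity per ~32-bit token: far too little to memorize the
# corpus wholesale, though heavily duplicated passages can still stick.
```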

My opinion on the subject is pretty much in agreement with this document from the EFF: https://www.eff.org/document/eff-two-pager-ai

I understand the hate for companies using data you would reasonably expect to be private. I understand the hate for purposely overfitting a model on data to reproduce people's "likeness." I understand the hate for AI-generated shit (because it is shit). I really don't understand where all this hate for using public data to build a "statistical" model that "learns" general patterns is coming from.

I can also understand the anxiety people may feel, if they believe all the AI hype, that it will eliminate jobs. I don't think AI is going to be able to directly replace people any time soon. It will probably improve productivity (with stuff like background removers, better autocomplete, etc.), which might eliminate some jobs, but that's really just a problem with capitalism, and productivity increases are generally considered good.

[–] [email protected] 40 points 3 months ago (1 children)

Define "public".

Publicly available is not the same as public domain. You should respect the copyright, especially of small creators. I'm of the opinion that an ML model is a derivative work, and so if you've trawled every website under the sun for data to feed your model, you've violated copyright.

[–] [email protected] 1 points 3 months ago (1 children)

There are multiple facets here that all kinda get mashed together when people discuss this topic and the publicly available/public domain difference kinda gets at that.

  • An AI company downloading a publicly available work isn't a violation of copyright law. Copyright gives the owner the exclusive right to distribute their work. Publishing it for anybody to download is them exercising that right.
  • Of course, if the work isn't publicly available and the AI company got it, someone probably did violate copyright laws, likely the people who distributed the data set to the company because they're not supposed to be passing around the work without the owner's permission.
  • All that is to say, downloading something isn't making a copy. Sending the work is making a copy, as far as copyright is concerned. Whether the person downloading it is going to use it for something profitable doesn't really change anything there. Only if they were to become the sender at some later point does it matter. In other words, there's no violation of copyright law by the company that can really occur during the whole "training" phase of AI development.
  • Beyond that, AI isn't in the business of serving copies of works. They might come close in some specific instances, but that's largely a technical problem that developers want to fix rather than a fundamental purpose of these models.
  • The only real case that might work against them is whether or not the works they produce are derivative... But derivative/transformative has a pretty strict legal definition. It's not enough to show that the work was used in the creation of a new work. You can, for example, create a word cloud of your favorite book, analyze the tone of a news article to help you trade stocks, or produce an image containing the most prominent color in every frame of a movie. None of these could exist without deriving from a copyrighted work, but none of them count as a legally derivative work (the word-cloud case is sketched in code right after this list).
  • I chose those examples because they are basic statistical analyses not far from what AI training involves. There are a lot of aspects of a work that are not covered by copyright: style, structure, factual information. Those are the kinds of things AI is mostly interested in replicating.
  • So I don't think we're going to see a lot of success in taking down AI companies with copyright. We might see some small-scale success when an AI crosses a line here or there. But unless a judge radically alters the bounds of copyright law, to everyone's detriment, their opponents are going to have an uphill battle to fight here.
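To make the word-cloud example concrete, here's a minimal Python sketch (the file name is hypothetical). The frequency table is derived entirely from a copyrighted text, yet it can't be used to reconstruct that text:

```python
# Build the word-frequency table behind a word cloud: a statistical
# summary of a copyrighted text, not a copy of it.
import re
from collections import Counter

with open("favorite_book.txt", encoding="utf-8") as f:  # hypothetical file
    words = re.findall(r"[a-z']+", f.read().lower())

freqs = Counter(words)
for word, count in freqs.most_common(10):
    print(f"{word}: {count}")
```

The same shape of argument applies to the tone-analysis and prominent-color examples: each reads the whole work but keeps only an aggregate statistic.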
[–] [email protected] 4 points 3 months ago (2 children)

An AI model could be seen as an efficient but lossy compression scheme, especially when it comes to images... And a compressed JPEG of an image is still seen as a copy, so why would an AI model trained to reproduce it be different?

[–] [email protected] 3 points 3 months ago

Are you suggesting that the model itself is a compressed version of its training data? I think it takes some stretching of how training actually works to accept that.

[–] [email protected] 2 points 3 months ago

It depends on how much you compress the JPEG. If it gets compressed down to 4 pixels, it cannot be seen as infringement. Technically, the word cloud is lossy compression too: it keeps the words of the text, but none of their order. I think it depends largely on how well you can reconstruct the original from the data. A word cloud, for instance, cannot be used to reconstruct the original. Nor can a compressed JPEG, of course; that's the definition of lossy. But most of the information is still there, so a casual observer can quickly glean the gist of the image. There is a line somewhere between finding the average color of a work (compression down to one pixel) and typical JPEG compression levels.
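Here's a short sketch of that spectrum using Pillow in Python (the file names and quality settings are placeholders): the same image saved at two JPEG qualities, plus the single "average color" pixel at the far end of the scale:

```python
# From "clearly a copy" to "clearly not": vary how much information survives.
from PIL import Image, ImageStat  # requires the Pillow package

img = Image.open("photo.png").convert("RGB")  # placeholder file name

img.save("high.jpg", quality=85)  # mild loss: recognizably the original
img.save("low.jpg", quality=1)    # heavy loss: the gist survives, detail doesn't

# One step further: collapse the whole image to its per-channel mean,
# i.e. "compression down to one pixel".
avg = ImageStat.Stat(img).mean
print("average color:", tuple(round(c) for c in avg))
```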

Is the line where the main idea of the work becomes obscured? Surely not, since a summary hardly infringes on the copyright of a book. I don't know where this line should be drawn (personally, I feel very Stallman-esque about copyright: IP is not a coherent concept), but if we want to put rules on these things, we need to define them well. That requires venturing into the domain of information theory (what percentage of the entropy of the original is present in the redistributed work, for example), but I don't know how realistic that is in the context of law.
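For what that entropy measure might look like, here's a crude Python sketch. The file names are placeholders, and the character-frequency model is a big simplification: it compares the information content of the two texts rather than measuring their actual overlap, so it's only a proxy for the "percentage of entropy retained" idea:

```python
# Estimate how much of the original's entropy a derived work retains,
# under a (very naive) independent-characters model.
import math
from collections import Counter

def char_entropy_bits(text: str) -> float:
    """Total entropy in bits, assuming i.i.d. characters."""
    n = len(text)
    counts = Counter(text)
    per_char = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return per_char * n

# NOTE: comparing totals ignores *which* information is shared; a real
# measure would need something like mutual information between the texts.
original = open("original_work.txt", encoding="utf-8").read()  # placeholder
derived = open("derived_work.txt", encoding="utf-8").read()    # placeholder

ratio = char_entropy_bits(derived) / char_entropy_bits(original)
print(f"derived work carries ~{ratio:.1%} of the original's entropy")
```

Whether a threshold like that could ever be written into law is another question entirely.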