this post was submitted on 19 Jan 2024
250 points (92.5% liked)
Technology
you are viewing a single comment's thread
This is the wrong way around. The NYT wants money for the use of its "intellectual property". This is about money for property owners. When building rents go up, you wouldn't expect construction workers to benefit, right?
In fact, more money for property owners means that workers lose out, because where else is the money going to come from? (well, "money")
AI, like all previous forms of automation, allows us to produce more and better goods and services with the same amount of labor. On average, society becomes richer. Whether these gains should go to the rich, or be more evenly distributed, is a choice that we, as a society, make. It's a matter of law, not technology.
The NYT lawsuit is about sending these gains to the rich. The NYT has already made its money from its articles. The authors were paid, in full, and will not get any more money. Giving money to these property owners will not make society any richer. It just moves wealth to property owners for being property owners. It's about more money for the rich.
If OpenAI has to pay these property owners for no additional labor, then it will eventually have to increase subscription fees to balance the cash flow. People who pay a subscription probably feel that it benefits them, whether they use it for creative writing, programming, or entertainment. They must feel that the benefit is worth at least that much money.
So the subscription fees represent a part of the gains to society. If a part of these fees is paid to property owners who did not contribute anything, then that part of the social gains is funneled to property owners, i.e. mainly the ultra-rich, simply for being owners/ultra-rich.
I do not find that to be an apt analogy. This is more like someone setting up shop in the NYT's lobby, stealing issues, and cutting them up to make their own newspaper that they sell from said lobby, without permission or compensation. OpenAI just refined a technology to parasitize off of others' labor and is using it to seek rent over intellectual property that they don't own or have rights to use.
I'm going to have to strongly disagree here. The subscription fees are only going to the ultra-wealthy who are using LLMs to parasitize off of labor. The NYT is not who I'm worried about having their livelihoods destroyed; it's the individual artists, actors, and creatives, as well as those whose jobs are being replaced with terrible chatbots that cannot actually do the work but are implemented anyway to drive lay-offs and boost stock prices. The NYT and other suits are merely a proxy, because the wealth gap makes it nearly impossible for those most impacted to successfully use the courts to remedy their situation.
The point is that the people who create some property don't get a cut when the property rises in value. You keep calling the intellectual property of the NYT labor. I think there's something there you seriously misunderstand.
That's an analogy for a normal practice in journalism. Like when other news media (other websites, TV, radio, ...) reports on what the NYT reports. I'm sure you have seen articles where it said something like "The NYT reported that...".
That's not what the case is mainly about. I'm not sure if anything like that is even mentioned.
I think that you are overlooking the part about aggressively competing against the original creation with something that is impossible without the existence of the initial creation. Also the part where the NYT isn't really the ones most impacted by the current pushes for adoption of LLMs and similar tech. It's not a case of punchcard computer operators becoming obsolete. It's a case of using technology to deny those involved in creating and evolving culture, along with those in the few remaining jobs that allow one to get by, the ability to make a living.
Humanity as a whole isn't benefiting, only the ultra-wealthy who are using and refining these tools for no other purpose but to further bludgeon and dehumanize workers, grow the number on the precipice of total ruin, and increase the wealth gap further. So, the NYT is merely playing the role of "the enemy of my enemy".
If the tools WEREN'T being used primarily to skim even more wealth and push more people into poverty, there would be no problems (especially if the result were reform of the currently awful IP laws). But we currently live in a world where billionaires are writing to profitable tech companies demanding mass lay-offs and deep salary cuts to increase stock prices, voice actors are thrown under the bus by their own union, and eating disorder helpline workers are fired en masse for unionizing, only to be replaced with chatbots that cause measurable harm to vulnerable people. OpenAI deserves to be shut down for the harm that they are enabling and profiting from.
At first, I laid out how a win for NYT will benefit mainly the wealthy. It will increase the wealth gap. Clearly you don't agree.
Could you please explain where you see an error in my reasoning/where it was not clear enough?
I think that the error in your reasoning is that OpenAI's tools are themselves greatly accelerating the expansion of the wealth gap. They have been greedy and reckless, though, and have pissed off other wealthy groups. The NYT doesn't care about the rest of us, but a victory for them might help establish precedents that help.
To put it another way, both orgs are terrible but, by negative impact on humanity, OpenAI is, measurably, magnitudes worse in multiple ways (harming workers, driving up demand for compute thus accelerating fossil fuel demand and global warming).
Would you be able to clarify your reasoning for thinking that OpenAI is less harmful?
It's not about who is less harmful. It is about what the effects of the precedent would be.
The precedent would be that the NYT can charge money for AI training. OpenAI is not likely to disappear if it loses, but even if it did, the NYT would just find some other buyer. A win for the NYT would establish that they can charge money for AI training on their archive. That's pure profit for the owners of the NYT. There is no reason why they should pay the authors again (or the printers, assistants, janitors, and anyone else involved in the making of the articles).
Whatever harm you see coming from OpenAI is not going away if the NYT wins. All a NYT win would mean is that owners of intellectual property get money without having to do work.
I don't think that we are entirely on the same page about the impact of the precedent. If the ruling is not comically constrained, it would lay out the path forward for other creators and IP owners. The vast majority of whom are individuals, who are the ones most harmed.
Honestly, the best possible (though unlikely in the current political climate) outcome would be an overhaul of IP laws to make them in any way sane, plus legally codified mechanisms to protect workers from career loss due to automation: taxes on AI and automation tools that replace humans, in order to fund re-skilling programs, as well as, ideally, publicly-funded organizations to pay cultural workers and remove the possibility of the extinction of the working artist, like is done with Ireland's Arts Council programs (I say this as one who works primarily in automation). Without such outcomes or a narrow ruling, yeah, it would effectively just lead to the NYT getting more money.
ETA: While we don't currently seem to be on the same page, I do want to say thank you for the civil conversation and good points, and my apologies for my ADHD habit of getting a bit verbose.
Everyone owns some property, but a tiny percentage of rich people own most of it. E.g., a quick Google search says that a daily newspaper contains 50,000 to 200,000 words. That's about as much as a novel. Seems about right. So even in a few months, a single newspaper exceeds the lifetime output of any single author.
It gets worse. Something like ChatGPT is trained on several hundred billion words. So even the NYT couldn't negotiate for more than a tiny part of the training data. For a single individual, the share would be so small that the bureaucracy of handling the payments would eat a chunk of it. They'd have to go through middlemen, like established publishers or stock photo sites. So a part of any money for "creators" would still be redirected to the rich. You can google how much Shutterstock contributors get for the image AI that was trained on their data.
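To put rough numbers on that share (purely illustrative figures, taken from the estimates above, not actual corpus statistics):

```python
# Back-of-envelope: what fraction of a "several hundred billion word"
# training corpus could the NYT archive plausibly represent?
words_per_issue = 100_000                 # midpoint of the 50k-200k estimate above
years = 10                                # an illustrative slice of the archive
nyt_words = words_per_issue * 365 * years # ~365 million words over a decade
corpus_words = 300_000_000_000            # "several hundred billion" training words

share = nyt_words / corpus_words
print(f"NYT share of corpus: {share:.4%}")  # → NYT share of corpus: 0.1217%
```

Even a decade of daily issues comes out at roughly a tenth of a percent of the corpus; any individual author's share would be orders of magnitude smaller still.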
Mind that companies like Meta and Apple have their fingers on a lot of user data. They can use their TOS to train on that data. In the long run, it would only get worse because AI companies have their fingers on the creations.
Ultimately, our system relies on people having to do something for others in order to get money. Even people like Zuckerberg or Bezos built companies that provided services that people wanted. The problem with Big Tech, with "enshittification", is that these companies are now in a position where they don't have to do that anymore. Anything that makes it possible to extract money just for being the owner of some property will make everything worse.
In Japan the law is that you can train AI on anything, unless it is a dataset made specifically for AI training (IIRC). You don't exactly need this provision. You would not put such a dataset on the open internet, if you don't want it to be free. But it might come in handy if a dataset gets leaked.
I am certain that human artists will not disappear any time soon. It's true that skill at manual drawing or painting is made less important. Digital artists, so far, could not emulate these looks well, certainly not without manually drawing on a graphics tablet. But digital artists are still working artists. I would think that the digital part of the skill set was already, pre-AI, the commercially more lucrative part.
I can see that genAI disrupts particular income streams, such as stock photography, or small commissions to draw role-playing characters. However, that does not mean that there will be a net loss of artist jobs or income. Say, pre-AI you could spend your limited money to either commission a single drawing or on a night out. Post-AI you might get 10 or a hundred images for the same money. You might be more inclined to take the images over the night out. In that case, waiters and bartenders would take the hit, and would have to become digital artists.
It's impossible to predict the eventual outcome. It depends on what people choose, what people value; on fashion. I am certain of one thing, though. As we need less and less labor to satisfy our basic needs, we will invest more into leisure, entertainment and such. Why is barista a thing? The luxury of a freshly made coffee, prepared with skill, has become fashionable.
Incidentally, the same worries (and mean-spirited attacks) also existed when photography was invented.
I agree on the social programs but I must point out that the issue is not new (and I'm not talking about photography or digital art). Which is why such programs exist, to some degree. We accept that this happens continuously for blue collar jobs and not just because of automation. We're not just making more and better robots, we're also switching from fossil fuels to renewables, from combustion engines to electric, and so on.
You may have heard of the loss of factory jobs over the last couple of decades as a social issue. Automation has been blamed for the so-called hollowing out of the middle class. These former middle-class jobs have disappeared, freeing up workers to, e.g., deliver food. So you're left with jobs that require a university degree at the top and a driving license at the bottom.
What's new is that AI is coming for white collar jobs. I think that explains a lot of the reaction. It's the people who think and write in newspapers and on the internet who have reason to worry about their cushy careers. The threat is to people like them and the people they know; their class, if you will. I believe (and hope) that AI will benefit society by putting everyone into the same boat again. If (or when) an apprentice electrician can do the same task as an electrical engineer with a Master's degree, then the middle is no more hollow. Sure, the engineer loses out, but people on average are richer and more equal in income.
The major threat to this scenario is policy choices that entrench elites. The NYT lawsuit is an attempt at just that.
Sorry for the rambling wall of text. Thank you, too, for your politeness. It's perfectly usual to talk past one another on occasion. I think many neurotypicals would not have reacted so well to my probing questions.