Apparently, stealing other people's work to create a product for money is now "fair use" according to OpenAI, because they are "innovating" (stealing). Yeah. Move fast and break things, huh?

"Because copyright today covers virtually every sort of human expression—including blogposts, photographs, forum posts, scraps of software code, and government documents—it would be impossible to train today’s leading AI models without using copyrighted materials," wrote OpenAI in the House of Lords submission.

OpenAI claimed that the authors in that lawsuit "misconceive[d] the scope of copyright, failing to take into account the limitations and exceptions (including fair use) that properly leave room for innovations like the large language models now at the forefront of artificial intelligence."

[–] [email protected] 74 points 10 months ago (6 children)

Try to train a human comedian to make jokes without ever allowing him to hear another comedian's jokes, watch a movie, read a book or magazine, or see a TV show. I expect the jokes would be pretty weak.

[–] [email protected] 118 points 10 months ago (15 children)

A comedian isn't forming a sentence based on which word is most likely to appear after the previous one. This is such a bullshit argument that reduces human competency to "monkey see thing to draw thing" and completely overlooks the craft and intent behind creative works. Do you know why ChatGPT uses certain words over others? Probability. It decided, as a result of its training, that one word would appear after the previous in certain contexts. It absolutely doesn't take into account things like "maybe this word would be better here because the sound and syllables maintain the flow of the sentence".
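For anyone curious, here's a minimal sketch of what "picking the next word by probability" looks like in practice (toy numbers, not anything from a real model):

```python
import random

# Hypothetical probabilities for the word following "The cat sat on the"
next_word_probs = {"mat": 0.55, "floor": 0.25, "roof": 0.15, "moon": 0.05}

def sample_next_word(probs):
    words = list(probs)
    weights = list(probs.values())
    # Draw one word at random, weighted by its probability
    return random.choices(words, weights=weights, k=1)[0]

print(sample_next_word(next_word_probs))  # usually "mat"
```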

Baffling takes from people who don't know what they're talking about.

[–] [email protected] 65 points 10 months ago* (last edited 10 months ago) (5 children)

I wish I could upvote this more than once.

What people always seem to miss is that a human doesn't need billions of examples to be able to produce something that's kind of "eh, close enough". Artists don't look at billions of paintings. They look at a few, but do so deeply, absorbing not just the most likely distribution of brushstrokes, but why the painting looks the way it does. For a basis of comparison, I did an art and design course last year and looked at about 300 artworks in total (course requirement was 50-100). The research component on my design-related degree course is one page a week per module (so basically one example from the field the module is about, plus some analysis). The real bulk of the work humans do isn't looking at billions of examples: it's looking at a few, and then practicing the skill and developing a process that allows them to convey the thing they're trying to express.

If the AI models were really doing exactly the same thing humans do, the models could be trained without any copyright infringement at all, because all of the public domain and creative commons content, plus maybe licencing a little more, would be more than enough.

[–] [email protected] 25 points 10 months ago (1 children)

Exactly! You can glean so much from a single work - not just about the work itself, but about who created it, what ideas they were trying to express, what that tells us about the world they lived in, and how they saw that world.

This doesn't even touch the fact that I'm learning to draw not by looking at other drawings, but by looking at whatever it is I'm trying to draw. I know at a base level a drawing is a series of shapes made by hand, whether it's through a digital medium or traditional pen/pencil and paper. But the skill isn't being able to replicate other drawings; it's being able to convert something I can see into a drawing. If I'm drawing someone sitting in a wheelchair, I'll get the pose of them sitting in the wheelchair, but I can add details I want to emphasise or remove details I don't want. There's so much that goes into creative work, and I'm tired of arguing with people who have no idea what it takes to produce creative works.

[–] [email protected] 26 points 10 months ago (11 children)

It seems that most of the people who think what humans and AIs do is the same thing are not actually creatives themselves. Their level of understanding of what it takes to draw goes no further than "well anyone can draw, children do it all the time". They have the same respect for writing, of course, equating the ability to string words together in an email with the process it takes to write a brilliant novel or script. They don't get it, and to an extent that's fine - not everybody needs to understand everything. But they should at least have the decency to listen to the people that do get it.

[–] [email protected] 9 points 10 months ago* (last edited 10 months ago)

Children learn by watching others. We are trained from millions of examples starting from before birth.

[–] [email protected] 5 points 10 months ago (2 children)

When people say that the "model is learning from its training data", it means just that - not that it is human, and not that it learns exactly like humans do. It doesn't make sense to judge boats on how well they simulate human swimming patterns, only on how well they perform their task.

Every human has the benefit of training, as a baby, on the things around them, and of being trained by those around them, building a foundation for all later skills. Generative models rely on many text and image pairs to describe things to them because they lack the ability to poke, prod, rotate, and disassemble for themselves.

For example, when a model takes in a thousand images of circles, it doesn't "learn" a thousand circles. It learns what a circle GENERALLY is like - the concept of it. That representation, along with random noise, is how you create images with them. The same happens for every concept the model trains on, everything from "cat" to more complex things like color relationships, reflections, and lighting. Machines are not human, but they can learn despite that.
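As a toy illustration of that "concept plus noise" idea (a drastic simplification, not a real image model), imagine circles described only by their radius:

```python
import random

# 1,000 training "circles", each just a radius near 10 units
training_radii = [random.gauss(10.0, 1.5) for _ in range(1000)]

# The learned "concept": not 1,000 memorized circles, just a general summary
learned_radius = sum(training_radii) / len(training_radii)

def generate_circle():
    # New sample = learned concept + fresh random noise
    return learned_radius + random.gauss(0.0, 1.5)

print(round(learned_radius, 2))     # ~10.0, the general idea of "circle"
print(round(generate_circle(), 2))  # a new circle, not a copy of any input
```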

[–] [email protected] 3 points 10 months ago (1 children)

It makes sense to judge how closely LLMs mimic human learning when people are using that similarity as a defense of AI companies scraping copyrighted content, claiming that banning AI scraping is as nonsensical as banning human learning.

But when it's pointed out that LLMs don't learn very similarly to humans, and require scraping far more material than a human does, suddenly AIs shouldn't be judged by human standards? I don't know if it's intentional on your part, but that's a pretty classic example of a motte-and-bailey fallacy. You can't have it both ways.

[–] [email protected] 3 points 10 months ago* (last edited 10 months ago) (1 children)

In general I agree with you, but AI doesn't learn the concept of what a circle is. AI reproduces the most fitting representation of what we call a circle, but there is no understanding of the concept of a circle. This may sound like nitpicking, but I think it's important to make the distinction.

That is why current models aren't regarded as actual intelligence, although people already call them that...

[–] [email protected] 4 points 10 months ago (2 children)

What you count as "one" example is arbitrary. In terms of pixels, you're looking at millions right now.

The ability to train faster using fewer examples in real time, similar to what an intelligent human brain can do, is definitely a goal of AI research. But right now, we may be seeing from AI what a below-average human brain could accomplish with hundreds of lifetimes to study.

If the AI models were really doing exactly the same thing humans do, the models could be trained without any copyright infringement at all, because all of the public domain and creative commons content, plus maybe licencing a little more, would be more than enough.

I mean, no - if you only ever look at public domain stuff, you literally wouldn't know the state of the art, which has historically been produced for profit. Even the most untrained artist "doing their own thing" watches Disney/Pixar movies and listens to copyrighted music.

[–] [email protected] 9 points 10 months ago (2 children)

If we're going by the number of pixels being viewed, then you have to use the same measure for both humans and AIs - and because AIs have to look at billions of images while humans do not, the AI still requires far more pixels than a human does.
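As a rough back-of-the-envelope comparison (illustrative numbers only - real training images vary wildly in resolution):

```python
pixels_per_image = 1920 * 1080   # assume full-HD views: ~2.07 million pixels each
human_examples = 300             # the ~300 artworks mentioned upthread
ai_examples = 5_000_000_000      # "billions" of training images

print(f"human: {pixels_per_image * human_examples:.1e} pixels")  # ~6.2e+08
print(f"model: {pixels_per_image * ai_examples:.1e} pixels")     # ~1.0e+16
```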

And humans don't require the most modern art in order to learn to draw at all. Sure, if they want to compete with modern artists, they would need to look at modern artists (for which educational fair use exists, and again the quantity of art being used by the human for this purpose is massively lower than what an AI uses - a human does not need to consume billions of artworks from modern artists in order to learn what the current trends are). But a human could learn to draw, paint, sculpt, etc purely by only looking at public domain and creative commons works, because the process for drawing, say, the human figure (with the right number of fingers!) has not changed in hundreds of years. A human can also just... go outside and draw things they see themselves, because the sky above them and the tree across the street aren't copyrighted. And in fact, I'd argue that a good artist should go out and find real things to draw.

OpenAI's argument is literally that their AI cannot learn without using copyrighted materials in vast quantities - too vast for them to simply compensate all the creators. So it genuinely is not comparable to a human, because humans can, in fact, learn without using copyrighted material. If OpenAI's argument is actually that their AI can't compete commercially with modern art without using copyrighted works, then they should be honest about that - but then they'd be showing their hand, wouldn't they?

[–] [email protected] 2 points 10 months ago (1 children)

It isn't wrong to use copyrighted works for training. Let me quote an article by the EFF here:

First, copyright law doesn’t prevent you from making factual observations about a work or copying the facts embodied in a work (this is called the “idea/expression distinction”). Rather, copyright forbids you from copying the work’s creative expression in a way that could substitute for the original, and from making “derivative works” when those works copy too much creative expression from the original.

Second, even if a person makes a copy or a derivative work, the use is not infringing if it is a “fair use.” Whether a use is fair depends on a number of factors, including the purpose of the use, the nature of the original work, how much is used, and potential harm to the market for the original work.

and

Even if a court concludes that a model is a derivative work under copyright law, creating the model is likely a lawful fair use. Fair use protects reverse engineering, indexing for search engines, and other forms of analysis that create new knowledge about works or bodies of works. Here, the fact that the model is used to create new works weighs in favor of fair use as does the fact that the model consists of original analysis of the training images in comparison with one another.

What you want would swing the doors open for corporate interference - hindering competition, stifling unwanted speech, and monopolization like nothing we've seen before. There are very good reasons people have these rights, and we shouldn't be trying to change this. Ultimately, it's apparent to me that you are in favor of these things: that you believe artists deserve a monopoly on ideas and non-specific expression, to the detriment of anyone else. If I'm wrong, please explain to me how.

If we’re going by the number of pixels being viewed, then you have to use the same measure for both humans and AIs - and because AIs have to look at billions of images while humans do not, the AI still requires far more pixels than a human does.

Humans benefit from years of evolutionary development and corporeal bodies to explore and interact with their world before they're ever expected to produce complex art. AI needs huge datasets to understand patterns and make up for this disadvantage. Nobody pops out of the womb with fully formed fine motor skills, pattern recognition, understanding of cause and effect, shapes, comparison, counting, vocabulary related to art, and spatial reasoning. Datasets are huge and filled with image-caption pairs to teach models all of this from scratch. AI isn't human, and we shouldn't judge it against humans, just like we don't judge boats on their rowing ability.

And humans don’t require the most modern art in order to learn to draw at all. Sure, if they want to compete with modern artists, they would need to look at modern artists (for which educational fair use exists, and again the quantity of art being used by the human for this purpose is massively lower than what an AI uses - a human does not need to consume billions of artworks from modern artists in order to learn what the current trends are). But a human could learn to draw, paint, sculpt, etc purely by only looking at public domain and creative commons works, because the process for drawing, say, the human figure (with the right number of fingers!) has not changed in hundreds of years. A human can also just… go outside and draw things they see themselves, because the sky above them and the tree across the street aren’t copyrighted. And in fact, I’d argue that a good artist should go out and find real things to draw.

AI doesn't require the most modern art in order to learn to make images either, but the range of expression would be limited, just like a human's in this situation. You can see this in cave paintings and early sculptures. A model wouldn't be limited to the same degree, but it would still be limited.

It took us 100,000 years to get from cave drawings to Leonardo da Vinci. This is just another step for artists, like the camera obscura was in the past. It's important to remember that early man was as smart as we are; they just lacked the interconnectivity to exchange ideas that we have.

[–] [email protected] 2 points 10 months ago (1 children)

I think the difference in artistic expression between modern humans and humans in the past comes down to the material available (like the actual material to draw with).

Humans can draw without ever seeing an image. Blind people can create art and draw things because we have a different understanding of the world around us than AI has. No human artist needs to look at a thousand - or even one - picture of a banana to draw one.

The way AI sees and "understands" the world and how it generates an image is fundamentally different from how the human brain conveys the object banana into an image of a banana.

[–] [email protected] 2 points 10 months ago (1 children)

Sure, if they want to compete with modern artists, they would need to look at modern artists

Which is the literal goal of Dall-E, SD, etc.

But a human could learn to draw, paint, sculpt, etc purely by only looking at public domain and creative commons works

They could definitely learn some amount of skill, I agree. I'd be very interested to see the best that an AI could achieve using only PD and CC content. But you'd agree that it would look very different from modern art, just as an alien who had only consumed earth media from 100+ years ago would be unable to relate to us.

the sky above them and the tree across the street aren't copyrighted.

Yeah, I'd consider that PD/CC content that such an AI would easily have access to. But obviously the real sky is something entirely different from what is depicted in Starry Night, Star Wars, or H.P. Lovecraft's description of the cosmos.

OpenAI's argument is literally that their AI cannot learn without using copyrighted materials in vast quantities

Yeah, I'd consider that a strong claim on their part; what they really mean is, it's the easiest way to make progress in AI, and we wouldn't be anywhere close to where we are without it.

And you could argue "convenient that it both saves them money, and generates money for them to do it this way", but I'd also point out that the alternative is they keep the trained models closed source, never using them publicly until they advance the tech far enough that they've literally figured out how to build/simulate a human brain that is able to learn as quickly and human-like as you're describing. And then we find ourselves in a world where one or two corporations have this incredible proprietary ability that no one else has.

Personally, I'd rather live in the world where the information about how to do all of this isn't kept for one or two corporations to profit from; I'd rather live in the version where they publish their work publicly, early, and often, show that it works, and people are able to reproduce it, open source it, train their own models, and advance the technology in a space where anyone can use it.

You could hypothesize a middle ground where they do the research but aren't allowed to profit from it without licensing every bit of data they train on. But the reality of AI research is that it only happens to the extent that it generates revenue. It's been that way for the entire history of AI. Douglas Hofstadter has been asking deep, important questions about AI as it relates to consciousness for like 60 years (e.g. GEB, I Am a Strange Loop), but there's a reason he didn't discover LLMs and tech companies did. That's not to say his writings are meaningless - in fact I think they're more important than ever - but he was never going to get to this point with a small team of grad students, a research grant, and some public domain datasets.

So it's hard to disagree with OpenAI there; AI definitely wouldn't be where it is without them doing what they've done. And I'm a firm believer that unless we figure our shit out with energy generation soon, the earth will be an uninhabitable wasteland. We're playing a game of climb-the-Kardashev-scale, we opted for the "burn all the fossil fuels as fast as possible" strategy, and now we're at the point where we either spend enough energy fast enough to figure out the tech needed to survive this, or we suffocate on the fumes. The clock is ticking, and AI may be our best bet at saving the human race that doesn't involve an inordinate number of people dying.

[–] [email protected] 4 points 10 months ago (1 children)

OpenAI are not going to make the source code for their model accessible to all to learn from. This is 100% about profiting from it themselves. And using copyrighted data to create open source models would seem to violate the very principles the open source community stands for - namely that everybody contributes what they agree to, and everything is published under a licence. If the basis of an open source model is a vast quantity of training data from a vast quantity of extremely pissed off artists, at least some of the people working on that model are going to have a "are we the baddies?" moment.

The AI models are also never going to produce a solution to climate change that humans will accept. We already know what the solution is, but nobody wants to hear it, and expecting anyone to listen to ChatGPT and suddenly change their minds about using fossil fuels is ludicrous. And an AI that is trained specifically on knowledge about the climate and technologies that can improve it, with the purpose of innovating some hypothetical technology that will fix everything without humans changing any of their behaviour, categorically does not need the entire contents of ArtStation in its training data. AIs that are trained to do specific tasks, like the ones trained to identify new antibiotics, are trained on a very limited set of data, most of which is not protected by copyright and any that is can be easily licenced because the quantity is so small - and you don't see anybody complaining about those models!

[–] [email protected] 2 points 10 months ago (1 children)

OpenAI are not going to make the source code for their model accessible to all to learn from

OpenAI isn't the only company doing this, nor is their specific model the knowledge that I'm referring to.

The AI models are also never going to produce a solution to climate change that humans will accept.

It is already being used to further fusion research beyond anything we've been able to do with standard algorithms.

We already know what the solution is, but nobody wants to hear it

Then it's not a solution. That's like telling your therapist, "I know how to fix my relationship, my partner just won't do it!"

expecting anyone to listen to ChatGPT and suddenly change their minds about using fossil fuels is ludicrous

Lol. Yeah, I agree, that's never going to work.

categorically does not need the entire contents of ArtStation in its training data.

That's a strong claim to make. Regardless of the ethics involved, or the problems AI can solve today, the fact is we're seeing rapid advances in AI research as a direct result of these ethically dubious models.

In general, I'm all for the capitalist method of artists being paid their fair share for the work they do, but on the flip side, I see a very possible mass extinction event on the horizon, which could cause suffering the likes of which humanity has never seen. If we assume that is the case, and we assume AI has a chance of preventing it, then I would prioritize that over people's profits today. And I think it's perfectly reasonable to say I'm wrong.

And then there's the problem of actually enforcing any sort of regulation, which would be so much more difficult than people here are willing to admit. There's basically nothing you can do even if you wanted to. Your Carlin example is exactly the defense a company would use: "I guess our AI just happened to create a movie that sounds just like Paul Blart, but we swear it's never seen the film. Great minds think alike, I guess, and we sell only the greatest of minds".

[–] [email protected] 4 points 10 months ago

Humans learn mostly from real life. Go touch some grass

[–] [email protected] 2 points 10 months ago

When you look at one painting, is that the equivalent of one instance of the painting in the training data? There is an infinite amount of information in the painting, and each time you look you process more of that information.

I'd say that for any given painting you look at in a museum, you process at least a hundred mental images of aspects of it. A painting on your wall could easily be seen ten thousand times.

[–] [email protected] 21 points 10 months ago (2 children)

That's what humans do, though. Maybe not probability directly, but we all know that some words should be put in a certain order. We still operate within standard norms that apply to a particular group of people. LLMs just go about it in a different way, but they achieve the same general result. If I'm drawing a human, that means there's a 'hand' here and a 'head' there. 'Head' is a weird combination of pixels that mostly looks like this; 'hand' looks kinda like that. It all depends on how the model is structured, but tell me that's not very similar to a simplified version of how humans operate.

[–] [email protected] 19 points 10 months ago (2 children)

Yeah, but the difference is we still choose our words. We can still alter sentences on the fly. I can think of a sentence and understand that verbs go after the subject, but I still have the cognition to alter the sentence to have the effect I want. The thing lacking in LLMs is intent, and I've yet to see anyone tell me why a generative model decides to draw more than six fingers. As humans we know hands generally have five fingers (and that some people's don't), so unless we wanted to draw a person with a different number of fingers, we wouldn't. A generative art model can't help drawing extra fingers, because all it understands is that "finger + finger = hand" - it has no concept of when to stop.

[–] [email protected] 9 points 10 months ago (1 children)

And that's the reason why LLM-generated content isn't considered creative.

I do believe that the person using the device has a right to copyright the unique method they used to generate the content, but the content itself isn’t anything worth protecting.

[–] [email protected] 15 points 10 months ago (2 children)

You say that yet I initially responded to someone who was comparing an LLM to what a comedian does.

There is no unique method, because there's hardly anything unique you can do. Two people using Stable Diffusion to produce an image are putting in the same amount of work. One might put more time into crafting the right prompt, but that's not you doing the work.

If 90% of the work is handled by the model, and you just layer on whatever extra thing you wanted, that doesn't mean you created the thing. It also doesn't mean you have much control over the output. You're effectively negotiating with this machine to produce what you want.

[–] [email protected] 4 points 10 months ago (1 children)

Wouldn’t that lead to the same argument as originally brought against photography, though?

A photographer is effectively negotiating with the sun, the sky and everything else to hopefully get the result they are looking for on their device.

[–] [email protected] 10 points 10 months ago (1 children)

One difference is that the photographer has to go to the places they're taking pictures of.

Another is that photography isn't comparable to paintings, and it never has been. I'm willing to bet photography and paintings have never coexisted in a contest. Yet when people say their generative art is comparable to what artists have been producing by hand, they are admitting that generative art has more in common with photography than it does with hand-crafted art - while wanting the prestige and recognition those artists get for their work.

[–] [email protected] 3 points 10 months ago (1 children)

more time into crafting the right prompt

That's not work to you? My company pays me to spend time doing the right thing, even though the computer does most of the work.

I see where you are going, but your argument also invalidates other forms of human interaction and creation.

In my country, copyright can only be granted if a certain amount of (human) work went into something. Any work.
The difficult part is finding out what's enough, and what kinds of work qualify for some kind of protection, even partial protection.
The difficulty was never creating something, but proving that someone did or didn't put enough work into it.
I think we can hold generated or assisted goods to the same standard.

Putting a simple prompt together should probably not be granted protection, as no significant work went into it. But refining it, editing the result... maybe that's enough. That's really up to society to decide.

At the same time, we have to balance the power of machines against human work, so that human work doesn't get totally invalidated, but rather shifted and treated as a sub-type.
Machines have already replaced a lot of work, including creative work - book printing, forging, producing food... The scary part about generative AI is mainly the speed at which it is spreading.

[–] [email protected] 12 points 10 months ago

So, as a data analyst, a lot of my work is done through a computer, but I could apply the same skills if someone handed me a piece of paper with data printed on it and told me to come up with solutions to its problems. I don't need the computer to do what I do; it makes it easier to manipulate data, but the degree of problem solving required needs to be done by a human, and that's why it's my job. If a machine could do it, then it would be doing it - but it isn't, because contrary to what people believe about data analysis, you have to be somewhat creative to do it well.

Crafting a prompt is an exercise in trial and error. It's work, but it's not skilled work. It doesn't take talent or practice to do. Whatever the prompt, you are still at the mercy of the machine.

Even in the case you've presented, I have to ask: at what point does a human editing the output of a generative model make it your own work and not the machine's? How much do you have to change? Can you give me a percentage?

Machines were intended to automate the tedious tasks we all have to suffer, to free up our brains for more engaging things, which might include creative pursuits. Automation exists to make your life easier, not to rob you of life's pursuits or your livelihood. It never should've been used to produce creative work, and I find the attempts to equate this abomination's outputs to what artists have been doing for years utterly deplorable.

[–] [email protected] 4 points 10 months ago

I don’t choose my words man. I get a vague sense of the meaning I want to convey and the words just form themselves.

[–] [email protected] 3 points 10 months ago

As an artist you draw with an understanding of the human body, though. An understanding current models don't have because they aren't actually intelligent.

Maybe when a human is an absolute beginner at drawing, they will think about the individual lines and even replicate how other people draw stuff that then looks like a hand.

But eventually they will realise (hopefully - otherwise they may get frustrated and stop drawing) that you need to understand the hand to draw one. Its mass, its concept, the idea of what a hand is.

This may sound very abstract and strange but creative expression is more complex than replicating what we have seen a million times. It's a complex function unique to the human brain, an organ we don't even scientifically understand yet.

[–] [email protected] 8 points 10 months ago (2 children)

That's not the point though. The point is that the human comedian and the AI both benefit from consuming creative works covered by copyright.

[–] [email protected] 14 points 10 months ago (1 children)

Yeah, except a machine is owned by a company and doesn't consume the same way. It breaks down copyrighted works into data points so it can find the best way of putting those data points together again. If you understand anything at all about how these models work, you know they do not consume media the same way we do. It is not an entity with a thought process or consciousness (despite what the misleading marketing of "AI" would have you believe); it's an optimisation algorithm.

[–] [email protected] 17 points 10 months ago (1 children)

It's a glorified autocomplete.

[–] [email protected] 5 points 10 months ago (1 children)

It's so funny that people treat this as something new. This was Grammarly's whole schtick since before ChatGPT, so how different is Grammarly's AI?

[–] [email protected] 5 points 10 months ago (4 children)

Here is the bigger picture: the vast majority of tech-illiterate people think something is AI because, duh, it's called AI.

It's literally just the power of branding and marketing on the minds of poorly informed humans.

Unfortunately, this is essentially a reverse Turing Test.

The vast majority of humans do not know anything about AI, and a huge majority of them can barely tell the difference between output from what is basically brute-force, whole-internet plagiarism-and-synthesis software (in some, but not all, of its current forms) and actual human-created content.

To me this basically just means that about 99% of the time, most humans are literally NPCs, and they only do actually creative and unpredictable things very, very rarely.

[–] [email protected] 11 points 10 months ago* (last edited 10 months ago) (1 children)

And human comedians regularly get called out when they outright steal others' material and present it as their own.

The word for this is plagiarism.

And in OpenAI's framework, when used in a relevant commercial context, they are functionally operating and profiting off of the world's most comprehensive plagiarism software.

[–] [email protected] 5 points 10 months ago (1 children)

A comedian isn't forming a sentence based on which word is most likely to appear after the previous one.

Neither is an LLM. What you're describing is a primitive Markov chain.
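For reference, a primitive word-level Markov chain looks like the sketch below (toy corpus; an LLM conditions on the entire context, not just the previous word):

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat the dog sat on the mat".split()

# Count which words follow which in the corpus
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

word = "the"
output = [word]
for _ in range(6):
    word = random.choice(transitions[word])  # depends ONLY on the current word
    output.append(word)
print(" ".join(output))
```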

You may not like it, but brains really are just glorified pattern recognition and generation machines. So yes, "monkey see thing to draw thing", except a really complicated version of that.

Think of it this way: if your brain wasn't a reorganization and regurgitation of the things you have observed before, it would just generate random noise. There's no such thing as "truly original" art or it would be random noise. Every single word either of us is typing is the direct result of everything you and I have observed before this moment.

Baffling takes from people who don't know what they're talking about.

Ironic, to say the least.

The point you should be making is that a corporation will make the above argument up to, but not including, the point where they have to treat AIs ethically. So that's the way to beat them. If they're going to argue that they have created something that learns and creates content like a human brain, then they should need to treat it like a human: ensure it is well compensated, ensure it isn't being overworked or enslaved, ensure it is being treated "humanely". If they don't want to do that - if they want it to just be a well-built machine - then they need to license all the proprietary data they used to build it. Make them pick a lane.

[–] [email protected] 4 points 10 months ago (2 children)

Neither is an LLM. What you’re describing is a primitive Markov chain.

My description might've been indicative of a Markov chain, but the actual framework uses matrices, because you need to be able to store and compute a huge amount of information at once - which is what matrices are good for. They're used in animation, if you didn't know.

What it actually uses is irrelevant; how it uses those things is the same as in a regression model - the difference is scale. A regression model looks at how related variables are in producing an outcome, computing weights to give you the best result. This was the machine learning boom a couple of years ago, when TensorFlow became really popular.
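A minimal sketch of that regression idea (toy data; the point is that "learning" means fitting weights, and LLMs fit billions of them instead of three):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))              # 100 examples, 3 input variables
true_weights = np.array([2.0, -1.0, 0.5])
y = X @ true_weights + rng.normal(scale=0.1, size=100)

# Fit weights relating inputs to outcomes by least squares
fitted = np.linalg.lstsq(X, y, rcond=None)[0]
print(fitted)  # close to [2.0, -1.0, 0.5]
```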

LLMs are an evolution of the same idea. I'm not saying it's not impressive, because what they were able to do is very cool. What I take issue with is the branding, the marketing, and the plagiarism. I happen to sit at the intersection of working in the same field, being an avid fan of classic sci-fi, and being a writer.

It's easy to look at what people have created throughout history and think "this looks like that", and on a point-by-point basis you'd be correct, but the creation of that thing is shaped by the lens of the person creating it. Someone might hear a recent George Carlin joke and then find essentially the same joke in a newspaper from 200 years ago. Did George Carlin steal the idea? No. Was he aware of that information? I don't know. But Carlin regularly called upon his own experiences, so it's likely he was referencing an event from his past similar to the one from 200 years ago. He might've subconsciously absorbed the information.

The point is that the way these models have been trained is unethical. They used material they had no license to use, and they've admitted the models couldn't work as well as they do without stealing other people's work. I don't think they're genuinely taking the position that it's intelligent - that was a marketing ploy from the beginning. They're taking the position that they should be allowed to use the data they stole because there was no other way.

[–] [email protected] 2 points 10 months ago* (last edited 10 months ago)

You do know that comedians copy each other's material all the time, though? Either making the same joke, or slightly adapting it.

So in the context of copyright vs. model training, I fail to see how the exact process of the model is relevant. At the end of the day, copyrighted material goes in and material based on that copyrighted material comes out.

[–] [email protected] 20 points 10 months ago* (last edited 10 months ago) (2 children)

There's this linguistic problem: when one word is used for two different things, it becomes difficult to tell them apart. "Training" or "learning" is a very poor choice of word to describe the calibration of a neural network; the actor and the action are both fundamentally different from the accepted meaning. To start with, human learning is active, whereas machine learning is strictly passive: it's something done by someone, with the machine as a tool. Teachers know very well that's not how it happens with humans.

When I compare training a neural network with how I trained to play clarinet, I fail to see any parallel. The two are about as close as a horse and a seahorse.

[–] [email protected] 12 points 10 months ago

A comedian walks on stage and says, "Why is there a mic here?"

[–] [email protected] 10 points 10 months ago

AIs are not humans. Humans cannot read millions of texts in seconds, and cannot spit out millions of outputs at the same time.
