this post was submitted on 10 Jul 2023
219 points (100.0% liked)
Technology
37737 readers
424 users here now
A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.
Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.
This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.
founded 2 years ago
So will a human. Let's stop extending copyright law. Also, how do you know it read the book and not a summary of it, of which there are loads on the internet?
This is why I am pro AI art. It’s no different than a human taking inspiration from other work.
Nobody comes up with anything truly original. It’s all inspired by someone before them.
I don’t know how anyone can be pro AI anything, other than the pigs making money from it. Only bad can result from it. And it will.
I don’t know how anyone can be anti AI.
It’s just a tool. To say that only bad can result from it is a bold claim that doesn’t make any sense.
Can you provide an example?
Just wait and see.
Only bad can result from it, just because some company is making profits?
No, that wasn’t a correlation. Only bad can result from it. Companies making a profit also love it, but those are separate things.
I'm not anti AI; I'm against companies making profit from other people's work without paying them.
everything is a remix
Here is an alternative Piped link(s): https://piped.video/X9RYuvPCQUA
Piped is a privacy-respecting open-source alternative frontend to YouTube.
In the case of ChatGPT, it's hard to tell. OpenAI won't even reveal what their training dataset was.
Researchers have done some tests to tease this out, and they're pretty confident that it has read quite a few books and memorized them verbatim. See one of my favorite papers in a while, Speak, Memory: An Archaeology of Books Known to ChatGPT/GPT-4.
Beyond that, it'll try to summarize a book, but it often can't do so successfully, although it will act like it has. Give it a try on something even a little bit obscure and it can't really give you good information. I tried with Blindsight, which is not something that's in the popular culture, but it was a Hugo nominee, so it's not completely obscure either. It knew who the characters were and had a general sense of the tone, but it completely fabricated every major plot point that I asked about. I did the same with A Head Full of Ghosts, which is better known but still not something everyone has read, and it did the same thing.
One thing I found that's really fun is to ask it a question and then follow up with something like "Are you sure about that?" It'll almost always correct itself and make up something else. It'll go one step further and incorporate details you ask about. Give it a prompt like "Are you sure this character died of natural causes? I thought they were killed by Bob" and it will very frequently say you're right and make up a story along those lines that's plausible within the text. It doesn't work on really popular stuff; you can't convince it that Optimus Prime saves Luke Skywalker in RotJ, but for anything even a little less well known it'll tell you details that it's making up out of whole cloth, with complete confidence.
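That follow-up trick is easy to automate if you're talking to a model through a chat API: keep the transcript, append a skeptical user turn, and resend it. A minimal sketch, assuming the common messages-as-dicts chat shape (the function name and sample conversation are illustrative, not any particular library's API):

```python
def challenge_followup(history, challenge="Are you sure about that?"):
    """Append a skeptical follow-up turn to an existing chat transcript.

    history is a list of {"role": ..., "content": ...} dicts in the
    common chat-API shape; resending the returned list lets you see
    whether the model stands by its answer or invents a new one.
    """
    return history + [{"role": "user", "content": challenge}]

# Toy transcript; the assistant turn would come from the model.
convo = [
    {"role": "user",
     "content": "How does the narrator die in this novel?"},
    {"role": "assistant",
     "content": "(model's first answer here)"},
]
convo = challenge_followup(convo)
```

Comparing the answer before and after the challenge, across questions you know the real answers to, makes the sycophantic self-correction easy to measure.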
Another highly amusing thing to do is to ask it about nonexistent chemicals or antenna types (try "inverted tripole" or "dinitrogen azide"). It always generates plausible-sounding but incorrect answers (eloquent bullshit).