this post was submitted on 30 Jul 2023
223 points (100.0% liked)

Technology

Greg Rutkowski, a digital artist known for his surreal style, opposes AI art, but his name and style have frequently been used by AI art generators without his consent. In response, his work was removed from the Stable Diffusion training dataset in version 2.0. However, the community has now created a tool that emulates Rutkowski's style against his wishes, using a LoRA model. While some argue this is unethical, others justify it on the grounds that Rutkowski's art was already widely used in Stable Diffusion 1.5. The debate highlights the blurry line between innovation and infringement in the emerging field of AI art.

you are viewing a single comment's thread
[–] [email protected] 2 points 1 year ago

Thanks for clarifying. There are a lot of misconceptions about how this technology works, and I think it's worth making sure that everyone in these thorny conversations has the right information.

I completely agree with your larger point about culture; to the best of my knowledge we haven't seen any real ability to innovate, because the current models are built to replicate the form and structure of what they've seen before. They're getting extremely good at combining those elements, but they can't really create anything new without a person involved. There's a risk of significant stagnation if we leave art to the machines, especially since we're already seeing problems when new models ingest the output of existing models in their training data (so-called "model collapse"). I don't know how likely that outcome is; I think it's much more likely that these tools end up replacing humans for mundane, "boring" tasks rather than genuinely creative work.
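That training-on-outputs feedback loop is easy to demonstrate with a toy sketch (my own illustration, not anything from the article): treat the "model" as a simple Gaussian fit to samples, and make each generation fit itself only to data drawn from the previous generation's fit. The distribution's diversity (its variance) steadily collapses.

```python
# Toy model-collapse sketch (illustrative assumption, not how diffusion
# models are actually trained): generation 0 is the "real" data; every
# later generation is fit only to samples from the previous generation.
import random
import statistics

random.seed(42)  # fixed seed so the run is reproducible

mu, sigma = 0.0, 1.0   # generation 0: the true data distribution
n = 20                 # samples per generation (small, to exaggerate the drift)
initial_sigma = sigma

for generation in range(200):
    # Draw training data from the *previous* generation's model...
    samples = [random.gauss(mu, sigma) for _ in range(n)]
    # ...then refit the model to that synthetic data.
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)  # biased fit; variance tends to shrink

print(f"sigma after 200 generations: {sigma:.6f} (started at {initial_sigma})")
```

Each refit loses a little spread (the biased variance estimate, plus sampling noise with no fresh real data to correct it), so over many generations the model converges toward a narrow, repetitive distribution. The real concern for image models is analogous: variety erodes when synthetic output dominates the training set.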

And you're absolutely right that these are not artificial minds; the language models remind me of a quote from David Langford in his short story Answering Machine: "It's so very hard to realize something that talks is not intelligent." But we are getting to the point where the question of "how will we know" isn't purely theoretical anymore.