this post was submitted on 30 Jul 2023
184 points (97.9% liked)
Programming (programming.dev)
Where their creativity lies at the moment seems to be a controlled mixing of previous things, which in some areas satisfies the definition of creativity, such as with artistic images or some literature. Less so with things that require precision to work, such as analysis or programming. The difference between LLMs and humans in using past works to bring new things to life is that a human is actually (usually) thinking throughout the process about what to add and what to subtract. Right now, human feedback on the results is still important. I can't think of a single example where we've successfully unleashed LLMs into the world, confident enough in their output not to filter it. It's still only a tool of generation, albeit a very complex one.
What's troubling throughout the whole explosion of LLMs is how the safety of their potential is still an afterthought, or a "we'll figure it out" mentality. Not a great look for AGI research. I want to say that if LLMs had been a door to AGI we would have been in serious trouble, but I'm not even sure I can say they haven't sparked something, since an AGI that gains awareness fast enough sure isn't going to reveal itself if it has even a small idea of what humans are like. And LLMs were trained on apparently the whole internet, so...
I like your comment regarding the (usually) thoughtful effort that goes into creative endeavours. I know that there are those who claim that deliberate effort is antithetical to the creative process, but even serendipitous results have to be deliberately examined and refined. Until a system can say "oh, that's interesting enough to investigate further" I'm not convinced that it can be called creative. In the context of LLMs, I think that means giving them access to their own outputs in some way.
As for the dangers, I'm pretty sure that most of us, even those of us looking for danger, will not recognize it until we see it. That doesn't mean we should just barrel ahead, though. Just the opposite. That's why we need to move slowly. Our reflexes and analytical capabilities are pretty slow in comparison to the potential rate of development.
That's what the AutoGPT-style tools do (among many others now): they break the task apart into smaller pieces and feed the results back in, building up a final result, and that works a lot better than a single one-time mass input. The biggest advantage, and the main reason these were developed, was to keep the LLM on course without deviating.
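The loop described above can be sketched in a few lines. This is a minimal illustration, not AutoGPT's actual implementation: `call_llm` and `decompose` are hypothetical stand-ins for real model API calls, and in a real agent the model itself would produce the subtask list.

```python
# Sketch of an AutoGPT-style loop: break a goal into subtasks, run each
# one, and feed the accumulated results back into the next prompt so the
# model stays on course.

def call_llm(prompt: str) -> str:
    # Placeholder for a real model API call (hypothetical).
    return "result for: " + prompt.splitlines()[-1]

def decompose(goal: str) -> list[str]:
    # Placeholder: a real agent would ask the model for this plan.
    return [f"step {i} of {goal!r}" for i in range(1, 4)]

def run_agent(goal: str) -> str:
    context: list[str] = []  # earlier results, fed back on each iteration
    for subtask in decompose(goal):
        prompt = "\n".join(context + [subtask])  # prior results + new subtask
        result = call_llm(prompt)
        context.append(result)  # feedback loop that builds up the answer
    return context[-1]  # the final, built-up result

print(run_agent("summarize a report"))
```

The key point is the `context` list: each subtask sees what came before, which is what keeps the model from drifting compared to one giant prompt.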
Thanks, I didn't know that. I guess I need to broaden my reading.
It changes so much so fast. For a video source covering the latest developments, I'd recommend the YouTube channel "AI Explained".
Thanks, I'll check it out.