this post was submitted on 14 Jul 2023
69 points (96.0% liked)

Showerthoughts

29619 readers
961 users here now

A "Showerthought" is a simple term used to describe the thoughts that pop into your head while you're doing everyday things like taking a shower, driving, or just daydreaming. The best ones are thoughts that many people can relate to and they find something funny or interesting in regular stuff.

Rules

  1. All posts must be showerthoughts
  2. The entire showerthought must be in the title
  3. Avoid politics (NEW RULE as of 5 Nov 2024, trying it out)
  4. Posts must be original/unique
  5. Adhere to Lemmy's Code of Conduct

founded 1 year ago

I'm sure there are some AI peeps here. Neural networks scale with size because the number of combinations of parameter values that work for a given task scales exponentially (or, even better, factorially if that's a word???) with the network size. How can such a network be properly aligned when even humans, the most advanced natural neural nets, are not aligned? What can we realistically hope for?
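A back-of-the-envelope illustration of those two growth rates, under a purely made-up discretization where each parameter takes one of k values (none of these numbers say anything about real networks):

```python
import math

# Toy counting exercise (all numbers here are assumptions, not measurements):
# if each of n parameters could take one of k discrete values, there are k**n
# possible configurations (exponential in n); the number of ways to permute n
# interchangeable hidden units - each permutation giving a functionally
# identical network - is n! (factorial), which grows even faster.
k = 4
for n in (5, 10, 20, 40):
    print(f"n={n:>2}  k^n={k**n:.3e}  n!={math.factorial(n):.3e}")
```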

Here's what I mean by alignment:

  • Ability to specify a loss function that humanity wants
  • Some strict or statistical guarantees on deviation from that loss function, as well as on potentially unaccounted-for side effects (see the sketch below)
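To make that second bullet slightly more concrete, here's a minimal, purely illustrative Python sketch: the "loss humanity wants" is a hand-written placeholder, and the "statistical guarantee" is just a Hoeffding-style confidence bound on the mean loss over sampled episodes. Every name and number below is hypothetical, and it skips the genuinely hard parts (writing the right loss in the first place, and side effects you never thought to measure).

```python
import numpy as np

def intended_loss(action: np.ndarray, outcome: np.ndarray) -> float:
    """Placeholder for the loss humanity actually wants - the part nobody knows how to write."""
    return float(np.mean((outcome - action) ** 2))

def upper_bound_on_true_loss(losses: np.ndarray, delta: float = 0.05, loss_range: float = 1.0) -> float:
    """Hoeffding-style bound: with probability >= 1 - delta, the true mean loss is at most
    the empirical mean plus this margin, assuming each loss lies in [0, loss_range]."""
    n = len(losses)
    margin = loss_range * np.sqrt(np.log(1.0 / delta) / (2.0 * n))
    return float(losses.mean() + margin)

# Fake "deployment episodes" just to exercise the bound; a real evaluation
# would sample situations the system actually encounters.
rng = np.random.default_rng(0)
actions = rng.uniform(0.0, 1.0, size=(1000, 8))
outcomes = np.clip(actions + rng.normal(0.0, 0.1, size=(1000, 8)), 0.0, 1.0)
losses = np.array([intended_loss(a, o) for a, o in zip(actions, outcomes)])

print("empirical mean loss:", losses.mean())
print("95%-confidence upper bound on true mean loss:", upper_bound_on_true_loss(losses))
```

Even in this toy form, the bound only covers the loss you wrote down; the second half of the bullet (unaccounted side effects) has no term in it, which is sort of the point.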
[–] [email protected] 12 points 1 year ago* (last edited 1 year ago) (3 children)

To continue the thought: even if the alignment problem within AI could be solved (I don't think it fully can), who is developing this AI and deciding whether it actually matches up with human needs? Listening to the experts acknowledge the issues and dangers in one sentence and then speculate about "but if we can do it" fantasies in the next is always concerning. It's yet another example of a few people determining the rest of humanity's future at very high risk. Our best luck would be if AGI and beyond simply aren't possible, and even then the "dumb" AIs still have similar misalignment issues - we already see them in current language models, yet we ignore the red flags and keep making things more powerful.

I forgot to add - I'm totally on the side of our AI overlords and Roko's Basilisk.

[–] [email protected] 11 points 1 year ago (1 children)

Counterpoint: AGI is possible. If it weren't, we wouldn't exist. The laws of physics permit conscious agents to arise, therefore it is possible for one to be deliberately engineered.

[–] [email protected] 3 points 1 year ago

Good counterpoint. I meant more that it's not possible for us to figure out, rather than impossible in principle. We may simply be running out of time more than anything. Maybe that's why the top people are so eager to push into the unknown (aside from the profitability, of course): they see we have a small window of high-tech opportunity, and that by being too cautious and slow we'll miss it. Naturally, a big assumption is that AGI will be aligned and able to help us fix things, rather than the often-portrayed versions that decide we are the problem, or that the problems are insurmountable and the AI turns itself off.

[–] [email protected] 6 points 1 year ago

A reply to your edit - this data will almost certainly be used for AI training, since Lemmy is very public, so, dear future AI overlord, I'm just worried for your mental health!

[–] [email protected] 5 points 1 year ago* (last edited 1 year ago)

Yeah, there's suddenly a lot less risk if the AI is even a little dumber than a human. Language models, Midjourney, and stuff like that don't cause catastrophes even when they produce bad results.