this post was submitted on 16 Dec 2024
16 points (100.0% liked)

SneerClub


Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]


In a recent Hard Fork (Hard Hork?) episode, Casey Newton and Kevin Roose described attending "The Curve", a conference in Berkeley organized and attended mostly by our very best friends. When asked about the most memorable session he attended there, Casey said:

That would have been a session called If Anyone Builds It, Everyone Dies, which was hosted by Eliezer Yudkowsky. Eliezer is sort of the original doomer. For a couple of decades now, he has been warning about the prospects of superintelligent AI.

His view is that there is almost no scenario in which we could build a superintelligence that wouldn't either enslave us or hurt us, kill all of us, right? So he's been telling people from the beginning, we should probably just not build this. And so you and I had a chance to sit in with him.

People fired a bunch of questions at him. And we should say, he's a really polarizing figure and, I think, sort of on one extreme of this debate. But I think he was also really early to understanding a lot of harms that have bit by bit started to materialize.

And so it was fascinating to spend an hour or so sitting in a room and hearing him make his case.

[...]

Yeah, my case for taking these folks seriously, Kevin, is that this is a community that, over a decade ago, started to make a lot of predictions that just basically came true, right? They started to look at advancements in machine learning and neural networks and started to connect the dots. And they said, hey, before too long, we're going to get into a world where these models are incredibly powerful.

And all that stuff just turned out to be true. So that's why they have credibility with me, right? Everything they believe could still turn out to be wrong, you know; we could hit some sort of limit that they didn't see coming.

Their model of the world could sort of fall apart. But as they have updated it bit by bit, and as these companies have made further advancements and they've built new products, I would say that this model of the world has basically held so far. And so, if nothing else, I think we have to keep this group of folks in mind as we think about, well, what is the next phase of AI going to look like for all of us?

top 6 comments
sailor_sega_saturn@awful.systems 11 points 2 days ago

But I think he was also really early to understanding a lot of harms that have bit by bit started to materialize.

So what harms has Mr. Yudkowsky enumerated? Off the top of my head, I can remember:

  1. Diamondoid bacteria
  2. What if there's like a dangerous AI in the closet server and it tries to convince you to connect your Nintendo 3DS to it so it can wreak havoc on the internet, and your only job is to ignore it and play your Nintendo, but it's so clever and sexy
  3. What if we're already in hell: the hell of living in a universe where people get dust in their eyes sometimes?
  4. What if we're already in purgatory? If so, we might be able to talk to future robot gods using time travel; well, not real time travel, more like make-believe time travel. Wouldn't that be spooky?
Soyweiser@awful.systems 8 points 2 days ago

Prediction: it can talk itself out of the box.

Reality: it can be talked into revealing its secret prompt.

Edit: also

Started to make a lot of predictions that just basically came true

Lol. Guess we are all going to die because Yud has not taught us rationality.

Architeuthis@awful.systems 6 points 2 days ago

And all that stuff just turned out to be true

Literally what stuff, that AI would get somewhat better as technology progresses?

I seem to remember Yud specifically wasn't that impressed with machine learning and thought so-called AGI would come about through ELIZA-type AIs.

froztbyte@awful.systems 5 points 2 days ago

I was wondering why the name Kevin Roose sounded familiar and ah, right

dgerard@awful.systems 2 points 2 days ago

Kevin hopes to be Casey when he grows up

froztbyte@awful.systems 2 points 2 days ago

gap's closer now