this post was submitted on 16 Jul 2023
169 points (94.2% liked)

Movies and TV Shows

3 readers
2 users here now

General discussion about movies and TV shows.


Spoilers are strictly forbidden in post titles.

Posts soliciting spoilers (endings, plot elements, twists, etc.) should contain [spoilers] in their title. Comments in these posts do not need to be hidden in spoiler MarkDown if they pertain to the title's subject matter.

Otherwise, spoilers must be contained in MarkDown as follows:

::: your spoiler warning
the crazy movie ending that no one saw coming!
:::

Your mods are here to help if you need any clarification!


Subcommunities: The Bear (FX) - [[email protected]](/c/thebear @lemmy.film)


Related communities: [email protected] [email protected]

founded 1 year ago
MODERATORS

theverge.com

Around the time J. Robert Oppenheimer learned that Hiroshima had been struck (alongside everyone else in the world), he began to have profound regrets about his role in the creation of that bomb. At one point, when meeting President Truman, Oppenheimer wept and expressed that regret. Truman called him a crybaby and said he never wanted to see him again. And Christopher Nolan is hoping that when Silicon Valley audiences of his film Oppenheimer (out July 21) see his interpretation of those events, they'll see something of themselves there too.

After a screening of Oppenheimer at the Whitby Hotel yesterday, Christopher Nolan joined a panel of scientists and Kai Bird, one of the authors of American Prometheus, the book Oppenheimer is based on, to talk about the film. The audience was filled mostly with scientists, who chuckled at jokes about the egos of the physicists in the film, but there were a few reporters, including myself, there too.

We listened to all-too-brief debates on the success of nuclear deterrence, and Dr. Thom Mason, the current director of Los Alamos, talked about how many current lab employees have cameos in the film because so much of it was shot nearby. But toward the end of the conversation, the moderator, Chuck Todd of Meet the Press, asked Nolan what he hoped Silicon Valley might learn from the film. “I think what I would want them to take away is the concept of accountability,” he told Todd.

“Applied to AI? That’s a terrifying possibility. Terrifying.”

He then clarified, “When you innovate through technology, you have to make sure there is accountability.” He was referring to a wide variety of technological innovations that have been embraced by Silicon Valley, while those same companies have refused to acknowledge the harm they’ve repeatedly engendered. “The rise of companies over the last 15 years bandying about words like ‘algorithm,’ not knowing what they mean in any kind of meaningful, mathematical sense. They just don’t want to take responsibility for what that algorithm does.”

He continued, “And applied to AI? That’s a terrifying possibility. Terrifying. Not least because as AI systems go into the defense infrastructure, ultimately they’ll be charged with nuclear weapons, and if we allow people to say that that’s a separate entity from the person who’s wielding it, programming it, putting AI into use, then we’re doomed. It has to be about accountability. We have to hold people accountable for what they do with the tools that they have.”

While Nolan didn’t refer to any specific company, it isn’t hard to know what he’s talking about. Companies like Google, Meta, and even Netflix are heavily dependent on algorithms to acquire and maintain audiences, and often there are unforeseen and frequently heinous outcomes to that reliance. Probably the most notable and truly awful was Meta’s contribution to the genocide in Myanmar.

“At least it serves as a cautionary tale.”

While an apology tour is virtually guaranteed nowadays after a company’s algorithm does something terrible, the algorithms remain. Threads even just launched with an exclusively algorithmic feed. Occasionally companies might give you a tool, as Facebook did, to turn it off, but these black-box algorithms remain, with very little discussion of all the potential bad outcomes and plenty of discussion of the good ones.

“When I talk to the leading researchers in the field of AI they literally refer to this right now as their Oppenheimer moment,” Nolan said. “They’re looking to his story to say what are the responsibilities for scientists developing new technologies that may have unintended consequences.”

“Do you think Silicon Valley is thinking that right now?” Todd asked him.

“They say that they do,” Nolan replied. “And that’s,” he chuckled, “that’s helpful. That at least it’s in the conversation. And I hope that thought process will continue. I’m not saying Oppenheimer’s story offers any easy answers to these questions. But at least it serves as a cautionary tale.”

[–] [email protected] 8 points 1 year ago (2 children)

Excuse me, what? I fervently hope nobody is considering letting an "AI" get anywhere near anything nuclear! Despite whoever happens to be in the White House at any particular time, the upper echelons of the US military seem to be generally sane and smart enough to know that allowing glorified predictive text to control city-destroying superweapons is a bad idea.

AIs aren't anything of the sort. They're not intelligent at all, and we should stop calling them that. It just gives people weird, unrealistic ideas about their capabilities.

[–] [email protected] 11 points 1 year ago (1 children)

He's not warning of AI controlling nuclear weapons. He's speaking of the development of nuclear weapons as a cautionary tale that applies to the current development of AI: that, like the scientists who built the bomb, current AI researchers might one day wake up terrified of what they have created.

Whether current so-called AI is intelligent (I agree with you it isn't, by most definitions of the word) doesn't preclude the possibility that the technology might cause irreparable harm. I mean, looking at how Facebook algorithms have zeroed in on outrage as a driving factor of engagement, it's easy to argue that the algorithmic approach to content delivery has already caused serious societal damage.

[–] [email protected] 6 points 1 year ago

Yeah, I got that, but this was the particular part I was reacting to:

"He continued, “And applied to AI? That’s a terrifying possibility. Terrifying. Not least because as AI systems go into the defense infrastructure, ultimately they’ll be charged with nuclear weapons, and if we allow people to say that that’s a separate entity from the person who’s wielding it, programming it, putting AI into use, then we’re doomed."

Possibly I misread it.