Long live!
Architeuthis
debate pervert in a reply-guy world
Well done.
There's a bit in the beginning where he talks about how actors handling and drinking from obviously weightless empty cups ruins suspension of disbelief, so I'm assuming it's a callback.
I kinda want to replay subnautica now.
"Manifest is open minded about eugenics and securing the existence of our people and a future for high IQ children."
Great quote from the article on why prediction markets and scientific racism currently appear to be at one degree of separation:
Daniel HoSang, a professor of American studies at Yale University and a part of the Anti-Eugenics Collective at Yale, said: "The ties between a sector of Silicon Valley investors, effective altruism and a kind of neo-eugenics are subtle but unmistakable. They converge around a belief that nearly everything in society can be reduced to markets and all people can be regarded as bundles of human capital."
Before we accidentally make an AI capable of posing an existential risk to human beings, perhaps we should find out how to build effective safety measures first.
You make his position sound way more measured and responsible than it is.
His 'effective safety measures' are something like A) solve ethics, B) hardcode the result into every AI, i.e. garbage philosophy meets garbage sci-fi.
Wasn't 1994 right about when they stopped making movies in black and white?
This has got to be some sort of sucker filter: it's not that he particularly means it, it's that he's after the exact type of rube who is unfazed by naked contrarianism and the categorically preposterous so long as it's said with a straight face.
Maybe there's something to the whole pick-up artistry, but for nailing VCs, thing.
Honestly, the evident plethora of poor programming practices is the least notable thing about all this; using roided autocomplete to cut corners was never going to be a well-calculated decision, it's always the cherry on top of a shit-cake.
this isn't really even related to GenAI at all
Besides the OCR, there appears to be all sorts of image-to-text metadata recorded; the Nadella demo had the journalist supposedly doing a search and getting results with terms that were neither typed at the time nor appearing in the stored screenshots.
Also, I thought they might be doing something image-to-text-to-image-again related (which, I read somewhere, was what Bing Copilot did when you asked it to edit an image) to save space, instead of storing eleventy billion multimonitor screenshots forever.
edit - in the demo the results included screens.
It's a sad fate that sometimes befalls engineers who are good at talking to audiences, and who work for a big enough company that can afford to have that be their primary role.
edit: I love that he's chief evangelist though, like he has a bunch of little Google Cloud clerics running around doing chores for him.