[–] blakestacey@awful.systems 8 points 1 week ago (1 children)

Mal Reynolds yeah um hmm wait err hmm.gif

[–] blakestacey@awful.systems 8 points 1 week ago

Sounds kind of like a boozy, fizzy Orange Julius...so, yeah, it could work.

[–] blakestacey@awful.systems 17 points 1 week ago* (last edited 1 week ago) (1 children)

First reaction: "Wait, that was in Nature?"

Second reaction: "Oh, Nature Scientific Reports. The 'we have Nature at home' of science journals."

Among many insights, Davis (politely) points out that one of the AI-generated Chaucer poems is just "the opening of the Prologue to the Canterbury Tales."

Whan that Aprille with the fuck?

[–] blakestacey@awful.systems 3 points 1 week ago* (last edited 1 week ago)

Throwback Thursday: Atlas Shrugged: The Cobra Commander Dialogues

(Based on blog posts now available here.)

[–] blakestacey@awful.systems 13 points 1 week ago (2 children)

From page 17:

Rather than encouraging critical thinking, in core EA the injunction to take unusual ideas seriously means taking one very specific set of unusual ideas seriously, and then providing increasingly convoluted philosophical justifications for why those particular ideas matter most.

ding ding ding

[–] blakestacey@awful.systems 10 points 1 week ago* (last edited 1 week ago) (3 children)

Abstract: This paper presents some of the initial empirical findings from a larger forthcoming study about Effective Altruism (EA). The purpose of presenting these findings disarticulated from the main study is to address a common misunderstanding in the public and academic consciousness about EA, recently pushed to the fore with the publication of EA movement co-founder Will MacAskill’s latest book, What We Owe the Future (WWOTF). Most people in the general public, media, and academia believe EA focuses on reducing global poverty through effective giving, and are struggling to understand EA’s seemingly sudden embrace of ‘longtermism’, futurism, artificial intelligence (AI), biotechnology, and ‘x-risk’ reduction. However, this agenda has been present in EA since its inception, where it was hidden in plain sight. From the very beginning, EA discourse operated on two levels, one for the general public and new recruits (focused on global poverty) and one for the core EA community (focused on the transhumanist agenda articulated by Nick Bostrom, Eliezer Yudkowsky, and others, centered on AI-safety/x-risk, now lumped under the banner of ‘longtermism’). The article’s aim is narrowly focused on presenting rich qualitative data to make legible the distinction between public-facing EA and core EA.

[–] blakestacey@awful.systems 19 points 2 weeks ago* (last edited 2 weeks ago) (4 children)

From the linked Andrew Molitor item:

Why Extropic insists on talking about thermodynamics at all is a mystery, especially since “thermodynamic computing” is an established term that means something quite different from what Extropic is trying to do. This is one of several red flags.

I have a feeling this is related to wanking about physics in the e/acc holy gospels. They invoke thermodynamics the way that people trying to sell you healing crystals for your chakras invoke quantum mechanics.

[–] blakestacey@awful.systems 11 points 2 weeks ago (1 children)

They take a theory that is supposed to be about updating one's beliefs in the face of new evidence, and they use it as an excuse to never change what they think.

[–] blakestacey@awful.systems 10 points 2 weeks ago (1 children)

oof

The existence of a Wikipedia page for dinosaur erotica must prove that back in the days when humans co-existed with stegosaurs, the ones who fucked them lived better.

[–] blakestacey@awful.systems 6 points 2 weeks ago

Erin go Bleagh.

[–] blakestacey@awful.systems 17 points 3 weeks ago (2 children)

"Consider it from the perspective of someone who does not exist and therefore has no preferences. Who would they pick?"

 

AI doctors will revolutionize medicine! You'll go to a service hosted in Thailand that can't take credit cards, and pay in crypto, to get a correct diagnosis. Then another VISA-blocked AI will train you in following a script that will get a human doctor to give you the right diagnosis, without tipping that doctor off that you're following a script; so you can get the prescription the first AI told you to get.

Can't get mifepristone or puberty blockers? Just have a chatbot teach you how to cast Persuasion!

 

Yudkowsky writes,

How can Effective Altruism solve the meta-level problem where almost all of the talented executives and ops people were in 1950 and now they're dead and there's fewer and fewer surviving descendants of their heritage every year and no blog post I can figure out how to write could even come close to making more people being good executives?

Because what EA was really missing is collusion to hide the health effects of tobacco smoking.

 

Aella:

Maybe catcalling isn't that bad? Maybe the demonizing of catcalling is actually racist, since most men who catcall are black

Quarantine Goth Ms. Frizzle (@spookperson):

your skull is full of wet cat food

 

Last summer, he announced the Stanford AI Alignment group (SAIA) in a blog post with a diagram of a tree representing his plan. He’d recruit a broad group of students (the soil) and then “funnel” the most promising candidates (the roots) up through the pipeline (the trunk).

See, it's like marketing the idea, in a multilevel way

 

Emily M. Bender on the difference between academic research and bad fanfiction

 

From this post; featuring "probability" with no scale on the y-axis, and "trivial", "steam engine", "Apollo", "P vs. NP" and "Impossible" on the x-axis.

I am reminded of Tom Weller's world-line diagram from Science Made Stupid.

 

Scott tweeteth thusly:

The Latin word for God is "Deus" - or as the Romans would have written it, "DEVS". The people who create programs, games, and simulated worlds are also called "devs". As time goes on, the two meanings will grow closer and closer.

Now that's some top-quality jerking off!

 

Steven Pinker tweets thusly:

My friend & Harvard colleague Howard Gardner, offers a thoughtful critique of my book Rationality -- but undermines his cause, as all skeptics of rationality must do, by using rationality to make it.

"My colleague and fellow esteemed gentleman of Harvard neglects to consider the premise that I am rubber and he is glue."

 

In the far-off days of August 2022, Yudkowsky said of his brainchild,

If you think you can point to an unnecessary sentence within it, go ahead and try. Having a long story isn't the same fundamental kind of issue as having an extra sentence.

To which MarxBroshevik replied,

The first two sentences have a weird contradiction:

Every inch of wall space is covered by a bookcase. Each bookcase has six shelves, going almost to the ceiling.

So is it "every inch", or are the bookshelves going "almost" to the ceiling? Can't be both.

I've not read further than the first paragraph so there's probably other mistakes in the book too. There's kind of other 'mistakes' even in the first paragraph, not logical mistakes as such, just as an editor I would have... questions.

And I elaborated:

I'm not one to complain about the passive voice every time I see it. Like all matters of style, it's a choice that depends upon the tone the author desires, the point the author wishes to emphasize, even the way a character would speak. ("Oh, his throat was cut," Holmes concurred, "but not by his own hand.") Here, it contributes to a staid feeling. It emphasizes the walls and the shelves, not the books. This is all wrong for a story that is supposed to be about the pleasures of learning, a story whose main character can't walk past a bookstore without going in. Moreover, the instigating conceit of the fanfic is that their love of learning was nurtured, rather than neglected. Imagine that character, their family, their family home, and step into their library. What do you see?

Books — every wall, books to the ceiling.

Bam, done.

This is the living-room of the house occupied by the eminent Professor Michael Verres-Evans,

Calling a character "the eminent Professor" feels uncomfortably Dan Brown.

and his wife, Mrs. Petunia Evans-Verres, and their adopted son, Harry James Potter-Evans-Verres.

I hate the kid already.

And he said he wanted children, and that his first son would be named Dudley. And I thought to myself, what kind of parent names their child Dudley Dursley?

Congratulations, you've noticed the name in a children's book that was invented to sound stodgy and unpleasant. (In The Chocolate Factory of Rationality, a character asks "What kind of a name is 'Wonka' anyway?") And somehow you're trying to prove your cleverness and superiority over canon by mocking the name that was invented for children to mock. Of course, the Dursleys were also the start of Rowling using "physically unsightly by her standards" to indicate "morally evil", so joining in with that mockery feels ... It's aged badly, to be generous.

Also, is it just the people I know, or does having a name picked out for a child that far in advance seem a bit unusual? Is "Dudley" a name with history in his family — the father he honored but never really knew? His grandfather who died in the War? If you want to tell a grown-up story, where people aren't just named the way they are because those are names for children to laugh at, then you have to play by grown-up rules of characterization.

The whole stretch with Harry pointing out they can ask for a demonstration of magic is too long. Asking for proof is the obvious move, but it's presented as something only Harry is clever enough to think of, and as the end of a logic chain.

"Mum, your parents didn't have magic, did they?" [...] "Then no one in your family knew about magic when Lily got her letter. [...] If it's true, we can just get a Hogwarts professor here and see the magic for ourselves, and Dad will admit that it's true. And if not, then Mum will admit that it's false. That's what the experimental method is for, so that we don't have to resolve things just by arguing."

Jesus, this kid goes around with L's theme from Death Note playing in his head whenever he pours a bowl of breakfast crunchies.

Always Harry had been encouraged to study whatever caught his attention, bought all the books that caught his fancy, sponsored in whatever maths or science competitions he entered. He was given anything reasonable that he wanted, except, maybe, the slightest shred of respect.

Oh, sod off, you entitled little twit; the chip on your shoulder is bigger than you are. Your parents buy you college textbooks on physics instead of coloring books about rocketships, and you think you don't get respect? Because your adoptive father is incredulous about the existence of, let me check my notes here, literal magic? You know, the thing which would upend the body of known science, as you will yourself expound at great length.

"Mum," Harry said. "If you want to win this argument with Dad, look in chapter two of the first book of the Feynman Lectures on Physics.

Wesley Crusher would shove this kid into a locker.
