31
submitted 1 week ago by [email protected] to c/[email protected]

Excerpt:

A new study published on Thursday in The American Journal of Psychiatry suggests that dosage may play a role. It found that among people who took high doses of prescription amphetamines such as Vyvanse and Adderall, there was a fivefold increased risk of developing psychosis or mania for the first time compared with those who weren’t taking stimulants.

Perhaps this explains some of what goes on at LessWrong and in other rationalist circles.

22
submitted 2 months ago by [email protected] to c/[email protected]

Maybe she was there to give Moldbug some relationship advice.

[-] [email protected] 18 points 2 months ago

So now Steve Sailer has shown up in this essay's comments, complaining about how Wikipedia has been unfairly stifling scientific racism.

Birds of a feather and all that, I guess.

[-] [email protected] 20 points 2 months ago

Scott Alexander, by far the most popular rationalist writer besides perhaps Yudkowsky himself, had written the most comprehensive rebuttal of neoreactionary claims on the internet.

Hey Trace, since you're undoubtedly reading this thread, I'd like to make a plea. I know Scott Alexander Siskind is one of your personal heroes, but maybe you should consider digging up some dirt on him too. You might learn a thing or two.

[-] [email protected] 16 points 3 months ago

Until a month ago, TW was the long-time researcher for "Blocked and Reported", the podcast hosted by Katie 'TERF' Herzog and relentless sealion Jesse Singal.

31
OK doomer (www.newyorker.com)
submitted 6 months ago by [email protected] to c/[email protected]

The New Yorker has a piece on the Bay Area AI doomer and e/acc scenes.

Excerpts:

[Katja] Grace used to work for Eliezer Yudkowsky, a bearded guy with a fedora, a petulant demeanor, and a p(doom) of ninety-nine per cent. Raised in Chicago as an Orthodox Jew, he dropped out of school after eighth grade, taught himself calculus and atheism, started blogging, and, in the early two-thousands, made his way to the Bay Area. His best-known works include “Harry Potter and the Methods of Rationality,” a piece of fan fiction running to more than six hundred thousand words, and “The Sequences,” a gargantuan series of essays about how to sharpen one’s thinking.

[...]

A guest brought up Scott Alexander, one of the scene’s microcelebrities, who is often invoked mononymically. “I assume you read Scott’s post yesterday?” the guest asked [Katja] Grace, referring to an essay about “major AI safety advances,” among other things. “He was truly in top form.”

Grace looked sheepish. “Scott and I are dating,” she said—intermittently, nonexclusively—“but that doesn’t mean I always remember to read his stuff.”

[...]

“The same people cycle between selling AGI utopia and doom,” Timnit Gebru, a former Google computer scientist and now a critic of the industry, told me. “They are all endowed and funded by the tech billionaires who build all the systems we’re supposed to be worried about making us extinct.”

35
submitted 6 months ago by [email protected] to c/[email protected]

In her sentencing submission to the judge in the FTX trial, Barbara Fried argues that her son is just a misunderstood altruist, who doesn't deserve to go to prison for very long.

Excerpt:

One day, when he was about twelve, he popped out of his room to ask me a question about an argument made by Derek Parfit, a well-known moral philosopher. As it happens, I am quite familiar with the academic literature Parfit's article is a part of, having written extensively on related questions myself. His question revealed a depth of understanding and critical thinking that is not all that common even among people who think about these issues for a living. "What on earth are you reading?" I asked. The answer, it turned out, was that he was working his way through the vast literature on utilitarianism, a strain of moral philosophy that argues that each of us has a strong ethical obligation to live so as to alleviate the suffering of those less fortunate than ourselves. The premises of utilitarianism obviously resonated strongly with what Sam had already come to believe on his own, but gave him a more systematic way to think about the problem and connected him to an online community of like-minded people deeply engaged in the same intellectual and moral journey.

Yeah, that "online community" we all know and love.

[-] [email protected] 20 points 7 months ago

You know the doom cult is having an effect when it starts popping up in previously unlikely places. Last month the socialist magazine Jacobin ran an extremely long cover feature on AI doom, which it bought into completely. The author is an effective altruist who interviewed and took seriously people like Katja Grace, Dan Hendrycks and Eliezer Yudkowsky.

I used to be more sanguine about people's ability to see through this bullshit, but eschatological nonsense seems to tickle something fundamentally flawed in the human psyche. This LessWrong post is a perfect example.

[-] [email protected] 31 points 7 months ago

Eats the same bland meal every day of his life. Takes an ungodly number of pills every morning. Uses his son as his own personal blood boy. Has given himself a physical appearance that can only be described as "uncanny valley".

I'll never understand the extremes some of these tech bros will go to deny the inevitability of death.

[-] [email protected] 15 points 7 months ago

Happy Valentine's Day everybody!

71
submitted 8 months ago* (last edited 8 months ago) by [email protected] to c/[email protected]

Pass the popcorn, please.

(nitter link)

[-] [email protected] 20 points 8 months ago

Imagine thinking there is actually some identifiable thing called "white culture". As if a skin color defines a culture.

Yeah, sounds like a Nazi.

[-] [email protected] 17 points 9 months ago

What a bunch of monochromatic, hyper-privileged, rich-kid grifters. It's like a nonstop frat party for rich nerds. The photographs and captions make it obvious:

The gang going for a hiking adventure with AI safety leaders. Alice/Chloe were surrounded by a mix of uplifting, ambitious entrepreneurs and a steady influx of top people in the AI safety space.

The gang doing pool yoga. Later, we did pool karaoke. Iguanas everywhere.

Alice and Kat meeting in “The Nest” in our jungle Airbnb.

Alice using her surfboard as a desk, co-working with Chloe’s boyfriend.

The gang celebrating… something. I don’t know what. We celebrated everything.

Alice and Chloe working in a hot tub. Hot tub meetings are a thing at Nonlinear. We try to have meetings in the most exciting places. Kat’s favorite: a cave waterfall.

Alice’s “desk” even comes with a beach doggo friend!

Working by the villa pool. Watch for monkeys!

Sunset dinner with friends… every day!

These are not serious people. Effective altruism in a nutshell.

24
submitted 9 months ago by [email protected] to c/[email protected]

They've been pumping this bio-hacking startup on the Orange Site (TM) for the past few months. Now they've got Siskind shilling for them.

42
Effective Obfuscation (newsletter.mollywhite.net)
submitted 9 months ago by [email protected] to c/[email protected]

Molly White is best known for shining a light on the silliness and fraud that are cryptocurrency, blockchain and Web3. This essay may be a sign that she's shifting her focus to our sneerworthy friends in the extended rationalism universe. If so, that's an excellent development. Molly's great.

16
submitted 10 months ago* (last edited 10 months ago) by [email protected] to c/[email protected]

Not 7.5% or 8%. 8.5%. Numbers are important.

17
submitted 11 months ago by [email protected] to c/[email protected]

Non-paywalled link: https://archive.ph/9Hihf

In his latest NYT column, Ezra Klein identifies the neoreactionary philosophy at the core of Marc Andreessen's recent excrescence on so-called "techno-optimism". It wasn't exactly a difficult analysis, given the way Andreessen outright lists a gaggle of neoreactionaries as the inspiration for his screed.

But when Andreessen included "existential risk" and transhumanism on his list of enemy ideas, I'm sure the rationalists and EAs were feeling at least a little bit offended. Klein, a co-founder of Vox, which hosts the EA-promoting "Future Perfect" vertical, was probably among those who felt targeted. He has certainly bought into the rationalist AI doomer bullshit, so you know where he stands.

So have at it, Marc and Ezra. Fight. And maybe take each other out.

[-] [email protected] 16 points 11 months ago* (last edited 11 months ago)

I mean, of course he loves unfettered technology and capitalism. He's a fucking billionaire. He hit the demographic lottery.

EDIT: I just noticed his list of "techno-optimist" patrons. On the list? John Galt. LMAO. The whole list is pretty much an orgy of libertarians.

[-] [email protected] 15 points 11 months ago* (last edited 11 months ago)

Roko's authoritative-toned "aktshually..." response to Annie's claims has me fuming. I don't know why. I mean, I've known for years that this guy is a total boil on the ass of humanity. And yet he still manages to shock with the worst possible take on a topic -- even when the topic is the sexual abuse of a child. If, like Roko, I were to play armchair psychiatrist, I'd diagnose him as a sociopath with psychopathic tendencies. But I'm not. So I won't.

57
submitted 11 months ago* (last edited 11 months ago) by [email protected] to c/[email protected]

Rationalist check-list:

  1. Incorrect use of analogy? Check.
  2. Pseudoscientific nonsense used to make your point seem more profound? Check.
  3. Tortured use of probability estimates? Check.
  4. Over-long description of a point that could just as easily have been made in one sentence? Check.

This email by SBF is basically one big malapropism.

56
submitted 11 months ago by [email protected] to c/[email protected]

Representative take:

If you ask Stable Diffusion for a picture of a cat it always seems to produce images of healthy looking domestic cats. For the prompt "cat" to be unbiased Stable Diffusion would need to occasionally generate images of dead white tigers since this would also fit under the label of "cat".

24
submitted 1 year ago by [email protected] to c/[email protected]
[-] [email protected] 15 points 1 year ago

Random blue check spouts disinformation about "seed oils" on the internet. Same random blue check runs a company selling "safe" alternatives to seed oils. Yud spreads this huckster's disinformation further. In the process he reveals his autodidactically-obtained expertise in biology:

Are you eating animals, especially non-cows? Pigs and chickens inherit linoleic acid from their feed. (Cows reprocess it more.)

Yes, Yud, because that's how it works. People directly "inherit" organic molecules totally unmetabolized from the animals they eat.

I don't know why Yud is fat, but armchair sciencing probably isn't going to fix it.

[-] [email protected] 30 points 1 year ago* (last edited 1 year ago)

This is good:

Take the sequence {1,2,3,4,x}. What should x be? Only someone who is clueless about induction would answer 5 as if it were the only answer (see Goodman’s problem in a philosophy textbook or ask your closest Fat Tony) [Note: We can also apply here Wittgenstein’s rule-following problem, which states that any of an infinite number of functions is compatible with any finite sequence. Source: Paul Boghossian]. Not only clueless, but obedient enough to want to think in a certain way.
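And the math here actually checks out. A quick sketch (mine, not Taleb's) of the rule-following point: a polynomial can be forced through {1,2,3,4} and any fifth value you like, so every candidate x is compatible with some rule.

```python
import numpy as np

# The "obvious" rule for {1, 2, 3, 4} is f(n) = n, but a degree-4
# polynomial interpolates (1,1), (2,2), (3,3), (4,4) plus ANY fifth
# point (5, x) exactly -- infinitely many rules fit the finite prefix.
for x in [5, 17, -100]:
    coeffs = np.polyfit([1, 2, 3, 4, 5], [1, 2, 3, 4, x], deg=4)
    print(f"x = {x:4d}: fitted rule predicts f(5) = {np.polyval(coeffs, 5):.1f}")
```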

Also this:

If, as psychologists show, MDs and academics tend to have a higher “IQ” that is slightly informative (higher, but on a noisy average), it is largely because to get into schools you need to score on a test similar to “IQ”. The mere presence of such a filter increases the visible mean and lowers the visible variance. Probability and statistics confuse fools.
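That filter effect is a standard property of truncated distributions, and you can see it in a quick simulation (my numbers, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
pop = rng.normal(loc=100, scale=15, size=1_000_000)  # IQ-like population
admitted = pop[pop > 115]  # admissions filter: score at least one sigma up

print(f"population: mean = {pop.mean():.1f}, sd = {pop.std():.1f}")
print(f"filtered:   mean = {admitted.mean():.1f}, sd = {admitted.std():.1f}")
# The filtered group's mean rises to ~123 while its sd shrinks to ~7:
# a higher visible mean and a lower visible variance, as the quote says.
```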

And:

If someone came up w/ a numerical “Well Being Quotient” WBQ or “Sleep Quotient”, SQ, trying to mimic temperature or a physical quantity, you’d find it absurd. But put enough academics w/ physics envy and race hatred on it and it will become an official measure.
