SneerClub

1010 readers
25 users here now

Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

founded 2 years ago

Bit of a rant, but I genuinely hate decision theory. At first it seemed like a useful tool for making the best long-term decisions in economics and such; then LessWrong, EA, GPI, FHI, MIRI and co took what was essentially a tool and turned it into the biggest philosophical disaster since Rand. I'm thinking about moral uncertainty, wagers, hedging, AGI, priors, Bayesianism and all the shit that's grown out of this cesspit of rationalism.

What's funny about all this is that there's no actual way to argue against these people unless you have already been indoctrinated into the cult of Bayes, and even if you manage to get through one of their arguments they'll just pull out some other bullshit principle that they either made up or saw somewhere in a massively obscure book to essentially say 'nuh uh'.

What's more frustrating is that there's now evidence that people make moral judgements using a broadly Bayesian approach, which I hope just stays in the descriptive realm.

But yeah, I hate decision theory, that is all.


Taleb dunking on IQ “research” at length. Technically a seriouspost I guess.


yes really, that’s literally the title of the post. (archive copy, older archive copy) LessWrong goes full Motte.

this was originally a LW front-page post, and was demoted to personal blog when it proved unpopular. it peaked at +10, dropped to -6 and is +17 right now.

but if anyone tries to make out this isn’t a normative rationalist: this guy, Michael “Valentine” Smith, is a cofounder of CFAR (the Center for Applied Rationality), a LessWrong offshoot that started out being about how to do rational thinking … and finally admitted it was about “AI Risk”.

this post is the Rationalist brain boys, the same guys who did FTX and Effective Altruism, going full IQ-Anon wondering how the market could fail so badly as not to care what weird disaster assholes think. this is the real Basilisk.

when they’re not spending charity money on buying themselves castles, this is what concerns the modern rationalist.

several commenters answered “uh, the customers.” and tried to explain the concept of markets to OP, and how corporations like selling stuff to normal people and not just to barely-crypto-fash. they were duly downvoted to -20 by valiant culture warriors who weren’t putting up with that sort of SJW nonsense.

comment by author, who thinks “hard woke” is not only a thing, but a thing that profit-making corporations do so as not to make a profit: “For what it’s worth, I wouldn’t describe myself as leaning right.” lol ok dude

right-wingers really don’t believe in, or even understand, capitalism or markets at all. they believe in hierarchy. that’s what’s offended this dipshit.

now, you might think LessWrong Rationalists, Slate Star Codex readers, etc. tend towards behaving functionally indistinguishably from Nazis, but that’s only because they work so hard at learning from their neoreactionary comrades to reach that stage

why say in 10,000 words what you can say in 14

submitted 1 year ago* (last edited 1 year ago) by dgerard@awful.systems to c/sneerclub@awful.systems

Video games also have potential legal advantages over IQ tests for companies. You could argue that "we only hire people good at video games to get people who fit our corporate culture of liking video games" but that argument doesn't work as well for IQ tests.

yet again an original post title that self-sneers


[All non-sneerclub links below are archive.today links]

Diego Caleiro, who popped up on my radar after he commiserated with Roko's latest in a never-ending stream of denials that he's a sex pest, is worthy of a few sneers.

For example, he thinks Yud is the bestest, most awesomest, coolest person to ever breathe:

Yudkwosky is a genius and one of the best people in history. Not only he tried to save us by writing things unimaginably ahead of their time like LOGI. But he kind of invented Lesswrong. Wrote the sequences to train all of us mere mortals with 140-160IQs to think better. Then, not satisfied, he wrote Harry Potter and the Methods of Rationality to get the new generation to come play. And he founded the Singularity Institute, which became Miri. It is no overstatement that if we had pulled this off Eliezer could have been THE most important person in the history of the universe.

As you can see, he's really into superlatives. And Jordan Peterson:

Jordan is an intellectual titan who explores personality development and mythology using an evolutionary and neuroscientific lenses. He sifted through all the mythical and religious narratives, as well as the continental psychoanalysis and developmental psychology so you and I don’t have to.

At Burning Man, he dons a 7-year-old alter ego named "Evergreen". Perhaps he has an infantilization fetish like Elon Musk:

Evergreen exists ephemerally during Burning Man. He is 7 days old and still in a very exploratory stage of life.

As he hinted in his tweet to Roko, he has an enlightened view about women and gender:

Men were once useful to protect women and children from strangers, and to bring home the bacon. Now the supermarket brings the bacon, and women can make enough money to raise kids, which again, they like more in the early years. So men have become useless.

And:

That leaves us with, you guessed, a metric ton of men who are no longer in families.

Yep, I guessed about 12 men.

submitted 1 year ago* (last edited 1 year ago) by salorarainriver@awful.systems to c/sneerclub@awful.systems

it got good reviews on the discord!


"I recommend just betting to maximize EV."


the r/SneerClub archives are finally online! this is an early v1 which contains 1,940 posts grabbed from the Reddit UI using Bulk Downloader for Reddit. this encompasses both the 1,000 most recent posts on r/SneerClub as well as a set of popular historical posts

as a v1, you'll notice a lot of jank. known issues are:

  • this won't work at all on mobile because my css is garbage. it might not even work on anyone else's screen; good luck!
  • as mentioned above, only 1,940 posts are in this release. there's a full historical archive of r/SneerClub sourced from pushshift at the archive data git repo (or clone git://these.awful.systems/sneer-archive-data.git); the remaining work here is to merge the BDFR and pushshift data into the same JSON format so the archives can pull in everything
  • markdown is only rendered for posts and first-level comments; everything else just gets the raw markdown. I couldn't figure out how to make miller recursively parse JSON, so I might have to write some javascript for this (see the sketch after this list)
  • likewise, comments display a unix epoch instead of a rendered time
  • searching happens locally in your browser, but only post titles and authors are indexed to keep download sizes small
  • speaking of, there's a much larger r/SneerClub archive that includes the media files BDFR grabbed while archiving. it's a bit unmanageable to actually use directly, but is available for archival purposes (and could be included as part of the hosted archive if there's demand for it)
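
for the curious, here's a minimal sketch of what that javascript might look like. to be clear, this is an illustration rather than the archive's actual code: the field names (body, created_utc, replies) are assumptions based on typical BDFR/pushshift output, and marked is just one markdown renderer you could drop in.

```javascript
import { marked } from "marked";

// recursively render one comment and all of its replies:
// markdown -> HTML at every depth, unix epoch -> readable time
function renderComment(comment) {
  return {
    ...comment,
    // render the raw markdown body instead of passing it through untouched
    bodyHtml: marked.parse(comment.body ?? ""),
    // created_utc is seconds since the epoch; Date wants milliseconds
    createdAt: new Date(comment.created_utc * 1000).toISOString(),
    // recurse so second-level-and-deeper comments get the same treatment
    replies: (comment.replies ?? []).map(renderComment),
  };
}
```

one pass of this over each post's comment tree would handle both the raw-markdown and the unix-epoch issues at once, assuming the merged JSON settles on those field names.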

if you'd like the source code for the r/SneerClub archive static site, it lives here (or clone git://these.awful.systems/sneer-archive-site.git)


Been waiting to come back to the steeple of the sneer for a while. It's good to be back. I just really need to sneer; this one's been building for a long time.

Now I want to gush to you guys about something that's been really bothering me for a good long while now. WHY DO RATIONALISTS LOVE WAGERS SO FUCKING MUCH!?

I mean holy shit, there's a wager for everything now. I read a wager that said that we can just ignore moral anti-realism cos 'muh decision theory', that we must always hedge our bets on evidential decision theory, new Pascal's wagers, entirely new decision theories, the whole body of literature on moral uncertainty, Schwitzgebel's 1% skepticism and so. much. more.

I'm beginning to think it's the only type of argument that they can make, because it allows them to believe obviously problematic things on the basis that they 'might' be true. I don't know how decision theory went from a useful heuristic in certain situations and in economics to arguing that no matter how unlikely it is that utilitarianism is true, you have to follow it cos math, acausal robot gods, fuckin infinite ethics, basically providing the most egregiously smug escape hatch to ignore entire swathes of philosophy, etc.
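
To spell out the shape of the trick with a toy example (the numbers here are made up, and this is the generic template rather than any specific paper's math):

```javascript
// toy version of the generic rationalist wager; every value is invented
const p = 0.001;      // your credence that utilitarianism is true, however tiny
const vUtil = 1e12;   // the stakes if it IS true, allowed to be astronomical
const vOther = 1;     // the stakes of every rival view, kept bounded
const ev = p * vUtil + (1 - p) * vOther;
console.log(ev);      // ~1e9: the utilitarian term swamps everything for any p > 0
```

Pick a big enough vUtil and the conclusion was baked in before the argument even started. That's the entire trick.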

It genuinely pisses me off, because they can drown their opponents in mathematical formalisms, 50-page-long essays all amounting to impenetrable 'wagers' that they can always defend no matter how stupid they are, because this thing 'might' be true; and they can go off and create another rule (something along the lines of 'the antecedent promulgation ex ante expected Pareto ex post cornucopian Malthusian utility principle') that they need for the argument to go through, do some calculus, declare it 'plausible', and then call it a day. Like I said, all of this is so intentionally opaque that nobody other than their small clique can understand what the fuck they are going on about, and even then there is little to no disagreement within said clique!

Anyway, this one has been coming for a while, but I hope to have struck up some common ground with some other people here.


I don't particularly disagree with the piece, but it's striking how little effort is put in to make this resemble a news piece or a typical Vox explainer. It's just blatant editorializing ("Please do this thing I want") and very blatantly carrying water for the (somehow non-discredited) EA movement's priorities.


he takes a couple of pages to explain why he knows that sightings of UFOs aren't alien, because he can simply infer how superintelligent beings will operate + how advanced their technology is. he then undercuts his point by saying that he's very uncertain about both of those things, but wraps it up nicely with an excessively wordy speech about how making big bets on your beliefs is the responsible way to be a thought leader. bravo
