TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


This isn't a sneer, more of a meta take. Written because I'm sitting in a waiting room and am a bit bored, so I'm writing from memory; no exact quotes will be had.

A recent thread mentioning "No Logo", in combination with a comment in one of the mega-threads that pleaded for us to be more positive about AI, got me thinking. I think that in our late-stage capitalism it's the consumer's duty to be relentlessly negative until proven otherwise.

"No Logo" contained a history of capitalism and how we got from a goods based industrial capitalism to a brand based one. I would argue that "No Logo" was written in the end of a longer period that contained both of these, the period of profit driven capital allocation. Profit, as everyone remembers from basic marxism, is the surplus value the capitalist acquire through paying less for labour and resources then the goods (or services, but Marx focused on goods) are sold for. Profits build capital, allowing the capitalist to accrue more and more capital and power.

Even in Marx's time, it was not only profits that built capital; new capital could also be had from banks, jump-starting a business in exchange for future profits. Capital was still allocated this way in the 1990s when "No Logo" was written, even if the profits had shifted from the good to the brand. In this model one could argue about ethical consumption, but that is no longer the world we live in, so I am just gonna leave it there.

In the 1990s there was also a tech bubble, where capital allocation followed a different logic. The bubble logic is that capital formation is founded on hype: capital is allocated to increase hype in the hope of selling to a bigger fool before it all collapses. The bigger the bubble grows, the more institutions are dragged in (by the greed and FOMO of their managers), like banks and pension funds. The bigger the bubble, the more it distorts the surrounding businesses and legislation. Notice how, now that the crypto bubble has burst, the obvious crimes of the perpetrators can be prosecuted.

In short, the bigger the bubble, the bigger the damage.

If under profit-driven capital allocation the consumer can deny corporations profit, then under hype-driven capital allocation the consumer can deny corporations hype. To point and laugh is damage minimisation.


Need to make a primal scream without gathering footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid!

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up, and if I can’t escape them, I would love to sneer at them.


there are some remarkable instances of bad behaviour in there already, but imagine being the sort of product team that thinks users being gaslit by a chatbot they never even consented to use is totes something to deliver without any modification or remark



somehow I managed to miss this until now

archive link



Source

I see Google's deal with Reddit is going just great...

Wake up honey, new Zitron just dropped.

Looks like Sammy boy has a crush on Scarlett Johansson and wanted to model his sexy chatbot after her role in the movie Her. The damage control is actually hilarious.

Altman subsequently claimed that the actress for Sky was cast before the company reached out to Johansson.

“Yeah, I don’t want to go out with you anyway. Also, I already have a girlfriend but she goes to a different school, so you wouldn’t know her. And no, I won’t tell you who it is!”

I mean, we all knew that OpenAI is a fucking clown show of a company run by wannabe nerd frat boys with way too much money, but I didn’t think we’d get high school level relationship drama this season.



https://web.archive.org/web/20240517214034/https://www.404media.co/here-is-what-axons-bodycam-report-writing-ai-looks-like-draft-one/

tl;dr: AI will write up officer reports used to help convict people, based on bodycam audio. Of course we can expect it to display the usual features of fictionalizing, bias, and diffusion of accountability.

Special highlight: the end of the article includes a comment from the EFF imploring municipalities not to purchase this tech. The company responds with a statement pitching the product to prosecutors. Make your own inferences about who has control over this decision.

Many magazines have closed their submission portals because people thought they could send in AI-written stories.

For years I would tell people who wanted to be writers that the only way to be a writer was to write your own stories because elves would not come in the night and do it for you.

With AI, drunk plagiaristic elves who cannot actually write and would not know an idea or a sentence if it bit their little elvish arses will actually turn up and write something unpublishable for you. This is not a good thing.

cross-posted from: https://infosec.pub/post/12406642

Body of the toot:

Absolutely unbelievable but here we are. #Slack by default uses messages, files, etc. for building and training #LLM models; this is enabled by default, and opting out requires a manual email from the workspace owner.

https://slack.com/intl/en-gb/trust/data-management/privacy-principles

What a time to be alive in IT. 🤦‍♂️


Ilya tweet:

After almost a decade, I have made the decision to leave OpenAI. The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial under the leadership of @sama, @gdb, @miramurati and now, under the excellent research leadership of @merettm. It was an honor and a privilege to have worked together, and I will miss everyone dearly. So long, and thanks for everything. I am excited for what comes next — a project that is very personally meaningful to me about which I will share details in due time.

Jan tweet:

I resigned

This comes precisely six months after Sam Altman's job at OpenAI was rescued by the Paperclip Maximiser. NYT: "Dr. Sutskever remained an OpenAI employee, but he never returned to work." lol

orange site discussion: https://news.ycombinator.com/item?id=40361128

lesswrong discussion: https://www.lesswrong.com/posts/JSWF2ZLt6YahyAauE/ilya-sutskever-and-jan-leike-resign-from-openai
