this post was submitted on 28 Oct 2024
35 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 1 year ago

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Last week's thread

(Semi-obligatory thanks to @dgerard for starting this)

(page 2) 50 comments
[–] [email protected] 13 points 3 weeks ago* (last edited 3 weeks ago) (3 children)

Amazon used an AI-generated image as a cover for 1922's Nosferatu, and it got publicly torn apart on Twitter.

On a personal note, it feels to me like any use of AI, regardless of context, is gonna be treated as a public slight against artists, if not art as a concept going forward. Arguably, it already has been treated that way for a while.

You want me to point to a high-profile example of this kinda thing, I'd say Eagan Tilghman provided a textbook example a year ago, after his Scooby Doo/FNAF fan crossover (a VA redub came out a year later BTW) accidentally ignited a major controversy over AI and nearly got him blacklisted from animation.

I specifically bring this up because Tilghman wasn't some random CEO or big-name animator - he was just some random college student making a non-profit passion project with basically zero budget or connections. It speaks volumes about how artists view AI that even someone like him got raked over the coals for using it.

[–] [email protected] 11 points 3 weeks ago

ffs it's in public domain just use a still from the staircase silhouette like everyone else

[–] [email protected] 13 points 3 weeks ago (1 children)

Fortune magazine reports:

In separate investigations completed by the blockchain firms Chaos Labs and Inca Digital and shared exclusively with Fortune, analysts found that Polymarket activity exhibited signs of wash trading, a form of market manipulation where shares are bought and sold, often simultaneously and repeatedly, to create a false impression of volume and activity. Chaos Labs found that wash trading constituted around one-third of trading volume on Polymarket’s presidential market, while Inca Digital found that a “significant portion of the volume” on the market could be attributed to potential wash trading, according to its report.
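For anyone unfamiliar with the term, the "bought and sold, often simultaneously and repeatedly" pattern the report describes can be sketched as a toy heuristic. This is purely illustrative — the field names, the time window, and the matching rule are all made up here; real market surveillance (including whatever Chaos Labs and Inca Digital actually ran) is far more involved:

```python
from datetime import datetime, timedelta

# Toy wash-trading heuristic: flag cases where the same account both
# buys and sells the same quantity of the same market within a short
# window. Assumes trades are sorted by timestamp.
def flag_wash_trades(trades, window=timedelta(seconds=60)):
    flagged = []
    for i, t in enumerate(trades):
        for u in trades[i + 1:]:
            if u["ts"] - t["ts"] > window:
                break  # sorted input: nothing later can be in-window
            if (t["account"] == u["account"]
                    and t["side"] != u["side"]
                    and t["shares"] == u["shares"]):
                flagged.append((t, u))
    return flagged

trades = [
    {"ts": datetime(2024, 10, 1, 12, 0, 0),  "account": "A", "side": "buy",  "shares": 100},
    {"ts": datetime(2024, 10, 1, 12, 0, 30), "account": "A", "side": "sell", "shares": 100},
    {"ts": datetime(2024, 10, 1, 12, 5, 0),  "account": "B", "side": "buy",  "shares": 50},
]
# flag_wash_trades(trades) flags the A buy/sell pair 30 seconds apart.
```

The point of even a crude version like this: the trades cancel out economically but still pad the reported volume, which is exactly why "one-third of trading volume" is such a damning number.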

[–] [email protected] 11 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

Wait we created a market and people are manipulating it in order to profit because it turns out market manipulation pays the same or more than being a ~~banker~~ ~~investor~~ "superpredictor" but is much easier?

[–] [email protected] 12 points 3 weeks ago (2 children)

Going outside awful's wheelhouse for a bit:

Logan Paul doxed and harassed a random employee for posting a sign saying Lunchly was recalled

You want my take, the employee in question (who also got a GoFundMe) should sue Logan for defamation - solid case aside, I wanna see that blonde fucker get humbled for once.

[–] [email protected] 12 points 3 weeks ago (1 children)

Microsoft found a fitting way to punish AI for collaborating with SEO spammers in generating slop: make it use the GitHub code review tools. https://github.blog/changelog/2024-10-29-refine-and-validate-code-review-suggestions-with-copilot-workspace-public-preview/

[–] [email protected] 13 points 3 weeks ago (2 children)

we really shouldn’t have let Microsoft both fork an editor and buy GitHub, of course they were gonna turn one into a really shitty version of the other

anyway check this extremely valuable suggestion from Copilot in one of their screenshots:

The error message 'userId and score are required' is unclear. It should be more specific, such as 'Missing userId or score in the request body'.

aren’t you salivating for a Copilot subscription? it turns a lazy error message into… no that’s still lazy as shit actually, who is this for?

  • a human reading this still needs to consult external documentation to know what userId and score are
  • a machine can’t read this
  • if you’re going for consistent error messages or you’re looking to match the docs (extremely likely in a project that’s in production), arbitrarily changing that error so it doesn’t match anything else in the project probably isn’t a great idea, and we know LLMs don’t do consistency
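The bullets above boil down to: a useful validation error is structured, not slightly-reworded prose. For contrast, here's a minimal sketch of what a machine-readable version might look like — "userId" and "score" come from the Copilot screenshot, but the error code and response shape are invented for illustration, not any real project's API:

```python
# Illustrative only: a validation error that both a human and a machine
# can consume carries a stable error code plus the actual missing fields,
# instead of a prose sentence an LLM can reword endlessly.
def validate_score_payload(payload):
    required = ("userId", "score")
    missing = [f for f in required if f not in payload]
    if missing:
        return {
            "error": "VALIDATION_MISSING_FIELDS",  # stable, greppable code
            "missing": missing,                    # machine-readable list
        }
    return None  # payload is fine

# validate_score_payload({"score": 10}) reports "userId" as missing.
```

A stable code also sidesteps the consistency problem in the last bullet: clients and docs match on `"VALIDATION_MISSING_FIELDS"`, so the human-facing wording can change without breaking anything.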
[–] [email protected] 13 points 3 weeks ago (3 children)

I want someone to fork the Linux kernel and then unleash like 10 Copilots to make PRs and review each other. No human intervention. Then plot the number of critical security vulnerabilities introduced over time, assuming they can even keep it compilable for long enough.

[–] [email protected] 11 points 3 weeks ago (2 children)

Does a kernel that crashes itself before it can process any malicious inputs count as secure?

[–] [email protected] 12 points 3 weeks ago (2 children)

FastCompany: "In Apple’s new ads for AI tools, we’re all total idiots"

It's interesting that not even Apple, with all their marketing knowledge, can come up with a convincing reason why users might need "Apple Intelligence"[1]. These new ads are not quite as terrible as that previous "Crush" AI ad, but especially the one with the birthday... I find it just alienating.

Whatever one may think about Apple and their business practices, they are typically very good at marketing. So if even Apple can't find a good consumer pitch for GenAI crap, I don't think anyone can.

[1] I'd like to express support for this post from Jeff Johnson suggesting we call it "iSlop"

[–] [email protected] 11 points 3 weeks ago* (last edited 3 weeks ago) (4 children)

I know it's Halloween, but this popped up in my feed and was too spooky even for me 😱

As a side note, what are people's feelings about Wolfram? Smart dude for sho, but some of the shit he says just comes across as straight-up pseudoscientific gobbledygook. But can he out-guru Big Yud in a 1v1 on Final Destination (fox only, no items)? 🤔

[–] [email protected] 11 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

Is there a group that more consistently makes category errors than computer scientists? Can we mandate Philosophy 101 as a pre-req to shitting out research papers?

Edit: maybe I need to take a break from Mystery AI Hype Theater 3000.

[–] [email protected] 11 points 3 weeks ago (1 children)
[–] [email protected] 11 points 3 weeks ago (2 children)

that article misses one of the delicious parts of that story: they called saltman a “podcast bro” in derision

[–] [email protected] 11 points 3 weeks ago

OpenAI considered building everything in-house and raising capital for an expensive plan to build a network of factories known as "foundries" for chip manufacturing.

Oh man, that's a delicious understatement. If the allegations are true, this was a plan that would make the military-industrial complex envious.

[–] [email protected] 11 points 3 weeks ago (6 children)

Quick update - Brian Merchant's list of "luddite horror" films ended up getting picked up by Fast Company:

To repeat a previous point of mine, it seems pretty safe to assume "luddite horror" is gonna become a bit of a trend. To make a specific (if unrelated) prediction, I imagine we're gonna see AI systems and/or their supporters become pretty popular villains in the future - the AI bubble's produced plenty of resentment towards AI specifically and tech more generally, and the public's gonna find plenty of catharsis in watching them go down.

[–] [email protected] 11 points 3 weeks ago

The AI lawsuit's going to discovery - I expect things are about to heat up massively for the AI industry.

[–] [email protected] 11 points 3 weeks ago (3 children)
[–] [email protected] 11 points 3 weeks ago

I feel like Ed is underselling the degree to which this is just how businesses work now. The emphasis on growth mindset is particularly gross because of how it sells the CEO's book, but it's not unique in trying to find a feel-good, vibes-based way to evaluate performance rather than relying on strict metrics that give management less power over their direct reports.

Of course, he's also written at length about the overall problem this feeds into (organizations run by people with no idea how to make the business do what it does, but who can make the number go up for shareholders). The most unique part here is the AI integration, which is legitimately horrifying, and I feel like the debunking of growth mindset takes some of the sting away from it.
