this post was submitted on 10 Dec 2024
246 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


I can't wait for the spectacular implosion

top 50 comments
[–] Th4tGuyII@fedia.io 75 points 1 week ago (3 children)

I think AI tools have (and will have) their uses, but AI as a whole has been hyped up so much for so long now that the bubble is bound to burst sooner or later.

And when it does, hopefully we can go back to not having AI shoved into every fucking thing imaginable...

No, I don't want an AI on my phone or computer trawling through all my data for you, just to give me some handy search feature I almost certainly won't use.

[–] db0@lemmy.dbzer0.com 39 points 1 week ago (1 children)

I expect a creative destruction, like what happened with the dotcom bubble. A ton of GenAI companies will go bust and the market will be flooded with cheap GPUs and other AI hardware, which will be snapped up on the cheap, and enthusiasts and researchers will use them to make actually useful stuff.

[–] dgerard@awful.systems 12 points 1 week ago* (last edited 1 week ago) (1 children)

these are compute GPUs that don't even have graphics ports

[–] db0@lemmy.dbzer0.com 16 points 1 week ago (2 children)

Yes, my point is that the compute from those chips can still be used. Maybe on actually useful machine learning tools that will be developed later, or some other technology that might make use of parallel computing like this.

[–] JackRiddle@sh.itjust.works 5 points 6 days ago

I know of at least one company that uses CUDA for ray tracing for, I believe, ground research, so there are definitely already some useful things happening.

I mean, there are a lot of applications for linear algebra, although I admit I don't fully know in what way "AI" uses linear algebra and which other uses overlap with it.

I'm waiting on the A100 fire sale next year
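To illustrate the point in the comment above: a compute GPU with no display outputs can still do general-purpose number crunching, because most ML and scientific workloads boil down to linear algebra. A minimal sketch, assuming a PyTorch build with CUDA support (PyTorch here is just one convenient stand-in for any GPU linear algebra library):

```python
# Minimal sketch: general-purpose linear algebra on a headless compute GPU.
# Assumes a PyTorch build with CUDA support; falls back to CPU otherwise.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Two large random matrices; the multiply runs entirely on the GPU,
# no graphics port or display output involved.
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b  # dense matrix multiplication, the workhorse of most "AI" math

print(f"ran on {device}, result shape {tuple(c.shape)}")
```

The same pattern covers ray tracing, simulation, and other parallel workloads that could soak up surplus data-centre hardware.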

[–] GregorGizeh@lemmy.zip 36 points 1 week ago (2 children)

I might want an AI on my computer, but only if it is a local, open-source model that does not report any kind of data to outside parties in any way.
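For what it's worth, a fully local setup along those lines is already possible. A hedged sketch, assuming the Hugging Face transformers library and an open-weights model already downloaded to disk (the model directory is a hypothetical placeholder):

```python
# Sketch of a local-only model: load open weights from disk and never
# touch the network. The path below is a hypothetical placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "./models/some-open-weights-model"

# local_files_only=True makes the library refuse to download anything,
# so nothing is sent to or fetched from outside parties.
tokenizer = AutoTokenizer.from_pretrained(model_dir, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(model_dir, local_files_only=True)

inputs = tokenizer("Summarise my meeting notes:", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```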

[–] Th4tGuyII@fedia.io 20 points 1 week ago

If that were the case, and it were something I chose, I certainly wouldn't mind it anywhere near as much. But the ones being forced upon you by every tech company alive right now are none of those things; they're all data harvesters disguised as utilities.

[–] blaue_Fledermaus@mstdn.io 13 points 1 week ago (2 children)

They are neat tools, if looked at realistically. They certainly don't deserve to be called AI. I like to call them High Coherence Media Transformers.

[–] Th4tGuyII@fedia.io 17 points 1 week ago (1 children)

Considering the "hallucinations" I've seen, they're definitely high something haha

[–] o7___o7@awful.systems 6 points 1 week ago

"Decepticons" is right there!

[–] froztbyte@awful.systems 7 points 1 week ago (1 children)

we call them autoplag(iarism) machines, much more honest

[–] blaue_Fledermaus@mstdn.io 1 points 1 week ago

Yes, that certainly applies to the most popular ones, but it's not necessarily true for all instances of these technologies.

[–] Moc@lemmy.world 44 points 1 week ago* (last edited 1 week ago) (3 children)
[–] homesweethomeMrL@lemmy.world 23 points 1 week ago

AI has already taken tens of thousands of jobs. Which it still isn't actually doing.

I hope all the creative teams that got sacked because the idiot CEO blew ten years' worth of capital on an AI gamble hold out for C-suite pay before agreeing to return.

[–] phoneymouse@lemmy.world 8 points 1 week ago (2 children)

Feed that picture into ChatGPT and ask it to explain the joke

[–] brbposting@sh.itjust.works 5 points 1 week ago (1 children)
[–] phoneymouse@lemmy.world 0 points 1 week ago (1 children)

The original post was basically suggesting that AI won't be smart enough to replace humans because it gets basic things wrong, like showing salmon fillets in a river instead of actual live salmon. However, when you feed this picture to ChatGPT, an AI itself, it very much can discern the difference between the two, refuting the premise of the original post and suggesting that we actually are at risk of being replaced by AI.
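For anyone who wants to reproduce the experiment being described, it's a one-call affair against the OpenAI API. A rough sketch, assuming the openai Python package, an API key in the environment, and a placeholder image URL (the model name is also an assumption, not something stated in the thread):

```python
# Rough sketch: ask a multimodal model to explain the joke in an image.
# The image URL is a placeholder; swap in the actual screenshot.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; any vision-capable model works
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Explain the joke in this image."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/salmon-joke.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```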

[–] froztbyte@awful.systems 12 points 1 week ago (3 children)

both of you are not tall enough for this ride

[–] brbposting@sh.itjust.works -2 points 1 week ago (5 children)

I’m sure we’d both appreciate being filled in, if you have the time.

[–] brbposting@sh.itjust.works 5 points 1 week ago (1 children)
[–] froztbyte@awful.systems 11 points 1 week ago

it seems like you’ve had the joke fly right over your head

but then given your apparent predilection towards wanting outsourced thinking, the notion of using your own head must weigh far too heavy

[–] kn0wmad1c@programming.dev 4 points 1 week ago

The one thing LLMs will always be better at doing than humans is pattern detection.

Incidentally, the job best suited to their capability to detect patterns in large amounts of data is the one CEOs currently do

[–] geneva_convenience@lemmy.ml 20 points 1 week ago

But they got an MKBHD ad. Surely MKBHD would not deceive everyone.

[–] Draegur@lemm.ee 19 points 1 week ago* (last edited 1 week ago) (13 children)

I for one can't wait for Dan Olson (Folding Ideas) to do a feature-length deep-dive takedown of it soon :D

[–] Xenny@lemmy.world 8 points 1 week ago

I thought we were supposed to get like protocol droids with AI. We just get protocols that steal all our data.
