this post was submitted on 09 Mar 2024
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
assume we're all fully up on the discourse and the players here
From your syntax I can divine that you are mad at me (or rather, my submission) but for the life of me I don't understand why. Is it because I wrote "AI" instead of "the bad AI from BigTech using TWh to generate shitty hero images for blogs but not the good AI from the heroic researchers constructing our glorious future for the pure love of science"?
If so, nothing would please me more than for the "bad AI" to crash and burn, pauperize Sam Altman and all his bootlickers, and for AI research to retreat to the academic caves to hibernate through another AI winter.
every time I see responses like that I'm left wondering if it's from someone working in (or closely adjacent to) the field. someone with some eyes on the potential and Big Mad about the bullshit, but feeling unable to affect it in any manner
a charitable interpretation (and wishful thinking perhaps?), but still
I’m not mad at you at all, i basically agreed with you. i was just engaging in a more nuanced discussion… or trying to. the replies here seem fairly hostile so I think I’ll see myself out of this community.
ah yes, the type of nuance that can’t survive even the extremely mild amount of pushback you’ve experienced in this thread. but since we’re “fairly hostile” and all that, how about I make sure your lying AI-pushing ass can’t show up in any of our threads again
I should’ve known taking my time to explain our stance was a waste of my fucking time when you brought up nuance in the first place — the only time I see you shitheads give a fuck about that is when you’re looking to shift the Overton window while pretending to take a centrist position
why is that a given?
these results were extremely flawed and disappointing, in a way that’s highly reminiscent of the Bell Labs replication crisis
these get brought up a lot in marketing, but the academic results of attempting to apply LLMs and generative AI to these fields have also been extremely disappointing
if you’re here seeking nuance, I encourage you to learn more about the history of academic fraud that occurred during the first AI boom and led directly to the AI winter. the tragedy of AI as a field is that all of the obvious fraud was, and still is, treated with the same respect as the occasional truly useful computational technique
I also encourage you to learn more about the Rationalist cult that steers a lot of decisions around AI (and especially AI with an AGI end goal) research. the communities on this instance have a long history of sneering at the Rationalists who would (years later) go on to become key researchers at essentially every large AI company, and that history has shaped the language we use. the podcast Behind the Bastards has a couple of episodes about the Rationalist cult and its relationship with AI research, and Robert Evans definitely does a better job describing it than I can
this is the general argument in favour of cryptocurrency, with the name changed. you don't seem to have argued that the actual reality of AI we have right now is not the same problem.
Because I’m not arguing with OP, I’m largely agreeing with them. Generating silly images and doing school kids’ homework is not the promised land of AI the corporate overlords keep promising. But that’s not to suggest the field in general has zero uses. Crypto and AI are apples and oranges, and while I’m not exactly sure what you mean by the arguments being the same, the same argument could be true for AI and not true for crypto, because AI has much more obvious use cases to benefit the common good.
"AI" is a marketing term for various at best slightly related technologies. If you mean LLMs or whatever, you'd need to be specific else you're not even defining the goalposts before setting them up with wheels.
yeah, I definitely think machine learning has obvious use cases to benefit the common good (youtube auto captions being Actually Pretty Decent Now is one that comes to mind easily) but I'm much less certain about most of the stuff being presently marketed as "AI"
i'm pretty cool with ELIZA
Can you tell me more about why you're pretty cool with ELIZA? 😉
we're talking about you not me. come come elucidate your thoughts. can you elaborate on that?
(meta: has any llm actually exceeded this level of engagement? I can't recall seeing a single example. some changes in the sophistication of the language perhaps, but otherwise nothing)
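(for anyone who never poked at the original: ELIZA's whole trick is keyword matching plus pronoun reflection. a minimal sketch of the technique in Python; the rules below are made up for illustration, not Weizenbaum's actual DOCTOR script:)

```python
import random
import re

# ELIZA's core trick: swap first and second person so the reply
# mirrors the user's own words back at them
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "you": "i", "your": "my", "am": "are",
}

# (regex, canned responses) pairs; illustrative only, not the real script
RULES = [
    (r"i am (.*)", ["why do you say you are {0}?",
                    "how long have you been {0}?"]),
    (r"i (?:think|feel) (.*)", ["can you tell me more about why you feel {0}?"]),
    (r"(.*)", ["can you elaborate on that?",
               "we're talking about you, not me."]),
]

def reflect(fragment: str) -> str:
    """Apply the pronoun swap word by word."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(utterance: str) -> str:
    text = utterance.lower().strip(" .!?")
    for pattern, responses in RULES:
        match = re.match(pattern, text)
        if match:
            reflected = [reflect(group) for group in match.groups()]
            return random.choice(responses).format(*reflected)
    return "please go on."

print(respond("I am pretty cool with ELIZA"))
# -> "why do you say you are pretty cool with eliza?" (or the other canned line)
```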
AI is the name of the field of study. It has existed since the 60s. LLMs are neural networks, one of the first and most widely used forms of AI.
how come the reply humans from programming dot dev always have the daftest takes?
who was this post for
It's not even the right decade; the Dartmouth Summer Research Project on Artificial Intelligence was in 1956.
rubber duck replying, with a stuck posting key
I hear what you're saying, but I think it's sort of a motte-and-bailey setup (the standard PAC definition is sketched after the list):
Motte: Many functions can be probably approximately learned, even some uncomputable functions
Bailey: Consciousness, appreciation for art, useful laboring, and careful argumentation are learnable functions
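for the record, "probably approximately learned" does have a precise meaning: Valiant's PAC model. a rough statement of the standard definition, paraphrased from memory rather than from anything in the thread, which shows how narrow the motte really is:

```latex
% PAC learnability (Valiant, 1984), sketched informally:
% a concept class $C$ over a domain $X$ is PAC-learnable if there exist
% a learner $A$ and a polynomial sample bound $m(1/\varepsilon, 1/\delta)$
% such that for every target $c \in C$, every distribution $D$ over $X$,
% and every $\varepsilon, \delta \in (0,1)$:
\Pr_{S \sim D^{m}}\Big[\; \Pr_{x \sim D}\big[\, h_S(x) \neq c(x) \,\big] \le \varepsilon \;\Big] \ge 1 - \delta
% where $h_S = A(S)$ is the hypothesis learned from the labeled sample $S$.
% nothing in this statement mentions consciousness, art, labor, or argument;
% the jump from "some functions" to those is exactly the bailey.
```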
alphafold is mostly pattern matching on known proteins, and as for the other bit, well, google very quickly distanced themselves from those results when they learned how shitty they are. i've made a post about it specifically https://discuss.tchncs.de/post/11138402 and i won't rewrite it again
non sequitur