this post was submitted on 08 Jun 2024
2106 points (98.9% liked)

[–] [email protected] 101 points 5 months ago* (last edited 5 months ago) (3 children)

More like:

Computer scientist: We have made a text generator

Everyone: tExT iS iNtElLiGeNcE

[–] [email protected] 18 points 5 months ago* (last edited 5 months ago) (1 children)

That's why nonverbal (and sometimes even verbal) autistic people are considered stupid, even by professionals.

[–] [email protected] 5 points 5 months ago (1 children)
[–] [email protected] 0 points 5 months ago (1 children)

Wow, this looks worth reading. I'll read it if I remember.

[–] [email protected] 2 points 5 months ago

It’s also a movie, with Daniel Day-Lewis. He’s kinda hard to forget.

[–] [email protected] 17 points 5 months ago

Oh come on. It's called AI, as in artificial intelligence. None of these companies have ever called it a text generator, even though that's what it is.

[–] [email protected] 14 points 5 months ago (3 children)

I get that it's cool to hate on how AI is being shoved in our faces everywhere, and I agree with that sentiment, but the technology is better than you're giving it credit for.

You don't have to diminish the accomplishments of the actual people who studied and built these impressive things to point out that businesses are bandwagoning and rushing to market to satisfy investors. Like with most technologies, it's capitalism that's the problem.

LLMs emulate neural structures and have incredible natural language parsing capabilities that we've never even come close to accomplishing before. The prompt hacks alone are an incredibly interesting glimpse into how close these things come to "understanding." They're more like social engineering than any other kind of hack.

[–] [email protected] 45 points 5 months ago (3 children)

The trouble with phrases like 'neural structures' and 'language parsing' is that these descriptions still play into the "AI" narrative that's been used to oversell large language models.

Fundamentally, these are statistical weights randomly wired up to other statistical weights, tested and pruned against a huge database. That isn't language parsing; it's still just brute-force calculation. The understanding comes from us, from people assigning linguistic meaning to patterns in binary.

[–] [email protected] 5 points 5 months ago* (last edited 5 months ago)

Language parsing is a routine process that doesn't require AI, and it's something we have been doing for decades. That phrase in no way plays into the hype of AI. Also, the weights may be random initially (though not uniformly random), but the way they are connected and relate to each other is not random, and after training the weights are no longer random at all, so I don't see the point in bringing that up. Finally, machine learning models are not brute-force calculators. If they were, they would take billions of years to respond to even the simplest prompt, because they would have to evaluate every possible response - even the nonsensical ones - before returning the best answer. They're better described as a greedy algorithm than a brute-force one.
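
To make the greedy-vs-brute-force point concrete, here's a toy sketch. Everything in it is made up for illustration: token_score is a fake stand-in for a real model's next-token score, not any actual API.

```python
import itertools

# Toy stand-ins, purely illustrative: a tiny vocabulary and a fake
# scoring function playing the role of a trained model's next-token score.
VOCAB = ["the", "cat", "sat", "on", "mat"]

def token_score(prefix, token):
    # Hypothetical stand-in for "how good is `token` after `prefix`".
    return hash((tuple(prefix), token)) % 100

def greedy_decode(length):
    # Greedy: commit to the single best-scoring token at each step.
    # Work grows linearly: length * len(VOCAB) evaluations.
    out = []
    for _ in range(length):
        out.append(max(VOCAB, key=lambda t: token_score(out, t)))
    return out

def brute_force_decode(length):
    # Brute force: score every possible sequence and keep the best.
    # Work grows as len(VOCAB) ** length -- hopeless for a real
    # vocabulary of tens of thousands of tokens.
    def seq_score(seq):
        return sum(token_score(seq[:i], t) for i, t in enumerate(seq))
    return list(max(itertools.product(VOCAB, repeat=length), key=seq_score))

print(greedy_decode(4))       # 4 * 5 = 20 score evaluations
print(brute_force_decode(4))  # 5**4 = 625 whole sequences scored
```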

I'm not going to get into an argument about whether these AIs understand anything, largely because I don't have a strong opinion on the matter, but also because that would require a definition of understanding, which is an unsolved problem in philosophy. You can wax poetic about how humans are the only ones with true understanding and how LLMs are encoded in binary (which is somehow related to the point you're making in some unspecified way); however, your comment reveals how little you know about LLMs, machine learning, computer science, and the relevant philosophy in general. Your understanding of these AIs is just as shallow as that of those who claim LLMs are intelligent agents with free will, complete with conscious experience - you just happen to land closer to the mark.

[–] [email protected] 5 points 5 months ago

It is parsing and querying a huge statistical database.

Both are done at the same time and in an opaque manner, but that doesn't make it any less parsing and querying.

[–] [email protected] 3 points 5 months ago (2 children)

Brain structures aren't so dissimilar. Unless you believe there's some metaphysical quality to consciousness, this kind of technology will be how we achieve general AI.

[–] [email protected] 11 points 5 months ago (2 children)

Living, growing, changing cells are pretty damn dissimilar to static circuitry. Neural networks are based on an oversimplified model of neuron cells. The model ignores the fact that neurons are constantly growing, shifting, and breaking connections with one another, and it flat out does not consider structures and interactions within the cells.

Metaphysics is not required to make the observation that computer programmes are orders of magnitude less complex than a brain.

[–] [email protected] 15 points 5 months ago* (last edited 5 months ago) (1 children)

> Neural networks are based on an oversimplified model of neuron cells.

As a programmer who has studied neuroanatomy and the structure/function of neurons themselves, I remain astonished at how unlike real biological nervous systems computer neural networks still are. It's like the whole field is based on one person's poor understanding of the state of biological knowledge in the late 1970s. That doesn't mean it's not effective in some ways as it is, but you'd think there'd be more experimentation in neural networks based on current biological knowledge.

[–] [email protected] 2 points 5 months ago (1 children)

What sort of differences are we looking at exactly?

[–] [email protected] 4 points 5 months ago (1 children)

The thing that stands out to me the most is that programmatic "neurons" are basically passive units that weigh inputs and decide whether or not to fire. The whole net is exposed to the input, the firing decisions propagate through the net, and then whatever output results is triggered. In biological neural nets, most neurons are always firing at some rate, and the inputs from pre-synaptic neurons affect that rate, so in a sense the information passed along is coded as a change in rate rather than as an all-or-nothing decision to fire or not fire, as is the case with (most) programmatic neurons. Implementing something like this in code would be more complicated, but it could produce something much more like a living organism, which is always doing something rather than passively waiting for an input to produce some output.
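
A minimal sketch of that contrast, with entirely made-up numbers and names (nothing below comes from a real framework):

```python
import math

def artificial_neuron(inputs, weights, bias):
    # Standard ANN unit: passive. One weighted sum per query, one output,
    # then it sits idle until the next input arrives.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

class RateCodedNeuron:
    # Toy rate-coded unit: always firing at some baseline rate (Hz).
    # Presynaptic input shifts that rate; the information passed along
    # is the change in rate, not a one-shot fire/don't-fire decision.
    def __init__(self, baseline_hz=5.0, gain=2.0):
        self.rate = baseline_hz
        self.gain = gain

    def step(self, synaptic_input):
        # Input nudges the ongoing rate up or down; the neuron never
        # goes fully silent, it just slows down.
        self.rate = max(0.1, self.rate + self.gain * synaptic_input)
        return self.rate

print(artificial_neuron([1.0, 0.5], [0.8, -0.3], 0.1))  # one-shot output

n = RateCodedNeuron()
# The same neuron sampled over time; it "does something" every step.
print([round(n.step(x), 1) for x in (0.0, 0.5, 0.5, -1.0, 0.0)])
```

The second unit carries state between calls, which is what makes the "always doing something" behaviour possible.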

And TBF there probably are a lot of people doing this kind of thing, but if so they don't get much press.

[–] [email protected] 1 points 5 months ago

Pretty much all artificial neural nets I have seen don't do all-or-nothing activation. They encode activation states as continuous (floating-point) values rather than on/off spikes. I think this is to mimic the effects of variable firing rates.

The idea of a neural network doing stuff in the background is interesting though.
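
For what it's worth, a quick sketch of that difference (generic Python, not tied to any framework): the usual activation functions output a graded float, not a 0/1 spike.

```python
import math

def step(z):
    # All-or-nothing: the unit either fires (1) or stays silent (0).
    return 1.0 if z > 0 else 0.0

def sigmoid(z):
    # Graded: any value in (0, 1), loosely analogous to a firing rate.
    return 1.0 / (1.0 + math.exp(-z))

for z in (-2.0, -0.5, 0.5, 2.0):
    print(f"{z:+.1f} -> step {step(z):.0f}, sigmoid {sigmoid(z):.3f}")
```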

[–] [email protected] 3 points 5 months ago

The fact that you believe software-based neural networks are, as you put it, "static circuitry" betrays your lack of knowledge on the subject. I agree that many people overblow LLM tech, but many people like yourself grossly underestimate it as well.

[–] [email protected] 6 points 5 months ago

This is all theoretical. Today it’s quite basic, even with billions thrown at the problem. Maybe in a few decades these ideas can be expanded on.

[–] [email protected] -2 points 5 months ago

It's a shit post, relax

[–] [email protected] -3 points 5 months ago

We don't have to diminish their accomplishments, no; we choose to.