this post was submitted on 25 Jun 2024
117 points (81.3% liked)

Programmer Humor


Welcome to Programmer Humor!

This is a place where you can post jokes, memes, humor, etc. related to programming!

For sharing awful code, there's also Programming Horror.

[–] [email protected] 29 points 4 months ago (4 children)

Pasting the same thing I commented last time this was posted:

After reading that entire post, I wish I had used AI to summarize it.

"I am not in the equally unserious camp that generative AI does not have the potential to drastically change the world. It clearly does. When I saw the early demos of GPT-2, while I was still at university, I was half-convinced that they were faked somehow. I remember being wrong about that, and that is why I'm no longer as confident that I know what's going on."

This pull quote feels like it’s antithetical to their entire argument and makes me feel like all they’re doing is whinging about the fact that people who don’t know what they’re talking about have loud voices. Which has always been true and has little to do with AI.

[–] [email protected] 9 points 4 months ago

There's an earlier bit that complements that nicely:

"it turns out that the core competency of smiling and promising people things that you can't actually deliver is highly transferable."

[–] [email protected] 8 points 4 months ago* (last edited 4 months ago)

I kinda get where he is coming from, though. AI is being crammed into everything, especially into things it is not currently suited for.

After learning about machine learning, you kind of realize that, unlike "regular programs", ML gives you "roughly what you want" answers. Approximations, really. This is all fine and good for generating images, for example, because minor details being off from what you wanted probably isn't too bad. A chatbot itself isn't wrong here either, because there are many ways to say the same thing. The important thing is that there is a definite step after that where you evaluate the result. In simpler ML you can even figure out the specifics of the process, but for the most part we just evaluate what the LLM said, or whether the image matches our expectations. But we can't control or constrain the output to exactly our needs, because our restrictions are largely just more input into an almost-finished approximation engine.
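The pattern is basically "generate an approximation, then gate it behind a review step". A minimal Python sketch of that idea, where generate_answer and passes_review are hypothetical stand-ins for illustration, not any real API:

```python
# Toy illustration of "approximate, then review".
# generate_answer stands in for any generative model call; it returns
# a plausible-looking answer, not a guaranteed-correct one.

def generate_answer(prompt: str) -> str:
    return "Refunds are available within 90 days."  # may be confidently wrong

def passes_review(answer: str, known_policies: set[str]) -> bool:
    # The crucial extra step: check the approximation against ground
    # truth before anyone treats it as fact.
    return answer in known_policies

policies = {"Refunds are available within 30 days."}
draft = generate_answer("What is the refund policy?")

if passes_review(draft, policies):
    print(draft)
else:
    print("Escalating to a human agent.")  # don't present a guess as fact
```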

The problem is that companies take these approximation engines, put them in their products, and treat the output as fact. Like AI chatbots doing customer support and making up facts, such as the airline customer who was told about a policy that didn't exist, or the search engines that parrot jokes and harmful advice. Sure, you and I might realize that these answers come from a machine that doesn't actually think about them, but others don't. And slapping a "*this might be wrong because it's AI" on it is not an acceptable waiver of accountability.

Despite this, I use ChatGPT and Gemini a lot to help me program. They get a lot of things wrong, but they also do great. It's a great tool, exactly because I step in after the approximation step, review, and decide. I'm aware of the limits. But putting these things in front of "users" without a review step means you are advertising that you are either unaware of this flaw, or that you've done the cost-benefit analysis and decided that, if nothing else, it'll generate interest during the hype.

There is huge potential here, but throwing AI at situations where facts are needed, when all it can do is make rough guesses, is the wrong way to go about it.

[–] [email protected] 6 points 4 months ago

Yeah, parts of this article feel like they've been written by a GenAI. Which... might have been the point, I suppose.

[–] [email protected] -3 points 4 months ago (1 children)

So to be clear: you didn't laugh?

[–] [email protected] 6 points 4 months ago

I thought it was hilarious

[–] [email protected] 2 points 4 months ago

I need more blog posts like these...