this post was submitted on 25 Jan 2024
549 points (97.1% liked)

Greentext


This is a place to share greentexts and witness the confounding life of Anon. If you're new to the Greentext community, think of it as a sort of zoo with Anon as the main attraction.

Be warned:

If you find yourself getting angry (or god forbid, agreeing) with something Anon has said, you might be doing it wrong.

[–] [email protected] 31 points 11 months ago* (last edited 11 months ago) (4 children)

Most questions on SO these days are very specific so I doubt ChatGPT would be able to come up with good answers for those. All the easy questions have been answered long ago.

[–] [email protected] 12 points 11 months ago (3 children)

Especially since ChatGPT can't think of a new answer, right? It's working off data that's already somewhere online. It's just using predictive text to determine the next word based on what it was trained on. So most of the answers people get from "AI" are already out there, available from real people.
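The "predicting the next word" idea can be sketched with a toy bigram model. To be clear, this is only an analogy I'm adding for illustration; actual LLMs use neural networks over subword tokens, not raw counts:

```python
# Toy next-word predictor: count which word follows each word in some
# training text, then predict the most common follower. This is NOT how
# ChatGPT works internally, just a sketch of "predict the next word".
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ate the fish".split()

# Map each word to a Counter of the words seen immediately after it.
following = defaultdict(Counter)
for word, nxt in zip(training_text, training_text[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat": it follows "the" twice, more than "mat" or "fish"
```

The point of the analogy: the model can only ever emit continuations its training data made likely, which is the claim being argued about here.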

[–] [email protected] 8 points 11 months ago (3 children)

I don't know why you're getting downvoted. That is how it works, to my understanding (as a layperson). It was fed training data and is very good at predictive text. I don't think it can take concepts it's learned and apply them in novel ways.

[–] [email protected] 4 points 11 months ago (1 children)
[–] [email protected] 2 points 10 months ago

This is hilarious, but I don't think it fully answers the question. It is a good example of something novel that GPT can do, i.e. manipulating language according to new rules to create rhythm and rhymes.

However, to give a more over the top example: if you removed all mention of planes from its corpus, leaving only information on air resistance and materials science, and then asked it for the best way to cross the Atlantic, it would never invent a plane for you.

[–] [email protected] 2 points 11 months ago (1 children)

Even if it could, there are a lot of APIs or documentation that it hasn't been trained on enough or at all to be able to answer. The models can, at least currently, only contain so much information, so the more specific or detailed the response you need, the worse it'll do.

[–] [email protected] 1 points 11 months ago

Deciding what to write next based on what it just wrote is reasoning. So saying "it's just predicting the next word" is very dismissive, especially if you haven't used it.

My personal experience: I spent hours googling for a script. I gave up and typed my problem into ChatGPT. It gave working code in seconds.

It wasn't just cutting and pasting what was already on Google.

[–] [email protected] 2 points 11 months ago

I swear, uninformed people who underestimate AI will be the death of us

[–] [email protected] 1 points 11 months ago

Good thing every single programming line is already documented somewhere.

It doesn't need to think of new answers.

[–] [email protected] 9 points 11 months ago

I disagree. I use chatgpt all the time where I'll tell it "here's my block of code" then "here's the error message I'm getting, how should I resolve this?" I could easily see it working for stack exchange questions. Chatgpt is useful because it's able to answer specific questions.

Of course there is some percentage of the time where it's completely wrong, but I'd put that under 20% for the questions I ask it. And you can tell it's wrong because the solution doesn't work, but if I'm not familiar with the subject matter I could waste a lot of time before I figure out why it's wrong.

[–] [email protected] 2 points 11 months ago

If you look at new questions asked, there are a lot of easy to answer, low quality questions.

[–] [email protected] 2 points 11 months ago (1 children)

Which is probably how chatgpt learned to code in the first place.

[–] [email protected] 4 points 11 months ago (1 children)
[–] [email protected] 3 points 11 months ago (1 children)

You haven't learned to add "probably" when you're sure of something on lemmy?

[–] [email protected] 5 points 11 months ago