this post was submitted on 01 Apr 2024
264 points (99.3% liked)

Technology

1376 readers

Which posts fit here?

Anything that is at least tangentially connected to technology, social media platforms, information technologies, and tech policy.


Rules

1. English only: Title and associated content have to be in English.
2. Use original link: The post URL should be the original link to the article (even if paywalled), with archived copies left in the body. This helps avoid duplicate posts when cross-posting.
3. Respectful communication: All communication has to be respectful of differing opinions, viewpoints, and experiences.
4. Inclusivity: Everyone is welcome here regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.
5. No ad hominem attacks: Personal attacks of any kind are expressly forbidden. If you can't argue your position without attacking a person's character, you have already lost the argument.
6. No off-topic tangents: Stay on topic. Keep it relevant.
7. Instance rules may apply: If something is not covered by the community rules but is against lemmy.zip instance rules, those rules will be enforced.


Companion communities

[email protected]
[email protected]



founded 1 year ago
[–] [email protected] 59 points 7 months ago (18 children)

Until we either solve the problem of LLMs providing false information or the problem of people being too lazy to fact check their work, this is probably the correct course of action.

[–] [email protected] 27 points 7 months ago (4 children)

I can’t imagine using any LLM for anything factual. It’s useful for generating boilerplate, and that’s basically it. Any time I try to get it to find errors in what I’ve written (either communication or code), it’s basically worthless.

[–] [email protected] 5 points 7 months ago (1 children)

My little brother was using GPT for homework, and he asked it the probability of an extra Sunday in a leap year (52 weeks and 2 days). It said 3/8. One of the possible outcomes it listed was fkng Sunday, Sunday. I asked how two Sundays can come consecutively and it made up a whole bunch of BS. The answer is so simple: 2/7. The sources it listed even had the correct answer.
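The correct reasoning is short enough to check mechanically: a leap year has 366 days, i.e. 52 full weeks plus 2 extra days, and those two extra days form one of 7 equally likely consecutive pairs (Sun–Mon, Mon–Tue, …, Sat–Sun). Exactly 2 of those pairs contain a Sunday. A minimal sketch verifying this:

```python
# Verify the 2/7 probability of a leap year having an extra (53rd) Sunday.
# The 2 extra days beyond 52 weeks form one of 7 equally likely
# consecutive pairs of weekdays; count how many pairs include Sunday.
from fractions import Fraction

days = ["Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"]
pairs = [(days[i], days[(i + 1) % 7]) for i in range(7)]
favorable = [p for p in pairs if "Sun" in p]

prob = Fraction(len(favorable), len(pairs))
print(favorable)  # [('Sun', 'Mon'), ('Sat', 'Sun')]
print(prob)       # 2/7
```

Note that consecutive days are always distinct, so an outcome like (Sunday, Sunday) is impossible — which is exactly where the model's 3/8 answer went wrong.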

[–] [email protected] 4 points 7 months ago

All it does is create answers that sound like they might be correct. It has no working cognition. People who ask questions like that expect reasoning about probability and days in a year. All it does is combine the two; it can't think about it.

[–] [email protected] 3 points 7 months ago (1 children)

Really? It spotted a missing push_back like 600 lines deep for me a few days ago. I’ve also had good success at getting it to spot missing semicolons that C++ compilers can’t because C++ is a stupid language.

[–] [email protected] 5 points 7 months ago (1 children)

You can thank all open source developers for that by supporting them.

[–] [email protected] 1 points 7 months ago (1 children)
[–] [email protected] 6 points 7 months ago (1 children)

All LLMs are trained on open source code without any acknowledgment or compliance with the licenses. So their hard work is responsible for you being able to take advantage of it now. You can say thank you by supporting them.

[–] [email protected] 2 points 7 months ago (1 children)

Ah yes, I am aware. Gotta love open source :)

Were you under the impression that I said anything to the contrary?

[–] [email protected] 3 points 7 months ago

No, just taking any opportunity to spread the word and support open source.

[–] [email protected] 1 points 7 months ago

I find it useful for quickly reformatting smaller samples of tables and the like for my reports. It's often far simpler and quicker to just drop the data in there and say what to do than to write a short Python script.
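For scale, this is roughly the kind of one-off script the comment is comparing the LLM prompt against — a hypothetical example (the data and target format are made up) that turns whitespace-separated rows into a Markdown table:

```python
# Hypothetical one-off reformatting script: convert whitespace-separated
# rows into a Markdown table (the kind of chore the comment describes
# handing to an LLM instead).
raw = """name value unit
temp 21.5 C
load 0.83 ratio"""

rows = [line.split() for line in raw.splitlines()]
header, body = rows[0], rows[1:]

md = ["| " + " | ".join(header) + " |",
      "| " + " | ".join("---" for _ in header) + " |"]
md += ["| " + " | ".join(r) + " |" for r in body]

print("\n".join(md))
# | name | value | unit |
# | --- | --- | --- |
# | temp | 21.5 | C |
# | load | 0.83 | ratio |
```

Even at a dozen lines, writing, testing, and tweaking this for each slightly different table can take longer than describing the transformation in one sentence.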
