Unpopular Opinion
Welcome to the Unpopular Opinion community!
How voting works:
Vote the opposite of the norm.
If you agree that the opinion is unpopular, give it an arrow up. If it's something that's widely accepted, give it an arrow down.
Guidelines:
Tag your post, if possible (not required)
- If your post is a "General" unpopular opinion, start the subject with [GENERAL].
- If it is a Lemmy-specific unpopular opinion, start it with [LEMMY].
Rules:
1. NO POLITICS
Politics is everywhere. Let's make this about [GENERAL]- and [LEMMY]-specific topics, and keep politics out of it.
2. Be civil.
Disagreements happen, but that doesn't give anyone the right to personally attack others. No racism/sexism/bigotry. Please also refrain from gatekeeping others' opinions.
3. No bots, spam or self-promotion.
Only approved bots, which follow the guidelines for bots set by the instance, are allowed.
4. Shitposts and memes are allowed but...
Only until they prove to be a problem. They can and will be removed at moderator discretion.
5. No trolling.
This shouldn't need an explanation. If your post or comment is made just to get a rise out of people, with no real value, it will be removed. Do this too often and you'll get a vacation to touch grass, away from this community, for one or more days. Repeat offenses will result in a permaban.
Instance-wide rules always apply. https://legal.lemmy.world/tos/
That's because it's conning you, like a mentalist would.
I've read this text. It's a good piece, but unrelated to what OP is talking about.
The text boils down to "people who believe that LLMs are smart do so for the same reasons as people who believe that mentalists can read minds do." OP is not saying anything remotely close to that; instead, they're saying that LLMs lead to pleasing and insightful conversations in their experience.
Yeah, as would ELIZA (at a much lower cost).
It's what they're designed to do.
But the point is that calling them conversations is a long stretch.
You're just talking to yourself. You're enjoying the conversation because the LLM is simply saying what you want to hear.
There's no conversation whatsoever going on there.
Neither ELIZA nor LLMs are "insightful", but that doesn't stop them from outputting utterances that a human being would subjectively interpret as such. And the latter is considerably better at that.
Then your point boils down to an "ackshyually", on the same level as "When you play chess against Stockfish you aren't actually «playing chess» as a 2P game, you're just playing against yourself."
This shite doesn't need to be smart to be interesting to use and fulfil some [not all] social needs. Especially in the case of autists (as OP mentioned to be likely in the spectrum); I'm not an autist myself, but I lived with them for long enough to know how the cookie crumbles for them: opening your mouth is like saying "please put words here, so you can screech at me afterwards".
You're gatekeeping what counts as a conversation now?
I can take this even further. I can have better conversations literally with myself inside my own head than with some people online.
This should be the top comment. Yes, people can be assholes, AI is polite, and people can and should learn not to be assholes... But why is it being polite? Out of empathy or kindness? No, it is the plaything of billionaires and wants your money, and everyone else's too.
Thank you! I'm a professional part-time psychic entertainer and magician, and this was a delightful read. It's true, and A.I. takes advantage of people the same way a psychic entertainer does. Both tell you what you want to hear. The difference is, the psychic is usually deemed entertainment, while the computer is often deemed an authoritative source.
It's a bit scary to think that in a few decades my job-hobby may be outsourced to A.I. However, I've always thought (predicted!) that live entertainment will become more valuable as the A.I. revolution occurs.
Hey, another spreader of the LLMentalist! There are at least three of us! :D
Idk, I think that article is a bit hyperbolic and self-serving: validation for the writer and readers to flatter their own intelligence above others'. The lengthy exposition on cold reading is plain filler material for the topic, and yet it goes on.

ChatGPT and LLMs have been a thing for a while now, and I doubt anyone technically literate believes them to be AI as in an actual individual entity. It's an interactive question-response machine that summarises what it knows about your query in flowing language, or even formatted as lists or tables or whatever, at your request. Yes, it has deep, deep flaws, with holes and hallucinations, but for reasonable expectations it is brilliant.

Just like a computer or its software, it can do what it can do. Nobody expects a word processor or image editor or musical notation software to do more than what it can do. Even the world's most regarded encyclopedias have limits, printed and interactive media alike.

So I don't see why people feel the need to keep patting themselves on the back for how clever they are by pointing out that LLMs are in fact not a real-world mystical oracle that knows everything. Maybe because they themselves were the ones thinking it was, and now they're overcompensating to save face.