It’s great as a mental prosthetic. When I’m tackling a complex new topic, say a cloud platform I’m learning, I can test my understanding of the implications of a change to the console settings. I tell it what I think and ask it to check my understanding. It really speeds up my learning, but I don’t rely on it exclusively. I will write my own dang emails, thank you.
Yeah, I mainly like it as a rubber ducking tool. And specifically in contexts where I already understand the topic well and just want prompts to stimulate more ideas on the subject.
That’s exactly right: you have to know enough about the subject to smell a bullshit answer.
LLMs are great for drafting emails and summaries. I’m not so sure about a fresh perspective, but it seems like in some situations that could happen.
I've found them to be helpful in that regard. When I want to express some concept, I can feed my half-baked idea to a chatbot and have it expand on it, which helps me flesh the idea out more.
My issue is that they train their models on your data, so that fresh new idea becomes its fresh new idea.
You can run models locally nowadays on fairly modest hardware. I use GPT4all and it works great: https://github.com/nomic-ai/gpt4all
Fair enough