this post was submitted on 17 Dec 2024

TechTakes

[–] Amoeba_Girl@awful.systems 25 points 1 day ago* (last edited 1 day ago) (2 children)

To be honest, as someone who's very interested in computer-generated text and poetry and the like, I find generic LLMs far less interesting than more traditional Markov chains, because they're too good at reproducing clichés to the exclusion of anything surprising or whimsical. So I don't think they're very good for the unfactual either. A homegrown neural network would probably give better results.
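For comparison, a word-level Markov chain of the kind the commenters are describing fits in a few lines. This is a minimal, hypothetical sketch (not anyone's actual bot): it maps each word to the words that follow it in a corpus, then random-walks that table.

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each tuple of `order` consecutive words to the words seen after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=20, seed=None):
    """Random-walk the chain to produce roughly `length` words."""
    rng = random.Random(seed)
    state = rng.choice(list(chain))
    out = list(state)
    while len(out) < length:
        followers = chain.get(state)
        if not followers:  # dead end: jump to a random state and keep going
            state = rng.choice(list(chain))
            out.extend(state)
            continue
        word = rng.choice(followers)
        out.append(word)
        state = state[1:] + (word,)
    return " ".join(out[:length])
```

The whimsy the comment is after comes from the low `order`: with `order=1` the model only remembers one word of context, so it happily splices unrelated sentences together, which is exactly what a heavily fine-tuned LLM is trained not to do.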

[–] dgerard@awful.systems 14 points 1 day ago (1 children)

GPT-2 was peak LLM because it was bad enough to be interesting; it was all downhill from there.

[–] Amoeba_Girl@awful.systems 10 points 1 day ago

Absolutely. Every single one of these tools has gotten less interesting as they refine it, until it can only output the platonic ideal of kitsch.

[–] bitwolf@sh.itjust.works 11 points 1 day ago

Agreed, our chat server ran a Markov chain bot for fun.

Compared to the ChatGPT bot on a second server I frequent, it had much funnier and more random responses.

ChatGPT tends to just agree with whatever it chose to respond to.

As for real-world use: ChatGPT produces the wrong answer 90% of the time. I've enjoyed Circuit AI, however. While it also produces incorrect responses, it shares its sources, so I can more easily get to the right answer.

All I really want from a chatbot is a gremlin that finds the hard things to Google on my behalf.