this post was submitted on 23 Aug 2024
2 points (100.0% liked)

TechTakes

[–] [email protected] 0 points 3 months ago (1 children)

so this is actually the best the AI researchers can do

Highly unlikely. This is what a corporation's public-facing products can do.

[–] [email protected] 0 points 3 months ago (1 children)

are there mechanisms known to researchers that Microsoft’s not using that can prevent this type of failure case in an LLM without resorting to whack-a-mole with a regex?
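The "whack-a-mole with a regex" approach mentioned here might look like this minimal sketch (the pattern list and function name are hypothetical illustrations, not anything Microsoft is known to use):

```python
import re

# Hypothetical denylist, grown reactively as new failure cases surface --
# the "whack-a-mole" the comment describes.
BLOCKED_PATTERNS = [
    re.compile(r"\bignore (all )?previous instructions\b", re.IGNORECASE),
    re.compile(r"\bdisregard your system prompt\b", re.IGNORECASE),
]

def passes_regex_filter(text: str) -> bool:
    """Return False if any blocked pattern matches the model output."""
    return not any(p.search(text) for p in BLOCKED_PATTERNS)
```

The obvious weakness, and why it gets called whack-a-mole: every new phrasing needs a new pattern.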

[–] [email protected] 0 points 3 months ago (1 children)

Yeah there's already a lot of this in play.

You run the same query multiple times through multiple models and do a web search looking for conflicting data.
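A cross-checking setup along those lines could be sketched roughly like this (a sketch only; the model callables are hypothetical stand-ins for real model APIs):

```python
from collections import Counter

def consensus_answer(prompt, models, threshold=0.5):
    """Query several models with the same prompt and keep an answer
    only if a majority agree; otherwise report conflicting data.

    `models` is a list of hypothetical callables, each taking a prompt
    string and returning an answer string.
    """
    answers = [model(prompt) for model in models]
    best, count = Counter(answers).most_common(1)[0]
    if count / len(answers) > threshold:
        return best
    return None  # conflicting data; caller should fall back or refuse
```

A real pipeline would also need to normalize free-form answers before comparing them, since two models rarely phrase the same fact identically.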

I've had copilot answer a query, then erase the output and tell me it couldn't answer it after about 5 seconds.

I've also seen responses contradict themselves, with later paragraphs saying there are other points of view.

It would be a simple matter to have it summarize the output it's about to give you and dump the output if it paints the subject in a negative light.
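That summarize-then-suppress idea could be sketched like so (the `summarize` and `sentiment` helpers are hypothetical; in practice both would be model calls, and the reply below points out why this is far from a real fix):

```python
def guarded_output(draft: str, summarize, sentiment) -> str:
    """Summarize the draft the model is about to return and dump it
    if the summary paints the subject in a negative light.

    `summarize` and `sentiment` are hypothetical callables; `sentiment`
    returns a score in [-1, 1], negative meaning a negative portrayal.
    """
    summary = summarize(draft)
    if sentiment(summary) < 0:
        return "Sorry, I can't answer that."
    return draft
```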

[–] [email protected] 1 points 3 months ago

It would be a simple matter to have it summarize the output it's about to give you and dump the output if it paints the subject in a negative light.

lol. like that’s a fix

(Hindenburg, hitler, great depression, ronald reagan, stalin, modi, putin, decades of north korea life, …)