submitted 2 months ago by [email protected] to c/[email protected]
[-] [email protected] 8 points 2 months ago

Yeah, no doubt they will push to give the things built atop the shaky foundation of LLMs as much responsibility and access to credentials as they think they can get away with. Making the models trustworthy for such purposes has been the goal since DeepMind set off in that direction with such optimism. There are a lot of people eager to get there, and a lot of other people eager to give us the impression right now that they will get there soon. That in itself is one more reason they react with some alarm when the products are easily provoked into producing garbage.

I'm sure it will go wrong in many interesting ways. Seems to me there are risks they haven't begun to think about. There's a lot of focus on preventing the models from producing output that's obviously morally offensive, but very little thought given to the idea that output entirely within the bounds of what's considered acceptable might end up calibrated to reinforce and perpetuate the prejudices and misconceptions the machines have learned from us.
