9
submitted 8 months ago by [email protected] to c/[email protected]

cross-posted from: https://programming.dev/post/8121843

~n (@[email protected]) writes:

This is fine...

"We observed that participants who had access to the AI assistant were more likely to introduce security vulnerabilities for the majority of programming tasks, yet were also more likely to rate their insecure answers as secure compared to those in our control group."

[Do Users Write More Insecure Code with AI Assistants?](https://arxiv.org/abs/2211.03622)

top 2 comments
[-] [email protected] 4 points 8 months ago

This seems tied to the issues I've had when using LLMs: they spit out what they think might work, not what is best. Frequently I get suggestions that I need to clean up, or I have to ask follow-up guiding questions.

If I had to guess, it's that since nothing enforces quality on the training data or the generated text, the model tends towards the most frequent approaches rather than the best ones. A hypothetical sketch of what that looks like is below.
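For example (this is my own illustration, not a task from the paper): the most common way to build a SQL query in public code is string interpolation, which is vulnerable to SQL injection, while the parameterized form is the safer but less frequent pattern. The function and table names here are made up.

```python
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Common-but-insecure: user input is spliced directly into the query,
    # so a crafted username can inject arbitrary SQL.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_secure(conn: sqlite3.Connection, username: str):
    # Safer: a parameterized query lets the driver handle escaping.
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()
```

An assistant trained mostly on the first pattern will keep suggesting it unless you explicitly steer it towards the second.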

[-] [email protected] 1 points 8 months ago

Most likely, yes.
