this post was submitted on 04 Jan 2024
82 points (100.0% liked)

cross-posted from: https://programming.dev/post/8121843

~n (@[email protected]) writes:

This is fine...

"We observed that participants who had access to the AI assistant were more likely to introduce security vulnerabilities for the majority of programming tasks, yet were also more likely to rate their insecure answers as secure compared to those in our control group."

[Do Users Write More Insecure Code with AI Assistants?](https://arxiv.org/abs/2211.03622)

[–] [email protected] 10 points 10 months ago (1 children)

I'm still of the opinion that...

Good programmers = best code

[–] [email protected] 5 points 10 months ago (2 children)

eh, I've known lots of good programmers who are super stuck in their ways. Teaching them to effectively use an LLM can help break them out of the mindset that there's only one way to do things.

[–] [email protected] 7 points 10 months ago* (last edited 10 months ago) (1 children)

I find it's useful when writing new code because it can give you a quick first draft of each function, but most of the time I'm modifying existing applications and it's less useful for that. And you still need to be able to judge for yourself whether the code it offers is any good.

[–] [email protected] 4 points 10 months ago (1 children)

I find it's great for explaining convoluted legacy code; it's all about using it effectively.

[–] [email protected] 1 points 10 months ago* (last edited 10 months ago)

It really depends on:

  1. How widely used the thing you want to use is. For example, it hallucinated Caddyfile keys when I asked it about setting up early-data support for a reverse proxy to a Docker container. Luckily the Caddy docs are really good, and it turned out to be an issue with the framework I use anyway, so I had to look it up myself after all. I guess it would have been more likely to get this right on the first attempt if, say, I had wanted to achieve the same thing using Express behind Nginx. For even less popular technology like Elixir, it's borderline useless beyond very high-level concepts that can apply to any programming language. (See the sketch after this list for the kind of Caddyfile block in question.)
  2. How well documented it is, though more widespread use can sometimes make up for bad docs.
  3. How much has changed since it was trained. It may also still suggest deprecated methods, since it doesn't discriminate between official docs and other sources like Stack Overflow in its training data.
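For reference, a plain reverse-proxy block in a Caddyfile looks roughly like this. The site address, container name, and port are made-up placeholders, and the early-data setting itself is left out, since that's exactly the part worth checking against the Caddy docs rather than trusting an LLM's guess:

```
# Minimal hypothetical Caddyfile: proxy a site to a Docker container.
# "example.com", "my-app", and port 3000 are placeholders.
example.com {
    reverse_proxy my-app:3000
}
```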

If you want to avoid these issues, I'd suggest first reading the docs, then searching Stack Overflow or the likely name of the function you need on grep.app, and only then using an LLM as a last resort. It's usually good for prototyping, less so for more specific things.

[–] [email protected] 3 points 10 months ago

I think that's one of the best use cases for AI in programming: exploring other approaches.

It's very time-consuming to play out what your codebase would look like if you had decided differently at the beginning of the project, so actually comparing different implementations is very expensive. This incentivizes people to stick with what they know works well. Maybe even more so when they have more experience, which means they really know this works very well, and they know what can go wrong otherwise.

Being able to generate code instantly helps a lot in this regard, although it still has to be checked for errors.