submitted 3 months ago by [email protected] to c/[email protected]
[-] [email protected] 22 points 3 months ago* (last edited 3 months ago)

The original post was removed, hence the archive link.

HN figures the real issue was the lack of testing/monitoring, not specifically the use of ChatGPT. But the kind of person who's ok with letting spicy autocomplete write their customer acquisition code is probably not the kind of person who knows how to test and monitor.

https://news.ycombinator.com/item?id=40627558

[-] [email protected] 16 points 3 months ago

I actually tried letting ChatGPT-4o write some tests the other day.

Easily 50% of the tests were wrong. They ignored DB uniqueness constraints or even datatypes. In a few cases, they just hallucinated field names that didn't exist.

I ended up spending just as much time cleaning up the cruft as it would have taken to write the tests myself. I could easily see someone just starting out letting code like that go through.
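For illustration only, here's a minimal sketch of the kind of generated test being described. Everything in it is hypothetical (not from the thread): a made-up users table with a unique email column, a test that ignores that constraint, and a hallucinated signup_date field that doesn't exist in the schema.

```python
# Hypothetical sketch of LLM-generated tests that ignore a uniqueness
# constraint and reference a nonexistent column. Stdlib only, so it runs
# as-is; both tests error out when executed, which is the point.
import sqlite3
import unittest


class TestUserCreation(unittest.TestCase):
    def setUp(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute(
            "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT UNIQUE)"
        )

    def test_users_can_share_an_email(self):
        # Ignores the UNIQUE constraint on email: the second insert raises
        # sqlite3.IntegrityError instead of succeeding as the test assumes.
        self.db.execute("INSERT INTO users (email) VALUES ('a@example.com')")
        self.db.execute("INSERT INTO users (email) VALUES ('a@example.com')")
        count = self.db.execute("SELECT COUNT(*) FROM users").fetchone()[0]
        self.assertEqual(count, 2)

    def test_signup_date_defaults_to_null(self):
        # Hallucinated field: there is no signup_date column, so this query
        # raises sqlite3.OperationalError before any assertion runs.
        row = self.db.execute("SELECT signup_date FROM users").fetchone()
        self.assertIsNone(row)


if __name__ == "__main__":
    unittest.main()
```

Tests like these look plausible at a glance, which is exactly why they slip past someone who doesn't run them against the real schema.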

[-] [email protected] 14 points 3 months ago

> Commit messages are overrated.

Now that's the kind of bad hot take I read awful.systems for! Let's all call ourselves "engineers" but write no documentation beyond emoji-laden jokes, and produce no work except the copy-pasted excreta of a chatbot!
