this post was submitted on 09 Jan 2024

Technology@stad

Technology News and Opinion

[–] [email protected] 1 points 10 months ago

You can't really trust anything a human says either: we're frequently wrong yet convinced we're right, or not nearly as competent as we think. Yet we manage, because in a whole lot of endeavours, being right often enough and being able to verify the answers is sufficient.

There are plenty of situations where LLMs are "right enough" and/or where checking the output is trivial. Software development is a good example: I can easily tell whether the output is good enough, humans are often wrong too, and we rely on tests to verify correctness anyway.
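
To illustrate the "rely on tests anyway" point, here's a minimal sketch in Python. `slugify` and its test cases are hypothetical stand-ins for the kind of small function a model might draft; the tests are what actually gets trusted, not the model.

```python
import re

def slugify(title: str) -> str:
    """Lowercase, drop non-alphanumerics, collapse whitespace/hyphens to single hyphens."""
    title = title.lower().strip()
    title = re.sub(r"[^a-z0-9\s-]", "", title)
    return re.sub(r"[\s-]+", "-", title)

def test_slugify():
    # The model's draft is only accepted once these pass.
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  NFA to DFA  ") == "nfa-to-dfa"
    assert slugify("already-a-slug") == "already-a-slug"

if __name__ == "__main__":
    test_slugify()
    print("ok")
```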

Having to cross-check results is a nuisance, but it's worth it when I can, for example, run things past it on subjects I know well enough to tell whether the answers are bullshit, and where it can often produce better answers than a lot of actual software developers. I recently had it give me a refresher on the algorithm for converting a non-deterministic finite automaton (NFA) to a deterministic finite automaton (DFA), and it explained it perfectly (no surprise; there's plenty of material on that subject). But unlike just looking it up on Google, I could also construct examples to test that I remembered it right and have it produce the expected output (which, yes, I verified was correct).
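
For reference, here's a minimal sketch of the standard subset (powerset) construction that conversion uses. Everything in it (the function names, the example NFA for `(a|b)*ab`) is my own illustration, not the model's output, and it's only meant to show the shape of the algorithm.

```python
from collections import deque

def epsilon_closure(states, eps):
    """All states reachable from `states` via epsilon moves. `eps` maps state -> set of states."""
    closure = set(states)
    stack = list(states)
    while stack:
        s = stack.pop()
        for t in eps.get(s, set()):
            if t not in closure:
                closure.add(t)
                stack.append(t)
    return frozenset(closure)

def nfa_to_dfa(nfa_start, nfa_accept, delta, eps, alphabet):
    """Subset construction. `delta` maps (state, symbol) -> set of NFA states.
    Each DFA state is a frozenset of NFA states."""
    start = epsilon_closure({nfa_start}, eps)
    dfa_trans, accept = {}, set()
    queue, seen = deque([start]), {start}
    while queue:
        subset = queue.popleft()
        if subset & nfa_accept:
            accept.add(subset)
        for sym in alphabet:
            moved = set()
            for s in subset:
                moved |= delta.get((s, sym), set())
            target = epsilon_closure(moved, eps)
            dfa_trans[(subset, sym)] = target
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return start, accept, dfa_trans

# Example NFA for (a|b)*ab: state 0 is the start, state 2 accepts.
delta = {
    (0, 'a'): {0, 1}, (0, 'b'): {0},
    (1, 'b'): {2},
}
start, accept, trans = nfa_to_dfa(0, {2}, delta, {}, {'a', 'b'})

def dfa_accepts(word):
    state = start
    for ch in word:
        state = trans[(state, ch)]
    return state in accept

# The kind of spot-check I mean: construct inputs and verify the expected output.
assert dfa_accepts("ab") and dfa_accepts("aab") and not dfa_accepts("ba")
```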

I also regularly have it write full functions. I have a web application where it has written roughly 80% of the code without intervention from me, and plenty of my libraries now contain functions it wrote.

I use it regularly. It's saving me more than enough time to justify both the ChatGPT subscription and the API fees for other uses.

As such, it is "actually useful" for me, and for many others.