KICK TECH BROS OUT OF 196

[–] [email protected] 3 points 1 year ago

> Second, find me a person whose only feedback loop was internal.

Not my argument: find me a person you'd consider intelligent who is influenced only externally, with no autonomy of their own. I'd call that person a vegetable.

> First, feedback on correctness can be driven by end users.

You've never worked with end users, have you? Jesus Christ, the last thing you want to give an end user is write access to your model. It doesn't matter what channel that write access comes through, it will be used to destroy your model.

(Not to mention the extortionate cost of this constant training, but this isn't a discussion about economic feasibility)

Besides, that doesn't solve autonomy, which is still an integral aspect of intelligence.

The "consciousness program" is a fiction for illustrative purposes. It doesn't exist, in case you misunderstood me.

I did not miss the point on the "wrong" bit: an LLM saying it is uncertain is not the same as it saying it is wrong, and LLMs do not evaluate true or false. They transform inputs into outputs, optionally with a certainty level.
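
To make that concrete, here's a minimal sketch of what "transform inputs into outputs, with a certainty level" looks like in practice, using GPT-2 through Hugging Face transformers purely as an example (any causal LM works the same way). The model maps text to a probability distribution over next tokens; nothing in it evaluates whether a statement is true.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Example model only; the point holds for any causal language model.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # scores for the next token only

probs = torch.softmax(logits, dim=-1)        # the "certainty level": a distribution, not a verdict
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx)!r}: {p:.3f}")  # likely continuations, not truth values
```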

Feeding the outputs of one LLM into another has been shown in some cases to improve accuracy, but that's just hooking two models together, not solving the fundamental gaps in reasoning.
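
For what it's worth, "hooking two models together" usually looks something like the sketch below: a drafter, a critic, and a revision pass. The `generate()` function and the model names are hypothetical stand-ins for whatever LLM API you use; the point is that it's still just chained text transforms.

```python
def generate(model: str, prompt: str) -> str:
    """Hypothetical stand-in for a call to whatever LLM API you use."""
    return "placeholder output"

def draft_and_critique(question: str) -> str:
    draft = generate("writer-model", question)
    critique = generate("critic-model",
                        f"Point out errors in this answer to '{question}':\n{draft}")
    revised = generate("writer-model",
                       f"Rewrite this answer using the critique.\nAnswer: {draft}\nCritique: {critique}")
    return revised  # still input -> output, just chained; no new reasoning machinery appears
```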

If a human encounters an unknown situation, they can seek out context to try and figure out more. They can generalize from what they know and look for things that might help them understand better.

An LLM just has an output. It cannot "broaden its search" or generalize. Anything that did so would be layers on top of the LLM, running the aforementioned fictitious "consciousness", and that consciousness would need a significant amount of complexity to perform the functions described here and previously.
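
As a hedged sketch of that "layers on top" point: any broadening of the search has to live in an outer control loop written in ordinary code, not in the LLM itself. `llm()` and `search_web()` below are hypothetical stand-ins; the retry logic, stop condition, and query choice all belong to the wrapper.

```python
def llm(prompt: str) -> str:
    return "..."  # stand-in for a model call

def search_web(query: str) -> str:
    return "..."  # stand-in for an external retrieval step

def answer_with_lookups(question: str, max_rounds: int = 3) -> str:
    context = ""
    for _ in range(max_rounds):
        reply = llm(f"Context:\n{context}\n\nQuestion: {question}")
        if "uncertain" not in reply.lower():     # crude stop condition, decided by the wrapper
            return reply
        context += "\n" + search_web(question)   # the wrapper broadens the search, not the model
    return reply
```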

An LLM is not an actor; it is math.

You're anthropomorphizing bits.