this post was submitted on 19 Feb 2023
0 points
Machine Learning
you are viewing a single comment's thread
I wonder how we can "certainly" know that.
Because we built the model and know how it works. If you poke a dead brain, a limb might twitch, but there is no consciousness in there. That's pretty much where we're at.
I would argue that we also know how brains work on a physical/chemical level, but that does not mean that we understand how they work on a system level. Just like we know how NNs work on a mathematical level, but not on a system level.
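To make the "mathematical level" concrete: every operation in a feed-forward NN is exactly specified, yet that transparency says nothing about system-level properties. A minimal sketch (toy sizes and random weights, purely illustrative):

```python
import numpy as np

# At the "mathematical level" a feed-forward NN is fully transparent:
# each layer is just a matrix multiply plus a nonlinearity.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))   # layer 1 weights (toy dimensions)
b1 = np.zeros(3)
W2 = rng.standard_normal((3, 2))   # layer 2 weights
b2 = np.zeros(2)

def forward(x):
    h = np.maximum(0, x @ W1 + b1)  # ReLU hidden layer
    return h @ W2 + b2              # linear output layer

y = forward(np.ones(4))
```

Every line above is completely understood in isolation; the open question is what, if anything, follows about the system as a whole when billions of such operations are composed.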
When someone claims that some object does not have a certain property, I would expect them to define what the necessary conditions for this property are, and then show that these conditions are not satisfied by the object.
As far as I know, the current consensus hypothesis is that sentience/consciousness emerges from certain patterns of information processing. So, one would have to show that the necessary kind of information processing is not happening in some object. One can argue that dead brains are not conscious, as no information processing is going on there at all. However, since it is unknown what kind of information processing is necessary for consciousness to arise, one cannot currently define the necessary conditions precisely (beyond "there has to be some information processing"), and therefore cannot show that NNs fail to fulfill them. So, I think it is difficult to be "certain".
I disagree with your premise and I don’t think this is a productive discussion.
Which premise? Could you elaborate?