this post was submitted on 09 Aug 2024
29 points (100.0% liked)

MoreWrite


This started as a summary of a random essay Robert Epstein (fuck, that's an unfortunate surname) cooked up back in 2016, and evolved into a diatribe about how the AI bubble affects how we think of human cognition.

This is probably a bit outside awful's wheelhouse, but hey, this is MoreWrite.

The TL;DR

The general article concerns two major metaphors for human intelligence:

  • The information processing (IP) metaphor, which views the brain as some form of computer (implicitly a classical one, though you could probably cram a quantum computer into that metaphor too)
  • The anti-representational metaphor, which views the brain as a living organism, which constantly changes in response to experiences and stimuli, and which contains jack shit in the way of any computer-like components (memory, processors, algorithms, etcetera)

Epstein's general view is, if the title didn't tip you off, firmly on the anti-rep metaphor's side, dismissing IP as "not even slightly valid" and openly arguing for dumping it straight into the dustbin of history.

His main piece of evidence for this is a basic experiment, in which he has a student draw two images of a dollar bill - one from memory, and one with a real dollar bill as reference - and compare the two.

Unsurprisingly, the image made with a reference blows the image from memory out of the water every time, which Epstein uses to argue against any notion of the image of a dollar bill (or anything else, for that matter) being stored in one's brain like data in a hard drive.

Instead, he argues that the student was re-experiencing the sight of the bill when drawing it from memory - an ability their brain had developed by changing in response to the many dollar bills they'd seen up to that point.

Another piece of evidence he brings up is a 1995 paper from Science by Michael McBeath regarding baseballers catching fly balls. Where the IP metaphor reportedly suggests the player roughly calculates the ball's flight path with estimates of several variables ("the force of the impact, the angle of the trajectory, that kind of thing"), the anti-rep metaphor (given by McBeath) simply suggests the player catches them by moving in a manner which keeps the ball, home plate and the surroundings in a constant visual relationship with each other.

The final piece I could glean from this is a report in Scientific American about the Human Brain Project (HBP), a $1.3 billion project launched by the EU in 2013 with the goal of simulating the entire human brain on a supercomputer. Said project went on to become a "brain wreck" less than two years in (and eight years before its 2023 deadline) - a "brain wreck" Epstein implicitly blames on the whole thing being guided by the IP metaphor.

Said "brain wreck" is a good place to cap this section off - the essay is something I recommend reading for yourself (even if I do feel its arguments aren't particularly strong), and its not really the main focus of this little ramblefest. Anyways, onto my personal thoughts.

Some Personal Thoughts

Personally, I suspect the AI bubble's made the public a lot less receptive to the IP metaphor these days, for a few reasons:

  1. Artificial Idiocy

The entire bubble was sold as a path to computers with human-like, if not godlike intelligence - artificial thinkers smarter than the best human geniuses, art generators better than the best human virtuosos, et cetera. Hell, the AIs at the centre of this bubble run on neural networks, whose design is loosely based on our current understanding of how the brain works.

What we instead got was Google telling us to eat rocks and put glue in pizza, chatbots hallucinating everything under the fucking sun, and art generators drowning the entire fucking internet in pure unfiltered slop, identifiable in the uniquely AI-like errors it makes. And all whilst burning through truly unholy amounts of power and receiving frankly embarrassing levels of hype in the process.

(Quick sidenote: Even a local model running on some rando's GPU is a power-hog compared to what it's trying to imitate - digging around online indicates your brain uses only about 20 watts of power to do what it does.)
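To put that sidenote in rough numbers: here's a back-of-the-envelope comparison. The ~20 W brain figure is from above; the 350 W GPU draw is an assumed round number for a consumer card under load, not a measurement.

```python
# Back-of-the-envelope power comparison (illustrative figures only).
BRAIN_W = 20    # approximate power draw of a human brain, in watts
GPU_W = 350     # assumed draw of a consumer GPU under load, in watts

ratio = GPU_W / BRAIN_W
print(f"GPU draws roughly {ratio:.1f}x the brain's power")  # roughly 17.5x
```

And that's a single card doing inference, before you count training runs or datacentre overhead.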

With the parade of artificial stupidity the bubble's given us, I wouldn't fault anyone for coming to believe the brain isn't like a computer at all.

  2. Inhuman Learning

Additionally, AI bros have repeatedly and incessantly claimed that AIs are creative and that they learn like humans, usually in response to complaints about the Biblical amounts of art stolen for AI datasets.

Said claims are, of course, flat-out bullshit - last I checked, human artists only need a few references to actually produce something good and original, whilst your average LLM will produce nothing but slop no matter how many terabytes upon terabytes of data you throw at its dataset.

This all arguably falls under the "Artificial Idiocy" heading, but it felt necessary to point out - these things lack the creativity or learning capabilities of humans, and I wouldn't blame anyone for taking that to mean that brains are uniquely unlike computers.

  3. Eau de Tech Asshole

Given how much public resentment the AI bubble has built towards the tech industry (which I covered in my previous post), my gut instinct tells me that the IP metaphor is also starting to be viewed in a harsher, more "tech asshole-ish" light - not merely as a reductive/incorrect view of human cognition, but as a sign you put tech over human lives, or don't see other people as human.

Of course, AI's biggest names providing a general parade of the absolute worst scumbaggery we know (with Mira Murati being an anti-artist scumbag and Sam Altman being a general creep as the biggest examples) is probably reinforcing that impression, alongside all the active attempts by AI bros to mimic real artists (exhibit A, exhibit B).

[–] [email protected] 7 points 3 months ago (1 children)

but I do believe brains are computers, but only in the broadest sense of what computation could be

Agree. A human brain is capable of executing the steps of a TM with pen/paper, and in that sense the brain is absolutely capable of acting as a computer. But as far as all the other processes a brain does (breathing/maintaining heart rate/etc.), describing that as 'a computer' seems such an abuse of notation as to render the original definition meaningless. We might as well call the moon a computer since it is 'calculating' the effect of a gravitational field on a moon sized object. What I think many people are really claiming when they say a brain is a computer is that if only we could identify the correct finite state deterministic program, there would be no difference between the brain and its implementation in silicon. Personally, I find claims of substrate independence to be less plausible, but of course many of our dear friends are willing to bite that bullet.
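That pen-and-paper point can be made concrete with a toy interpreter - each step is exactly the kind of lookup-write-move a person could do by hand. The machine below (a bit-flipper) is an illustrative example I've made up, not anything from the essay:

```python
# A minimal Turing machine interpreter. Rules map
# (state, read symbol) -> (write symbol, move direction, next state).
def run_tm(tape, rules, state="start", pos=0, max_steps=100):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, "_")  # "_" is the blank symbol
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Toy machine: flip every bit, halt on blank.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_tm("0110", rules))  # -> 1001
```

Nothing in that loop cares whether it runs on silicon, neurons, or graph paper - which is the whole "acting as a computer" sense, as distinct from the claim that everything a brain does is such a program.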

[–] [email protected] 3 points 3 months ago (2 children)

We might as well call the moon a computer since it is ‘calculating’ the effect of a gravitational field on a moon sized object.

Yes. In fact, that's sort of my point. There is no privileged sense of computation. They can be different even if they do have invariants.

But as far as all the other process a brain does (breathing/maintaining heart rate/etc.) describing that as ‘a computer’ seems such an abuse of notation as to render the original definition meaningless.

I tend to agree that oftentimes the terminology of 'attendance' is better than the terminology of computation, but I don't think that there isn't -any- meaning in keeping the computer metaphor, because I do think it has practical implications.

At the risk of going down another rabbit hole, I'd really say that the Free Energy Principle does a pretty good job of showing why keeping a wide, but nonetheless useful, definition of computation on the table can be useful. As in, a principled tool that can shed some light on scale-free dynamics (and not an absolute, definitive answer to all questions).

https://www.youtube.com/watch?v=KQk0AHu_nng

Maybe another reason I'm ok with the computer metaphor (in which we retain the lack of privilege, and in which the attendance metaphor is kept) is that it does sort of provide us some interesting technical intuitions, too. Like how the maximum power principle affects the design and building of technology of all kinds (whether it's chemistry, electronics, energy, gardening), and how ambiguity (that is, the unknowable embedded environment) is an important functional element of deploying any sort of technology (or policy, or behavior).

One day, the fact that simple and even slow things (like water, or the moon, or chemicals, or rocks, or animals) are capable computationally, but attend to different things, is in fact going to be meaningful and important.

[–] [email protected] 4 points 3 months ago* (last edited 3 months ago) (4 children)

i dunno, this seems to me to lead in a straight line to Chalmers claiming rocks could be conscious and you can't prove they're not.

sure you can expand "computation" to things outside Turing, but then you're setting yourself up for equivocation

[–] [email protected] 6 points 3 months ago* (last edited 3 months ago)

Careful David, if you deny that rocks (and therefore the moon) are conscious, you might make them angry.

[–] [email protected] 4 points 3 months ago

I definitely don't claim anything about consciousness. But I also don't think things have to be conscious to be interesting, or for me to care about them.

Hell, my mom is dead, and definitely not conscious. But I still think about her and care about her. And my memories of her still impact my life and behavior in strange ways.

I get where you're coming from, and I'm not trying to make normalizing reductive claims that things -are the same-. But things that are different by some measures can also share things by others. I think it is a useful perspective to have.

Computation and computer metaphors are helpful, at least to my thinking. But even I don't argue that it's a privileged position. Lots of words and metaphors can work.

[–] [email protected] 3 points 3 months ago* (last edited 3 months ago) (1 children)

Well, we could argue that computers don't really "compute," either. What a computer does is measure the flow of electrons through a transistor, albeit billions of them. If the flow of electrons passes an arbitrary threshold on a certain transistor, then we call it a "1". If it doesn't, we call it a "0". The "computation" is just us interpreting the flow of electrons into something more useful. ^It^ ^was^ ^explained^ ^to^ ^me^ ^that^ ^the^ ^"threshold"^ ^was^ ^over^ ^and^ ^under^ ^5^ ^volts,^ ^but^ ^I^ ^think^ ^if^ ^you^ ^put^ ^5^ ^volts^ ^into^ ^a^ ^modern^ ^transistor^ ^it^ ^would^ ^just^ ^fry^ ^it.^

Obviously, because our brains are made of cells instead of silicon transistors, we wouldn't "compute" the same way a transistor does. If we decide that computation is only something that transistors can do, then obviously the brain couldn't compute, but, for now, that line would be arbitrary.
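That "interpretation" step is small enough to write out. The sketch below is purely illustrative - the 0.7 V threshold is a number I picked for the example, and real logic families define their own voltage levels:

```python
# Reading analog voltages as bits: the "computation" is just us comparing
# a physical quantity against an arbitrary threshold.
THRESHOLD = 0.7  # volts; made-up cutoff for illustration

def to_bit(voltage):
    # Above threshold reads as 1, at or below reads as 0.
    return 1 if voltage > THRESHOLD else 0

readings = [0.1, 0.9, 0.8, 0.05]      # measured voltages on four lines
bits = [to_bit(v) for v in readings]
print(bits)  # -> [0, 1, 1, 0]
```

The electrons don't know about the threshold; the meaning lives entirely in the reading.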

[–] [email protected] 4 points 3 months ago

This definitely reminds me of something I heard in the above video, which I think is super important. Like of course things like memory or computers are metaphors. But like, isn't everything metaphors? To your point, the "computation" of a transistor is in fact our interpretation of an activity that obviously isn't actually the thing we're seeing it as. Even a von Neumann machine isn't actually a Turing machine -- it has practical limitations that theoretical Turing machines don't!

But just because something or anything is a metaphor doesn't mean it isn't useful. It's just incomplete.

[–] [email protected] 2 points 3 months ago* (last edited 3 months ago)

I wouldn't take Chalmers' opinions on things that seriously. Chalmers is a metaphysical realist, a very dubious philosophical position, and thus all his positions are inherently circular.

Metaphysical realism presumes dualism from the get-go: that there is some fundamental gap between an unobservable "objective" reality beyond everything we can ever hope to perceive, and everything we do perceive, which is somehow not real and a unique property associated with mammalian brains. To be not real suggests it is outside of reality, that it somehow transcends reality.

This was what Thomas Nagel argued in his famous paper "What is it like to be a Bat?", and Chalmers merely cites this as the basis for saying the brain has a property that transcends reality, concluding that explaining the function of the brain (what he calls the "easy problem") is not enough to explain this transcendence: the "hard problem" of how an entirely invisible reality gives rise to the reality we observe.

But the entire thing is circular, as there's no convincing justification the brain transcends reality in the first place, and you only run into this "hard problem" if you presume such a transcendence takes place. Bizarrely, idealists and dualists love to demand that people who are not convinced that this transcendental "consciousness" even exists have to solve the "hard problem" or idealism and dualism are proven. But it's literally the opposite: idealism and dualism (as well as metaphysical materialism) are entirely untenable positions until they solve the philosophical problem their position creates.

I am especially not going to be convinced that this transcendental consciousness even exists if, as Chalmers has shown, it leads to "hard" philosophical paradoxes. Metaphysical realists for some reason don't see their philosophy leading to a massive paradox as a reason for questioning its foundations, but then turn around and insist reality itself must be inherently paradoxical, that there really is a fundamental gap between mind and body. Chalmers himself is a self-described dualist.

It's from this basis that Chalmers says you cannot prove whether or not something is conscious, because for him consciousness is something transcendental that we can't concretely tie back to anything demonstrably real. It has no tangible definition, there are no set of observables associated with it. If you have one transcendentally conscious person next to another non-transcendentally conscious person, Chalmers would say that there is simply no conceivable observation you could ever make to distinguish between the two.

Yet, if there are no conceivable ways to distinguish the two, then this transcendental property of "consciousness" is just not conceivable at all. It's a word without concrete meaning, a floating abstraction, and should not be taken particularly seriously. At least, not until Chalmers solves the hard problem of consciousness and proves his metaphysical realist worldview can be made internally consistent, only then will I take his philosophy as even worthy of consideration.
