Why Sergio’s computationalism is not enough

Not that long ago, Sergio Graziosi wrote two guest posts for the Conscious Entities blog. The blog is devoted entirely to the topic of consciousness – problems, theories, concepts, ideas, all there is to it. It is run by Peter Hankins, author of a book on the subject, The Shadow of Consciousness (A Little Less Wrong).

In this post I would like to dissect the ideas and arguments behind computationalism as the best means of understanding brains, minds, and maybe even consciousness.

Sergio’s Computational Functionalism

The first post, entitled Sergio’s Computational Functionalism, presents the problem and tries to deal with it using the concepts of computation and physicality (being embodied).

Sergio took a challenge:

Jochen and Charles Wolverton showed that “computations” are arbitrary interpretations of physical phenomena. Because Turing machines are pure abstractions, it is always possible to arbitrarily define a mapping between the evolving states of any physical object and abstract computations. Therefore asking, “what does this system compute?” does not admit a single answer, the answer can be anything and nothing. In terms of one of our main explananda: “how do brains generate meanings?” the claim is that answering “by performing some computation” is therefore always going to be an incomplete answer. The reason is that computations are abstract: physical processes acquire computational meaning only when we (intentional beings) arbitrarily interpret these processes in terms of computation. From this point of view, it becomes impossible to say that computations within the brain generate the meanings that our minds deal with, because this view requires meanings to be a matter of interpretation. Once one accepts this point of view, meanings always pre-exist as an interpretation map held by an observer. Therefore, “just” computations, can only trade pre-existing (and externally defined!) meanings and it would seem that generating new meanings from scratch entails an infinite regression.

The main points of interest are:

– computations are seen not as physical processes (like, for example, chemical reactions, protein construction by cells, thermodynamic interactions between particles, or gravitational interactions between masses), but as “interpretations”,

– Turing machines are “pure abstractions”, therefore everything that works according to the specifications of Turing machines is also arbitrarily interpretable,

– there is a lot of talk about “meaning”, which on the one hand has to be computed and on the other hand derived from interpretation. But what is this “meaning” in the first place? I’m not trying to downplay the role of this concept. I’m genuinely concerned about our non-rigorous approach – we think we know what we are talking about, but I’m not so sure that’s really the case.

We understand through the use of interpretative maps

Sergio sees the core of meaning or meaning-generating in interpretative maps – maps from symbols to interpretations (or meaning). He states that postulating this notion generates another problem:

“how do you generate the first map, the first seed of meaning, a fixed reference point, which gets the recursive process started?”

Jochen gave the famous example of a stone. Because the internal state of the stone changes all the time, for any given computation we can create an ad-hoc map that specifies the correspondence between a series of computational steps and the sequence of internal states of our stone. Thus, we can show that the stone computes whatever we want, and therefore, if we had a computational reduction of a mind/brain, we could say that the same mind exists within every stone.
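To make the worry concrete, here is a minimal Python sketch of the move Jochen describes. Everything in it – the toy computation, the stone-state labels – is my own illustration, not anything from the original posts: given any sequence of distinct physical states and any step trace of a computation, a lookup table pairing the two can always be built after the fact.

```python
# A toy illustration of the "stone computes anything" move: given any
# sequence of distinct physical states and any trace of a computation,
# we can always build a post-hoc lookup table pairing the two.

def trace_of_addition(a, b):
    """Step trace of a toy computation: add b to a by counting up in ones."""
    steps = [("start", a, b)]
    total = a
    for remaining in range(b, 0, -1):
        total += 1
        steps.append(("step", total, remaining - 1))
    steps.append(("halt", total, 0))
    return steps

# Pretend these are successive micro-states of a stone; any distinct
# labels will do -- which is exactly the point of the argument.
stone_states = [f"stone_state_{t}" for t in range(10)]

computation = trace_of_addition(2, 3)
mapping = dict(zip(stone_states, computation))  # the ad-hoc "interpretation map"

for physical, computational in mapping.items():
    print(physical, "->", computational)
```

Note that the mapping does no causal work whatsoever – it is read off only after both sequences already exist, which is exactly why such “computations” are called arbitrary interpretations.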

Sergio thinks that this criticism is serious, because “Dissolving this apparently unsolvable conundrum is equivalent to showing why a mechanism can generate a mind”. However, I would point to the idea of mechanisms known in other fields, for example in biology. There are specific mechanisms that enable the cell to produce proteins or to procreate (divide itself). Now, you can reinterpret the cell’s internal states all you want, but it won’t have any bearing on what the cell does – produce proteins, for example. You may interpret these proteins in whatever way you wish, but it doesn’t matter, because these proteins have specific functions in the behavior and life cycle of the cell. A similar thing, I claim, happens in the brain/mind – an external observer’s interpretations (or ad-hoc mappings) of brain “states” may serve the observer’s amusement or aid understanding, but the states themselves have specific effects on the overall functioning of the neural system and the whole organism regardless of how they are interpreted. Identifying the best explanation of the functioning of neural systems should be possible with or without the notion of computation, as is done, for example, in cell or molecular biology. So it seems that we are in agreement here, as Sergio also thinks that the problem “is about finding the correct map” or the proper description of what is going on. The problem is ultimately put to rest in the second post:

If any mechanism can be interpreted to implement any computation, you have to explain me why I spend my time writing software/algorithms. Clearly, I could not be using your idea of computations because I won’t be able to discriminate between different programs (they are all equivalent).

Unfortunately, things get a tad difficult: “All of the above has left out the hard side of the quest, I haven’t even tried to address the problem of how computations can generate a “meaningful map” on its own.” So we have a “meaningful map” and this crazy “meaning”, which should, I suppose, evoke a warm and fuzzy feeling that we know what we’re talking about. What I would like to see is a serious introduction to the notion of meaning. It is needed in the context of the present discussion, as we are talking a great deal about aboutness, signals, and meaning itself.

Intrinsic meaning of basic perceptual elements

In the lines that immediately follow, Sergio explains that the first, basic map that allows the mind to interpret everything – and, I presume, to create further interpretations or “meanings” – is rooted in the system itself: the organism is a fixed reference point, which filters incoming stimuli in such a way as to function as a sort of invariant – thereby allowing the changing perceptions to be interpreted with reference to that which does not change. This is, of course, an unambiguous reference to Gibson’s invariants (Gibson, 1986; “Ambient optic array” in Wikipedia). Gibson himself didn’t feel the need to bring any computations into his account. On the contrary, Gibson was going against cognitivism, which was then gaining traction – he thought that the organism and, more importantly, the environment provide such rich informational structure that there is simply no need for heavyweight cognitive representations and complicated calculations in one’s head. The environment gives clues (the now famous affordances) about possibilities for action.

This is a very interesting point and I would like to read more about it. Of course, the length of the post had to be kept within bounds.

When it comes to consciousness or “meaning”, Sergio asserts that computations + specific physicality are the best way to approach these two connected problems. However, a stone is physical too, and it is often subjected to manipulations: rain, tremors, movement (by animals, for instance), chemical and structural changes (when it breaks or a part chips off). If a brain can be thought of as a computational entity just because it is rule-based, then we could do the same with a stone – all those physical processes are rule-governed and can therefore be modeled computationally. And if a brain can be said to derive “meaning” from being embedded in a physical body, then why not say the same about a stone?

Now consider the computationalism and physicalism/embodiment marriage. Receptors in the eye transmit signals to the brain; receptors in the tongue transmit signals to the brain; receptors in the ear transmit signals to the brain. All these signals are about various features of the world. How is it that we see some of them, taste some of them, and hear some of them? What makes the signals that are about the world belong to one modality or another? There are no images and no sounds in the world. Computationalism, even with physicalism added, is silent about this.

The problem of embodiment and the relation between an organism’s body and its cognitive faculties has been seriously explored for over two decades now (Anderson, 2008; Barsalou, 2003, 2008, 2010; Cangelosi & Riga, 2006; Chrisley, 2003; Harnad, 1990; Varela, Thompson, & Rosch, 1991). The main notion of the embodied cognition framework is that the mind doesn’t rely so much on heavy representations and computations – instead, it uses the whole body to do the thinking. The signals coming to the mind have meaning because they are grounded in the organism’s perceptions, actions, and interactions with the world. What is important here is that all those researchers (as far as I’m aware) can be said to be anti-computationalists – that is, they do not see the mind/brain as a computing organ. So it is certainly not at all clear that the computer metaphor of the brain is the only viable option. We are exploring other avenues and paths.

Conclusions in the first post

Now, briefly about the conclusions that Sergio came to in the first post:

1. “The computational metaphor should be able to capture the mechanisms of the brain and thus describe the (supposed) equivalence between brain events and mind events.”

I’m not really convinced. Maybe the computational metaphor will be able to capture the interesting parts of neurology and psychology, but I don’t get any such impression after reading the post.

2. “Such description would count as a weak explanation as it spells out a list of “what” but doesn’t even try to produce a conclusive “why”.”

Some say (citation needed ;)) that science is good at answering the “how?” question, while “what?” is left for ontology and “why?” for… Hmmm…

3. “To be conscious, an entity needs to be physical. Its physicality is the source of the ability of generating its own, subjective meanings.”

OK, it would be hard to think about non-physical entities, let alone conscious non-physical entities! That would be just a word game, so I’ll leave it there.

But do we really need to bring to the table all those computations, intentionalities, and meanings to say that only physical entities can be conscious?…

The “cognition merely shuffles symbols around” idea

The hardest and the most orthodox part is at the end, when we encounter the following:

because cognition can only generate and use interpretative maps (or translation rules), it “just” shuffles symbols around, it cannot, in no way or form ultimately explain why the physical world exists (or what exactly the physical world is, this is why I steered away from ontology!). Because all knowledge is symbolic, some aspect of reality always has to remain unaccounted and unexplained.

We are given such unfounded claims as “cognition can only generate and use interpretative maps”, “[cognition] “just” shuffles symbols around”, and “all knowledge is symbolic”. These are the fundamental axioms of the whole argument. They are very controversial, poorly supported, and, in my opinion, may very well be false. These premises are extremely problematic, and therefore the conclusions derived from them should be treated with caution.

I think that cognition is a physical, biological process, more similar to the functioning of the machinery of a single cell, the ecosystem of a forest, or an ant colony. I don’t see it as something “abstract” that “can only generate and use interpretative maps” or something that “just” shuffles symbols around, whatever those symbols might be.

In the comments

Jochen – who, AFAIK, started the whole discussion – points again to the arbitrariness of “maps” in a comment:

we need a map to decide whether the representation matches up with the physical system; but in deciding which map to use, we again have arbitrary freedom. Hence, different ‘matching’-maps will lead to different maps from brain states to cognitive content being singled out.

This, I think, is the crux of the problem. It stems from confusing a scientific explanation – or a scientific theory – with the phenomenon that we attempt to explain and describe. We of course need a “map”, that is, a theory, in order to properly understand the phenomenon – how the mind works, how consciousness arises, and so on. Some theories, or maps, are considered better than others – when they explain more and explain it better, make novel predictions, and align with other fields of inquiry. So even though they may look arbitrary, theories undergo an evolutionary process – the better ones last longer and their notions show up in later theories, while the weaker ones are selected out of scientific practice. A good theory (or “map”) will tell us a lot about how minds and neural systems work. A bad, arbitrary one won’t tell us anything interesting, useful, or testable. Sergio tackles the problem of arbitrariness in a later comment (also referred to below).

That was the “science part”. Now to the “phenomenon part”. It is assumed here, in the comments section, that the brain also uses something akin to a theory or a “map” to understand sensory impressions. More importantly, it is assumed that those two things (the scientific theory and the “brain’s map”) have to be in accord! If we tried to think like that about other things – metabolism, weather, thermonuclear reactions in stars – we would make zero progress. Metabolism doesn’t have to have a “map”, weather certainly has no use for one, and the same goes for stars. Why would brains need “maps”? Ah! The trap of computationalism, set up for unsuspecting thinkers.

Another active voice in the comments section, Callan S., neatly killed the “even a stone can be thought of as a computer” argument (i.e., “interpretative maps are arbitrary, therefore even a stone can be said to be performing computations / thinking”):

The way a computer works/alters state is by physics, not by anyone insisting it does computations. When one material lacks the physics that are involved in that process, obviously it doesn’t happen. The rock lacks the physics of a computer – that is the refutation that applies. OR does a rock somehow have the same physical/materials as a computer, somehow?

Saying the rock can do computations is like saying glass can conduct electricity as much as copper wire can, surely?

To which Sci replied with

Borrowing from Searle’s Is the Brain a Digital Computer + Lanier’s You Can’t Argue with a Zombie -> You can take a snow storm, the motion at the molecular level, or meteors and then attach computatonalist interpretations to that isolated aspect of the physical world. This is ultimately arbitrary because these are all cases of derived intentionality applied to an indeterminate physical world. The only way around this I can see is if the particular placement of physical stuff intrinsically pointed to something. As Rosenberg notes, there’s no evidence this is the case for any kind of combination of matter.

This is interesting. We now need “intentionality” – and, to complicate matters further, it has to “intrinsically point” to something. I have to agree with Sci here – for computationalism to work, we would have to have such exotic things. However, I don’t really see them, no matter how hard I look or think. Another dead end for computationalism, if you ask me.

Sergio himself points out that computations are only one of two parts of the puzzle, the second being physicality. Sergio explains how we can deal with the intentionality problem, specifically as it relates to computationalism. He does this by presenting a very lucid and intelligible example of a bacterium that wiggles more vigorously when there is more food in its environment and “goes to sleep” when there is little food.

Sergio wants us to say that the bacterium “computes” its liveliness based on the availability of food. As a supporting argument, he adds that biologists write equations that model this very process nearly perfectly. But wait – physicists write equations for the motion of planets and satellites. Are we then justified in saying that our Moon computes its distance from Earth and its speed based on the mass of the Earth? I doubt it.
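To see why I doubt it, compare two rule-based descriptions side by side. The sketch below is my own toy illustration – the functional form of the bacterium’s response and all the constants are assumptions, not taken from Sergio’s posts or any biology paper. The point is that we can write down and integrate both descriptions with the very same tools, so “we can write equations for it” cannot, by itself, separate computers from non-computers.

```python
# Two rule-based descriptions, both written by us, the observers.

def vigor(food, k=1.0, n=2):
    """Toy Hill-type response: wiggling vigor as a function of food level."""
    return food**n / (k**n + food**n)

# The Moon's radial motion under Newtonian gravity, integrated with
# crude Euler steps (specific angular momentum held constant).
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24      # mass of the Earth, kg
r, v_r = 3.844e8, 0.0   # initial Earth-Moon distance (m), radial velocity
L_ang = 1022.0 * r      # specific angular momentum (tangential speed * r)

dt = 60.0               # one-minute time step
for _ in range(60 * 24):                        # simulate one day
    a_r = L_ang**2 / r**3 - G * M_EARTH / r**2  # radial acceleration
    v_r += a_r * dt
    r += v_r * dt

print(f"vigor at food = 0.5: {vigor(0.5):.2f}")
print(f"Moon's distance after one day: {r / 1e3:,.0f} km")
```

The equations describe the bacterium and the Moon equally well; if mere describability licensed “computes”, it would license it for both.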

Sergio wants us to think of the internal chemicals of the bacterium as being about the amount of food in the vicinity of the organism. He goes as far as saying that thinking otherwise is almost unreasonable. And I concur – one of the ways we use the word “about” fits perfectly in this situation. For us, the chemicals (and the reactions between them) are about the amount of food. But are they about food for the bacterium, in at least a comparable sense? I don’t know; I would like to see some argument for that. Signals coming from my thermoreceptors are about temperature, because they are causally linked with temperature. Am I then justified in saying that a copper wire also knows about temperature? It, too, physically reacts to heat – it may change color or expand, all due to signals from the environment.

Aboutness – I don’t like this concept of aboutness. What are triangles about? What are dividing cells about? How do you differentiate between things that are about something and things that are not? I don’t know. It feels like a convoluted, psychotic maze with no exit. Once you step into intentionality, it is hard to shake the confusing feeling that it’s something deep and that we should seriously solve this problem. It’s like the forest scene in the movie The Hobbit: The Desolation of Smaug. The forest is huge, it is paradoxical, it fogs your mind… And of course, monsters await.

The two points above – that the bacterium computes the amount of food, and that the chemicals in its body are about food – when accepted, push us to the conclusion that a bacterium possesses at least a minimal mind, capable of intentionality and functioning according to computational principles. A relatively “simple” thing has a simple mind. That’s the most interesting part for me. The example of the bacterium – its world, its “senses”, its “thinking” – is astonishing and very alluring.

There is only one problem with this thread of thought – the questions that immediately arise: What is a mind? What has a mind? Does everything have a mind?

I will get back to them below (in the “Will you know a mind when you see one?” section), as they are extremely important.

Sergio Resurgent

The second post tells the same story, perhaps a bit more thoroughly. I will just skim over it, as I believe I have already covered the most interesting items from Sergio’s conception of the embodied computational mind.

The Chinese Room will understand Chinese if it learns and interacts with the world

Sergio provides one of the best and clearest challenges to the Chinese Room argument against computationalism. I wholeheartedly suggest you read it (in the “The challenges that I did not receive” section). It is too long to quote here. Reading it gave me much pleasure.

Scott Bakker, author of sci-fi books and of the Blind Brain Theory, replied with:

the question of whether any system cobbling ‘Chinese Rooms’ together ‘understands x’ is simply an epistemological fact of the system. It does or does not to the extent that we cognize it as such.

I strongly sympathize with this perspective – it seems to be a matter of convention how we use the phrase “understands x” in the context of the Chinese Room (or any other context, for that matter). The Chinese Room argument is very focused on ontology – does the room really understand Chinese? I, too, think this is a misguided and unscientific way to approach the problem. It is an epistemological, or a language, problem.

Grounding of sensory signals once again

Sergio took another shot at the subject of sensory stimuli – or signals – acquiring their meaning by way of being directly connected to the structures that generate them (elements of the world) and the structure that receives them (the body + brain). I will not include any quotes, nor will I discuss the issue, as I’ve already talked about it above. You can also check out the references, perhaps starting from the overview article Embodied Cognition in The Stanford Encyclopedia of Philosophy.

General remarks about computationalism

I have a few general remarks about computationalism that may only be loosely related to Sergio’s post. Here they are:

– The fact that something can be described by general rules doesn’t mean that the studied phenomenon performs any “computations”. For example, we can describe the motion of the Moon, but no one talks about the Earth computing the motion of the Moon (see the Moon sketch above). It seems that Sergio’s computationalism boils down to “everything rule-based is computational” – and this is obviously not the case.

– Psychoanalysis, behaviorism, embodiment, psychological theories (of personality development) – what do they all have in common? I really don’t know. But I know what they don’t have in common – computationalism. No one today describes the psychoanalytic theory of mind as computational just because there are specific mechanisms that, for example, generate dreams (dream-work). No one says that behaviorism has anything to do with computation just because there are specific rules for linking behaviors together (classical conditioning).

– Sergio’s conception of mind/cognition is basically a “slot-machine mind” – it feels nothing; it works according to computations. How do emotions fit into this conception of a computer-brain?
– Information theory is very general, and we can use it to aid our understanding of many things. So is thermodynamics. Should we then think of the brain as an engine of some sort – an engine that transforms (“computes”) energy and produces heat and work (“behaviors”)? If not, why not? And why, then, think of the brain as an information processor (a computer)? A toy illustration of this generality follows the list.
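To make that generality concrete, here is a toy sketch (my own illustration, not from Sergio’s posts): Shannon entropy can be computed for any symbol sequence whatsoever – weather records, spike trains, coin flips – which is exactly why “it admits an information-theoretic description” cannot, by itself, establish that the described system is an information processor.

```python
# Shannon entropy applies to any symbol sequence whatsoever. That very
# generality is why an information-theoretic description of the brain,
# on its own, does not show that the brain is an information processor.
from collections import Counter
from math import log2

def shannon_entropy(sequence):
    """Entropy in bits per symbol of an arbitrary symbol sequence."""
    counts = Counter(sequence)
    total = len(sequence)
    return -sum((c / total) * log2(c / total) for c in counts.values())

weather = ["sun", "rain", "sun", "sun", "cloud", "rain"]   # weather record
spikes = [0, 1, 0, 0, 1, 1, 0, 1]                          # toy spike train
print(f"weather:     {shannon_entropy(weather):.3f} bits/symbol")
print(f"spike train: {shannon_entropy(spikes):.3f} bits/symbol")
```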

From all the above you can clearly see that I am not particularly in favor of the computer metaphor of the brain.

Will you know a mind when you see one?

There are many systems – animals, non-living things, even things composed of living systems – that seem to sustain themselves: weather, ocean currents, a dog, sand dunes, colonies of ants or termites, trends and styles in art, philosophical schools, scientific fields, languages, forests. All of them change; all of them are in one way or another connected with external elements, with the world; all of them are actuated by things external to them; and all of them exert some effects on things external to them. They are systems.

We would without hesitation ascribe cognition, and minds, to some of them – though not necessarily to all of them. How, then, do we discern what has a mind and what doesn’t? This question has been with us for decades, maybe even for thousands of years. I don’t think that computationalism, with added spice in the form of physicality, gets us any closer to answering it, or to thinking about the problem from a newer, fresher perspective.

References

Ambient optic array. (2015, January 19). In Wikipedia, The Free Encyclopedia. Retrieved 18:50, August 2, 2015, from https://en.wikipedia.org/w/index.php?title=Ambient_optic_array&oldid=643238553

Anderson, M. L. (2008). On the grounds of (x)-grounded cognition. In Calvo, P. and Gomila, T., editors, The handbook of cognitive science: An embodied approach, pages 423–435.

Barsalou, L. (2003). Abstraction in perceptual symbol systems. Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences, 358(1435):1177–1187.

Barsalou, L. (2008). Grounded cognition. Annual Review of Psychology, 59:617–645.

Barsalou, L. (2010). Grounded cognition: past, present, and future. Topics in Cognitive Science, 2(4):716–724.

Cangelosi, A. and Riga, T. (2006). An Embodied Model for Sensorimotor Grounding and Grounding Transfer: Experiments with Epigenetic Robots. Cognitive Science, 30:673–689.

Chrisley, R. (2003). Embodied artificial intelligence. Artificial Intelligence, 149:131–150.

Gibson, J. J. (1986). The Ecological Approach to Visual Perception. Hillsdale (N.J.): Lawrence Erlbaum Associates.

Harnad, S. (1990). The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1-3):335–346.

Varela, F., Thompson, E., & Rosch, E. (1991). The Embodied Mind: Cognitive Science and Human Experience. MIT Press.

Wilson, R. A. and Foglia, L., “Embodied Cognition”, The Stanford Encyclopedia of Philosophy (Fall 2011 Edition), Edward N. Zalta (ed.), from http://plato.stanford.edu/archives/fall2011/entries/embodied-cognition/
