On the Mind, Consciousness, and the Subjects of Reality
Thus far in these essays, I have argued from my metaphilosophy to my general philosophy of commensurablism, which is any philosophy that is neither dogmatic nor cynical, and neither transcendent nor relativist.
Then I explored the implications of commensurablism on the philosophy of language, including both logic and rhetoric.
Then I began exploring its implications on the specific subtopics of philosophy concerning reality and knowledge, beginning with ontology.
In this essay I will now continue that, exploring the implications of that empirical realist ontology on the philosophy of mind.
Philosophy of mind is, just as the name says, philosophizing about the nature of the mind. It has its origins in what is called the mind-body problem, which began from an assumed dualist ontology, in which the body was seen as a material substance and the mind as a completely different, mental type of substance, prompting the question of how exactly the mind and body interact: how do sensations from the body's senses get into the mind, and how do the mind's intentions control the behavior of the body?
In this essay I intend to address mostly issues akin to the first half of that question, reserving discussion of issues akin to the latter half for my later essay on the will. That is, this essay is mostly about the mind inasmuch as that means the capacity to be subject to experiences (and thus a subject of reality), and to process those experiences into sensations, perceptions, and beliefs; and less about anything to do with behavior, intention, desire, or appetite, though some of this will be groundwork for my later essay on the will where that will be covered.
The philosophy of mind that I am about to lay out is a hybrid of different positions in philosophy of mind, a different kind of position for each of three different senses of the word "mind":
- About one sense of the word "mind", you could say that my position is that "nothing has a mind".
- About another sense, you could say that my position is that "everything has a mind".
- I think that those are both unhelpful senses of the word "mind", however, and that in the ordinary sense that we normally mean the word "mind", only some things have minds and others don't, just as we ordinarily think.
As already elaborated in my previous essay on existence, I reject that dualist ontology that originally prompted the mind-body problem, and hold that there is only one kind of stuff of which minds and bodies both are made. It is only in that sense of "mind", mental substances distinct from physical substances, that I am an eliminativist. That is far from a unique position though, and does not in itself automatically answer all questions about the nature of mind, or consciousness, in a physicalist ontology like my own. Much has already been written in the field of philosophy of mind on that topic, but I will explain here my own take on the nature of a mind that is not necessarily immaterial.
It is useful to distinguish between the different things that we might mean by "consciousness", to be clear about exactly which of several questions on the topic we wish to address. There is a sense of the word "consciousness" that simply means wakefulness, the opposite of being unconscious or asleep; that sense is not of much philosophical interest. There is another sense of the word that means awareness of something, or knowledge of it; that topic is not directly relevant to philosophy of mind, but it will be covered in my later essay on knowledge.
Of more interest in philosophy of mind are two other senses of the word. One of them is what Ned Block calls access consciousness, which is the sense of the word that means self-awareness or self-knowledge, and is the topic of what David Chalmers calls the easy problem of consciousness; though I find that topic more substantial, and in a sense harder, and will cover it first in this essay. The other is what Block calls phenomenal consciousness, which is the difficult-to-define capacity for experience itself, of any sort, and is the topic of what Chalmers calls the hard problem of consciousness; though I find that topic significantly less substantial, and in a sense easier, and will cover it last in this essay.
On Access Consciousness
When it comes to access consciousness, I hold a view called functionalism, which holds that a mental state is not strictly identical to any particular physical state, but rather to the functional role that a physical state holds in the physical system of which it is a part. That is, the experience that a thing has is a product of that thing's function, just as its behavior is also a product of function. Mental states are therefore multiply-realizable: different physical systems can instantiate the same functionality, and therefore the same mental states.
For instance, if it is possible in principle (as I hold it to be) to build an artificially intelligent computer, that computer could be built using semiconductors, or vacuum tubes, or pneumatic or hydraulic valves, or any other physical substrate for switching signals down different paths. So long as it maps the same inputs to the same outputs, it still instantiates the same function, and so will still have the same mental states, no matter whether those are instantiated in voltages in electric current flowing through wires, pressures in water flowing through pipes, or anything else.
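To put that point in more concrete terms, here is a trivial illustrative sketch (the thermostat example and the function names in it are just my own toy inventions, not anything from the functionalist literature): two "devices" built in quite different ways that nevertheless map every input to the same output, and so, on a functionalist view, instantiate the same function.

```python
# A minimal sketch of multiple realizability: two differently built "devices"
# that realize the same input-to-output mapping, and so count as instantiating
# the same function, whatever their internal makeup.

def relay_thermostat(temperature: float) -> str:
    """One realization: explicit conditional switching."""
    if temperature < 18.0:
        return "heat on"
    return "heat off"

def lookup_thermostat(temperature: float) -> str:
    """A different realization: the same mapping stored as data and looked up."""
    table = {True: "heat on", False: "heat off"}
    return table[temperature < 18.0]

if __name__ == "__main__":
    # Different mechanisms, identical function: for every input the outputs agree.
    for t in (10.0, 17.9, 18.0, 25.0):
        assert relay_thermostat(t) == lookup_thermostat(t)
    print("Same inputs map to the same outputs: one function, two realizations.")
```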
As already detailed in my previous essay on existence, I hold that the function of an object, the mapping of the inputs it experiences to the behaviors it outputs, defines every kind of object, not just minds as we ordinarily mean that word. Inasmuch as being a subject of phenomenal experience might make something worth calling "a mind", we might thus consider everything to be "a mind". But in ordinary usage, something being a mind means more than just being some kind of prototypical subject of phenomenal experience, or instantiating any function or another. It means instantiating some specific kinds of functions that we recognize as mental.
Defining exactly what those functions are in full detail is more the work of psychology (mapping the functions of naturally evolved minds) and computer science (developing functions for artificially created minds) than it is the proper domain of philosophy, but I will outline a brief sketch of the kinds of functions that I think are important to qualify something as a conscious mind, in the ordinary sense by which we would say that a human definitely has one, and a dog probably has one, but a tree probably does not, and a rock definitely does not.
On Sensations and Perceptions
The first of these important functions, which I call "sentience", is to differentiate experiences toward the construction of two separate models, one of them a model of the world as it is, and the other a model of the world as it ought to be. This function differentiates the aspects of an experience, which as outlined in my essay on existence is an interaction between oneself and the world, into those that inform about the world, including what kind of things are most suited to it, which form the sensitive aspect of the experience; and those that inform about oneself, and what kind of world would be most suited to oneself, which form the appetitive aspect of the experience.
From these two models we then derive output behavior by comparing the two, so as to attempt to make the world that is into the world that ought to be. This is in distinction from the simpler function of the most primitive objects, where experiences directly provoke behaviors in a simple stimulus-response mechanism, and no experience is merely indicative of the nature of the world, but all are directly imperative on the next behavior of the object.
Those experiences that are channelled into the model of the world as it ought to be I call "appetites", and I will discuss more on them, their interpretations into desires, and the reflection upon desires to arrive at intentions, in my later essay on the will. Meanwhile, those experiences that are channelled into the model of the world as it is I call "sensations".
Sensations are the raw, uninterpreted experiences, like the seeing of a color, or the hearing of a pitch. When those sensations are then interpreted, patterns in them detected and identified as abstractions that can then be related to each other symbolically, analytically, that is part of the function that I call "intelligence" (the other part of intelligence handling the equivalent process with appetites), and those interpreted, abstracted sensations output by intelligence are what I call "perceptions", or "intuitions".
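For readers who find such things clearer in code, here is a toy sketch of this terminology (the labels, thresholds, and light-switching example are merely illustrative choices of mine, not a serious cognitive model): "sentience" feeding the two models from an experience, "intelligence" interpreting a raw sensation into a perception, and behavior derived by comparing the two models, in contrast with a simple stimulus-response object.

```python
# A toy sketch of the terminology above: behavior derived from comparing a model
# of the world as it is against a model of the world as it ought to be, versus a
# primitive object whose inputs directly drive its outputs.

def intelligence(sensation: float) -> str:
    """Interpret a raw sensation (here, a light level) into an abstracted perception."""
    return "bright" if sensation > 0.5 else "dark"

def sentient_step(sensation: float, appetite: str) -> str:
    """One cycle of sentience: build both models, then act to reconcile them."""
    world_as_it_is = intelligence(sensation)   # model of the world as it is
    world_as_it_ought_to_be = appetite         # model of the world as it ought to be
    if world_as_it_is == world_as_it_ought_to_be:
        return "do nothing"
    return "turn light on" if world_as_it_ought_to_be == "bright" else "turn light off"

def reflex_step(sensation: float) -> str:
    """For contrast, a stimulus-response object: every input is directly imperative."""
    return "recoil" if sensation > 0.5 else "stay"

print(sentient_step(0.2, "bright"))  # 'turn light on'
print(reflex_step(0.2))              # 'stay'
```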
On Beliefs
None of this is yet sufficient to call a mind conscious in our ordinary sense of the word. For that, we need all of the above plus also another function, a reflexive function that turns that sentient intelligence back upon the being in question itself, and forms perceptions and desires about its own process of interpreting experiences, and then acts upon itself to critique and judge itself and then filter the conclusions it has come to, accepting or rejecting them as either soundly concluded or not. That reflexive function in general I call "sapience", and the aspect of it concerned with critiquing and judging and filtering perceptions I call "consciousness" proper.
(I see the concepts of "id", "ego", and "superego" as put forward by Sigmund Freud arising out of this reflexive judgement as well, with the third-person view of oneself that one is casting judgement upon being the "id", the third-person view of oneself casting judgement down on one being the "superego", and the first-person view of oneself, being judged by the superego while in turn judging the id, being the "ego"; an illusory tripartite self, as though in a mental hall of mirrors).
The output of that function – an experience taken as indicative, interpreted into a perception, and accepted by sapient reflection – is what I call a "belief".
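Continuing that toy sketch in the same merely illustrative spirit, the reflexive filtering described above might be caricatured like this, with a candidate perception accepted as a belief only if it passes a stand-in test of reflective judgement (coherence with other recent perceptions; the names and the test itself are my own illustrative choices):

```python
# A toy sketch of belief-formation: a perception becomes a belief only if it is
# accepted upon reflexive judgement of the process that produced it.

def intelligence(sensation: float) -> str:
    """Interpret a raw sensation into a candidate perception."""
    return "bright" if sensation > 0.5 else "dark"

def consciousness(candidate: str, other_perceptions: list[str]) -> bool:
    """Reflexively judge the candidate: accept it only if it coheres with the rest."""
    return all(p == candidate for p in other_perceptions)

beliefs: list[str] = []
recent = ["bright", "bright"]          # prior perceptions to check against
candidate = intelligence(0.9)          # a new candidate perception: 'bright'
if consciousness(candidate, recent):   # accepted on reflection...
    beliefs.append(candidate)          # ...it becomes a belief
print(beliefs)  # ['bright']
```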
The proper conducting of this process of belief-formation is the subject of the next essay on belief, which, as promised earlier, will take up the sense of "consciousness" as awareness or knowledge of something, the last to be covered of the three philosophically interesting senses laid out near the start of this essay. But first, I must address the remaining sense still to be covered here, phenomenal consciousness.
On Phenomenal Consciousness
Phenomenal consciousness is perhaps best defined in distinction from what it is not. It is not anything to do with any behavioral properties of a thing. If we stipulate the existence of some being, like a computer artificial intelligence, that behaves identically to a human being, but isn't one, some would still ask whether such a being would actually have the thoughts and feelings, the internal experience, that a real human being would have, or whether it would be merely simulating the external behavior of a being with such thoughts and feelings, such as uttering statements claiming that it feels some way or another.
Philosophers such as David Chalmers have raised the question of whether it is conceivable for there to be a being in every way physically identical to a human being, and so identical in all of its external behavior as well – including the behavior of saying that it has internal experiences just like a human would say they had – that nevertheless does not have the internal experience that humans supposedly have, a so-called "philosophical zombie". That kind of experience, independent from anything to do with behavior, is what is meant by "phenomenal consciousness".
There are generally three possibilities when it comes to what kinds of beings have phenomenal consciousness in a physicalist ontology: either nothing has it, not even human beings, because the concept is simply confused nonsense; some beings, like humans, have it, but not all beings, because it only arises in certain complex interactions between physical parts; or all beings, not just humans but everything down to trees and rocks and electrons, have it.
Against Eliminativism
I am against eliminativism for the simple reason that I am directly aware of my own conscious experience, and whatever the nature of that may be, it seems that any philosophical argument that concludes that I am not actually having any conscious experience must have made some misstep somewhere and at best proven that something else mistakenly called "conscious experience" doesn't exist. But beyond my own personal experience, I find arguments put forth by other philosophers, such as Frank Jackson's "Mary's room" thought experiment, to convincingly defeat eliminativism, though not to defeat physicalism itself as they are intended to do.
In the "Mary's room" thought experiment, we imagine a woman named Mary who has been raised her entire life in a black-and-white room experiencing the world only through a black-and-white TV screen, but who has extensively studied and become an expert on the topic of color. She knows everything there is to know about the frequencies of electromagnetic radiation produced by various physical processes, how those interact with nerves in the eye and create signals that are processed by the brain, even the cultural significances of various colors, but she has never herself actually experienced color. We then imagine Mary leaving her room and seeing the color red for the first time, and in doing so, learning something new, despite supposedly knowing everything there was to know about color already: what the color red looks like.
This thought experiment was originally put forth to argue that there is something non-physical involved in the experience of color, something that Mary could not have learned about by studying the physical science of color. I don't think it succeeds at all in establishing that, but I do think that it conclusively establishes that there is a difference between knowing, in a third-person fashion, how physical systems behave in various circumstances, and knowing, in the first person, what it's like to be such a physical system in such circumstances.
In essence, I think it succeeds merely in showing that we are not philosophical zombies. A more visceral analogous thought experiment I like to think of is that no amount of studying the physics, biology, psychology, or sociology of sex will ever suffice to answer the question "what's it like to have sex?" Actually doing it yourself is the only way to have that first-person experience; at best, that third-person knowledge of the way things behave can be instrumentally useful to recreating a first-person experience. But even then, you have to actually subject yourself to the experience to experience it, and that experience that can only be known in the first person is all that's meant by phenomenal consciousness.
Against Strong Emergentism
I am also against strong emergentism, as already elaborated in my previous essay on existence.
But as also elaborated in that earlier essay on ontology, I am not against weak emergentism, and I do think that richer kinds of phenomenal consciousness weakly emerge from simpler kinds. But as described above, phenomenal consciousness as such is defined in large part by its irreducibility to anything non-experiential, which rules out its merely weakly emerging from wholly non-experiential constituents. So if phenomenal consciousness is supposed to exist in some things, but not at all in others, it could only do so by strongly emerging.
As specifically regards philosophy of mind, strong emergentism holds that when physical objects are arranged into the right relations with each other, wholly new mental properties apply to the composite object they create, mental properties that cannot be decomposed into aggregates of the physical properties of the physical objects that went into making that composite object. Since no tractable account is given of just when or why that emergence occurs when the constituents are aggregated in a certain way – because such an account would be weakly reductive, and so not strongly emergent – strong emergence asks us to just take someone's word for where the line is between conscious and not, and so amounts to dogmatism.
I do agree with what I think is the intended thrust of the general emergentist position, that consciousness as we ordinarily speak of it is something that just comes about when physical things are arranged in the right way. But I think that consciousness as we ordinarily speak of it is access consciousness, as addressed earlier in this essay, and that access consciousness is a purely functional, basically mechanistic property that is built up out of, or weakly emerges from, the ordinary physical properties of the physical things that compose an access-conscious being. Phenomenal consciousness, on the other hand, is defined precisely by not being thus reducible.
So when it comes to phenomenal consciousness, either it is wholly absent from the most fundamental building blocks of physical things, and so still absent from anything built out of them, including humans – which I've already rejected above – or else it is present at least in humans, as concluded above. In that case, at least some precursor of it must be present in the stuff out of which humans are built, and the stuff out of which that stuff is built, and so on, such that something at least prototypical of phenomenal consciousness as humans experience it is already present in everything, serving as the building blocks of more advanced kinds of phenomenal consciousness like ours.
Panpsychism
That latter position is a form of panpsychism, more specifically the narrower position called pan-proto-experientialism. Panpsychism most broadly defined says that everything has a mind, whatever "mind" may be taken to mean. Pan-experientialism is a form of panpsychism that says everything is at least the subject of mental experience, without making any broader claims about everything having higher mental functions like sentience, intelligence, or sapience.
Pan-proto-experientialism is a subform of that, in turn, that says that everything at least has something prototypical of mental experience as we mean it regarding human consciousness, without making any broader claims to the depth or richness of that experience. That is the position that I hold. But in saying that everything has phenomenal consciousness, I'm not really saying very much of substance.
It's a bit akin to how in quantum mechanics, one physical system can be said to "observe" another physical system and in doing so collapse the "observed" system from a state of probabilistic superposition into a definite classical state, but that doesn't really imply anything substantial about the "observer"; it doesn't require something like a human being to do the observation, it just requires any kind of object to interact with the other object.
I would even go so far as to say that that quantum-mechanical "observation" can reasonably be equated with the kind of "experience" that I hold to constitute phenomenal consciousness, for as elaborated in my previous essay on existence, I hold experience to be but one perspective on what is really more fundamental, interaction, and likewise quantum mechanical "observation" really just means "interaction".
I'm only really saying that in addition to there being the third-person experience of observing a thing as an object of experience, there is also the first-person experience of being that thing as the subject of experience – because we each know first-hand that there is such a first-person experience of what it's like to be ourselves, which is different from the third-person experiences we have of each other – but that first-person experience needn't amount to much if the thing having the experience is so simple as a rock or atom or electron.
That equivalence of experience and behavior as two perspectives, first- and third-person, on the same kind of thing, interaction, absolves my form of panpsychism from a common criticism called the Combination Problem, which asks how exactly the experiences of things like atoms combine into the experiences of things like human brains. I see that as a clear non-problem for my kind of account, because the content of any particular experience is just a reversed perspective on the equivalent behavior.
What a brain does is a combination of what all its neurons do, which is a combination of what all their molecules do, and so on down. And what a brain experiences is an exactly equivalent, literally identical combination of what all its neurons experience, which is a combination of what all their molecules experience, and so on down. There's no problem combining different simple behaviors together into more complex behaviors, so there's equivalently no problem combining different simple experiences together into more complex experiences.
This panpsychism about phenomenal consciousness is not in any way meant to contradict the physicalism I espoused in my earlier essay on existence. As explained there, I think there are only physical things, and that physical things consist only of their empirical properties, which are actually just functional dispositions to interact with observers (who are just other physical things) in particular ways.
A subject's phenomenal experience of an object is, on my account, the same event as that object's behavior upon the subject, and the web of such events is what reality is made out of, with the nodes in that web being the objects of reality, each defined by its function in that web of interactions, how it observably behaves in response to what it experiences, or in other words what it does in response to what is done to it.
My one, trivial, point of agreement with philosophers like Jackson, who take themselves to be arguing against physicalism, could be summed up as simply agreeing that we are not philosophical zombies. By definition philosophical zombies could not be discerned from non-zombies from the third person, as only in the first person can one know that one is not a philosophical zombie; and the only, trivial, thing I think Jackson proves is that there is such a first-person experience that we have, the likes of which philosophical zombies would not have, and which, by the rejection of strong emergentism, everything else must also have.
I don't think philosophical zombies are actually possible or even coherent, but then I also don't think supernatural things are possible or even coherent, so I don't think the predicates "is natural" or "is not a philosophical zombie" really communicate much of interest – they are complete trivialisms when properly understood.
(Supernatural beings and philosophical zombies are ontologically quite similar on my account, as for something to be supernatural would be for it to have no observable behavior, and for something to be a philosophical zombie would be for it to have no phenomenal experience. Both of those are just different perspectives on the thing in question being completely cut off from the web of interactions that is reality, and therefore unreal.)
So only in an extremely trivial and useless sense does everything thus "have a mind", inasmuch as everything is subject to the behavior of other things and so has an experience of them. But "minds" in a more useful and robust sense are particular types of complex self-interacting objects, which therefore, as subjects, have an experience that is as much of themselves as it is of the rest of the world. Everything has "awareness" of some sort, in that it reacts to things that are done to it – otherwise it would not appear to exist at all, and so would not be real at all on my empirical realist account of ontology – but only some things have self-awareness.
I find a useful analogy to be how in a computer, all programs are just data being executed (like how all minds are just physical objects doing things), and all data is executable in principle (like how everything is metaphysically capable of subjective experience), but most data does nothing interesting when executed (like how most physical objects have no interesting subjective experience).
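To make that analogy concrete, here is a toy machine of my own devising (nothing more than an illustration): every "program" is just a list of integers, any list of integers can be executed, but only a deliberately arranged one does anything interesting.

```python
# A toy accumulator machine: all programs are data, all data is executable,
# but arbitrary data almost never does anything meaningful when run.

import random

def execute(data: list[int]) -> int:
    """Interpret integers as instructions: 1 = increment, 2 = double, else no-op."""
    acc = 0
    for word in data:
        if word == 1:
            acc += 1
        elif word == 2:
            acc *= 2
        # every other value is inert: data that "does nothing" when executed
    return acc

program = [1, 2, 2, 1]                               # deliberately arranged data: computes 5
noise = [random.randint(0, 255) for _ in range(10)]  # arbitrary data: executable, but rarely meaningful
print(execute(program))  # 5
print(execute(noise))    # usually 0, occasionally some accidental small number
```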
Continue to the next essay, On Epistemology, Belief, and the Methods of Knowledge.