# Consciousness and Falsifiability

So g and I were discussing the nature of Consciousness in another thread, and he said something here that I've been meaning to reply to for a while.  We were discussing Chalmers' arguments (described in this paper and elsewhere) that Consciousness cannot be deduced from the Laws of Physics.

g wrote in part:

Consciousness-mysterians have in effect adopted a strategy that guarantees that their questions cannot be answered. There simply isn't any evidence one could possibly present, any argument one could possibly make, that would count as showing that consciousness is a physical phenomenon.  [This] is the key difference that Chalmers points out, though of course he does so in terms more sympathetic than mine to consciousness-mysterianism. And it's also what you draw attention to, again with a spin different from mine :-).

But, really, doesn't making that argument trigger at least a feeling of unease? What you're saying comes down to this: nonphysicalism about consciousness is unfalsifiable even in principle: no possible evidence could ever suffice. Usually unfalsifiability is a serious problem for a theory. Personally, I'm only comfortable holding an uncheckable-even-in-principle belief with much confidence if (1) I think I can actually prove it from first principles (note: observing that it's unfalsifiable doesn't count!) as with pure mathematics or statements that are true by definition, or (2) I can't avoid holding it because it's an unavoidable load-bearing element of my cognitive apparatus, as with those first principles themselves. In Bayesian terms, an uncheckable belief can't accumulate evidence, so it has to come from your prior, and I prefer my priors without too much unnecessary stuff built into them :-). And nonphysicalism about consciousness seems to me very much not the sort of thing covered by either #1 or #2. -- Of course, your attitude to unfalsifiability need not be the same as mine.

Yeah, as you guessed, I don't think this is a proper use of the criterion of falsifiability.  Let me try to explain why I think this.

In what follows, I will be assuming that my audience is familiar with some basic philosophy lingo, as well as the first Chalmers essay I linked to.

Also, please note that I am only a "nonphysicalis[t] about consciousness" in a very specific sense which will hopefully become clear in what follows.  (I'm okay with somebody who wants to say that the mind and brain are in some sense identical, as long as they don't claim to be able to prove this identity from the laws of physics.)

I. Goodbye, Eliminativism

Before I begin, I want to clear one bugbear out of the way; readers who wish to cut to the chase might want to skip this section.  Some philosophers of mind are eliminative materialists: they think that Consciousness isn't really even a thing that exists, and that the concept should be completely removed from a truly scientific account of the world.  (This position is very different from the reductionist position the rest of my essay will be discussing, where you say that Consciousness does exist but that it can be derived from more fundamental concepts.)

I'm not sure that Eliminativism even deserves to be given the time of day, since to me it is just obvious that conscious and perceptual experience is a thing.  <checks mind>  Yup, I have experiences!  Furthermore, as many people have noted, it is impossible even to argue for eliminativism without using language which presupposes the existence of minds and beliefs (e.g. "I think Eliminativism is true", "I believe the hypothesis for this reason", "it is justified by our knowledge of these observations in the laboratory", "appearances are an illusion; they merely appear to exist", "no rational and scientifically-minded person could avoid realizing that..."  etc. etc.)  A consistent eliminativist would have to give up all such mentalistic terms, and that would make them unable to express any theories at all.

If that were not enough, anyone who wants to talk about falsifiability (or any other version of empiricism) had better keep the idea of Consciousness around.  For the core of that idea is that a good scientific theory ought to make at least some predictions about things we actually experience, so that they can be ruled out by the data if they are wrong.  The technique of observation—which is conscious by definition—is implicit in the scientific method.  What is the point of even doing an experiment in the laboratory, with some elaborate but mindless machine, if at the end of the day no human being checks to see what the results of the experiment are?

Experience is bedrock; that is what we use to test the existence of other, unobserved things! If you doubt the existence of experiences, then you have no reason to accept the existence of anything else.  So this is one of those "first principles" that g refers to in his comment.

Hence Consciousness exists.  Now of course, there is one very obvious sense in which the existence of consciousness isn't falsifiable.  Namely, that if there weren't any conscious beings, then you wouldn't be around to notice their absence.  But, that is not the kind of falsifiability puzzle that g was talking about.  He wasn't suggesting that the existence of consciousness should be falsifiable, but rather that certain kinds of theories about its true nature should be falsifiable.  Let us see.

II. Why Conceptual Truths aren't Falsifiable

If we ask a question like "Are p-zombies conceivable?", then it seems to me we're basically asking a question about the structure of logically possible worlds.  Is there a logically possible world in which there are entities physically like us which do not have the property of Consciousness?  (In what follows I will treat "logical possibility" and "conceivability" as synonyms, although some philosophers are likely to wish to make a distinction between them.)

Now, questions about what is logically possible are not really empirical questions, because empiricism can only tell us which of the logically possible worlds we actually live in.  It cannot tell us which worlds are possibilities in the first place.  Instead, we reason about possible worlds by doing a conceptual analysis of the concepts in question.  This seems like it is necessarily an a priori sort of analysis, because the space of possible worlds should not depend on which of the worlds is actually the case.  And if such truths are a priori, then we shouldn't expect them to be falsifiable; we should in fact expect them to be nonfalsifiable, like the truths of mathematics.

(I've previously written a bit about Reasonable Unfalsifiable Beliefs.  I'm not sure that post really gets into the issue I'm describing here, but one of the things I discussed there is how certain propositions can be unfalsifiable while still possessing significant evidence in their favor.)

Now, that doesn't mean that positions about the logical conceivability of worlds should always be held in a completely dogmatic way.  It may be that in some cases, you have to do a tricky conceptual analysis of a concept (in this case "Consciousness") to determine what we in fact mean by the word, before you can decide what is or is not entailed by its existence.  Nor does it mean that you should be impervious to updating your beliefs; it just means that the proper method for changing your beliefs is through philosophical discussion rather than through scientific collection of data: somebody might say something like "You think that X is impossible, but what if it happened in way Y, did you think of that?"

(And then you might say "No I didn't think of possibility Y, thanks for pointing out the flaw in my argument, I owe you big time!" or maybe "You idiot, Y isn't at all applicable to what I said because blah blah blah..." and then the conversation could continue from there...)

Thus, believing that something can be demonstrated a priori on conceptual grounds, without resort to empiricism, is not quite the same thing as assigning a strictly zero prior probability to being wrong.  A complicated math proof is true a priori, but there is still the possibility of having made an error somewhere in the proof.  Rather, it is a statement about the methodology by which one knows the truth in question.

III. Can you tell me a story?

Although empirical observation doesn't directly tell us which worlds are logically possible, there is still a limited role that observation plays, by exercising the imagination.  We may become more aware of certain logical possibilities as a result of learning certain things about the world.  So for example, if somebody stupidly said that it was a priori impossible for Newtonian mechanics to be wrong, and then we did experiments and found it was wrong, then that might be taken to refute the position.  But in this case the foolishness of the claim could have been revealed beforehand, by imagining with sufficient clarity the scenario in which Newtonian mechanics is false.  It needn't have actually happened that way to refute the position.  (It's a bit like Nature saying, in a particularly hard-to-ignore voice, "Have you considered the possibility that Newtonian mechanics is wrong?")

What that means is that if you think that Science will eventually show that Consciousness can be deduced from the physical facts about the brain, then in principle you ought to be able to write a science fiction story now about a set of observations, such that reasonable people would agree that if those observations came to pass, then Consciousness would be fully explained in physical terms.  You see, the most magical thing about Science is its ability to check things through observation, but I am waiving that requirement here by allowing you to make up whatever set of observations you like.  And that makes it harder to say "Science will one day show...", since if you can't write the science fiction story you can't plead lack of funding or experiments.  You can only plead lack of imagination.

(In this very, very limited sense, the position that Consciousness can't be reduced to the Laws of Physics can be falsified.  It would be falsified if we found some scientific facts that made reasonable people spot the error in the philosophical arguments of people like Chalmers.  But then again it would also be falsified if you can even write a science fiction story that points out the errors in Chalmers' arguments!  On the other hand, once one is willing to accept the possibility that Science could refute seeming conceptual truths, then the belief that Science can explain Consciousness now becomes the unfalsifiable belief, because even in the face of a complete failure to imagine what an explanation would look like, one could always hope that a future scientific revolution will change everything!)

One test of a priori knowledge is that we cannot even conceive of a scenario in which something isn't true.  (For example, I can't conceive of a scenario in which 2+2=5.)  If that is really true, then it actually implies that the position isn't falsifiable.  But that shouldn't make us uncomfortable unless it's the kind of proposition we wouldn't have expected to be a priori.
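(The 2+2 example has a formal counterpart, which may make the point vivid.  In a proof assistant such as Lean, the truth of 2 + 2 = 4, and the falsehood of 2 + 2 = 5, are certified by pure computation with no empirical input anywhere.  This is just a toy illustration of mine; nothing in the argument depends on it.)

```lean
-- 2 + 2 = 4 holds by mere unfolding of the definitions of the numerals
-- and of +; `rfl` (reflexivity) succeeds because both sides compute to
-- the same value.
example : 2 + 2 = 4 := rfl

-- And the negation of the false equation is provable the same way:
-- `decide` mechanically evaluates the decidable proposition 2 + 2 = 5
-- and finds it false.
example : ¬ (2 + 2 = 5) := by decide
```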

(Of course you can always imagine an idiotic position which can't be falsified because the person who holds it insists on holding it no matter what and keeps modifying the hypothesis to save it.  For example, someone (it is just barely possible) might believe in Young Earth Creationism no matter what the experiments of Biology, Geology, and Physics find, because they think that this is merely God testing them or whatever.  But that is not really so much because YEC is unfalsifiable, it's more because the person refuses to recognize that their position is falsified even when the facts do falsify it.  It's a very different case if you can't think of any facts which would convince a reasonable person that the belief is wrong.)

IV. A Primer on Modal Logic

When it comes to the Philosophy of Mind, many of these disputed propositions are explicitly about what is logically possible (or conceivable).  In particular, I think the dispute between Chalmers and more reductionistic philosophers—for example Daniel Dennett—is like this.

If Chalmers is right about Consciousness, then he has to be right a priori.  But the same goes for Dennett—if he's right that Consciousness can in principle be reduced to physical statements about the brain, then I think his position that this is conceivable would also have to be right a priori. [1]  As I have been saying, any true statement about which things are logically possible, must itself be logically necessary: if true, necessarily true; if false, necessarily false.  Thus, whoever is correct, we can't really expect that their position will be empirically falsifiable.

We can formalize the arguments I've been making a little bit using Modal Logic.  In this system of notation, if $p$ represents that a proposition is true, and $\neg p$ (i.e. not $p$) that it is false, then

$\Box p$

is the statement that $p$ is a necessary truth, while

$\Diamond p$

is the statement that $p$ is a possible truth.  One then assumes certain reasonable-seeming axioms, including (N) that the theorems of Modal Logic are necessary truths and (K) that $\Box(p \to q) \to (\Box p \to \Box q)$.  People also usually stipulate that $\Box p \to p \to \Diamond p$, since necessity implies actuality, while actuality implies possibility.

There are actually multiple possible interpretations of exactly what we mean by necessary and possible, but the one I currently have in mind is the notion of analytic possibility, where $\Box p$ means that $p$ follows from pure logic, together with the conceptual meanings of whatever words enter into the proposition $p$.

Under this particular interpretation, it seems unreasonable not to accept the following axioms of modal logic:

(S4) $\Box p \to \Box \Box p$

(S5) $\Diamond p \to \Box \Diamond p$

These axioms formalize the idea, which I've defended above, that logic is true for a priori conceptual reasons, so that the same rules of logic are valid in all logically possible worlds.

(Of course in normal life we often talk about necessity in a much looser way, e.g. you can say that if Joe is a bachelor it is logically impossible (hence necessarily false) for him to have a wife, but since he could have gotten married to Sally 5 years ago, it wasn't necessarily impossible for him to be married.  This forms a seeming counterexample to S4, but only because the scope of the necessities is different.  If $\Box$ always means absolute logical necessity, taking into account all possible variations, then such counterexamples do not arise.)

The axioms (S4) and (S5) have an interesting consequence.  Any time a proposition has multiple modal symbols in front of it, for example $\Box \Diamond \Diamond \Box \Diamond p$, the assertion is equivalent to the one obtained by removing all but the last (innermost) modal operator.  So this complicated proposition is equivalent to simply $\Diamond p$.  This fact will be useful in the next section.
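This collapse rule can even be checked mechanically.  Here is a minimal sketch of my own (not from the post), brute-forcing a Kripke model whose accessibility relation is an equivalence relation, the standard frames validating (S4) and (S5), and confirming that $\Box \Diamond \Diamond \Box \Diamond p$ agrees with $\Diamond p$ at every world under every valuation:

```python
from itertools import product

def box(worlds, R, val):
    # w satisfies []p iff p holds at every world accessible from w
    return {w for w in worlds if all(v in val for v in R[w])}

def diamond(worlds, R, val):
    # w satisfies <>p iff p holds at some world accessible from w
    return {w for w in worlds if any(v in val for v in R[w])}

worlds = range(4)
# S5 frame: accessibility is an equivalence relation, here the
# partition of the worlds into two cells {0,1} and {2,3}.
cells = [{0, 1}, {2, 3}]
R = {w: next(c for c in cells if w in c) for w in worlds}

ops = {'B': box, 'D': diamond}
for bits in product([0, 1], repeat=4):      # every valuation of p
    p = {w for w in worlds if bits[w]}
    lhs = p
    for op in reversed('BDDBD'):            # []<><>[]<>p, applied inside-out
        lhs = ops[op](worlds, R, lhs)
    assert lhs == diamond(worlds, R, p)     # same worlds as plain <>p
```

The same brute-force check works for any string of operators: in an S5 model the extension of $\Diamond p$ is a union of equivalence cells, and further boxes and diamonds leave such a set unchanged.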

V. The Burden of Proof

Since both philosophers are making a priori claims, we have to be very careful about determining which of them has the "burden of proof".

Usually I find it annoying and unproductive when philosophical arguments degenerate into discussions of who has the burden of proof.  Nevertheless, it's fairly reasonable to take claims that something is logically necessary (or logically impossible) to have a very high burden of proof; if there isn't a good reason to believe it, then we disbelieve it.  It is an unreasonably strong claim to say that logic proves that pigs cannot fly.  Even though in the real world, they usually tend not to.  (But there are always exceptions.  When we were flying my cat to the East Coast, my Grandpa took the opportunity to ask the animal handler there.  It turns out that pigs do fly, at least on United Airlines.)

Conversely, claims of logical possibility have a low burden of proof; if we don't know of any proof that something is impossible, then it is probably possible.  (And if we know there can't be a proof that something is logically impossible, then presumably it must be logically possible, since logical possibility just is that which does not lead to any logical inconsistency. [2])

But in this case both philosophers' views can be phrased as making strong claims of logical necessity!  To paraphrase:

Team Chalmers: It is conceptually impossible (i.e. necessarily false) for Consciousness to be fully explained in strictly physical terms.

Team Dennett: It is conceptually impossible (i.e. necessarily false) for p-zombies to exist (at least, given sufficient information about the workings of the brain).

So here we have two conflicting philosophical positions, and both sides are staring at the other, thinking that the other team is making an absurdly overconfident claim.  So who is really being cocky here?

I think we can resolve this issue by using modal logic.  What Team Dennett is really committed to is this proposition:

Strong Physicalism: Given the Laws of Physics (taking the usual form of mathematical field equations), one can logically deduce that certain physical systems such as the brain (assuming they exist) possess the property of Consciousness.

While it is an empirical physics question what the exact Laws of Nature are, and an empirical biology question how exactly our brain is wired, these empirical propositions are not really the essential part of the hypothesis in question.  It seems unlikely that the dispute between Chalmers and Dennett really comes down to the exact equations of the Standard Model, or the exact way in which the neurons are connected.  Let us suppose hypothetically that all of these scientific details are known; the interesting question is whether, assuming all that, Consciousness follows by purely logical considerations.

I have called this position Strong Physicalism, because one could imagine a Weak Physicalist position which states that Consciousness follows by some weaker mode of necessity, for example metaphysical necessity (that which is necessary in itself, given the fundamental nature of things, even if human beings are not capable of proving it), or perhaps necessity given certain additional principles that might be plausible to postulate. [3]

Now the thing to notice is that Strong Physicalism itself contains a logical modal operator $\Box$ within it.  If we let $b$ be a list of physical facts about a human brain (which are of course logically contingent, since human beings do not exist by logical necessity), and we let $c$ be the proposition that this human being is conscious, then we can restate each team's claim of logical necessity as follows:

Team Dennett: $\Box (b \to c)$ (from Strong Physicalism)

Team Chalmers: $\Box \neg \Box (b \to c)$ (Strong Physicalism is necessarily false)

But by the rules of modal logic, $\Box \neg \Box (b \to c) \equiv \Box \Diamond \neg (b \to c) \equiv \Diamond \neg (b \to c)$ (the first step by the duality $\neg \Box q \equiv \Diamond \neg q$, the second by the collapse rule above), a mere possibility claim.

So this makes it clear.  Team Dennett is making the claim that a first-order proposition, one that does not involve any modal symbols, is necessarily true.  This is a very strong claim and the burden of proof is on them to show it.

On the other hand, Team Chalmers is making a claim that a second-order proposition, one that involves a modal symbol, is a necessary truth.  But all second-order propositions about logic partake of necessity; either they are necessarily true or necessarily false.  Hence, this is an exception to the usual rule that claims of a priori necessity have a strong burden of proof.

Instead, one should strip off all but the last modal symbol.  When one does this, one can see that Team Chalmers is actually making a possibility claim about the first-order propositions.  Hence their claim is almost certainly true, unless there is good reason to think that Team Dennett's beliefs might follow from the structure of logic itself.  If there is a good argument for that, I am still waiting to hear it.  (Arguments about how amazing the progress of Science has been to date, of course do not qualify as arguments about the structure of logic!)

It was this realization, back when I was a grad student, that put me firmly in Chalmers' camp.

One might worry that this is a bit of a trick and that I could have rephrased things in a way where the argument could be run in reverse, so that by rearranging the terms somehow it would appear that the Chalmerites were making the first-order necessity claim and the Dennettites the second-order necessity claim. [4]  But I don't see any way of making that permutation convincingly.  Strong Physicalism is (as it says on the tin) a very strong claim, which has a $\Box$ in it by its very definition.  Nobody is forcing anyone to go around making super-strong claims of logical necessity.  Strong theses have powerful implications, but for that very reason they are very easy to refute.

As I have said all along, there are weaker versions of physicalism which don't make such strong claims, and I'm not saying that those views can be ruled out so easily.  But these are precisely the versions of physicalism which do preserve some degree of mystery when it comes to Consciousness. [3 again]

VI. Occam's Shaving Cut

A scientifically-minded person might be tempted to retort, "Well hang it all, you're missing the entire point here!  Forget your sophistical modal argumentation, isn't it so much simpler to just assume that consciousness is physical, not some weird additional new thing?  Occam's Razor, which as you well know is a foundational principle of science, states that we should usually go with the simpler view until the data makes it untenable.  And postulating some crazy new mysterious stuff besides the laws of nature (that work so well in other areas) is anything but simple."

But I think this is a misapplication of the Razor, likely to lead to shaving cuts.  The normal use of Occam's Razor is when we have two or more logically possible hypotheses, each of which is compatible with the data, and we want to figure out which of them is most likely to be true.  In Bayesian terms, the simpler hypothesis is often (though not always) the one with the higher prior probability.

But Strong Physicalism isn't a hypothesis about which of the logical possibilities corresponds to the real world.  It's a hypothesis about the space of logical possibilities itself!  It is a category error to say that the space of logical hypotheses must itself be simple, since it simply consists of all thinkable hypotheses (however complicated or absurd).  Do you think it would be absurd for p-zombies to actually exist?  Good!  I do too!  But that doesn't mean they don't exist as a logical possibility.  There is no limit to how complicated or absurd a logical possibility can be, as long as it is not self-contradictory.

When we use Occam's Razor, we are generally presupposing that we have already successfully identified the space of logical possibilities, and that we have already used ordinary logic to figure out what each hypothesis says.  We can use the Razor to say "Hypothesis X is better because it is simpler and still logically implies observation Y".   But we shouldn't use it to say "It is better to think that X logically implies Y (even if I can't see how it does), because things would be so much simpler if it did imply Y than if it didn't!"  Whether or not X explains Y is a feature of the logical structure of X and Y, and that is not the sort of thing we ought to be applying Occam's Razor to.

Now I admit that if X is a very successful theory, and there is genuine reason to think it might imply Y if we just did some very complicated calculation properly, then of course we should probably give X the benefit of the doubt instead of assuming we need to find a better theory.  This happens all the time in Physics.  But even in these cases, whether or not X implies Y is still a fact about pure logic.  It either does or it doesn't follow.  If it turns out that X doesn't imply Y, then no amount of wishful thinking about simplicity can make it oblige.  Logical consistency trumps Occam's Razor, every time.

This is why mathematicians don't use Occam's Razor all that often.  I won't say there is no use for it; sometimes one can detect patterns in numbers empirically, and it may be reasonable to guess that the patterns continue in the simplest way.  But mathematicians aren't satisfied with that, because in their domain you can usually prove logically what is or is not the case, which is a much better method.

And the issues raised by Chalmers and Co. aren't really a matter of complicated calculations—they aren't saying, "oh but Consciousness is so complicated, so how can it arise from a simple thing like the brain?"  That would be ridiculous, since as we all know the brain is fiendishly complicated.  (I feel like I really ought to link to some amazing pop-sci article about neuroscience here, but I'm having difficulty finding the right one.  Maybe an Oliver Sacks book?)  Rather they are pointing out a logical gap that seems to exist no matter what we postulate about the workings of the brain.

The way to bridge that gap would be to write a description of a physical system that just is logically identical to that system having experience and awareness.  One could propose definitions like "processes information in such and such complicated way blah blah" but then one still needs to show that this is identical to our subjective feeling of awareness, which most certainly exists (see section I).  And I don't see how this could possibly be done, without postulating some additional bridging principles.

VII. Thanksgiving

Since today is Thanksgiving Day, it seems appropriate to end by expressing my gratitude that Consciousness is real.  Since without it, we would be unable to appreciate any of our other blessings!

Footnotes

Footnote 1: Somebody might propose that Consciousness could arise in two different logically possible ways, and that one way is reducible to physics, while the other way is not.  Then it could be an empirical question which of these two categories human Consciousness happens to fall into in the real world.  For purposes of my argument, I am treating such a scenario as a special case of Dennett's viewpoint, because (as I think Chalmers would admit) if it is conceivable for Consciousness to be reduced to purely physical properties of a sufficiently complex physical system, there is no particularly good reason to believe that the brain couldn't be an example of such a complex system.

Footnote 2: Some caveats may be in order here about Gödel's incompleteness theorems, and "ω-inconsistency".  To be brief, in some cases the shortest "proof" that a statement is logically inconsistent might be infinitely long; in which case such infinite proofs must be included for my statement in the main text to be true.  However, I very much doubt that this aspect of mathematical logic is all that relevant to the subject of Consciousness, since the brain is a finite system and so it seems that any relevant proofs ought to be completable in a finite number of steps.

(Some people have proposed a different role for Gödel's theorem, claiming that the ability of human beings to reason about math proves that our intellectual capacities cannot be reduced to computation.  But I think these arguments are bunk!  First note that Gödel's theorem only states that a sufficiently powerful formal system for proving mathematical truths by rote cannot be both complete and consistent.  Whereas human beings reason primarily by informal methods, so Gödel's theorem does not seem to apply to us in any obvious way.  So this does not prove that intellect cannot be reduced to computation, because (a) there is no reason to think that human beings are capable of proving all true arithmetic propositions, and (b) there is no reason to think an intelligent AI couldn't reason about mathematics in an informal way, and if it were truly intelligent, it probably would!)

Footnote 3: Note that in Chalmers' classification, "Type B" materialism (which asserts that the brain and mind are ontologically identical, but that we can only grasp this identity as an a posteriori truth) is actually an example of Weak Physicalism.  For this reason, I don't think it is ruled out by any of the arguments I've made here.  This view is oddly similar to the Chalcedonian explanation of how Christ can be simultaneously divine and human.

Footnote 4: An example of a modal argument which can be run in reverse is the question-begging Modal Ontological Argument for the existence of God.  There you assume 1. if God exists, he does so necessarily: $G \to \Box G$, and also 2. the existence of God is at least possible: $\Diamond G$, and from there you can turn the crank of modal logic to prove 3. that Theism is a necessary truth: $\Box G$.  But if you had instead assumed that Atheism is at least possible: 2'. $\Diamond \neg G$, then you can instead prove that God is impossible: 3'. $\Box \neg G$.  While either argument is technically logically valid according to the rules of modal logic, a fallacy comes when you try to get people to interpret $\Diamond$ in the 2nd premise in a weak epistemic sense, saying they should accept it because theism at least seems not to be logically self-contradictory, whereas the first premise is only plausible as a claim about metaphysical necessity, not a claim about logical necessity.
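The symmetry of the two arguments can itself be brute-force checked, in the same spirit as the sketch in section IV (again my own illustration, not part of the footnote).  Over every valuation on a handful of S5 frames in which premise 1 holds at all worlds, adding $\Diamond G$ at a world forces $\Box G$ there, while adding $\Diamond \neg G$ instead forces $\Box \neg G$:

```python
from itertools import product

def box(worlds, R, val):
    # w satisfies []p iff p holds at every world accessible from w
    return {w for w in worlds if all(v in val for v in R[w])}

def diamond(worlds, R, val):
    # w satisfies <>p iff p holds at some world accessible from w
    return {w for w in worlds if any(v in val for v in R[w])}

worlds = range(4)
# a few S5 frames: partitions of the worlds into equivalence cells
partitions = [[{0, 1, 2, 3}], [{0, 1}, {2, 3}], [{0}, {1, 2, 3}],
              [{0}, {1}, {2}, {3}]]
for cells in partitions:
    R = {w: next(c for c in cells if w in c) for w in worlds}
    for bits in product([0, 1], repeat=4):
        G = {w for w in worlds if bits[w]}
        # keep only models where premise 1 (G -> []G) holds at all worlds
        if not G <= box(worlds, R, G):
            continue
        notG = set(worlds) - G
        for w in worlds:
            if w in diamond(worlds, R, G):        # premise 2:  <>G
                assert w in box(worlds, R, G)     # conclusion: []G
            if w in diamond(worlds, R, notG):     # premise 2': <>~G
                assert w in box(worlds, R, notG)  # conclusion: []~G
```

The check makes the question-begging visible: once premise 1 holds everywhere, $G$'s extension is a union of equivalence cells, so whichever possibility premise you feed in simply reports which kind of cell you are standing in.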

I am a Lecturer in Theoretical Physics at the University of Cambridge. Before that, I read Great Books at St. John's College (Santa Fe), got my physics Ph.D. from U Maryland, and did my postdocs at UC Santa Barbara, the Institute for Advanced Study in Princeton, and Stanford. The views expressed on this blog are my own, and should not be attributed to any of these fine institutions.
This entry was posted in Metaphysics.

### 105 Responses to Consciousness and Falsifiability

1. Hamid says:

I think that one needs to be more careful about the complexity of what is AI and what is consciousness. E.g. one may get stuck in the question of P=NP as I once used to follow Dick Lipton's Gödel's lost letter to von Neumann closely. The ability to build robots that can build cars or other machines may require certain amount of consciousness just as the ability to solve physical and mathematical or mathematical-physics unsolved (and sometimes even solved) problems also requires consciousness. But why does one need theology? One might argue that even the ethics is gained thorough trial and errors. The complexity among mathematics, physics and consciousness can get very difficult not just because Cantor's diagonal argument and its use in the proof of Gödel's incompleteness theorem. One can possibly apply metaphysics to mathematics or physics, but it might not be so easy to raise mathematics or physics to metaphysics. At a theological level, it might be argued that a higher level of consciousness is needed for the latter, but as at all levels corruptions keep on occurring then the burden of proof becomes more and more difficult to prove to a nonbeliever of theological philosophy! Why should one pray to God might appear to be a very funny question to a believer in God, but it might not be so easy to prove it to a nonbeliever or someone who stubbornly pretends not to do so. If one has a fixed amount of assets then it might be easy to calculate how much more one can withdrawal from an ATM machine. But as the number of trades with the fixed asset keeps increasing then the calculations may become more and more complexly difficult as the number of transactions go to infinity over a very short period of time almost approaching 0.
Happy Thanksgiving.

2. Mactoul says:

Is it not possible to argue that the laws of physics pertain to quantities only, and as consciousness lacks the aspect of quantity (at least, at present nobody has quantified consciousness), consciousness is outside the domain of physics?  Things must be quantified first, and only then can we talk about the laws of physics being applicable or not.

Falsifiability can be misleading if too strictly or universally applied. It serves more as a guide suggesting avenues that are more amenable to empirical investigations. Many scientific theories are no less scientific for being non-falsifiable.

The point about the logical conflict between
Team Chalmers: It is conceptually impossible for Consciousness to be fully explained in strictly physical terms.
Team Dennett: It is conceptually impossible for p-zombies to exist.

holds only when a certain view about physical systems--that they are exhaustively described by the laws of physics, and thus that there are no qualitative features in physical things--is adopted without reservation.
I hold that physical things are not exhaustively described by the laws of physics, and would cite:
i) Chemistry--even molecular structure, and emergent phenomena like liquidity, wetness, and temperature.
ii) Life itself. A living thing is not just a heap of atoms but possesses a unity that cannot be described or explained by the laws of physics.

It also needs to be recalled that in Thomism, mental images are held to be material--but not so by modern philosophers. Certainly mental images are not amenable to the laws of physics.

Thus, I hold both that p-zombies are conceptually impossible and that it is conceptually impossible for Consciousness to be fully explained in strictly physical terms.

About Gödel's theorem, you write:
".. claiming that the ability of human beings to reason about math proves that it cannot be reduced to computation."

I am not sure what the "it" refers to.
Again, you write
"Whereas human beings reason primarily by informal methods"
This is precisely what is sought to be shown by Gödel's theorem.
Again,
"there is no reason to think an intelligent AI couldn't reason about mathematics in an informal way, and if it were truly intelligent, it probably would!"
I am not sure what precisely you mean by "informal". But surely a system that is fully described by the laws of physics cannot reason "informally". Where is the scope for "informality"?
Prof. Stanley Jaki has a good discussion of this application of Gödel's theorem in his book Brain, Mind and Computers.

3. Hamid says:

In modern science, scientists put their faith in nuclear theories, but there appears to be a great complexity involved in the path from philosophy leading to mathematics, physics, chemistry, ..., economics, and in whether there is a place somewhere in between for alchemy too! Is it false that one can turn copper into gold theologically, philosophically, mathematically, physically, mathematical-physically, or chemically? Where is the falsifiability? Copper 63, silver 108 and gold 197 are all in the 11th group of the periodic table, with 29, 47, and 79 electrons respectively. Maybe if gold had 192, 195 or 200 isotopes, which are close to iridium, platinum, or mercury, or if one tried to turn silver rather than copper into gold, it would be less economically falsifiable. Of course, it appears that scientists, unlike alchemists, have given up on turning copper to gold, but have turned to enriching the isotopes of other elements in the periodic table, which is more economical. The case of uranium isotopes, just as those of hydrogen, is an example in nuclear theories, in the philosophical mathematical physics of which I am only a very pedantic beginner! Whether in theology or science, taking the first few steps can be rather illusively limiting, as one might get caught in variously finite space-time loops. It might be more illustrative to work on the geometric Langlands program supersymmetrically within mathematical physics to get economical matters going, instead of jumping to false conclusions, given that one does not get caught or lost in the mathematics or the supersymmetry. And then there is always theology, and how to compactify between philosophical theology and philosophical mathematical physics. The electroweak theory of Salam-Weinberg-Glashow in the standard model might be a good place to start.

Interesting discussion on consciousness! Biologists, neuroscientists, psychologists and philosophers have struggled for decades to understand the hard problem of consciousness. A neurologist friend of mine once remarked that “we know when consciousness is absent, e.g. under anesthesia, but we do not know what it is when it is present!” Scientists do not even know how far down it goes in the universe, starting from human beings. Cats and dogs have some consciousness for sure. Are bacteria and viruses conscious? Are plants, rocks, and fundamental particles conscious in some sense? Probably quantum mechanics has something to do with it, but it is not clear at the moment.
Eastern religions like Hinduism and Buddhism (probably Christianity and Judaism also) believe that there is an extra-sensory, non-empirical world, and our consciousness may be the link between the sensory and extra-sensory worlds. Eastern religions believe that there is a super-consciousness prevailing in the entire universe. Expecting to understand the extra-sensory world by empirical sensory experiments and our everyday logic is an oxymoron. At this point perhaps only meditation can access it. It may very well be that there is a limit to the scientific method, and that it has hit a brick wall when dealing with consciousness!

5. Hamid says:

There is a consciousness in Islam in which Imam Khomeini asks Muslims, in his book on the rituals of prayer, to sit at their homes, pray their daily prayers, and recite this part of verse 62 of Surah Naml (the Sura of the Ant) (Pickthall's translation): "Is not He (best) Who answereth the wronged one when he crieth unto Him and removeth the evil?" However, the entire verse is as follows: Is not He (best) Who answereth the wronged one when he crieth unto Him and removeth the evil, and hath made you viceroys of the earth? Is there any God beside Allah? Little do they reflect! (62)
A better translation would have been: is there any god beside God? Here is, e.g., Yusuf Ali's translation: Or, Who listens to the (soul) distressed when it calls on Him, and Who relieves its suffering, and makes you (mankind) inheritors of the earth? (Can there be another) god besides Allah? Little it is that ye heed! (62) In Arabic, the same word is not used for god and God. The part of the verse which Imam Khomeini stresses for recitation is usually recited in order to pray for the appearance of Imam Mahdi. It must also be stressed that the notions of God (or Allah) and Imam are not one and the same in Islam, as they might be in Christianity or Hinduism or Krishna consciousness. It is most probably the case that Judaism also claims that God is with us, but does not claim that the Messiah is himself God. In any case, there must be a belief in God, or otherwise there is no religion to worry about either! The falsity might be noticeable when Mohammad as the prophet of God is denied. Of course, if one is not a Muslim, then there is no excuse, but the problem starts when one wants to learn about God in ways other than the Islamic way. How do you make these ways converge? Is it then through Christ, Messiah, Krishna, Adam, or Mahdi consciousness without God consciousness? Has not God been compromised here? The question is: can one set aside theology and spend time only on philosophy, mathematics, physics, medicine, and other natural sciences, including ethics? One can do this if there are different theological philosophies which have something other than the natural sciences in common. Again, it might be possible to reduce the matter to the natural sciences, but it might not be possible to raise the consciousness metaphysically. In particular, suppose someone claimed, e.g., "I am Krishna." How is he supposed to be convinced that he is not? You might answer that the same might occur with Mahdi.
And yet there is a distinct philosophical difference, as Mahdi, like everyone else who might happen to be Muslim, is a human being who is also the Imam, whereas Krishna is God, or the Supreme Personality of Godhead as A. C. Bhaktivedanta Swami Prabhupada puts it. Besides, who is to decide whether this person claiming such a thing is not actually crazy but is in fact Krishna? Is Hinduism or Buddhism a requirement here? Maybe he is as he claims. How do you know that he is not? Apparently, on matters of philosophy and the natural sciences, one does not encounter such a difficulty, as the burden of proof consists in proving the philosophical problem as a theorem, or at least presenting one's arguments in terms of the subject matter involved, be it mathematics or physics or medicine or otherwise. There is a consciousness involved even where the subject matter is algorithmic computer science, such as parallel or vector computing. But theologically, can one also approach Mahdi or Messiah or Christ too? Or is that sacrificed in favor of the natural sciences?

6. Ben Crandall says:

Hi, I have enjoyed your blog for a while, and appreciate your intelligent, polite, and thoughtful presentations and discussions (making things accessible without oversimplifying, which is difficult and where all too often people fall short).

I thought I would comment, as these are topics (consciousness, logic, modal logic, metaphysics, and science) that are all of great interest to me and that, though I have no formal graduate education, I have spent some time thinking and reading about.

I will come out and say that I am more sympathetic to Dennett's positions on consciousness (as I believe Dennett intends them, I think often people misunderstand Dennett's positions... Sometimes because of Dennett's own failure to express the ideas more clearly, and sometimes it seems from lack of familiarity with his work).

I was a little confused regarding your exposition of Dennett's view versus Chalmers as I don't take Dennett to be making a claim about the first order necessary logical entailment of consciousness as a result of a particular physical description at all.

I am curious where you find this in Dennett's work? I am assuming since you attribute it to "team Dennett" you believe this to be Dennett's own view. You make reference to Dennett's criticism of the idea of philosophical zombies, but I think this misses the point of Dennett's criticism entirely.... Can you point me to where you think Dennett makes such an argument or claim?

(I don't have a lot of time right now, but I will just mention that I also follow much along the lines of people like Wittgenstein and Quine when it comes to our understanding of semantical notions, analyticity and modality... So I may also have issue with some more fundamental views you have regarding the foundations of meaning, logic, and synonymy, but maybe we can put those off for a bit unless absolutely necessary :)

7. A fine article, Aron; I've learned much, particularly about what modal logic might be. Can you recommend an ebook or web site for self-study?

Here's one point of discussion about the science of mind: very few philosophers of mind have taken into account how consciousness and rational activity develop as the human matures. (I believe this last is a true statement.) There is of course the pioneering work of Piaget, but even more interesting is an article by Rochat, "Five Levels of Self-Awareness as They Develop Early in Life". (See http://www.psychology.emory.edu/cognition/rochat/Rochat5levels.pdf)

Another interesting (but off-the-topic) point:
The distributive law you cite for modal logic is presumed by Andrej Grib not to hold in quantum logic. See (a bit of shameless self-promotion here)

8. PS--I myself am a Mysterian. I don't see how self-awareness can be reduced to any sort of physical mechanism (and I realize this is not a logical argument). My stance has been reinforced by reading works by Colin McGinn (did he coin the term "Mysterian"?)

9. Ben Crandall says:

Setting aside other issues, since you present Chalmers and his Hard Problem as the primary obstacle to physicalism I thought maybe a quick review of them and why I find them unpersuasive might at least initiate a discussion of the true disagreement.

His first syllogistic deduction goes as follows:

(1) Physical accounts explain at most structure and function.
(2) Explaining structure and function does not suffice to explain consciousness; so
(3) No physical account can explain consciousness.
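The deduction itself is formally valid; only the premises are in dispute. As a sketch (my own formalization, with invented predicate names), the argument can be rendered in Lean 4, reading premise (1) as "everything a physical account explains is a structure-or-function fact" and premise (2) as "consciousness is not such a fact":

```lean
-- Hypothetical formalization; "Account", "Fact", and the predicates are my labels.
variable (Account Fact : Type)
variable (physical : Account → Prop)          -- a is a physical account
variable (explains : Account → Fact → Prop)   -- a explains fact f
variable (structFunc : Fact → Prop)           -- f is a structure/function fact
variable (consciousness : Fact)

-- (1) Physical accounts explain at most structure and function.
-- (2) Consciousness is not a structure/function fact.
-- (3) Therefore no physical account explains consciousness.
example
    (h1 : ∀ a f, physical a → explains a f → structFunc f)
    (h2 : ¬ structFunc consciousness) :
    ∀ a, physical a → ¬ explains a consciousness :=
  fun a hp he => h2 (h1 a consciousness hp he)
```

The formalization just makes explicit that the whole weight rests on the two premises, which is where the discussion below focuses.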

Before I go into specifics, I will note that Chalmers and others who defend the Hard Problem are often very vague about what they mean by "consciousness," often defining it merely by stipulation as not any functional or structural property (although I will note that the functionalist view has a long and distinguished pedigree independent of modern atheism and physicalism, with Aristotle's hylomorphic functionalism being a primary inspiration... and many considering him to be the first functionalist).

Instead they refer to "raw feels" or "qualia" or the "what it is like" to be something, its "subjective phenomenology," or any number of other descriptions that seem to wave a hand at what is supposed to be problematic for the functionalist/physicalist.

In fact Chalmers is largely unwilling to tie the notion to even specific properties saying "On my usage, qualia are simply those properties that characterize conscious states according to what it is like to have them. The definition does not build in any further substantive requirements, such as the requirement that qualia are intrinsic or nonintentional."

So instead of indicating specific properties such as intentionality or being intrinsic, Chalmers and others largely point to the phenomena by ostension, saying "Conscious states include states of perceptual experience, bodily sensation, mental imagery, emotional experience, occurrent thought, and more. There is something it is like to see a vivid green, to feel a sharp pain, to visualize the Eiffel tower, to feel a deep regret, and to think that one is late. Each of these states has a phenomenal character, with phenomenal properties (or qualia) characterizing what it is like to be in the state."

So there is something it is "like" (by which does he just mean comparable to?) to be a conscious being experiencing these states. And I (and Dennett) would agree. As Dennett says in "Quining Qualia":

"Everything real has properties, and since I don't deny the reality of conscious experience, I grant that conscious experience has properties. I grant moreover that each person's states of consciousness have properties in virtue of which those states have the experiential content that they do. That is to say, whenever someone experiences something as being one way rather than another, this is true in virtue of some property of something happening in them at the time, but these properties are so unlike the properties traditionally imputed to consciousness that it would be grossly misleading to call any of them the long-sought qualia. Qualia are supposed to be special properties, in some hard-to-define way. My claim--which can only come into focus as we proceed--is that conscious experience has no properties that are special in any of the ways qualia have been supposed to be special."

And I would agree. Of course we have conscious experiences, and some conscious experiences have properties and are "like" other conscious experiences.... I just deny that there are these properties of conscious experience that are mysterious or problematic in the sense you and Chalmers believe them to be.

I know my comment is already running long, so I am going to try to break these responses into chunks, but let me run my own conceivability argument in response to Chalmers' above argument.

Chalmers says that we functionalists/physicalists (although I myself hold to a metaphysical position along the lines of Ontic Structural Realism, so it is a matter of debate whether it makes sense to call me a physicalist in the first place) can account for structure and function but leave out the qualia of a subjective phenomenal “feel” (whatever that might mean).

So let’s take a prototypical case of a qualitative experience. You are out camping. You feel the breeze go through your hair, see a green and beige field in front of you, hear the stream running in the distance. A horse trots by.

Now let's imagine what it is like to have this experience (wait, aren’t qualia ineffable? Never mind). Now let’s take away the functionalist aspects, imagine the qualia once you take away the memory linked reactive attitudes and dispositions you have concerning the ability to recognize and discriminate horses from other animals and objects. Now imagine you cannot discriminate different colors or shapes from each other. Or detect motion from background colors. Or discriminate and associate different pitches and sounds. Or discriminate hot from cool skin sensations, pressure, direction or location on body, associations with memories of previous skin sensations. Etc. etc. etc.

Can you honestly say you can conceive of what it would “be like” to have these qualia absent these discriminations, functional abilities, etc.? I cannot in all honesty conceive of or imagine what that would mean or “be like”... Can you?

Thanks for your time and look forward to your response if you have the time.

10. g says:

(I've just seen this; as it begins with a quotation from something I wrote, I should probably reply. However, it will likely be at least a few days before I have time.)

11. Cedric Brown says:

How would you programme a computer to experience the colour red? I think there are three possible answers:

1) It's impossible.
2) We have no idea how to do it at present but we will eventually figure it out.
3) We can't programme a computer directly to have that kind of experience but it would emerge in some way that we don't understand if we built a sophisticated system that responded to its environment in the right way.

The latter two answers seem like cop-outs to me. The fact that we can programme computers to do so many things but have no idea how to make them have specific experiences is already significant. I don't think we should be too impressed by vague promises about possible future developments.

If we eventually build a machine that has conscious experiences but we don't understand how those experiences emerge within the system, then we will have made no progress in understanding conscious experience.

12. Hamid says:

Philosophically, consciousness is important because suppose that a person could ask God for a miracle and is granted his request, whatever it might be. What should he ask for? It might be objected that miracles are philosophically impossible, although they are possible theologically or metaphysically. That's where consciousness comes in to help, in the sense that if it is asked that Christ be resurrected, as Christians believe, or that Mahdi reappear, as Muslims believe, then that's a great miracle, because supposedly all kinds of other miracles then start to appear. It might be compared to finding the Higgs particle. Now that it is found, what is Higgsing and unHiggsing then? Let us see: once, say, Mahdi appears, then is he too supposed to ask for Mahdi? This appears a funny question. If he is there, how can he ask for himself? It might be pointed out that Mahdi is there for some time, alright, as Christ is also there, but is the consciousness that Christ or Mahdi is there also there or not? Thus, if only one or two or more people reach the consciousness and realize that they have observed Christ in person, that is then enough. Maybe if Mahdi and Christ announce that they have observed each other, then that would be enough. Furthermore, perhaps Christ and Mahdi themselves wish to teach everyone to reach the consciousness to observe them for themselves, whether they can or not, so that everyone has the chance of becoming a great human being of infinite capacity to ask God for a great miracle, and God will grant it. Of course, this is a consciousness beyond the natural sciences and runs into metaphysics and theology, although it does not mean that natural scientists at the LHC should not wish to reach such metaphysical consciousness while trying to find hidden valleys in particle physics.

13. Ben Crandall says:

Cedric Brown,

Do you mind explaining specifically what you mean by programming a computer to "experience the colour red"? Certainly we currently have computers that can identify and discriminate red objects from blue ones, discriminate various other properties such as shape and size, store information about these in memory, compare and contrast those properties, etc. What aspect of experience are you referring to?
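[To make the point concrete, here is a toy sketch of my own (not anyone's actual system): a trivial program that discriminates red-ish from blue-ish pixels, stores its observations, and can compare them. These are the functional abilities being referred to, with no claim that anything here "experiences" red.]

```python
# Toy discriminator: classify an RGB pixel by its dominant channel,
# store past classifications, and report them. Purely functional --
# discrimination, memory, comparison -- nothing more is claimed.

def classify(rgb):
    """Label a pixel as 'red', 'blue', or 'other' by dominant channel."""
    r, g, b = rgb
    if r > g and r > b:
        return "red"
    if b > r and b > g:
        return "blue"
    return "other"

memory = []  # stored past observations, available for later comparison
for pixel in [(200, 10, 30), (15, 20, 240), (100, 100, 100)]:
    memory.append(classify(pixel))

print(memory)
```

Running this prints `['red', 'blue', 'other']`; the open question in the thread is whether anything beyond this kind of discrimination-plus-memory is needed for "experience."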

I will also mention, as a logical point, that even if we could not program a silicon computer to perform a certain task or function, or bear a certain property, that wouldn't mean that the task referred to wasn't physical or physically realizable.

For instance, imagine someone who claimed automobiles couldn't possibly be physical. They argue that they have been trying for years to make a car out of marshmallows or butter, always fail, and have no idea how to possibly make it work.

Clearly the problem is that a car can't (as far as I can tell) be made entirely out of butter or marshmallows. That doesn't mean cars are non-physical. Similarly, there might be physical properties of silicon and the other things that compose standard computers that cannot support the type of physical processes required for the aspects of conscious experience you are referring to (whatever they may be). I am not saying that IS the case (some philosophers argue as much, but I am agnostic), but it could be the case.

Thanks.

14. Cedric Brown says:

Ben

Yes, we can build machines that can discriminate between colours. But are you simply defining the experience of colour as whatever internal state a system is in when it is able to make such a distinction? That seems like begging the question.

If we are unable to programme silicon-based computers to have conscious experiences, it would throw a very large spanner in the works for those who are trying to explain consciousness scientifically. The best hope for naturalism is that the physical makeup of a computing machine is not relevant to consciousness.

15. Ben Crandall says:

Cedric Brown,

Thanks for the polite and thoughtful questions and response.

I wasn't trying to define experience. I was asking you to explain what aspects of experience you are referring to. My own view is that experience isn't a single unified "thing" but has many different aspects: abilities to discriminate and identify colors, shapes, sounds, textures; our memory-linked reactive attitudes and dispositions to respond to stimuli; our ability to make associations between observations and our environment; our ability to model the spatial organization of objects in our environment, model counterfactuals, and make predictions about our future observations; semantic and visual associations between current stimuli and past stimuli; etc. I think a whole wide range of things goes into what people refer to as conscious experiences and awareness.

And further, I see no good reason to believe (though I am open to arguments and evidence to the contrary) that there are inexplicable and ineliminable aspects of consciousness beyond these that cannot be accounted for in such functional (and physically realizable) accounts.

But if you believe there are aspects to experience that can't be accounted for on such analyses, I would like to hear what you believe those aspects are and why.

"we are unable to programme silicon-based computers to have conscious experiences, it would throw a very large spanner in the works for those who are trying to explain consciousness scientifically. The best hope for naturalism is that the physical makeup of a computing machine is not relevant to consciousness."

I personally am not arguing that a silicon-based consciousness couldn't be constructed... It very well might be (and I am almost certain that if one is constructed, virtually everyone will accept that it has conscious experiences, based on third-person observational and behavioral criteria). But I already gave an argument as to why the inability to make a conscious, experiencing machine out of silicon wouldn't be a strong argument against physicalism (though it might falsify certain computational theories of mind... it would depend on the specifics).

Can someone build a skyscraper out of butter? Or a car out of marshmallows? Of course not, but that doesn't mean a skyscraper or a car is immaterial; it just means it can only be constructed out of certain materials, right? For instance, suppose some aspects of robust human conscious experience rely on the physical speed and interconnectivity of neural spike trains. And suppose whenever computer scientists try to make a computer capable of modeling these analogue interconnections with sufficient detail, the computer can't handle the load and overheats and melts. That wouldn't show that mental processes are non-physical any more than the fact that a skyscraper made out of butter melts before you can finish it, right? Why would anyone assume that a conscious machine can be made out of any substance whatsoever? I can't build a computer out of toothpicks; all that shows is that a computer can't be made of toothpicks, not that it can't be made of something else.

Make sense? Thanks again for your response.

16. Cedric Brown says:

Ben

It seems to me that you began by talking about conscious experience in functionalist terms and are now doing the same thing but in more detail. What more is there to conscious experience than XYZ of functionalist detail? The experience part. Please forgive my flippancy.

I can't really say more than that. It just seems obvious to me that something has been omitted from a functionalist account of experience. But I know you won't find that persuasive.

I wasn't arguing that computers can't be conscious, although I'm sceptical. My point was rather that if computers become conscious and we no more understand why they are conscious than why we ourselves are, then consciousness will still be a mystery to science.

But my understanding is that the physical form of a computer should make no difference to the functions it can perform.

17. Ben Crandall says:

Cedric,

I understand that we may be at a stalemate. You (and you are not alone) believe there is something about conscious experience that can't be explained in physical or functional terms (and considering it is also apparently impossible to even express in language, there might be little hope of it being explicable even in immaterial, non-functional terms). I (and having thought quite a bit about this, I can say this in all honesty) don't know what you are referring to. Now maybe that means I am a philosophical zombie, or am just not aware that I have these properties of experience you are referring to (which would be a strange case, considering what people often claim about them).

However I hope you don't mind if I push back a little just to see if there might be any hope of progress on the matter.

You claim that the functionalist aspects of experience I described leave out the "experience" part... I find this a strange claim. Certainly the things I described are at least PARTS or ASPECTS of experience, are they not?

If you see a field of grass that has been painted red, surely part of your experience is discriminating the fact that the grass is red and not green, as one would expect? And certainly part of the experience is the surprise created by your memory-linked dispositions to expect that the grass would be green, and then detecting that it is not. And so on and so forth. I mean, can you even say (or imagine) what it would be like to have such an experience without these functional aspects? Could you have the experience of surprise at the red grass WITHOUT the functional aspects of discriminating the visual colour, the shapes and textures of the field, the motion in the breeze, etc.? In fact, what is left over of your experience of the field of red grass without these? If you insist that what is left over is the "experience," my question is: the experience of WHAT? It couldn't be the experience of the field of red grass, because your experience (absent these functional states) COULDN'T be of a field of red grass, since identifying the grass, the shape, the motion, the memory-linked connections and dispositions, your association of the shapes and texture with your verbal and semantic associations with the words "grass" and "field," etc., are all functional.

And certainly it would be strange to say you could have that same experience absent any identification of shape, sound, color, texture, etc., would it not?

Let me give one more everyday example that might help illustrate what I think is one of my problems with what I think might be your view.

I am guessing this has probably happened to you before. You are looking for something everywhere. You go from room to room and can't find it. Then looking at a shelf you already had stared at 5 times before, you realize what you had been looking for was right in front of your face that whole time, right in the middle of the shelf you had been looking at all those times.

Now I am assuming you would agree that you had had experiences of that shelf. When you looked at the shelf, was the object part of your experience and you just didn't realize you were having that experience? (You were experiencing it and just didn't realize or notice it.) Or was the object not even being experienced (maybe you experienced a vague gap on the shelf, a blur, or nothing at all)? Or do you not know? (And how confident are you in your answer?)

Thanks.

18. Hamid says:

A computer can indeed add two and two. It can also possibly be given algorithms to test instances of Fermat's last theorem experimentally. But can it prove Fermat's last theorem too? Now, suppose that Andrew Wiles announces that he has proved Fermat's last theorem in the Annals of Mathematics in 1995. Has his proof then been algorithmically fed to a computer to test its validity? Can it even be done? In fact, can many people understand his proof at graduate levels, doctoral levels, professorships, expert number theorists? Is there any hope that his proof can one day be explained successfully to high school students in their textbooks? If its proof is successfully programmed on every calculator, can every high school student still hope to learn it as she wished? How should one reach the level of consciousness at which she is convinced that she has understood the proof of FLT? Just the desire to learn FLT does not give one enough to actually understand the problem. It is a pleasing, humbling embarrassment to learn it otherwise by any rational means possible! And in fact there are many other such embarrassments, where even the computers can work out the problems and one is still stuck at them. Where is the consciousness to come out of these algorithmically solved embarrassments, using all the resources at hand, including computers, without too much embarrassment? It might do me some good if I learn from kids too. It might be of some use to learn general relativity too. Quite recently, I sat in a class on this subject taught by Professor Sheikh-Jabbari. But do all these have anything to do with consciousness, or have we still not entered its arena? Maybe we need metaphysics too, to properly define states of (un)consciousness. Neurologists, psychiatrists, and psychologists might also help, but where is the beauty in John Nash's mind? Can this beauty be pinpointed in his theorem on the embedding of Riemannian manifolds in Euclidean spaces?
There is, however, a line that separates the metaphysical and theological ground of consciousness from a purely philosophical, mathematical, or scientific ground.

19. Cedric Brown says:

Ben

I should point out that I am no expert in these matters. But it seems to me that you were somewhat blurring the lines between giving a functionalist account of experience and describing experience in phenomenological terms. In other words, you were deviating from strictly functionalist language.

You were talking about experiences in terms of how they relate to other experiences. But, as I understand it, what you should really be talking about is the relationships between functional states. So if you see a triangular object, you are in a functional state that will enable you to pick up the object and place it in a triangular space in a puzzle, or to tell me that there is a triangular object on the table, etc. The language of experience can supposedly be reduced to this. And that seems to me to be the problem.

20. Ben Crandall says:

Cedric,

I think you might be confusing functionalism with behaviorism. Functionalists don't say anything about overt behavior being required for a functional state. If your computer's screen is broken and you type in 2+2, the computer can still go into the functional state of adding without any observable behavior.

Or consider a system designed for facial recognition, to identify a potential criminal. The computer may rule out hundreds of people as suspects without anyone overseeing it having any indication that it is identifying or discriminating between the faces; they will only notice a behavioral response if the computer identifies the criminal and an alarm goes off, or something like that.
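To make the point concrete, here is a minimal sketch in Python (all names hypothetical, not any real recognition system): the program enters an internal "discriminating" state for every face it rules out, but produces observable behavior only on a match.

```python
# Illustrative sketch only: internal state transitions happen for
# every face examined, while observable output occurs only on a match.

def matches_suspect(face_features, suspect_features):
    # Stand-in for a real feature-comparison algorithm.
    return face_features == suspect_features

def scan_crowd(faces, suspect):
    matches = []
    for face in faces:
        # The system "discriminates" this face internally here,
        # whether or not anything outwardly observable results.
        if matches_suspect(face, suspect):
            matches.append(face)  # the "alarm": the only visible output
    return matches

# Hundreds of faces could be ruled out with no outward sign at all;
# only a match triggers observable behavior.
print(scan_crowd(["alice", "bob", "carol"], "carol"))
```

The functionalist point is that the non-matching comparisons are just as much functional states as the one that trips the alarm, even though only the latter shows up as behavior.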

Neuroscientists study the way our visual system encodes and transmits information about, say, the numbers, shapes, and colors of objects via the retina and neural spike trains (signals and patterns of firing neurons). If my eye encodes light frequency, intensity, etc., and the information is then transmitted to the part of my brain that identifies it as a triangle, triggering those parts of my brain involved in identifying triangles, associating them with other concepts and shapes, etc., that does not require me to do anything with the triangle other than see it (though obviously certain things need to be calibrated or filtered out through verbal reports and other behavior).

So I am not sure why you are assuming that what I described involves irreducible phenomenological language. If the visual system in my brain identifies a field of grass, recognizes that it has been painted red, and triggers semantic and other associated functional processes related to this, why is that not allowed on a functionalist account? If the scientist can give a thorough account of how you identify objects and discriminate sounds, textures, words, colors, etc., why wouldn't that also be an account of our EXPERIENCE of discriminating shapes, colours, textures, pitches, etc.?

[ Just realized I was picking up on your spelling of "colour" but kept switching back and forth :) ]

Enjoy the conversation and your thoughtful questions. Thanks.

21. Ben Crandall says:

Cedric... I suppose the main thing I am a little confused by is which part of what I have been saying you don't think can be given such a functional account? Which part couldn't, in theory, be instantiated in a computational process or something?

You said "it seems to me that you were somewhat blurring the lines between giving a functionalist account of experience and describing experience in phenomenological terms."

Where precisely? (I do think phenomenological states, to the extent they exist, can be reduced to functional states, relations, etc., so, as I originally asked, I am trying to figure out what is being proposed about experiences that can't be so reduced. That is what my thought experiments, observations, etc. are meant to probe... maybe you are referring to my example of the experience of looking for a lost object? If so, that was meant as a question, as was the previous comment, meant to prompt a conversation and attempts to answer those questions. I think the view promoted by people like Dennett largely can provide a consistent and plausible answer to those questions and other observations, whereas the view Aron, Chalmers, and others promote I believe cannot.)

22. Ben Crandall says:

Cedric,

Have you read any of Dennett's criticisms of qualia, or of the confused idea of the "Cartesian Theater"? I agree with Dennett (though I could be wrong, clearly) that a lot of these intuitions and confusions come from the idea that it "feels to us" like we are observers in the theater of the mind: we are surrounded by technicolor images, surround sound, smell-o-vision, touch-o-vision, etc., and this is what conscious experience consists in, this appearance that we are observers in our heads somewhere, with the sights coming in through our eyes and transmitted as vivid visual images into our minds, and the same with our ears, nose, etc.

I think a lot of the appeal of qualia, phenomenal experiences, etc. as a problem for functionalism and physicalism falls away if you really sit down and think about how wrong and confused the idea of the Cartesian Theater is. In my experience, it only seems appealing and persuasive when you don't think about it too carefully.

23. Cedric Brown says:

Ben

I am inclined to think that functionalism is behaviourism with bells on. Suppose you design a system that can recognise faces. You show it a photograph of Donald Trump and it displays the words "Donald Trump" on a screen. If the screen isn't working you might say that the system still "knows" that it has seen Donald Trump. It is in an appropriate functional state. Even if that state never results in any kind of output it still counts as a state of knowing what it has seen.

That way of looking at things is not a million miles from behaviourism. Ultimately, what counts as a functional state is still the capacity to produce some kind of output or behaviour. Even if functional states can trigger other functional states, there must be the potential to produce an output.

From the functionalist's point of view, our mental life consists of various interacting functional states which have the potential to produce some kind of behaviour. That is fine as far as it goes, but I would say that IN ADDITION to this there are mental experiences! But perhaps I'm just being awkward.

The mental state of seeing a ripe tomato is a functional state that can trigger other functional states and also behavioural outputs. But, in my opinion, it is more than that. There is the particular quality of seeing a ripe tomato.

An alien visitor may agree with our judgements of colour. It may agree that there is a difference between a ripe and an unripe tomato. But perhaps the alien is experiencing something different when it sees a ripe tomato. Perhaps when the alien sees something red it is actually experiencing what I would call green. Now, in theory you might still try to understand that in functionalist terms. The colour red may have a different effect on the alien. It may trigger different kinds of associations. But it seems to me that that is a very long-winded way of defining what is a very basic experience.

Regarding Dennett, I would agree with those critics who accuse him of trying to explain away consciousness. But, as I said, I am no expert. My opinion is very much an intuitive one.

Aron,
I am still in the process of understanding your (modal) logic arguments, but I have a question in the meantime. Have you tried to apply this logic to the various interpretations of quantum mechanics? I suppose one cannot question quantum mechanics itself; the sole justification of the various hypotheses is their outstanding agreement with experiment, even though they are hard to understand with our everyday concepts.

25. Aron Wall says:

Mactoul,
"it" refered back to our ability to do math---but to make the point of the sentence even clearer I've replaced it with "our intellectual capacities"

Cedric,
I'm not sure I fully understand the distinction you have in mind between (1) and (3). For example, would simulating an entire human brain count as doing a computation?

I believe that it is probably possible for computer AIs to be conscious. I can't prove it, and I'm not sure how anyone could know for sure, but since we know there is at least one physical system (the human brain) that gives rise to consciousness (however mysteriously), it seems not unlikely that there could be other such physical systems as well. (See also my first comment to Ben.)

Ben,

I thought the whole point of Functionalism was that only the causal information-processing capacities of the physical system matter for determining the contents of the conscious experience, not the physical substrate. Of course not every physical system is capable of supporting arbitrary computations. But I don't think your analogy of "building a skyscraper out of butter" is a good reason to be skeptical of silicon AI. We know for a fact that silicon computers (unlike butter) can support arbitrary computations. They are (if we idealize them to have arbitrarily large memory and running time) Turing Complete. Any system which supports universal computation can simulate the behavior of any other computable system (and there is no good reason to think that neurons aren't computable). So when you say "suppose whenever computer scientists try to make a computer capable of modeling these analogue interconnections with sufficient detail the computer can't handle the load and overheats and melts", I think we know enough about computer science and the brain to say that this is false!
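The universal-computation point above can be illustrated with a toy sketch in Python (a hypothetical leaky integrate-and-fire model chosen purely for illustration, not a claim about real neurons): a general-purpose computer can step any computable update rule, whatever the original substrate.

```python
# Toy sketch: a silicon computer simulating a crude neuron-like
# update rule. The point is only that computable dynamics can be
# stepped on any universal computer; the model is hypothetical.

def step(potential, input_current, leak=0.9, threshold=1.0):
    """Advance the membrane potential one time step.
    Returns (new_potential, fired)."""
    potential = potential * leak + input_current
    if potential >= threshold:
        return 0.0, True   # fire and reset
    return potential, False

# Drive the model with a constant input and count spikes.
v, spikes = 0.0, 0
for _ in range(10):
    v, fired = step(v, 0.3)
    spikes += fired
print(spikes)  # number of spikes over 10 steps
```

Nothing here depends on the simulator being made of neurons; any Turing-complete substrate with enough memory and time could run the same rule, which is what Aron's argument requires.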

Thus, if functionalists are right that all that matters is the causal processing of information, then it should follow that silicon computers can be conscious. If Functionalism is the thesis that any two systems running the same program would have the same experiences as each other (if any) then I probably agree with it, but I hold that if this is true it cannot be deduced from the laws of physics plus logic, so that it is not true that Consciousness reduces to causal information processing by definition.

-----

I read a bunch of Dennett back when I was thinking about this stuff in grad school, but it's been a while now since I've read him extensively, so please correct me if you think I'm misrepresenting him. He has certainly stated he disagrees with Chalmers about the logical coherence of p-zombies, which I think is sufficient to commit him to the proposition of a "first order necessary logical entailment of consciousness as a result of a particular physical description". Yes? If you agree, then I think my argument against his position stands.

The reason I picked him as an exemplar is that (a) he is not an eliminativist about consciousness: he thinks it exists as a worthy object of scientific study, and that we are allowed to talk about things like "beliefs"; and yet (b) he thinks it is possible in principle to give a complete explanation of consciousness in a way that refers only to physical data. However, due to our incomplete knowledge of the brain, he thinks that people's self-reports about their experiences also play a very important role in studying consciousness. As he says in Who's On First: Heterophenomenology Explained:

Most of the method is so obvious and uncontroversial that some scientists are baffled that I would even call it a method: basically, you have to take the vocal sounds emanating from the subjects’ mouths (and your own mouth) and interpret them! Well of course. What else could you do? Those sounds aren’t just belches and moans; they’re speech acts, reporting, questioning, correcting, requesting, and so forth. Using such standard speech acts, other events such as buttonpresses can be set up to be interpreted as speech acts as well, with highly specific meanings and fine temporal resolution. What this interpersonal communication enables you, the investigator, to do is to compose a catalogue of what the subject believes to be true about his or her conscious experience. This catalogue of beliefs fleshes out the subject’s heterophenomenological world, the world according to S--the subjective world of one subject--not to be confused with the real world. The total set of details of heterophenomenology, plus all the data we can gather about concurrent events in the brains of subjects and in the surrounding environment, comprise the total data set for a theory of human consciousness. It leaves out no objective phenomena and no subjective phenomena of consciousness.

Just what kinds of things does this methodology commit us to? Beyond the unproblematic things all of science is committed to (neurons and electrons, clocks and microscopes, ...) just to beliefs--the beliefs expressed by subjects and deemed constitutive of their subjectivity. And what kind of things are beliefs? Are they sentences in the head written in brain writing? Are they nonphysical states of dualist ectoplasm? Are they structures composed of proteins or neural assemblies or electrical fields? We may stay maximally noncommittal about this by adopting, at least for the time being (I recommend: for ever), the position I have defended (Dennett, 1971; 1987; 1991) that treats beliefs from the intentional stance as theorists’ fictions similar to centres of mass, the Equator, and parallelograms of forces. In short, we may treat beliefs as abstractions that measure or describe the complex cognitive state of a subject rather the way horsepower indirectly but accurately measures the power of engines (don’t look in the engine for the horses).

Basically, Dennett is unwilling to admit into the realm of scientific theory anything that is not obtained by 3rd person analysis. If the experimental subject says "I experience an ache in my arm", then he would claim that yes you should take that seriously as data that he probably believes he has such a sensation, but that once you have gotten a neurological theory which explains why he said those words, you have done everything you need to do, and there is nothing left to explain. (Whereas Chalmers would say, that since we also all have personal, 1st person experience of aches, we also have to explain our own conscious experiences of them.)

Thus, Dennett is prepared to be extremely skeptical when it comes to the reality of specific aspects of conscious experience. He believes it is possible that we are grossly mistaken about e.g. the existence of qualia. Now, I admit that there are neurological experiments which show that human beings may have false beliefs about the nature of their own experiences. For example, a naive person might think that they simultaneously experience everything in their visual field "at once", whereas in fact the eye flits back and forth between different objects, and as a result the objects we aren't paying attention to are much more indistinct and may even go unnoticed altogether. (Of course, once one learns this fact about perception from neurology books, one can see even from personal introspection that something like this is actually true. The same is not true of e.g. the experience of the qualia of red, which seems to persist no matter how many neurology books I read.) So I hope this makes it clear that I don't believe in the "Cartesian theatre" or the "homunculus fallacy" (as if there were a self apart from our processing of experience), yet I still think that there is something important Dennett is missing.

Ultimately the question here is just how much of our 1st person experience of consciousness we are prepared to strip away as a result of 3rd person theorizing. Dennett's view is that 1st person introspection shouldn't really count for anything, but I think this is clearly wrong as a matter of first principles. Our 3rd person experience of other people's brains, and of their reports, is itself part of our own 1st person experience; therefore we could not even have 3rd person data without first assuming that the 1st person data exists and is normally reliable. We would have no reason even to believe in other people if we did not experience them in our own consciousness. Data must be experienced by somebody to be data at all.

When we study an external object (say, a rock) we can study it through perception, whereby it appears through sensation in our own minds as an object of conscious experience (the "rock" in our minds). We have no direct experience of the rock's own "perspective", if it even has one. But when we turn our attention to study ourselves, there are now two avenues to knowledge: we can either introspect our own minds or observe ourselves "externally" through perception. In this case, and this case alone, we have additional information about the object being studied, apart from its causal effects on the outside world. If we observe ourselves externally, we may be confused about how colors like "red" and "blue" can exist in a lump of grey matter sitting in our skulls. But I think that neither the external nor the internal mode of observation should be confused with our self as it exists in itself (as God sees it, I would say); rather, both of them are incomplete and leave things out, and so we get our best theory of ourselves by synthesizing data from both perspectives. That is how I see it.

(I think you can probably deduce from the previous paragraph how I would critique Ontic Structural Realism. In the case of other objects we can only observe their causal-structural effects on our self, but in the case of human beings, we can have additional insight due to being our self. Of course if we didn't have any direct experience of our own mind, we wouldn't perceive anything else either, since as an indirect realist I believe we perceive external objects by means of "representations" existing in our own mind.)

A couple additional responses to what you wrote:

Now let's imagine what it is like to have this experience (wait, aren’t qualia ineffable? Never mind). Now let’s take away the functionalist aspects, imagine the qualia once you take away the memory linked reactive attitudes and dispositions you have concerning the ability to recognize and discriminate horses from other animals and objects. Now imagine you cannot discriminate different colors or shapes from each other. Or detect motion from background colors. Or discriminate and associate different pitches and sounds. Or discriminate hot from cool skin sensations, pressure, direction or location on body, associations with memories of previous skin sensations. Etc. etc. etc.

Can you honestly say you can conceive of what it would “be like” to have these qualia absent these discriminations, functional abilities, etc.?

No, I don't claim I can conceive of what that would be like, but I don't see how that refutes my position. I'm not claiming that human consciousness can exist apart from the processing of information or functional relationships, only that, if all you know is that the brain processes information in a particular way, it still seems to be an open question whether it is experiencing anything at all.

Just because we can't conceive of qualia apart from structural features doesn't mean they don't have properties of their own. For example, it's difficult to conceive of shape and color apart from each other. Anytime I imagine a shape, it seems to be some color, and anytime I imagine a color, there needs to be some object with some shape having that color. Yet, that does not imply that color reduces to shape (or shape to color).

Philosophers of mind disagree about which aspects of consciousness are the most mysterious to explain (and therefore to be regarded as unphysical, or nonexistent, or a nontrivial research project to reduce to physicality, etc.). For example, there are disputes about whether the debate about the "hard problem" should focus on qualia, beliefs, intentionality, etc. Your comment seems to suppose that the main problem is qualia, and that the functional relationships between the different sensations, combining them into a (partially) unified experience, are unproblematic. I can see why you would think I agree with that, since Chalmers and Dennett draw their battle lines there, but I personally believe the issue is broader than that. I was trying to avoid those details in my blog post, but I think these other aspects of consciousness are also mysterious, and even if you successfully eliminated the qualia I don't think you would have explained why I am aware of the relationship between the horse and its surroundings, or of anything at all.

(Also, qualia are supposed to be ineffable in the sense that it is hard to talk about what it is that makes red, red, and blue, blue. But they are obviously still experienceable, since we can perceive or imagine red or blue. Given that I can perceive a brown horse, I can also imagine a brown horse by somehow reliving my experience of brown while I think about it. Imagination and memory are a bit like perception, only weaker and less vivid, if that makes sense. But this is a side point, since my argument wasn't based on the ineffability of qualia.)

I wasn't trying to define experience. I was asking you to explain what aspects of experience you are referring to. My own view is that experience isn't a single unified "thing" but has many different aspects. Abilities to discriminate and identify colors, shapes, sounds, textures; our memory-linked reactive attitudes and dispositions to respond to stimuli; our ability to make associations between observations and our environment; our ability to model the spatial organization of objects in our environment, model counterfactuals, and make predictions about our future observations; semantic and visual associations between current stimuli and past stimuli; etc. etc. etc. I think a whole wide range of things go into what people refer to as conscious experiences and awareness.

And further I see no good reason to believe (though I am open to arguments and evidence to the contrary) that there are inexplicable and ineliminable aspects to consciousness beyond these that cannot be accounted for in such functional (and physically realizable) accounts.

But if you believe there are aspects to experience that can't be accounted for on such analyses, I would like to hear what you believe those aspects are and why.

All of it! I think all aspects of your conscious experience, insofar as you are really conscious of them, cannot be accounted for in terms of a purely physicalist analysis. It is not that I am asking you to identify some particular, highly subtle aspect of your conscious experience which is ineffable and mysterious. The thing that is mysterious is that you are aware of anything at all! (This is not to deny that there may be mysteries associated with individual aspects of consciousness, but they are part of a bigger mystery.)

However, each of the elements of your description of the subcomponents of "consciousness" is ambiguous. For example, if I consider the "ability to discriminate and identify colors", do you mean that you actually have an experience of the two colors being different? Or do you merely mean that there is the ability to sort colors, using a camera and some internal filesystems, as measured by an external laboratory observer? Because the latter seems to be compatible with the nonexistence of any consciousness whatsoever. As you say, "we currently have computers that identify and can discriminate red objects from blue ones", but few people think that contemporary robots are conscious! So in that sense they can't really discriminate or identify colors in the sense that a human being can; they only blindly and unconsciously follow a certain program which we (looking from outside) might call the ability to recognize colors, but they don't experience anything at all. And the same goes for all the other abilities on your list.

I am not, of course, denying that there is a correspondence between the physical information processing in our brain, and the relationships between the various objects of our conscious experiences. Nor do I deny that a sufficiently sophisticated robot might be conscious. I merely claim that the identification between the physical and mental does not follow from pure logic and physics alone, but requires some additional substantive assumptions about metaphysics.

I (and having thought quite a bit about this I can say this in all honesty) don't know what you are referring to. Now maybe that means I am a philosophical zombie, or am just not aware that I have these properties of experiences you are referring to (which would be a strange case, considering what people often claim about them).

But, I think you actually do know what people like me are referring to when we talk about Consciousness. Are you aware of anything at all? If so, you aren't a p-zombie, and that is what we are talking about.

26. Ben Crandall says:

Cedric,

Maybe at least on this issue we aren't terribly far apart, though I think some things are lost in the tendency to oversimplify and the human desire to categorize things.

That way of looking at things is not a million miles from behaviourism.

I agree, there are many threads, historical and philosophical, that connect behaviorists and functionalists... However, I might mention again that functionalism in one sense predates behaviorism: as I have mentioned, functionalists like Putnam were directly inspired by Aristotle, and though (like everything in philosophy) people debate in what sense and to what extent Aristotle was a functionalist, it is undeniable that he served as an inspiration.

I would also mention that "behaviorism" is often applied to a wide range of often different views and is often caricatured and used as a derogatory term. Ryle, Wittgenstein, Quine, Skinner, Dennett and others have all been labeled "behaviorists" at times (even though many of them would reject the label), however if you read their work (not to assume you haven't already) I think you will find that they have rather sophisticated and thoughtful views (setting aside whether those views are correct or rational) that bare only a minor resemblance to the cartoon picture critics paint.

That is fine as far as it goes, but I would say that IN ADDITION to this there are mental experiences! But perhaps I'm just being awkward.

No, I think you are being articulate (and you may even be right!), but my impulse is to think you have in mind the idea of the Cartesian theater. Chalmers has talked about experience as the amazing "movie" playing in your head all the time... in vivid color, with amazing surround sound, 3D, and a rather continuous monologue narrating your thoughts. Is that the part of experience you are thinking of? That when there is a green car in the world, you "see" the green car in the "theater of your mind"?

If that is what you mean, I would just suggest thinking it through carefully to see if it makes much sense. What would a basic outline look like? The light from the green car hits your eye and is converted into neural charges and patterns of activity in nerve cells; we know various parts of the brain are responsible for determining and discriminating aspects of color, shape, etc. And then what happens: the signal is converted back into colors and shapes and turned into mental experience, for what purpose? To detect the shapes, colors, sounds, etc. all over again? Why would the brain be concerned with identifying shapes, sounds, etc. if they are just going to have to be RE-discriminated and detected in experience a second time? Now this might be very far off from what you are saying or believe, but I can't say I have ever encountered anyone who can give even a plausible rough outline of what such a view would mean. And when you are considering your view, think about everyday facts such as: if I reach out and touch an object, where do I "feel" it? On the tips of my fingers? But how can that be? How can my "experience" of the touch sensation be at the tips of my fingers, when we know the information is converted into electro-chemical signals and sent to my brain? Or does the conscious "experience" reach out and "touch" the object independently of my nerves? (If so, why have nerves at all?) Or do I REALLY "feel" the object somewhere in my head, brain, or soul? In which case my "experience" of having the feeling at the tips of my fingers is just a mental projection.

And I would also consider less ordinary facts, such as split-brain patients... Have you seen videos of or read about such cases? Where the brain is essentially severed in half and can seem to have two independent sets of beliefs, knowledge, and wills. Would that mean there are two experiencers?

Now maybe you have thought about these and have very satisfying answers, I am no expert on these topics either, but I have read and thought about them quite a bit... And I feel much of this might not be new to you, but hopefully I have at least framed it in a way that makes it of some interest to think about. Your comments and thoughts have been appreciated.

27. Aron Wall says:

Ben,
I think our comments crossed each other. My previous comment had something to say about the "Cartesian theatre", but I'll say a little bit more to your most recent comment.

I don't think it is necessary for somebody like Chalmers or Cedric or me to have a neurologically naive view that all consciousness is present "at once" to the person. For example, I have no problem with saying that a split-brain patient has two different streams of consciousness, or more generally that a normal (non-split-brain) human consciousness is probably composite and consists of several different strands with various degrees of relative unconsciousness to each other. Nevertheless, our consciousness is not completely fragmentary, so apperception (the experience of multiple things simultaneously) is clearly also a thing that needs to be explained.

Also, I do not believe in a second, ghostly self that reprocesses and experiences everything a second time. We are trying to figure out how our brain experiences anything the first time.

28. Ben Crandall says:

Cedric,

I apologize and hope I am not overwhelming you with comments.

I did want to get your thoughts on one more thing (and if you don't have time to respond to all of my comments I would possibly be most interested in your response to this one if you are able).

I am not sure many people recognize how radical the position of someone like Chalmers (and the view you seem to be hinting at) essentially is. It leads to an essentially epiphenomenal view of conscious experience: that "experiences" have no logical effect on your behavior, or even on your beliefs and desires.

I find that a rather incredible view to hold; is it not? On the view that conscious experience is logically independent of function, one would have to say that your belief that you have phenomenal experiences does not result from your phenomenal experiences. That your belief "I am experiencing a red sunset right now" isn't a result of your actual experience (p-zombies also enter into the functional states of believing they are experiencing sunsets, etc.).

"An alien visitor may agree with our judgements of colour. It may agree that there is a difference between a ripe and an unripe tomato. But perhaps the alien is experiencing something different when it sees a ripe tomato. Perhaps when the alien sees something red it is actually experiencing what I would call green."

But why stop there? Set aside aliens. If the view of consciousness I take you to hold is true, when you see a ripe tomato and turn to your friend and say "doesn't that tomato look ripe?" and they say "yes, it is great. Buy it," what if their experience of seeing a ripe tomato is exactly the experience you would have in a horrible and painful car wreck? Or we don't even need to appeal to others. How do you know your OWN experiences are the same from day to day? What if yesterday green tomatoes looked "red" experientially to you, and now the opposite is true? And every day your red and green experiences switch back and forth, and the only reason you don't notice is that your memory-linked functional attitudes also switch back and forth every day. It looked "green" to you yesterday, but your memory reassigns it to your current beliefs concerning "red" appearances.

How would you know, if experiences are these things that logically "float on top of" and are independent of functional relations? Heck, maybe every other day we turn into p-zombies and don't realize it :)

I am using these dramatic examples because I think many people have a gut reaction to the views of people like Dennett (partly, I think, because they misunderstand his views)... But maybe this helps explain why I find the alternative view even MORE strange and counter-intuitive.

Don't you find the above view strange as well?

(And I am aware some people try to rectify the above problems in various ways, but I don't think it can be done without essentially destroying the intuitive force of the original position)

29. Aron Wall says:

About the Epiphenomenalism issue, while I understand the intuition that it sounds crazy (indeed, it is probably the best argument against Chalmers' view), I don't think that is decisive because:

1) That is really an appeal to Occam's razor, but I explained in my post above why I don't think one should use Occam's razor to make assertions about what is logically necessary. If in fact my position can be demonstrated deductively from the nature of consciousness and physics and logic, then plausibility arguments against it are of no avail.

2) I certainly don't think it's a big fat unexplainable coincidence that I say "I am experiencing a red sunset" at the same time as I actually am experiencing the red sunset. What this tells us is that the contents of our conscious experience make sense, that they are related in a coherent and interesting way to the information processing going on in our brain. There are other reasons that things might correspond, besides it being deducible from logic alone.

For example, if Theism is true, and if God wanted to create a community of persons aware of themselves and each other, then God would certainly have a good reason to want to make the metaphysical bridging rules relating consciousness and physics be such that we could actually coherently talk to each other about what we are experiencing. But I think even a Naturalist epiphenomenalist might reasonably think that there could exist a good reason why the contents of consciousness have to "make sense" in this way. A Property Dualist might say that objects have both causal and noncausal properties, and that we might expect the two sets of properties to be meaningfully related, since after all they are properties of the same thing.

(I guess in some ways your OSR position might be considered an extension of your anti-Epiphenomenalist position to everything. The only properties a thing can have are how it affects other things. But stated in that extreme form, it doesn't seem to me at all obviously true. Why can't a thing have intrinsic properties?)

30. Ben Crandall says:

Aron,

Thanks for the very thoughtful and thorough reply... Almost missed the first one! A lot to respond to, very well written and in-depth response (something I am always happy to see... In the age of Twitter I find people will go so far as to get upset if you give a long and thorough reply, but I also have trouble being concise which doesn't help). I will try to respond when I have some more time.

31. Cedric Brown says:

Aron

I am generally sceptical about the possibility of conscious computers but I will concede the point for the sake of argument. What I find odd is the idea that we would need some fantastically elaborate AI to simulate consciousness. If we can programme a computer to perform a specific task like playing chess, then why can't we programme a computer to experience the colour red and not much else? Why can't we isolate experiencing red in the same way as we can isolate chess playing? Would we need to simulate everything about human mental activity just to get a machine that experiences redness?

Ben

I am familiar with the Cartesian theatre argument. As I recall, Dennett imagined that his opponents would be committed to the homunculus fallacy whereby a little man would need to be sitting in the Cartesian theatre watching the show and another little man would need to be sitting inside the first one and so on.

I shall have to give that one some thought. Stay tuned for some "very satisfying" answers. Or maybe not :-) Anyway I would like to thank you for a stimulating discussion.

32. Cedric Brown says:

Ben

Certainly there are unsolved problems with any theory that doesn't regard consciousness as being reducible to brain processes. If consciousness is separate then how does it interact with the brain? If it doesn't interact then there is the problem of epiphenomenalism, as you indicate.

Regarding split brain cases, you might like a solution offered by Richard Swinburne: the soul resides in the left cerebral hemisphere.

33. Ben Crandall says:

Cedric,

I don't have much time right now but I was interested in your question about programming something that experiences red and nothing else. On my view of things this doesn't make much sense. Let me explain why with an analogy.

My dog is pretty smart, though obviously he will never be able to know all the things I do... But maybe (I think to myself) I can teach him to understand one concept (analogous to your question about experiencing a single color "red"). So I decide to teach my dog what "pizza" is (which shouldn't be hard as he already is excited and pays close attention whenever it is around). How would I do this? What does it mean to know the concept "pizza?" Does it suffice that he will go grab the pizza any time I exclaim the word "pizza?" Will I consider the task a failure if the dog is never able to distinguish a pizza from focaccia bread or a calzone? Or does the concept of pizza require one also know other concepts like cheese, bread, tomato, etc.?

My view is that (although the answer is indeterminate in some ways) the common usage of "concept" involves something like the latter. One can't just have a complex concept like "pizza" without having an interconnected web of other concepts, semantic relations, and other complexes of relations and ideas.
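To make the "web of concepts" picture concrete, here is a minimal sketch (the particular nodes and links are my invention for illustration, not a theory of concepts): a concept is represented by nothing but its relations to other concepts, so without its neighbours "pizza" has no content at all.

```python
# A toy semantic network: each concept is defined purely by its
# relations to other concepts. The specific nodes and links here
# are invented for illustration.

CONCEPTS = {
    "pizza": {"made_of": ["bread", "cheese", "tomato"],
              "kind_of": ["food"],
              "distinct_from": ["focaccia", "calzone"]},
    "cheese": {"made_of": ["milk"], "kind_of": ["food"]},
    "bread": {"made_of": ["flour"], "kind_of": ["food"]},
}

def content_of(concept):
    """A concept's 'content' here is nothing but its web of relations."""
    return CONCEPTS.get(concept, {})

# "Pizza" is picked out only by how it connects to, and differs from,
# other concepts -- delete those and nothing of the concept remains.
print(sorted(content_of("pizza")["distinct_from"]))  # ['calzone', 'focaccia']
```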

I would say the same about an experience like "the color red." One cannot JUST have an instance of such a single experience, and there is science to back this up. Consider someone who has been deaf all their life and then gets a cochlear implant which allows them to hear. Do they just all of a sudden have a single experience of hearing the word "hello?" No, in fact these patients describe a rather overwhelming and very confusing sensation. They don't really know how to describe or categorize their experience... It is only through practice and adjustment that they eventually learn to interpret their auditory input. Similarly, identifying an experience of "red" only makes sense in relation to other colors, visual experiences, etc.

As someone who studied music in college I can give a personal example as well. We had classes called "ear training" where we learned to identify features of music such as the relative frequency distances between notes (the "intervals" between pitches), etc.

Now if I hear a song like "Here Comes The Bride" I can identify the melodic distance between the first 2 notes as what is called a "perfect 4th." Now I had heard "Here Comes the Bride" many times before I ever learned what a perfect 4th sounded like (or a diminished chord, major chord, augmented chord, etc.).

Now did I experience hearing a perfect fourth before I knew what one was? Could I say I knew what it was like to experience a diminished chord even before I could tell the difference between an augmented chord and a diminished chord? Consider a child: is it that hard to believe that when they are shown a red object they don't really experience it as red until they are able to distinguish it from other colors? If you find this hard to believe, consider how many children have trouble telling the difference between the letters "d" and "b" even though it is obvious to most adults... If they experience them as distinctly as we do, why is it so difficult to tell them apart? Doesn't it make sense to say that part of the EXPERIENCE of a color, shape, etc. relies on learning to discriminate between them?
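The ear-training example can be made concrete: interval names are just labels for semitone distances, learned by discrimination. Here is a small sketch (12-tone equal temperament assumed; the lookup table is standard music theory, but the code itself is only my illustration):

```python
# Naming a melodic interval from its size in semitones.
# Assumes 12-tone equal temperament and intervals of 0..12 semitones.

INTERVAL_NAMES = [
    "unison", "minor 2nd", "major 2nd", "minor 3rd", "major 3rd",
    "perfect 4th", "tritone", "perfect 5th", "minor 6th", "major 6th",
    "minor 7th", "major 7th", "octave",
]

def interval_name(semitones: int) -> str:
    """Name an interval of 0..12 semitones (direction ignored)."""
    return INTERVAL_NAMES[abs(semitones)]

# The first two notes of "Here Comes the Bride" are five semitones apart:
print(interval_name(5))  # perfect 4th
```

Of course, having heard many rising fourths is not the same as being able to apply this label to them; that gap between the raw input and the learned discrimination is exactly the point of the example.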

Finally (this turned into a much larger comment than I intended).

How do you know we haven't programmed a computer to experience red? (I am only half joking.) We have computers and cameras that can "see" red; how do you know they don't experience it? And if you think that is preposterous, is it really any stranger than thinking a butterfly or a pigeon sees red? Why assume insects or animals experience it and not computers?

34. Cedric Brown says:

Ben

OK, I'll go along with that. The experience of red only makes sense within the context of other experiences. Red is red because it isn't blue, green, etc. But once the perceptual system is up and running we can start to experiment. You can electrically stimulate my retina so that I experience the colour red. That suggests that the redness is an arbitrary quality that has been assigned to a particular input in order to distinguish it from other inputs. So, in theory, when you design your AI system which can discriminate between colours you can specifically programme it so that it uses the particular quality of redness to identify light of a certain wavelength.

But that wouldn't be possible. Redness isn't something that could ever be programmed into a computer. Now, I know that you will reject any talk of redness as something in addition to discrimination between wavelengths of light, but that is where we differ.
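To see where the disagreement bites, here is a minimal sketch of pure wavelength discrimination (the band edges are rough textbook values I chose for illustration). On my view, nothing in this function is redness, however well it discriminates; on your view, discrimination of this sort may be all there is to it.

```python
# Discriminating wavelengths of light is trivially programmable.
# Whether anything here "experiences" red is exactly the point in
# dispute. Band edges are approximate values chosen for illustration.

def colour_label(wavelength_nm: float) -> str:
    if 620 <= wavelength_nm <= 750:
        return "red"
    if 570 <= wavelength_nm < 620:
        return "yellow/orange"
    if 495 <= wavelength_nm < 570:
        return "green"
    if 450 <= wavelength_nm < 495:
        return "blue"
    return "other"

print(colour_label(680))  # red
print(colour_label(520))  # green
```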

35. Hamid says:

Cedric,
It might be possible that one is only partially conscious of the color red for possibly infinitely many different reasons, just as a computer might recognize and discriminate redness only partially, and just as one might not recognize that both Dennett and Chalmers are associated with Hofstadter, whose book Gödel, Escher, Bach was once taught jointly through the math and philosophy departments at Holy Names College, a class which I did not attend but only heard about from my math instructor. I later bought the book and heard about the beauties of the Musical Offering of Bach. The nice thing about artificial intelligence is that one might be able to translate one's knowledge to the computer algorithmically and then translate it into another language which might be only algorithmically the same although they look totally different. And thus it might be useful to look at both Gagliardo and Uhlenbeck's "Geometric aspects of Kapustin-Witten's equations" and Kapustin and Witten's "Electric-magnetic duality and geometric Langlands program." It might then happen that one can program the computer to correspond the spectroscopy of colors, say, ranging as blue with possibly different L-functions in the geometric Langlands program. Similarly, there might be a reason to consider the relation of wimps and machos in cosmological general relativity and its connection to consciousness as it enters the heart. Similarly, consider the similarities of the two verses in the Bible, Mark 10:25
It is easier for a camel to go through the eye of a needle than for someone who is rich to enter the kingdom of God.
and Quran 7:40
To those who reject Our signs and treat them with arrogance, no opening will there be of the gates of heaven, nor will they enter the garden, until the camel can pass through the eye of the needle: Such is Our reward for those in sin.
The question arises whether machos are rich enough to be entirely made up of wimps or that can not be the case no matter how theories of consciousness are formulated. E.g. if I write to E. Witten or C. Vafa and do not get an answer and am able to write comments to you and get answers, it might be that I am still very partially conscious of what is going on mathematical physically although I am still a very good p-zombie and have not got the crest of standard model and beyond in general relativity Langlands geometric programmable-wise!

36. Aron Wall says:

Robert,
In addition to the Wikipedia and S.E.P. links that I already cited, there seem to be a bunch of other tutorials online. But I'm not sure which to recommend, since modal logic is something I've picked up in bits and pieces from various places. One very important point (made e.g. here) is that the symbols of modal logic don't have a unique interpretation; they are the sort of thing that comes up in many contexts.

kashyup,
I'm not sure what you have in mind; it's not obvious to me how this modal logic argument could be applied to interpretations of QM.

Christians do indeed believe there is an "extra-sensory" spiritual world out there, but experiencing it isn't really the main point of the religion; if anything it is a potentially dangerous distraction from what we are meant to do here on Earth. Although, it is very important to us that people continue to be alive in Christ after they die here. It is our relationship with God which is the center of our faith, and mysticism is just one possible path to Christ (and not the safest one either).

g,
Whether or not you want to reply is entirely up to you. I promise not to interpret the lack of a reply as if it were a concession speech... But if you do have any comments I would be interested to hear them.

Mactoul writes:

The point about logical conflict between
Team Chalmers: It is conceptually impossible for Consciousness to be fully explained in strictly physical terms.
Team Dennett: It is conceptually impossible for p-zombies to exist.

holds only when a certain view about physical systems -- that they are exhaustively described by the laws of physics, and thus there are no qualitative features in physical things -- is adopted without reservation.

Since Chalmers would agree with you that there are qualitative properties of physical systems which cannot be exhaustively described in terms of the laws of physics, while Dennett thinks there aren't, I think it would be better to say that you agree with Chalmers, rather than to deny that there is a logical conflict between Chalmers and Dennett. (Although Chalmers is not a vitalist, like you seem to be.)

37. Mactoul says:

"How do you know we haven't programmed a computer to experience red?"

Because redness is a quality, and computers and programs don't do quality, by definition. A program is a formal procedure to produce certain outputs from certain inputs. It does not experience anything. The entire idea of a computer experiencing anything is nonsensical.

A computer is an artifact and does not have the kind of unity that an organism such as a butterfly possesses. This unity allows organisms to have experiences: there is someone or something there to have them.
Artifacts lack this quality altogether.

Now, perhaps man can build an artifact possessing organic unity, but it won't be a programmable entity, for programs are formal procedures having no scope for organic unity.

38. Mactoul says:

Aron,
Your comments about Godel's theorem are unclear and confusing, and your conclusion that an implication of Godel's theorem ("that the ability of human beings to reason about math proves that our intellectual capacities cannot be reduced to computation") is bunk is unwarranted. Indeed, you misstate the very point for which Godel's theorem is used.
You say
"Gödel's theorem only states that a formal system for proving mathematical truths by rote cannot be both complete and consistent. Whereas human beings reason primarily by informal methods, so Gödel's theorem does not seem to apply to us in any obvious way. So this does not prove intellect cannot be reduced to computation, because (a) there is no reason to think that human beings are capable of proving all true arithmetic propositions, and (b) there is no reason to think an intelligent AI couldn't reason about mathematics in an informal way"

i) Godel's theorem states that in a consistent formal system, there are true statements which are unprovable within the system.
This implies that
a) Human intellect (which perceives the truth of Godelian unprovable proposition) is not reducible to a formal system.
b) Computation IS a formal procedure. Thus, human intellect can not be reduced to computation, contrary to what you state,
c) The sense of the term "informal" in "human beings reason primarily by informal methods" is obscure.
What precisely do you mean?
d) Your (a) is a red herring in the present context. Indeed, Godel's theorem itself states that there exist true statements that can not be proved. However, it takes a human intellect to perceive that a given unprovable statement is actually true.

39. Cedric Brown says:

Hamid
Thanks for your comment. I can't say that I see the connection between the aspects of physics that you mention and consciousness. However, I would be interested in hearing your thoughts on the possible connection between chaos and free will.

40. Hamid says:

Mactoul,
Redness is a quality which can be quantified alright otherwise it could not have been transmitted through antennas into homes with color tvs which nowadays we also have their HDMI versions. In general, one might come to believe that consciousness can not be realized by AI or computers. However, it might help to notice how say a patient who has lost his olfactory sense might regain it properly by all kinds of precision instruments equipped with computers assisting the physicians, medical doctors, and other experts attending the operation. Nonetheless, the cured patient must be careful to learn how to compute both arithmetically as well as geometrically how to preserve the olfactory sensory system quantum field theoretically. Namely, it must not be the case that he would learn the computation alright at the price of possibly losing another or even more than one other sensory system not to mention fussing to the point of losing consciousness over the matter due to being too much of superstitiously obsessed p-zombie.
And Aron,
considering the infinitely many situations in which one might be able to keep (or lose) the olfactory sensory system properly might need infinitely many algorithms for which the computer might fail due to overheating after all in how to do the proper computations. Someone might object why worry about the olfactory system after all since the computer is a mortal one anyways. But that seems to be the easy problem and the hard problem might turn out to be how can actually quantum general relativistically an entity could possibly be both a wimp and a macho. In fact, it is always a question for me whether a star like our sun's energy is turned into photonic light only at just a veneer on its surface, or do the nuclear reactions burn it all the way through like a candle and we have strong and weak and electromagnetic reactions although we might or might not have gravity unified into everything else. You might call me stubborn. But it might be very useful to learn supersymmetric geometric Langlands program for fine-tuning the masses of the elementary particles involved to learn about the MSSM (minimal supersymmetric standard model) general relativistic gravitationally. There is a nice poem by Saadi in Persian that the candle says to the complaining burnt butterfly that if her wings are burnt a bit I am actually burnt from head to toe all the way. (http://www.shahriari.com/poems/8solj/sj75.htm) And we have not even yet got to the computations of unhiggsings string theoretically.

41. Aron Wall says:

Hamid,
I'm very sorry to say this, since you seem like a nice person, but it seems to me that your comments hop from topic to topic based on "free association", and they don't really make any sense to me (or to anyone else here). I'm going to have to ask that you stop leaving any more comments like these ones on my blog (since they confuse other people and don't move the conversation forward).

For now, you are still allowed to post comments as long as you confine them to a single topic of discussion, and don't go rambling off into speculation about random unrelated math and physics theories. (Random quotes from Sufi poets are okay, in moderation.) If you can't keep to these rules, you won't be allowed to comment here anymore.

(However, the book you mention, Godel, Escher, Bach is certainly relevant to the topic of consciousness, and excellent too...)

Cedric,
The reason why you can't see the connection between the areas of physics that Hamid mentions, and Consciousness, is that there is no connection between them! So please do not encourage this sort of thing further.

42. Aron Wall says:

Mactoul,
(a/d) Human beings can perceive some Godelian "unprovable" statements to be true, but it is far from obvious that we can see all Godelian statements to be true or false. Thus I expect there will be some mathematical statements which are unprovable by any human being.

(b) is wrong since it confuses two different meanings of the word "formal". Just because computation involves a formal procedure does not mean that it is a formal system in the sense required by Godel's theorem. Godel's theorem assumes that (1) the system reasons directly in a language of arithmetic symbols, (2) that it mechanically applies certain logical rules of inference (including at least certain basic arithmetic rules), and (3) that it is consistent.

But there is no reason why a computer program would necessarily have those properties. For example, Artificial Neural Networks are computer programs, but they don't have any of these properties (1-3) and so Godel's theorem doesn't apply to them.

(c) Whereas human beings (1') communicate in terms of imprecise poetic languages such as English, (2') are taught these arithmetic rules from a more general set of imprecise heuristics, and (3') can sometimes believe or assert inconsistent things. I see no reason why e.g. an Artificial Neural Network couldn't have these properties.

----

Note that I am talking about the behavior of the system as a whole, not the bottom level programming language. The AI might not even be "aware" of the programming language it is written in, any more than we are aware of the firing of individual neurons. (I put "aware" in scare quotes because I'm just talking about apparently intelligent behavior here, not actual consciousness. The universe may well be such that any sufficiently intelligent system is also conscious, but I'm not assuming it here.)

But even the bottom level programming system isn't quite a formal system in the sense of Godel. It's a programming language, which is a series of instructions, not a series of propositions.

----

You might think that (3'), the ability to be inconsistent, isn't all that useful of a property to have, but I think one of the lessons of Godel's theorem is that it is. Actually being inconsistent isn't all that great, but reasoning by intuitive/associative methods which sometimes lead to inconsistency, but more often lead to insight, is of course quite useful.

What do you think of the following statement: "Mactoul cannot consistently believe this sentence to be true"? It seems obviously true to me; does it seem true to you? In a way this is a sort of informal analogue of the Godel proof as applied to human beings...

As a human being, you have the ability to laugh this off rather than taking it seriously. That is part of what I mean by "informal". There is no reason why a neural network AI couldn't print "ha ha very funny" in response to a similar verbal statement. That is another feature of informal reasoning: the ability to give paradoxes a more nuanced response than a bare "True" or "False".
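The point can be made with a deliberately crude sketch (nothing like a real neural network, and the self-reference test is invented for illustration): a program's possible outputs are not limited to "True" and "False", so it can shrug off a Godel-style sentence rather than being trapped by it.

```python
# A toy responder whose output type is free-form text, not a truth
# value. The "this sentence" detection rule is a crude stand-in for
# whatever a real system would do; it is invented for illustration.

def respond(statement: str) -> str:
    if "this sentence" in statement.lower():
        return "ha ha very funny"
    if statement.endswith("?"):
        return "let me think about that"
    return "noted"

print(respond("You cannot consistently believe this sentence to be true."))
# -> ha ha very funny
```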

Some additional discussion of these issues may be found in Godel, Escher, Bach (look up "Lucas" in the index).

43. Scott Church says:

I must say, I too am having trouble following the "free associations." Last time I checked, the topic of discussion here was consciousness and falsifiability. I'm not seeing what Langlands geometry, the MSSM, or minimal SuSY have to do with either.

44. Hamid says:

Aron,
I understand that you might not want me to write anymore. It is your website. And thanks for calling me nice. As I told this once to a professor of physics specializing in cosmic rays, he told me that I can also get really nasty!!! I am not rambling off into speculation about random unrelated math and physics theories. The point is that math and physics are indeed part of analytic philosophy, whether one is theist or atheist. Isn't that the case? And so it is nice to have mathematical physics theories that make a correct correspondence between the mathematical world and the natural physical world. And supposedly some people may argue that you can eliminate consciousness from the scene and not to mention any further need for p-zombies either. Gödel's theorem as mentioned in Gödel Escher Bach possibly shows us that the problem is not this simple. As Morris Kline the mathematical historian would have said, if the spider web in the basement is torn down then it does not mean that the entire building has fallen apart although it might appear so to the spider as such. AI definitely does help the human mind, but it can not solve a problem such as fine tuning. And supersymmetric fine tuning certainly might help adjusting particle physics and correspondence between mathematics and physics. But at the same time it does not prove that mathematicians or physicists know consciously what they are experimenting on in nature. And that's the reason it appears to me that the supernatural- or the metaphysical- or theological-oriented philosopher wishes to abandon the ways of the naturalist philosophers. Not that they wish to denounce the whole worldly living, but that their claim is possibly that the natural philosopher in being too exposed to the material world loses or might lose it altogether quite against his or her expectations! And thus it might appear to the natural philosopher that a believer in God might be a philosophical zombie. Ha ha ha very funny!
Of course, a natural philosopher might always disguise his belief whether he or she is a theist or not, although some people might call it dishonesty, or there might be a confusion as to what extent he might be a believer in God; and then there are different philosophical arguments about the nature of God himself and the different religions there are ....

45. TY says:

On the lighter side of "free association", I am reminded of Procol Harum's "A Whiter Shade of Pale". It's one of my favourite pieces of music but to this day I don't know what the song is about.

46. g says:

OK, I've now read what Aron wrote ... and of course I come back here and find a lengthy extra discussion in comments! I hope the following isn't too badly obsoleted by any of that discussion. (I've looked over it briefly and I think it isn't.)

So, first of all: I used the word "falsifiable" and really I should have said something more careful like "empirically checkable". I'm pretty sure Aron agrees with me that what matters about a theory is not its susceptibility to being conclusively refuted if wrong but its susceptibility to having good evidence found against it if wrong. This is what I meant by "falsifiable" and I hope Aron understood it that way.

I regret that what follows is basically the result of reading through Aron's post and making comments on it section by section; it is probably best read with the corresponding bits of Aron's post in parallel with it. It would have been better to write something more self-contained but it would have taken twice as long. It may be helpful to observe that in my opinion the most important part of what follows is the two consecutive paragraphs the first of which begins by talking about "burden of proof".

Now, what of Aron's argument? I agree that eliminativism is probably silly (though maybe there are some definitions of "consciousness" that deserve it as a response). I don't agree that "conceptual truths aren't falsifiable", or more precisely I don't agree that "propositions that are either conceptual truths or conceptual falsehoods, depending on whether they're right, aren't empirically checkable". That would probably be true if we were perfect reasoners who are instantaneously aware of all logical consequences of any proposition we contemplate, but we aren't, and sometimes our best way to find out those logical consequences is by experiment. Perhaps an experiment of a sort that would give the same results in any possible world, like checking some number-theoretic property of the first hundred prime numbers. I assign a probability of 1/10 to the proposition "the billion-billionth digit of pi is 4", even though in fact it either is or isn't, in all possible worlds. I think the Riemann Hypothesis is probably true, and I can imagine (as I am sure can Aron) all sorts of empirical evidence that would make me think it more or less likely than I do right now, without resolving it definitively. Conceptual truths very decidedly are empirically checkable, at least some of the time.

I think the point of the section of Aron's post entitled "Can you tell me a story?" is that he thinks the answer is no, because there is no possible story that would lead him to conclude that consciousness is a physical phenomenon. I have to admire the chutzpah of this; I complain that some anti-physicalists hold unfalsifiable views, and part of his counterargument is to say "Look, my view is unfalsifiable, and that shows that you are wrong to oppose it!". Anyway, my answer is that I can tell a story that -- so it seems to me -- would, if things happened as it describes, lead any reasonable person to agree that consciousness is a physical phenomenon; but that some anti-physicalists are (on this particular issue) not reasonable and would doubtless decline to accept that conclusion.

The story goes something like this. After years of toil,
a team of the world's best neurologists, biochemists,
cognitive scientists, AI researchers, etc., announces
that they have completely sussed the human
brain. At the lowest level, they have a detailed
quantum-mechanical analysis of all the molecules
found in the brain and of how they behave in the
environments they're found in. They work up from
this, step by careful step, to a highly accurate
mathematical model (and computer simulation) of
neurons and of the other cells found in the brain;
then, of commonly found simple configurations of
neurons. At each step they have not only a "low level"
model derived from the steps below, but also a new
higher-level understanding that describes how the
structures they're analysing behave. So, several
layers up from individual neurons, they might have
little modules like "horizontal edge detector" (in
the visual cortex) and "roughly sinusoidal pulse
generator" (in the motor cortex). Several layers
up from these, perhaps we start getting things that
feel more "meaningful": neural circuits that discriminate
smiles from frowns or that respond to minor-key harmonies
in music, perhaps. Their model gives some justification
for these high-level descriptions and also explains how
the actual behaviour diverges from the high-level description
(this would be part of an explanation of some optical
illusions, for instance). And so on, up to much higher-level
structures in the brain and mind. This is a model
of "software" and "data" as well as hardware, dealing
not only with particular patterns of neural connection
but with particular patterns of neural activation.
Their model is parameterizable and can be applied to
any human brain in reasonable working order (and it
also gives insight into ways in which they break down).
When applied to a particular human brain, they can e.g.
do this with it: take a given real-world situation, feed
it in simulation to their computer model, follow in
detail everything that happens (down, if necessary, to
the molecular level, though their model allows them to
use approximations at the lowest levels and justifies
the empirical observation that doing so almost never
makes any difference to actual behaviour) as their
simulated brain responds to that situation -- they
have adequate simulations of muscles, vocal cords, etc.,
to connect the simulated brain to -- and see what the
person would actually do in that situation. Their simulation
has never yet given results that differ detectably from what
the person whose brain they're modelling actually does.
Further, because they have this fancy multi-level model,
they can give psychologically meaningful explanations at
a high level (he says that because he is worried about
his performance at work and because he read that book
the other day), and trace the lower-level mechanisms that
implement these higher-level things (these groups of neurons
here are basically concerned with his work performance --
you can see how they tie to these other ones that we saw
doing their thing when we simulated him having an argument
with his boss, etc. -- and these ones here are where his
memories of that chapter are mostly stored, and we can
see how they link to these other neural circuits concerned
with related matters) and go further down (this structure
here is one we see all the time for modelling a person's
concern about how others think of them ...) and further
and further, all the way to the lowest level. And when we
do this (I am now basically recapping the story I have
already outlined) we find (1) that a physically accurate
simulation down to the molecular level or below correctly
predicts human behaviour (so whatever's going on with
consciousness, its outward consequences match those of
a purely physical model), (2) that we can match up features
of this purely physical model with recognizable psychological
phenomena, (3) that we can see how the higher-level things
are implemented in terms of lower-level ones (a bit like
the way we can see subtle "positional" understanding in
a computer chess program emerging from lower-level nuts and
bolts about what pieces are where, together with a little
tree-searching), and (4) that all this continues to apply
to the simulation's talk of its conscious "experiences", its
understanding of its "self", and so on. Oh, and let's add that (5) when, using
the wonderful apparatus this team has developed in the
course of its work, we observe the actual neural firings
and chemical sloshings-about inside the actual human
brain we're modelling, we find that they exactly match
what the model describes.

If all that happened, then of course you could insist
that consciousness is none the less non-physical. You could
say that when a human talks about their conscious experiences
the movement of their vocal apparatus is caused by actual
consciousness, whereas the apparently-isomorphic-at-all-levels
computer model has the same behaviour but with a completely
different
cause -- since its talk of experiences doesn't
come from actual consciousness but from "mere" (simulated)
physics. But I, for one, would think that a move of the utmost
desperation; and if the story I told above actually came to
pass then I think 50 years later essentially no one with any
scientific expertise would deny that consciousness is physical.

(More precisely, that consciousness of a physical system
is physical. You might very well want to say that a sufficiently
detailed computer simulation, like the ones the scientists in
my story have developed, is also conscious; that a similar
simulation running on hardware in some other universe with
very different physics is also conscious; etc. All of this
is consistent with what Aron calls "Strong Physicalism", though
it might be better expressed by saying that certain kinds of
instantiation of certain kinds of causal structure are necessarily
conscious, without any particular reference to physics.)

My only comments on Aron's treatment of modal logic are (1)
that "conceivability" arguments actually need to distinguish between
different notions of possibility/necessity (e.g., what
we think we can vividly conceive is not necessarily logically
possible, nor is everything logically possible within our
powers of imagination) and (2) that when dealing with notions
of necessity more complicated than (say) logical provability,
axioms like S4 and S5 are nowhere near so clearly reasonable
as they may seem.
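
(For reference, the standard formulations of the axioms just mentioned, where \Box abbreviates "necessarily" and \Diamond "possibly" -- this is the textbook presentation, not anything specific to this thread:

```latex
% Standard formulations of the modal axioms mentioned above:
\textbf{(S4)}\quad \Box p \to \Box\Box p
    % what is necessary is necessarily necessary
\textbf{(S5)}\quad \Diamond p \to \Box\Diamond p
    % what is possible is necessarily possible
```

In the usual possible-worlds semantics, S4 corresponds to a transitive accessibility relation and S5 to an equivalence relation; the worry above is that when "necessity" means something richer than, say, logical provability, it is far from obvious that the intended accessibility relation has these properties.)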

I think "burden of proof" should be understood as a shorthand
for talk about prior probability; as such, it can't be reliably
inferred from the logical structure of the proposition under
consideration. Imagine that the molecular structure of water
had not yet been determined. Deniel Dannatt says: "Water is what
you get when you combine a certain number of atoms of certain
types in a certain way. Anything made in that manner is water,
and nothing else is." Devid Chelmars says: "What makes water
water is some mysterious non-physical quiddity; no mere physical
facts can make something be water." That is: let b
be some list of physical facts about a bowlful of water, and
let c be the proposition that it is in fact water;
then Dannatt says "necessarily, b implies c" while Chelmars says
"possibly, b holds and c does not". This is the exact same
logical structure as we have with Dennett and Chalmers in
the actual world; but I hope we can agree that (perhaps only
by good luck) Dannatt is right and Chelmars is wrong, and that
any principles that lead you to say that in their epistemic
situation Dannatt was "almost certainly" wrong and Chelmars
"almost certainly" right are therefore suspect.

What's different between their situation and ours? I think
it's mostly the fact that we happen not to have a tradition
of thinking of "wateriness" as something beyond physics;
so, once we have a pretty good physical account of how
water behaves, we are comfortable defining water
in terms of its molecular structure, and of course once
we do that Dannatt's claim about water becomes one that
actually does "follow from the structure of logic itself"
(given the definition). But we not only (like Dannatt and
Chelmars) don't yet have a good physical account of how
conscious organisms work; we also (unlike them) have this
tradition of thinking of consciousness as magical. And
that, I suggest, is why it seems silly to suggest that
the beliefs of Team Dennett might "follow from the structure
of logic itself": because of course consciousness
couldn't turn out to be something that's physical by
definition; what an absurd question-begging idea! (Just as
Team Chelmars might say: of course wateriness couldn't
turn out to be something that's physical by definition.)
But I think it's only that tradition of thinking
of consciousness as magical that makes the idea seem
silly. Go back to the science-fictional story that (at
Aron's prompting) I told above; in that scenario, what
exactly would make it unreasonable to say that human
consciousness is, by definition, that thing that the
model describes?

("Aha!", you say. "I see that you had to say human
consciousness there, because obviously that sort of model
can't tell us anything about the possibility of other
sorts of consciousness. Doesn't that show that whatever
it's describing it's something too specific? Surely aliens
or angels or AIs could be conscious in quite different
ways that don't correspond to anything in the model."
Firstly, the process of making the model might actually
lead to a clearly-generalizable account of consciousness
that would apply to aliens and AIs and maybe even
angels. I don't know what that would be, for the obvious
reason that there isn't in fact any such model just yet.
Secondly, I do in fact think that for entities sufficiently
different from us, the question "is it conscious?" might
well be one that doesn't have a definite answer. If you
find that absurd, then I suggest contemplating the
questions "is it amused?" and "is it in love?" -- which
I think it's obvious might not have a definite
answer when dealing with a sufficiently not-human-like
being -- and asking why "conscious" should be so very
different.)

Let me state my positive point a bit more explicitly.
If we are ever able to pin down exactly what we mean
by "consciousness", as we are now able to pin down
exactly what we mean by "water", then there will be
nothing terribly unreasonable about suggesting that
the right combination of physical facts might necessarily
imply "X is conscious"; the necessity would be definitional,
as with water.

But what if we are never able to pin down the meaning
of "consciousness" with enough precision? In that case,
the question of necessity simply doesn't arise,
at least if you want to treat it in the sort of modal-logic
way we're doing here, because the "necessarily" and
"possibly" operators have to be applied to propositions
in a formalized language. If we have only a hand-wavy
idea of what consciousness means, then we can talk about
its (putative) necessary properties hand-wavily, but then
we have no right to pretend we're doing modal logic and
certainly no right to assume that (say) Axiom S5 can be
applied to anything we're doing. And if -- this is what
I think Aron's argument really requires -- the meaning
of "consciousness" fundamentally cannot be pinned down
then we fundamentally cannot do modal logic with it either.
Are there notions too slippery to define, for
reasons other than that they don't actually make sense?
I'm not sure. But if there are, then I can't see how
we could (e.g.) be sure that propositions involving such
notions have a definite truth value, still less that
there's any well-defined fact of the matter about whether
they are logically necessary.

Aron contemplates the possibility that his argument could
be flipped around so as to make it the Chalmerites rather
than the Dennettites who are making the logically stronger-looking
claim. I think he's right that it couldn't, for the reason he
gives: the position he ascribes to the Dennettites is a very
strong one. I don't know exactly what position (for instance)
Daniel Dennett himself takes; but I know that mine is not
quite the one Aron ascribes to the Dennettites. I don't
claim to know that consciousness is a purely physical
thing. I say only that it might very well be and that
what evidence we have seems to point that way. If you
respond to the first that "possibly necessarily p implies
necessarily p", then I observe that we're dealing with two
different modal operators (one epistemic, one logical or
metaphysical or something) and you are not entitled to
collapse them like that. If you respond to the second
that conceptual propositions aren't subject to empirical
evidence, I refer you to my comments above on that score.
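
(To spell out the modal point: within S5 itself, where both operators are read against one and the same accessibility relation R, an equivalence relation, the collapse is indeed valid:

```latex
% Valid in S5 (accessibility R an equivalence relation):
\Diamond\Box p \to \Box p
% Semantic sketch: if \Box p holds at some world w with u\,R\,w, then
% for any v with u\,R\,v, symmetry gives w\,R\,u and transitivity gives
% w\,R\,v, so p holds at v; hence \Box p holds at u.
```

But the derivation uses the same R inside both operators. With an epistemic \Diamond ("for all I know") and a metaphysical or logical \Box, the two operators quantify over different accessibility relations, and the schema above simply does not apply.)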

I think Aron is just wrong to say mathematicians don't
use Occam's Razor often. I think they use it all the time.
(Of course Aron is right when he says they aren't satisfied
to use Occam's Razor and go no further; mathematicians like
to prove things.) And I don't think there is anything wrong
with this; it is perfectly in order to use Occam's Razor
(with caution) even when dealing with what are in principle
matters of logical necessity -- for imperfect limited reasoners
like us. And I think that's what's happening when someone
says "physicalism is simpler so we should prefer it". I confess
I don't understand what Aron says about supposing that "the
space of logical hypotheses must itself be simple"; of course
no one says that and I don't see why he even brings it
up; nor do I understand why he stresses that the existence of
p-zombies is a logical possibility even if very improbable,
because the actual question is whether the possible
existence of p-zombies is probable, and we don't actually
know that their existence is a logical possibility,
conditional on the physical world being as it is.

47. TY says:

G

Interesting, and I’m learning stuff that I’ve never paid much attention to, so if my comments seem naive, don’t be surprised. I’m not too concerned about the rebuttals you made to Aron’s comment, but I do have a question about your fantastic story (try developing it into a book-length science fiction novel. It will sell!).

Am I correct in saying that the story essentially DEFINES AWAY the philosophical contention of whether consciousness is physical, and hence doesn’t really resolve it? I guess another science fiction writer could make up a story that gives that same contraption an entity called a mind.

You then say “consciousness of a physical system is physical.”

But isn't that like saying the consciousness of humans MUST ONLY be physical? I struggle with that strong form of the argument.

48. Mactoul says:

Aron,
"there will be some mathematical statements which are unprovable by any human being."
is confused. Per Godel's theorem as I understand it, in any consistent formal system there is a true but unprovable proposition.
The Godelian statement is true, meaning its truth can be perceived (by humans, necessarily).
It is unprovable, meaning that it cannot be derived beginning with the axioms of that formal system.
I feel you are not realizing the key difference between "perception of truth" of a proposition and "provability" of that proposition. Perhaps the author of Godel, Escher, Bach, interested as he is in promoting AI, obscures the point.

I assure you inconsistency is not the reason why humans do not fall under the judgment of Godel. It is simply that the human intellect is not a formal system. AI is, even neural networks; they all operate on formal computer code and necessarily lack the self-referentiality that the author of Godel, Escher, Bach makes so much of.

49. Mactoul says:

Aron,
In case my point is unclear, I restate my objection:
"Human beings can perceive some Godelian "unprovable" statements to be true, but it is far from obvious that we can see all Godelian statements to be true or false."

A Godelian statement is DEFINED as a statement that is true but unprovable.
Thus, we can see all Godelian statements to be true. But none can be proved to be so.
Truth is not the same as being proven. That is the very point of Godel.

50. g says:

TY: No, it doesn't, but it feels like it does, and that's basically my point.

If you read my "story" carefully you will see that I only ever described physical things getting analysed. Parts of the brain. Physical processes within the brain. Observable human actions like operating the lungs, vocal cords and mouth so as to say particular things. Chalmers's claim is that all those things could be exactly the same for (1) an actual conscious human being and (2) a "philosophical zombie" with no genuine consciousness. I think the plausibility of this claim comes from not thinking too closely about what it would mean, and its exciting alleged philosophical implications (consciousness must be some sort of magical nonphysical phenomenon!) follow only if it's true when you look in as much detail as my hypothetical scientists did and more -- which is exactly when it ceases to feel plausible, and I know of no reason other than intuitive plausibility to believe it.

51. Aron Wall says:

[slightly updated the wording as of 2:53 Eastern Time]

Mactoul,

I feel you are not realizing the key difference between "perception of truth" of a proposition and "provability" of that proposition.

Fine, then you can replace what I said before with:

"there will be some mathematical statements whose truth is unperceivable by any human being"

and I think this will still be true, or at least it is not obviously false!

(But, I think my original language of "provable" is defensible terminology. Mathematicians regard Godel's theorem as a proof that the Godel statement is true, after all. It is, however, a proof by human standards, namely a set of words and explanations which causes people to think that the proposition has been deductively shown from self-evident truths. That's why it's called a "theorem"! Another fact is that for any given formal system X you can always find a more powerful formal system Y such that Y can prove the Godel statement for X; thus any particular application of Godel's theorem can be formalized.)
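
(For reference, the textbook statements behind these two facts, with T standing for any formal system of the relevant kind:

```latex
% Godel's first incompleteness theorem (in Rosser's strengthened form):
% for any consistent, recursively axiomatizable theory T extending
% elementary arithmetic, there is an arithmetic sentence G_T with
T \nvdash G_T \qquad\text{and}\qquad T \nvdash \neg G_T.
% The formalizability point: the stronger theory T + Con(T), where
% Con(T) formalizes "T is consistent", does prove T's Godel sentence:
T + \mathrm{Con}(T) \vdash G_T.
```

So "true but unprovable" is always relative to a particular formal system, which is the sense in which any particular application of Godel's theorem can itself be formalized in a stronger system.)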

I assure you inconsistency is not the reason why humans do not fall under the judgment of Godel. It is simply that the human intellect is not a formal system. AI is, even neural networks; they all operate on formal computer code and necessarily lack the self-referentiality that the author of Godel, Escher, Bach makes so much of.

You assure me, huh?

Are you relying on other people's summaries of Godel's theorem, or have you actually worked through the proof yourself? Because I actually understand (and recall in outline form) the proof of Godel's theorem, and I assure you that it does not apply to neural networks.

If you do claim to understand Godel's theorem, then I am calling your bluff. Please sketch out for us in these comments, just what the protocol is for identifying an unprovable "Godel sentence" for any given Artificial Neural Network code, such that the neural network cannot ever state as output that that Godel sentence is true. (Feel free to restrict to a specific type of neural network program, if that makes it any easier.)

If you don't claim to understand Godel's theorem, then I think maybe you shouldn't be so confident that you have understood its moral correctly.

52. TY says:

g

One further question from your story:

“Their model gives some justification for these high-level descriptions and also explains how the actual behaviour diverges from the high-level description (this would be part of an explanation of some optical illusions, for instance). And so on, up to much higher-level structures in the BRAIN and MIND. This is a model of "software" and "data" as well as hardware, dealing not only with particular patterns of neural connection but with particular patterns of neural activation.”

So this contraption does have a separate entity called a “mind” (X) that is co-existent with the "brain" (the software algorithms, and so on) (Y).

What is X and its role in this physical system?

Thanks.

Aron,
Hamid should not actually feel like a gentile entering your church's website. But the point is: whatever water is made of, is it still water no matter how it mixes with other liquids? He must have a lot to learn before he learns the rituals of entering a church properly! I learnt a lot from your website. I wish the same would go for arrogant Hamid too. Thanks.

54. Aron Wall says:

I have good reason to believe that ADHM and Hamid are the same person. Because of this deception, and because his comments still don't make any sense (this website is not a church!) both of him are now BANNED and he will not be allowed to post anymore, under any pseudonym.

55. g says:

TY: My mention of "mind" was a bit sloppy; let me be more precise. "... And so on, up to much higher-level structures in the brain and higher-level psychological phenomena." E.g., perhaps it turns out that certain groups of neurons behave in certain consistent ways when our subject is (so far as we can judge from the subject's own reports and by inference from other behaviour) worrying, or proving mathematical theorems, or composing music, or enjoying erotic daydreams. The "certain consistent ways" would probably need to be described at quite a high level of abstraction. But, indeed, our experiments couldn't tell us for sure that those correspondences hold; only that so far as all available evidence goes they do. If for some reason our subjects all consistently lie, and deceive the experimenters in other ways, about when they are thinking of cabbages and when they aren't, then the experimenters' conclusions about correspondences between brain structures and activity on the one hand, and thinking-of-cabbages on the other, will likely be wrong. And maybe they could all be consistently wrong in some way that permits the real business of thinking to be done by (say) an incorporeal soul, despite all the apparent evidence (there's plenty in the real world, and plenty more in my hypothetical one) that particular mental functions correspond to particular parts of the brain and particular patterns of brain activity. So the events of my story certainly wouldn't constitute irrefutable proof of anything. But there are very few domains in which it's reasonable to look for irrefutable proof.

56. Cedric Brown says:

Here's an interesting article about mice that have been genetically engineered to see new colours.
http://www.colormatters.com/color-and-vision/color-vision-for-mice
Presumably, these mice would have to invent new qualia. In fact, this must have happened in our own evolution, since our distant ancestors also used to have only two types of cone cells in their retinas.

57. Hamid says:

Aron,
I try it one last time. Why did you get so upset and ban me? Because I am deceptive? I left the same email of mine as before and did not change it, so that you could recognize me alright. I don't think that is called deception. It might even be called dropping formality in order to be a bit funny, especially since ADHM reminds me of certain kinds of instantons! And if I called your website a church, it is because you do talk about Christ quite honestly and according to the Bible. And as I said, I honestly felt that your website is pretty good and it won my respect. Sorry if there have been any misunderstandings. Thanks again and best wishes.

58. Aron Wall says:

Hamid,
Your email address is visible to me, but it is not visible to the others. So your post would have been deceiving the other people on my blog, even if it did not deceive me. The fact is that there are trolls and other people who create "sockpuppet accounts" deliberately in order to cause trouble. As the moderator it is my job to be careful, in order to protect the comment section.

Nevertheless, because you have apologized, I am commuting your ban to ONE MONTH. If you decide to comment after 2017 begins, it will be allowed. However, please be careful to avoid the things which got you into trouble in the first place, including bringing numerous random physics topics into discussions where they don't belong, or I will ban you again. To be honest I think it will probably not work out, but because you have asked for it, I am giving you another chance.

59. Aron Wall says:

g,
I'd like to give a response to the arguments in your long comment, but I'm very confused by your last comment to TY. It almost makes me think you agree with me without realizing it...

Why is it important, in your sci-fi story, that the experimental subjects not lie to the experimenters? I mean, I can see how it might be useful practically to have the additional information, at an intermediate stage in the model construction... but in principle, in the limit of a sufficiently advanced brain science (assuming you're right about the point disputed between us) shouldn't the experimenters be able to tell what conscious experiences are present in the brain without needing to be told? So that they would be able to check if the subjects are lying (and I don't just mean, by means of a "lie detector" test to see if the lying parts of their brain are activated, I mean by directly comparing their statements with reality).

That is, supposing it is true that a certain configuration of neural excitations corresponds to thinking-of-cabbage (which I already believe) and if those neural excitations are in fact identical to thinking-of-cabbage (a proposition about which I reserve judgement) and if we further suppose there is nothing mysterious about this identity, but it can be deduced in an unproblematic way from the physical configuration in question (this is the proposition I am denying), then shouldn't the researchers studying the brain be able to look at the scan of the person's brain, compare to their model, and say "Yup, that person is thinking about a green leafy vegetable!" without ever needing to be told?

If, even after becoming aware of all the relevant physical facts about the brain, you still only know that the person is having a particular conscious experience because they say they are having it (and you believe them) then the purely physical facts are incomplete. But perhaps you didn't really mean that?

TY,
The story won't be publishable until he makes the scientists abuse their powers in some interesting way. He might also need to throw in a love interest somewhere. ;-)

60. g says:

Aron, I am pretty sure I would realise it if I agreed with you. (Unless of course I am mistaken about your position.)

I don't know how much it actually matters whether the subjects lie to the experimenters. The experimenters, after all, might discover the lie. But what importance it has comes from the following: Part of what makes the experiments' results convincing is that everything the experimenters can do to test the hypothesis "mental goings-on match up perfectly with physical events in the brain described by our model" confirms it -- and some of the ways they would test it involve comparing what the model says the subjects were thinking about with what the subjects say they were thinking about. If the subjects lie to the experimenters, that becomes harder to do.

If the experimenters are confident enough in their model then they may in such cases be able to say "Nope, you're lying". In fact, their results would be more impressive if sometimes they did that and the subject fessed up. But if you imagine some absurd conspiracy where every subject, without exception, decided to lie to the experimenters in a similar way, then (1) that might make the development of their model more difficult and (2) it would make it harder to convince other people that the model was right. (Not necessarily impossible, in either case. But more difficult.)

Of course this is also how we get the love-interest into the story. One partner is an experimental subject. One is an experimenter. #1 always seems cold and aloof and #2 only discovers #1's deeply-hidden secret passion by examining the model of #1's brain...

61. TY says:

Aron,

The sub-plot of a "love interest" takes us right back to the Grand Designer who has created man with this amazing apparatus called the brain, and the equally amazing apparatus of the invisible mind to enable us to fully exploit the wonders of the world He created.

62. Aron Wall says:

Hamid,
Banned means you're supposed to stop posting, and any further comments will be deleted (and if you persist, your ban will be permanent again).

g,
I wasn't expecting you to give up so easily, but I wanted to resolve the ambiguity in your last comment.

(1) may well be okay, but as for (2) I think the ability in principle to convince reasonable people the model is right despite lying is essential for a reductionistic story to be correct. Consciousness can't reduce to some physical facts X unless those facts X actually imply consciousness.

Should the title of the story be, "Your lips are saying no, but your brain is saying yes?" Creepy!

[Update: Should the story end with #2 being married to the simulation of #1?]

63. Aron Wall says:

g,

I don't think "empirically checkable" is always the right thing to say either. Sometimes it just comes down to the prior probabilities (including arguments that certain things are conceptually impossible.)

It is true that I think that no possible story would convince me, but that was a prediction, based on general arguments, rather than a close-minded refusal to think about any given story. That is, whenever somebody does try to tell me that story (as you have) I try to meditate on it and see if I would change my views if that story came to pass in reality, and to see if it would form any kind of exception to any of the arguments I've made against it.

But I am having difficulty being persuaded by your science fiction story, largely because I already believe most of the key aspects of what it asserts about the brain! If your story were really capable of settling all debate 50 years after it comes to pass, then I (who already believe most of what it asserts about the brain to be true) should already be convinced.

There are minor aspects of your story that I'm not sure I believe, but I don't think they are the relevant aspects. Obviously the futuristic technology and modelling does not yet exist, and maybe there are practical reasons why it would never be feasible. (Certainly our current knowledge of the brain is, by comparison, incredibly primitive.) Furthermore, the story seems to present the human brain as an essentially deterministic system, whereas I think it is more likely that for quantum and/or stat-mech reasons, such prediction might never be possible even for an arbitrarily advanced technological civilization. (Although that is arguably relevant to questions about Free Will, I do not see that it is all that relevant to Consciousness.)

So, let us suppose that everything happened exactly as it is in your story. Then I still think that your story leaves open the logical possibility that the experimental subjects in question are p-zombies, that they have absolutely no conscious experiences to speak of. How would you deduce the fact that they have conscious experiences, just from the facts that are in your story? (If your sci-fi story had been written in the 1st person, and if I imagined being that person, then of course I could be certain of the reality of my own conscious experiences, but that has nothing to do with whether the physical facts about the brain imply the existence of consciousness.) Would it be unreasonable in such a world to believe that other human beings are p-zombies? Of course, but that is already the case in our world, before all of these hypothetical advances in brain science. I assume that other people are conscious because I am. The question is whether I could have deduced it from purely physical facts.

I want to be very careful about terminology here. I'm not sure I know what you mean when you say "consciousness is physical". I may even agree with this claim for certain definitions of "physical". (I already said my objection does not apply to Chalmers' "Type B" materialism. I'm curious whether you consider that view possible?) That is why I have tried to be careful to define my thesis in terms of which propositions can be deduced logically from which other propositions. That way I have some fighting chance to know what we are talking about.

So I do deny that, if the 3rd person empirical physical facts were as specified in your story, the existence of Consciousness could be logically deduced. Sometimes there is underdetermination of theory from data, and people who agree about all the empirical facts can still disagree about the proper interpretation (as in the case of interpretations of QM). Sometimes it just comes down to the prior probabilities. Other times, one of the two sides is simply denying the existence of any interpretational ambiguity. In that case, it should be possible to decide who is right or wrong at the conceptual level. (I think that the argument about p-zombies is just such a case.)

I think when you talk about resisting the implication that consciousness is physical as "a move of the utmost desperation" you are conflating two different kinds of scenarios. One is a scenario in which there is something different going on at the causal level, like maybe actually there is a ghost who is manipulating the mouth of the subject to say different things, which the ghost experiences. I agree that THAT would be an unreasonable hypothesis. But if somebody claims to have a full explanation for consciousness in terms of stated physical facts, it is not unreasonable to ask them to complete the derivation by deductively showing that consciousness would in fact follow from those facts, or at least motivate that such a deductive proof should exist if they were smart enough to make it. That is not desperation, it is merely taking the person at their word for the claim that they are making. If, actually, it only follows from those physical facts if you make some additional metaphysical assumption, then one had better make that additional assumption explicit!

The case of water is completely different. Of course, if we define water as H2O, then it is water by definition, and in this case the "burden of proof" to show that it follows purely logically is met. But I might define water differently (as people obviously did before modern chemistry). Maybe I define water as the wet stuff that comes out of my faucet. In that case there is obviously some empirical legwork to do to prove that this is the same as H2O, but once that is done one should end up in more or less the same place.

(Some analytic philosophers, following Kripke, claim that "water = H2O" is an a posteriori necessary truth, but I think this is a silly idea that depends on weird semantic conventions about what naming means.)

Consciousness is different because (and this is true by definition) I directly experience it, as opposed to water, which I only experience indirectly by means of sensory inputs and external experimentation. So how could I possibly have access to a mysterious non-physical "quiddity of water"? And how could I not have access to what my consciousness feels like, since it is literally the only thing I have access to?

Your idea that I think Consciousness is different from water because there is a "tradition" of doing so is frankly puzzling. Tradition! You may as well say that I believe I have 5 fingers on each hand out of tradition. The existence of a quiddity of Consciousness is equally obvious (even if its unphysicality is not equally obvious, which is why I provided an argument for it). Because of this, I am justified in having an independent definition for both sides of the putative identity of mind and brain, making the identity nontrivial. That is different from your water example.

I am not sure that any concept can be completely pinned down definitionally, since it seems like any definition of a concept must be in terms of other concepts, and eventually you will get down to a basic layer that is somewhat mysterious. But I do think I know what Consciousness is well enough to be able to see the problems with accounts claiming that it can be derived from other things.

In a way you think Consciousness is (currently) more mysterious than I do. You seem to be willing to leave it as more or less a placeholder for neuroscience to eventually fill in, attributing any seeming difficulties to the slipperiness of its definition, whereas I think I know at least enough about it to see why that doesn't work.

In my post I agreed with you that Occam's razor might have a limited use in mathematics. But, my main point was that for a certain class of conceptual propositions, if something isn't necessarily true for logical/conceptual reasons, then it isn't true at all!

One final point. You suggest in passing the "Functionalist" idea that "certain kinds of instantiation of certain kinds of causal structure are necessarily conscious" (regardless of the underlying physical or nonphysical medium). But in a previous conversation, you've expressed the view that causation is not a fundamental concept:

It seems to me that the notion of "cause" is no more than a convenient approximation, and plays no fundamental part in any sufficiently detailed account of how the world works.

I am not sure that it is consistent to be a Humean about Causation, and a Functionalist about Consciousness, at the same time.

If you taboo the word "cause", can you still express your viewpoint about the medium-independence of Consciousness without making reference to it? Might a sufficiently complicated chalk drawing, produced randomly, containing no internal causation whatsoever, still be conscious?

Perhaps you will tell me that this is merely a matter of convention, of how we choose to define the word "Consciousness". But my own Consciousness does not seem to be a mere convention. If the people in your sci-fi story told me I wasn't conscious, then I would be able to know that they were simply wrong.

64. g says:

Aron,

First of all, one more comment regarding lying and all that. In the world of my story (at least as I imagine it; of course you might imagine a similar world in which the reductionistic account seems plausible only because of deliberate deception by evil spirits or something...) the scientists would consistently be right if they followed their model when it disagreed with experimental subjects' claims about their conscious experiences. But that's no guarantee that they would actually be convincing to others, at least when they first put forth their theory and the models based on it. But that's a matter of epistemology -- perhaps merely of psychology -- and has no bearing on the actual nature of consciousness in that world.

Now, I find that in response to each of your paragraphs I want to write several in reply. I will try to avoid doing that too much, lest this discussion become even more unwieldy than necessary; my apologies in advance for any unclarity that results and for anything left unanswered that you badly wanted answering -- in the latter case, feel free to poke me further.

When you say "sometimes it just comes down to the prior probabilities", this can happen only when it is wholly impossible to say how a world where the thing in question is true would (observably) differ from one where it's false. (Given our cognitive limitations, this condition can fail even for propositions that are logically provable if true and disprovable if false.) Perhaps that's the case for some interesting propositions about consciousness, but I'd want to see actual arguments for any given case.

I don't know why you are telling me that my story leaves open the possibility that its experimental subjects are p-zombies; that was not the question it was directed at. It was (as per your original "Can you tell me a story?" section) directed at the question of whether consciousness arises from physics, and what I say any reasonable person would conclude if the events of my story happened is not "the people the scientists examined were not p-zombies" but "consciousness is a physical process". It looks to me as if you have entirely misunderstood what I'm aiming at, but perhaps instead I've entirely misunderstood you. Or, of course, both. (Your comments about p-zombies would be relevant if you were gesturing towards the possibility that the experimenters correctly identified their subjects' "conscious experiences" as arising out of physics -- but that if they had only thought to do their experiments on you, non-p-zombie that you are, they would have found themselves baffled. I concede that this, like outright solipsism, is in some sense conceivable, but it's also obviously ridiculous and any position that can be justified only by appealing to such possibilities is in big trouble.)

I am not certain that I have a sympathetic grasp of Chalmers's Type-A/B/C distinction -- it seems to me that which category someone goes in may depend on details of how one understands terms like "conceivable" and indeed "consciousness" -- but I am myself more inclined towards type B or C (perhaps the sort of C that ultimately resolves into A, though) than type A, and I feel (I said a bit about this before but maybe should have said more) that what you call Strong Physicalism may shade into Straw-Man Physicalism :-). Let me be a little more precise about my position, and we can see whether it helps. If "consciousness" is defined purely ostensively -- by gesturing grandly towards one's experiences and saying "it's that" -- then I think that regardless of where it actually comes from and how, there is no prospect of the sort of purely logical reduction of consciousness to physics that you discuss; the obstacle is not a metaphysical one but the fact that you need a clear definition of something before you can hope to exhibit it as a purely logical consequence of the laws of physics. If we were able to give a much more precise definition (which I don't think anyone knows how to do at present, but maybe one day we will) then it might be possible, and I think that if we could then it probably would. But I am not, given our present state of knowledge, making "super-strong claims of logical necessity", and frankly I'm not sure anyone is.

You agree that given the events in my story it would be unreasonable to hold that something different is going on "at the causal level"; I'm glad to hear this; and I confess that that first sort of scenario is the sort I mostly had in mind. So let's consider the other sort. You say "if somebody claims to have a full explanation for consciousness in terms of stated physical facts ...", but that seems to me a really weird claim to consider because I don't know of anyone in the real world who makes any such claim. (It's true that e.g. Daniel Dennett wrote a book called "Consciousness Explained", but he certainly doesn't claim in that book to have anything that could reasonably be called a full explanation.) In particular, for the avoidance of doubt, I certainly make no claim of that kind. However, the scientists in my story might plausibly make such a claim. So, how does that sentence I started quoting continue? "... it is not unreasonable to ask them to complete the derivation by deductively showing that consciousness would in fact follow from those facts, or at least motivate that such a deductive proof should exist if they were smart enough to make it". Really? This seems to me entirely unreasonable. You start out with their (hypothetical) claim to have an explanation, but then you say that they should be able to "complete the derivation". Is this "derivation" in the mathematical sense, meaning logical proof? If so, then the leap from "explanation" to "derivation" is entirely un-called-for. Or is it in an informal sense, merely restating the claim to have explained consciousness in physical terms? If so, then the leap from "derivation" to "deductively showing" is entirely un-called-for.

So, let's suppose that my hypothetical scientists are unable to provide the logical deduction you want. What, exactly, follows, and why should we or they care?

I don't think our direct experience of consciousness makes the difference you think it does. I'm not sure how to argue against your position, though, because you don't seem to be arguing for it, merely restating your position while professing your shock that anyone would doubt it :-). Perhaps the following thought experiment might help -- a sequel to my earlier story. The scientists find ways to make the processing in their model very efficient and are able to build a simulation with many simulated people in it. These simulated people are purely (simulated-)physical, for sure: they exist inside the scientists' computer and we can watch every step of their execution. And one of them is very ingenious and comes up with much the same arguments as you have made above. "I just know I am conscious", he says. "And this argument shows that it is conceptually impossible for my consciousness to be explained in purely physical terms." He would seem to be wrong about this. Do you have a non-question-begging explanation for why we should find this argument better when you make it than when he does?

You are correct that I used the word "causal" here despite saying earlier that strictly speaking "cause" is a notion that doesn't belong in the most fundamental accounts of how the world works. Congratulations. If you look closely enough you may also find me saying from time to time that the sun "rises" or that light "bends" under the influence of gravity. We use these approximate shorthands because they're useful, and I am not ashamed of doing so. So, is there in fact an inconsistency between taking this view of causality and being functionalist about consciousness? I don't see any. Replace "causal structure" with something like "structure of constraints of the sort we commonly describe as causal" and nothing much changes.

65. Ben Crandall says:

Aron,

Thank you for putting in the time and effort to try and explain and discuss such complex topics!

I think we might agree more than I thought (though our disagreements are important ones). I honestly, and unfairly, underestimated your position. Many people use people like Searle and Chalmers to argue for positions like substance dualism that neither of them hold. I still disagree with you and Chalmers, but I agree that Chalmers’ position is much more modest than many people seem to assume, and people are often surprised to find out that Chalmers for instance argues it is probably nomologically/physically impossible for something functionally isomorphic to a being with qualia to lack qualia (tho logically possible).

“I thought the whole point of Functionalism was that only the causal information-processing capacities of the physical system matter for determining the contents of the conscious experience, not the physical substrate.”

So I think there may be some confusion, as I don’t think we are in disagreement on this. I think that anything that could instantiate the appropriate functional processes of a conscious person, or maybe processes appropriately functionally isomorphic to them, would have the relevant features that we are concerned with when discussing consciousness, experience, etc.

I was making a very general point that as a logical matter, just because material X cannot perform task Y doesn’t mean task Y is immaterial… That was the only point I intended. So unless I am missing something I agree entirely with the above.

I honestly don’t know enough about the physics of microprocessors and other materials to say one way or another whether this is an issue with silicon computers. It seems to me likely that we will be able to at the very least emulate large sections of the brain, although at what speed I am not sure (which raises interesting questions). Further if we are able to figure out what functional states are relevant for mental processes, and which are not (just byproducts, or inefficient results of evolution, the physics of the medium, etc.) then I would assume we could make the processes much more efficient.

But I don’t take a strong position one way or another (largely due to ignorance) on this matter.

“Of course not every physical system is capable of supporting arbitrary computations. But I don't think your analogy of ‘building a skyscraper out of butter’ is a good reason to be skeptical of silicon AI.”

Agreed, sorry for the confusion if what I said came off as though I believed that was plausible instead of just logically possible.

“He has certainly stated he disagrees with Chalmers about the logical coherence of p-zombies, which I think is sufficient to commit him to the proposition of a ‘first order necessary logical entailment of consciousness as a result of a particular physical description’. Yes?”

I think he would put it differently. Dennett considers the p-zombie “argument” to be an intuition pump based on the “zombic hunch.” In other words it is just a thought experiment meant to prompt a certain intuition or response, not a logical argument. He might say somewhere it is incoherent, but I take him to mean that when people claim to be able to imagine or conceive of a p-zombie, they are not really doing so. The reason being that Dennett argues people aren’t taking the thought experiment seriously and that when one considers it as such they will find (he argues) that they can’t actually conceive that notion.

I would say (if I understand you correctly) that he specifically argues against the idea that conscious states can be logically deduced from the physical description.

“If you agree, then I think my argument against his position stands.”

I take Dennett to be a non-reductive physicalist, tho that label is admittedly vague. Some Dennett quotations might be relevant here. I think he sees many mental properties as realized in a physical medium, but importantly formally independent. Not actually all too different from an Aristotelian view of form and matter (he divides things into illata and abstracta, although as a structural realist I am skeptical of this distinction as he formulates it). He argues that abstracta (beliefs, desires, conscious states) are just as much a part of reality as illata (bosons and fermions).

He addresses this somewhat I think (though not qualia) in his book The Intentional Stance regarding intentional mental states:

“Suppose [...] some [Martians] of vastly superior intelligence [...] were to descend upon us, and suppose that we were to them as simple thermostats are to clever engineers. [...] they did not need the intentional stance—or even the design stance—to predict our behavior in all its detail. They can be supposed to be Laplacean super-physicists, capable of comprehending the activity on Wall Street [...] at the microphysical level. Where we see brokers and buildings and sell orders and bids, they see vast congeries of subatomic particles milling about—and they are such good physicists that they can predict days in advance what ink marks will appear each day on the paper tape labeled ‘Closing Dow Jones Industrial Average.’ They can predict the individual behaviors of all the various moving bodies they observe without ever treating any of them as intentional systems. Would we be right then to say that from their point of view we really were not believers at all (any more than a simple thermostat is)? If so, then our status as believers is nothing objective, but rather something in the eye of the beholder—provided the beholder shares our intellectual limitations. Our imagined Martians might be able to predict the future of the human race by Laplacean methods, but if they did not also see us as intentional systems, they would be missing something perfectly objective: the patterns in human behavior that are describable from the intentional stance, and only from that stance, and that support generalizations and predictions. Take a particular instance in which the Martians observe a stockbroker deciding to place an order for 500 shares of General Motors. They predict the exact motions of his fingers as he dials the phone and the exact vibrations of his vocal cords as he intones his order. 
But if the Martians do not see that indefinitely many different patterns of finger motions and vocal cord vibrations—even the motions of indefinitely many different individuals—could have been substituted for the actual particulars without perturbing the subsequent operation of the market, then they have failed to see a real pattern in the world they are observing. Just as there are indefinitely many ways of being a spark plug—and one has not understood what an internal combustion engine is unless one realizes that a variety of different devices can be screwed into these sockets without affecting the performance of the engine—so there are indefinitely many ways of ordering 500 shares of General Motors, and there are societal sockets in which one of these ways will produce just about the same effect as any other. There are also societal pivot points, as it were, where which way people go depends on whether they believe that p, or desire A, and does not depend on any of the other infinitely many ways they may be alike or different.
Suppose, pursuing our Martian fantasy a little further, that one of the Martians were to engage in a predicting contest with an Earthling. The Earthling and the Martian observe (and observe each other observing) a particular bit of local physical transaction. From the Earthling's point of view, this is what is observed. The telephone rings in Mrs. Gardner's kitchen. She answers, and this is what she says: 'Oh, hello dear. You're coming home early? Within the hour? And bringing the boss to dinner? Pick up a bottle of wine on the way home, then, and drive carefully.' On the basis of this observation, our Earthling predicts that a large metallic vehicle with rubber tires will come to a stop in the drive within one hour, disgorging two human beings, one of whom will be holding a paper bag containing a bottle containing an alcoholic fluid. The Martian makes the same prediction, but has to avail himself of much more information about an extraordinary number of interactions of which, so far as he can tell, the Earthling is entirely ignorant. For instance, the deceleration of the vehicle at intersection A, five miles from the house, without which there would have been a collision with another vehicle—whose collision course had been laboriously calculated over some hundreds of meters by the Martian. The Earthling's performance would look like magic! How did the Earthling know that the human being who got out of the car and got the bottle in the shop would get back in? The coming true of the Earthling's prediction […] would seem to anyone bereft of the intentional strategy as marvelous and inexplicable […] There are patterns in human affairs that impose themselves, not quite inexorably but with great vigor, absorbing physical perturbations and variations that might as well be considered random; these are the patterns that we characterize in terms of the beliefs, desires, and intentions of rational agents.
No doubt you will have noticed […] a serious flaw in our thought experiment: the Martian is presumed to treat his Earthling opponent as an intelligent being like himself, with whom communication is possible ... a being with beliefs ... and desires ... So if the Martian sees the pattern in one Earthling, how can he fail to see it in the others? As a bit of narrative, our example could be strengthened by supposing that our Earthling cleverly learned Martian ... and disguised himself as a Martian, counting on the species-chauvinism of these otherwise brilliant aliens to permit him to pass as an intentional system while not giving away the secret of his fellow human beings. This addition might get us over a bad twist in the tale, but might obscure the moral to be drawn: namely, the unavoidability of the intentional stance with regard to oneself and one's fellow intelligent beings. This unavoidability is itself interest relative; it is perfectly possible to adopt a physical stance, for instance, with regard to an intelligent being, oneself included, but not to the exclusion of maintaining at the same time an intentional stance with regard to oneself at a minimum, and one's fellows if one intends, for instance, to learn what they know ... We can perhaps suppose our super-intelligent Martians fail to recognize us as intentional systems, but we cannot suppose them to lack the requisite concepts. If they observe, theorize, predict, communicate, they view themselves as intentional systems. Where there are intelligent beings, the patterns must be there to be described, whether or not we care to see them. It is important to recognize the objective reality of the intentional patterns discernible in the activities of intelligent creatures, but also important to recognize the incompleteness and imperfections in the patterns.”

66. Ben Crandall says:

Aron,

[continued]

I apologize for all these long posts and long quotes! I think it is in some ways hard to avoid because I think Dennett believes much of his writing to be “therapeutic” in the tradition of the late Wittgenstein. Namely that a considerable aspect of the job of the philosopher is to work on a person's intuitions and try to get the reader to look at the issue from a different perspective, or have them come to see the difficulty or conflict in their pre-theoretical views.

“like most terms of abuse, ‘reductionism’ has no fixed meaning. The central image is of somebody claiming that one science ‘reduces’ to another: that chemistry reduces to physics, that biology reduces to chemistry, that the social sciences reduce to biology, for instance. The problem is that there are both bland readings and preposterous readings of any such claim. According to the bland readings, it is possible (and desirable) to unify chemistry and physics, biology and chemistry, and, yes, even the social sciences and biology. After all, societies are composed of human beings, who, as mammals, must fall under the principles of biology that cover all mammals. Mammals, in turn, are composed of molecules, which must obey the laws of chemistry, which in turn must answer to the regularities of the underlying physics. No sane scientist disputes this bland reading; the assembled Justices of the Supreme Court are as bound by the law of gravity as is any avalanche, because they are, in the end, also a collection of physical objects. According to the preposterous readings, reductionists want to abandon the principles, theories, vocabulary, laws of the higher-level sciences, in favor of the lower level terms. A reductionist dream, on such a preposterous reading, might be to write ‘A Comparison of Keats and Shelley from the Molecular Point of View’ or ‘The Role of Oxygen Atoms in Supply-Side Economics,’ or ‘Explaining the Decisions of the Rehnquist Court in Terms of Entropy Fluctuations.’ Probably nobody is a reductionist in the preposterous sense, and everybody should be a reductionist in the bland sense, so the ‘charge’ of reductionism is too vague to merit a response.
[...]
We must distinguish reductionism, which is in general a good thing, from greedy reductionism, which is not. [...] There is no reason to be compromising about what I call good reductionism. It is simply the commitment to non-question-begging science without any cheating by embracing mysteries or miracles at the outset.
[...]
The most common fear about Darwin's idea is that it will not just explain but explain away the Minds and Purposes and Meanings that we all hold dear. People fear that once this universal acid has passed through the monuments we cherish, they will cease to exist, dissolved in an unrecognizable and unlovable puddle of scientistic destruction. This cannot be a sound fear; a proper reductionist explanation of these phenomena would leave them still standing but just demystified, unified, placed on more secure foundations. We might learn some surprising or even shocking things about these treasures, but unless our valuing these things was based all along on confusion or mistaken identity, how could increased understanding of them diminish their value in our eyes?
A more reasonable and realistic fear is that the greedy abuse of Darwinian reasoning might lead us to deny the existence of real levels, real complexities, real phenomena. By our own misguided efforts, we might indeed come to discard or destroy something valuable. We must work hard to keep these two fears separate, and we can begin by acknowledging the pressures that tend to distort the very description of the issues.”

And I don’t take Dennett to be trying to eliminate or deny the “first person” perspective. Just to be arguing that the claim that the first person perspective has properties or features that are necessarily inexplicable in third person or scientific terms is unfounded or not yet persuasively defended.

From an interview with Susan Blackmore:

“Sue: For you, what’s special about the problem of consciousness?

Dan: Human brains are just the most complicated thing that’s yet evolved, and we’re trying to understand them using our brains. There are people who have suggested that this was impossible. [...] I think the reason that we find consciousness so hard is that we have evolved a certain capacity for self-knowledge, a certain access to ourselves which gives us subjective experience—which gives us a way of looking out at the world from where we are. And this just turns out to be very hard to understand. How can something have that perspective? It might be just a thing, but it’s a thing with a point of view, and with the capacity to reflect on that point of view and talk about it. Each one of us is trapped within a point of view. I can’t ever get inside your head, and you can't ever get inside mine. The undeniable fact that we have these perspectives is not closely paralleled with anything else we know about anything else. It isn’t that atoms have that sort of thing, or that molecules do, or that volcanoes or continents or trees or galaxies do; the only thing we know in the whole universe that has this feature is ourselves, and we’re not even sure about each other—that’s the problem of other minds. Now, we are, in a sense, artifacts (and I mean that in the good sense of the term). We have been created by the process of evolution, both genetic and cultural. And what we’re now trying to do is to reverse engineer ourselves, to understand what kind of a machine we are that this can be true of us.

Sue: Are you equating subjective experience with having a point of view?

Dan: Yes, but having a point of view is not a simple matter. There’s an easy sense of having a point of view where lobsters have a point of view, and mosquitoes have a point of view. With a little stretching and pulling you might even say that a pine tree had a point of view; that is to say a pine tree responds to the world selectively—there's only some features of the environment around the pine tree that it’s sensitive to and the rest of the world is indiscernible, as it were, by the pine tree. But that’s indiscernible ‘as it were’. In our case there’s ‘real discerning’; and ‘real discerning’, in the eyes of many people who have thought about this, has got to be worlds away from the sort of discriminative capacities of that pine tree or that mosquito. This creates an artifact in the bad sense of that term. To many people there’s an imaginative chasm between us with our ‘real discerning’ and our ‘real points of view’, and the mere robots, or discriminating-but-not-sentient things. I think that the gap between me and a pine tree, or me and a mosquito, is huge but it’s traversable by a series of steps. But I do have to say that some of the steps are quite counter-intuitive, and there’s not yet in place the sort of firm ‘take it or leave it’ science that can force people to abandon their intuitions.”

67. Cedric Brown says:

Ben

It seems ironic that Dennett would use God in a thought experiment, even if he calls Him a Martian. Dennett's Martian has powers that exceed any conceivable technology. In reality, a super-sophisticated alien might be able to predict the weather slightly further ahead than we can, but that would be the limit. It certainly wouldn't have the power of Dennett's Martian.

That raises an interesting possibility: perhaps it would require a God-like perspective to monitor our brains in enough detail to achieve the scientific reduction of our mental life that Dennett desires.

68. Mactoul says:

Aron,
I must decline your challenge since, being only a programmer, I have no claim to higher mathematics.
My knowledge is derived entirely from Hofstadter and Jaki. However, being a programmer, I know that neural networks can be programmed to run on a Turing machine and are thus examples of formal computation.
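As a minimal sketch of that point (a hypothetical toy network with made-up weights, not any real model): a neural network's forward pass is nothing but deterministic arithmetic, so any machine that can execute an ordinary program, and hence in principle a Turing machine, can carry it out.

```python
# Toy illustration (hypothetical weights, not any real model): a neural
# network's forward pass is plain deterministic arithmetic, so it runs on
# any ordinary computer -- and hence, in principle, on a Turing machine.

def relu(x):
    # Standard rectified-linear activation.
    return x if x > 0.0 else 0.0

def forward(weights, biases, inputs):
    # One fully connected layer: y_j = relu(sum_i weights[j][i]*inputs[i] + biases[j]).
    return [relu(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

W = [[1.0, -1.0],
     [0.5, 0.5]]
b = [0.0, -0.25]
print(forward(W, b, [1.0, 2.0]))  # prints [0.0, 1.25], the same every run
```

Nothing here is special to silicon: the same sequence of additions, multiplications, and comparisons could be carried out by hand, which is the sense in which the network is a formal computation.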
Strangely, the former uses Gödel to instill hope in AI while the latter uses the same theorem to rule out AI.
I merely wished to draw attention to certain inexplicably inconsistent statements. Perhaps it is only a question of the definitions we are each using. For instance, in the OP you had:
"Whereas human beings reason primarily by informal methods, so Gödel's theorem does not seem to apply to us in any obvious way."

Now, I don't know what precisely you mean by "informal methods" here. Hobbes held all human reasoning to be addition and subtraction. So, perhaps you deny that human reasoning could be reduced to computation. But this needs to be shown, for the AI people do deny it.
But perhaps you mean something else by "informal methods", for you go on:
"there is no reason to think an intelligent AI couldn't reason about mathematics in an informal way, and if it were truly intelligent"
You need to define what an "intelligent AI" might be. It is something that does not exist, and there are strong philosophical and other grounds for thinking that it cannot exist. The AI people are overly fond of making extremely loose statements, like putting crucial words in scare quotes:
"The AI might not even be "aware" of the programming language it is written in, any more than we are aware of the firing of individual neurons. (I put "aware" in scare quotes because I'm just talking about apparently intelligent behavior here, not actual consciousness)"

There is absolutely no justification to talk of AI being aware or "aware". It is merely sloppy language used in the AI field. Would you call an AC that adjusts itself as displaying "apparently intelligent behavior"?

Theologically, it is held that the rational soul is infused by God at conception. There cannot be intelligent machines. But perhaps you disagree with this dogma.

69. Cedric Brown says:

Actually, the more I think about it, the more ironic it seems. I think that Dennett's reductionism really does depend on a God's-eye perspective. Only God could actually "see" a human being as a collection of atoms. No current or future technology can or will be able to do that. So if someone says that a human being is a collection of atoms, you would have to ask, "From whose perspective?". If you can't literally see a human being as a collection of atoms, then from your point of view a human being simply isn't a collection of atoms.

I also suspect that the scientist in g's science fiction story is actually God rather than any creature in this world. So perhaps reductionists like Dennett are actually doing theology rather than science.

70. Ben Crandall says:

Mactoul,

"redness is a quality-and computers and programs don't do quality, by definition."

There is nothing in the definition of a computer or a program that states it can't have qualitative experiences. There is no analytic contradiction, or contradiction in terms, in saying "that computer experienced red." In fact many science fiction stories depend on entertaining such hypotheses.

"A program is a formal procedure to produce certain outputs from certain inputs."

It is an algorithmic process for manipulating information. But obviously every ACTUAL computer has other properties in addition to those described by its functional states and relations. My laptop has the property of being silver, though the property of being silver is found nowhere in its program description. It has silicon chips, though "silicon" is found nowhere in the program description. Just because something is not found in the program-level description of a machine doesn't mean it isn't a property of that machine.

"It does not experience anything."

That is literally about as straightforward a case of begging the question as one can get... Argument please?

"The entire idea of computer experiencing anything is nonsensical."

This is clearly false. If it were literally "nonsensical" then you would not even understand what my claim was; clearly you do, or otherwise you would ask me to clarify. In addition there are many movies and television shows based on this basic premise (Ex Machina, and I believe Westworld). If it were literally a contradiction or a meaningless claim then people would not even be capable of entertaining such possibilities (and not only do people entertain the possibility, most philosophers accept it as not merely a possibility but a fact).

And if you don't mean that there is a contradiction, or that my claim is meaningless, then what do you mean?

"A computer is an artifact and does not have a kind of unity that an organism such a butterfly possesses. This unity allows organisms to have experiences. There is someone or something that can have experiences."

I deny that butterflies and other living organisms have a special kind of unity that artifacts lack. A car has a great deal of "unity"... And I would be curious where you precisely delineate such unity in living things... Are the bacteria in my intestines that I need to digest food and survive "unified" with me? If not, why not? What about colonies of single cells where some have specializations? Are they all separate individual things, or are they "unified" and how do you tell? What about a fungus that develops from a mycelium colony or network? Is the fungus a unified thing or a colony? The reason I ask is because I think in reality and practice this claim is untenable. Many things in reality have vague borders and boundaries.

Are you an Aristotelian or follower of Aquinas? Is the kind of unity you are talking about in the sense of a substantial form of a living thing?

"Now. perhaps man can build an artifact possessing organic unity but it won't be a programmable entity for programs are formal procedures having no scope for the organic unity."

Argument?

71. Ben Crandall says:

"It seems ironic that Dennett would use God in a thought experiment, even if he calls Him a Martian."

Not sure what is precisely ironic about it. This is a common thought-experiment trope used on various topics. For example, people seem not to recognize it (maybe Aron Wall does), but even one of Chalmers' arguments (originally Frank Jackson's) depends on trying to "imagine" such a God's-eye view: the example of Mary the color scientist, who knows "everything" about the physics of color and how the brain registers color perception. That is, though maybe not omniscience, pretty close: total knowledge of all the quantum field interactions, biological interactions, etc., of color vision.

I am not sure how one is to literally imagine what it would be like to be so God-like, which makes it a tricky premise (though I don't think this criticism applies to Dennett's argument, because it doesn't rely on knowing what it would be like to be the Martian in the story).

"Dennett's Martian has powers that exceed any conceivable technology. In reality, a super-sophisticated alien might be able to predict the weather slightly further ahead than we can, but that would be the limit. It certainly wouldn't have the power of Dennett's Martian."

Part of the function of such a statement is to serve as an idealization (clearly Dennett does not think there IS such a Martian), similar in some ways to what is done in physics. When physicists do calculations they often use idealizations (because the actual calculation would be far too difficult): when they treat a physical system as if it involves frictionless planes, perfect fluids, point particles, perfectly rigid bodies, etc., it is not because any of these things exist, but because it helps to make the problem easier to conceive and more tractable.

So Dennett is basically just saying: let's ignore the limitations physicists normally have; would a "perfect physicist" know all the important facts about reality? And his answer is no.

"That raises an interesting possibility: perhaps it would require a God-like perspective to monitor our brains in enough detail to achieve the scientific reduction of our mental life that Dennett desires."

Certainly it would require a god-like perspective to have the proposed abilities and knowledge. But I think you might have missed something: the point of his argument was actually the opposite. The Martian in the story (despite having a god-like perspective) will miss important mental facts about the world if they are only aware of the physical facts. That is the central point of the story, which seems to have been missed.

Thanks for the reply, I know it is long, but maybe if you have a minute you might want to re-read it.

72. Ben Crandall says:

The above was a reply to Cedric Brown, in case it wasn't clear.

73. Cedric Brown says:

Ben

Yes, I did note that Dennett wasn't using his thought experiment to invalidate the intentional stance. But it got me thinking. It does seem that the reduction of our mental life to physics requires the perspective of Dennett's Martian. So how should we view such thought experiments?

If we can never have the perspective of the Martian and if that reduction can never be done in practice, what meaning does it have for us?

Thanks for drawing my attention to Chalmers' use of a similar thought experiment. Even though Chalmers is using it to support the case that I agree with, I would make the same objection.

74. Ben Crandall says:

Cedric,

"I think that Dennett's reductionism really does depend on a God's-eye perspective. Only God could actually 'see' a human being as a collection of atoms. No current or future technology can or will be able to do that. So if someone says that a human being is a collection of atoms, you would have to ask, 'From whose perspective?' If you can't literally see a human being as a collection of atoms, then from your point of view a human being simply isn't a collection of atoms."

I think again that there was a misunderstanding of Dennett's point, but setting that aside I think Dennett would agree with you (in fact that is part of Dennett's well-known philosophical position regarding the different "stances" we take).

We don't need anything as complicated as people to make the same point. Why assume my car is physical? When I drive my car I don't have to know anything about the physics of how the gas pedal causes my car to accelerate (in fact, no human physicist could ever completely understand an everyday automobile from a purely physical stance of quantum fields, bosons and fermions, or even atoms). Instead we treat cars from the design stance: we know how cars are generally designed (the brake pedal decelerates, the gas pedal accelerates, the steering wheel turns), with nothing depending on the physics. In fact, knowledge of the physics would do very little to help us drive a car and at times would surely even hinder it.

However, this is all an epistemic claim, not an ontological one, so I am not sure what your point is. Even if only a God would be capable of knowing the positions of all the particles in the universe (or the wave function of the universe, however you want to think of it), why do you think that would undermine or be problematic even for an eliminativist physicalist (unlike Dennett's mild reductionism)?

"So perhaps reductionists like Dennett are actually doing theology rather than science."

Why is it theology?

75. Ben Crandall says:

The last comment was written before I saw your recent comment, though I think it still applies. I think the confusion is between humans' epistemic limitations and ontology.

76. Ben Crandall says:

Cedric,

I think your criticism only works if one accepts a very extreme verificationist empiricism (like a van Fraassen-style constructive empiricism).

It is very likely that no one in my lifetime will travel to the center of Jupiter (in fact, likely no one ever will). Does that mean it is meaningless to talk about Jupiter's core? To develop beliefs about the core? Just because we are epistemically limited and will never know all the physical facts, why would that imply anything about those physical facts?

77. Cedric Brown says:

Ben

I would say that a car doesn't pose the same challenge as consciousness does. There doesn't seem to be anything about the behaviour of a car that makes me doubt whether a scientific explanation is possible. On the other hand, I do doubt whether consciousness can be explained scientifically. The response to that may be that if I had God's perspective I would realise that my doubts are misplaced. However, I don't find that entirely satisfying.

Also, I don't think that ontology and epistemology can always be neatly separated. I think there can be a blurring of the lines between the two, as there seems to be in the debate about hidden variables in quantum physics.

78. Ben Crandall says:

Cedric,

I agree that they can't be neatly separated, but I am not sure I see an argument there, other than that a car doesn't seem as mysterious to you (though it suffers from the same epistemological opacity) as a mind does... Though you accept that from a God's-eye view there needn't be a problem explaining the mental with the physical... In which case you seem to be in agreement? Or at the very least are not convinced Dennett's perspective is wrong?

79. Cedric Brown says:

Ben

I certainly admit the possibility that the mental can be explained in terms of the physical. And I wouldn't bother looking for proof to the contrary. But I still think that the mental probably isn't reducible to the physical.

80. Ben Crandall says:

Cedric,

Wasn't counting on changing anybody's mind! I think Dennett is largely correct on the matter, and as he and others have pointed out, one can mean many different things by the idea of reducing something to the physical; in a certain sense the mental isn't reducible to the physical (remember, on Dennett's view a physical description of reality will be missing mental facts about reality, like beliefs and desires, that really exist, so in that sense the mental isn't reducible to the physical). But as you can tell, I am not persuaded by qualia arguments. I have enjoyed the discussion, though.

81. Aron Wall says:

Cedric,
I don't think there's anything wrong with making a thought experiment that presumes a level of power or knowledge which no physical creature could actually have. Such counterfactuals can often illustrate important philosophical points which continue to be true even though the thought experiment used to reveal them is actually impossible. Certainly there is a long tradition in theoretical physics of using "impossible" thought experiments.

And yes, if the Martians in Dennett's thought experiment don't realize that human beings are conscious, then in that respect they are not like God, since God is omniscient. (This actually raises an interesting question: if Chalmers is right that no external observations could ever prove that a person is conscious, how is God aware that we are conscious? In my own theological speculations, this is possible only because God's knowledge is nonrepresentational.)

Ben,
That interview with Susan Blackmore is from Conversations on Consciousness, isn't it? That's a wonderful book! Anyone interested in learning about the full range of views people have should read it.

I am not sure my (or Chalmers') position on Consciousness makes me a reductionist even in Dennett's "bland" sense, although I suppose I must be a reductionist by some sufficiently narrow definition!

I agree however, that Chalmers' position is way more modest than many people seem to think. I wonder how many more people would accept it if they realized that, rather than assimilating it.

I take Dennett to be a non-reductive physicalist, though that label is admittedly vague. Some of Dennett's views are relevant here: I think he sees many mental properties as realized in a physical medium but as importantly formally independent of it. That is not actually all too different from an Aristotelian view of form and matter (he divides things into illata and abstracta, although as a structural realist I am skeptical of this distinction as he formulates it). He argues that abstracta (beliefs, desires, conscious states) are just as much a part of reality as illata (bosons and fermions).

While you have obviously read a lot more Dennett than I have, I'm not sure I read his Martian thought experiment as really implying non-reductionism. In my view, "reductionism" would be a claim that all properties of e.g. human beings (say, their beliefs and desires) can in principle be deduced from physical properties at the atomic level. But Dennett's story doesn't say that the Martians couldn't have deduced the existence of beliefs and desires (had they been able to adopt the intentional stance), rather he merely states that they didn't deduce them. That seems to me like an important distinction. That is, something could be logically implied by a set of facts, but a given person could be too stupid or unaware to make the reduction.

82. Aron Wall says:

g,
I am sorry if you think that I'm not making any arguments, just assertions. From my perspective it seems to be the other way around! But I am sure that is just because we are coming at this problem from such different assumptions that it is hard to get on the same wavelength.

Of course I understand that nobody claims to currently have a complete explanation of Consciousness in physical terms, just a conviction that it is in principle possible. Nevertheless, at the risk of invoking Meno's paradox, it seems to me you must at least have a sketch of a physically-implementable definition of Consciousness right now, or you wouldn't be able to know that the future-hypothetical scientists were talking (in a more refined way) about the same thing. For example, if you were in time-travel communication with them, and they said that Consciousness was a "zesty fruit with eight segments" you would know that there was a mistranslation somewhere, and would not accept that as being the same thing we are discussing! So what are the criteria that something would need to satisfy to "count" as an identification of Consciousness?

I don't understand why all of that fancy science fiction technology is needed if, at the end of the day, you think they would just come up with a new definition that makes them right by definition. I already have a definition of Consciousness, namely "my own awareness of what it feels like to be me". Yes, that involves gesturing towards something I experience, but the same is true for many definitions in the dictionary. It seems to me to point to something whose existence is vivid and obvious, does it not seem that way to you?

If you don't see the relevance of my p-zombie remark about your thought experiment, then I am frankly puzzled. How can a set of physical facts possibly fully explain Consciousness, if it is still an open question (assuming all those facts) whether Consciousness exists at all? (Of course we would also like a theory of Consciousness to explain why it has the specific properties it does, but predicting its existence is a bare minimum...)

"... it is not unreasonable to ask them to complete the derivation by deductively showing that consciousness would in fact follow from those facts, or at least motivate that such a deductive proof should exist if they were smart enough to make it". Really? This seems to me entirely unreasonable. You start out with their (hypothetical) claim to have an explanation, but then you say that they should be able to "complete the derivation". Is this "derivation" in the mathematical sense, meaning logical proof?

Yes, I mean logical proof! But only after stipulating anything the scientists want me to hypothesize about the physical facts. I suppose you think this is unreasonable because, generally speaking, no scientific theory is ever known to be true with deductive certainty, since our knowledge of physical facts is always incomplete. But supposing I go ahead and grant you that all the relevant physical laws and facts turn out favorably, then I think it should be a matter of logical deduction whether Consciousness does or does not follow!

Compare to this dialogue:

A: QCD explains why the proton has a mass of 938 MeV.
B: Interesting. Did you do a calculation to get that number?
A: No, that requires an advanced supercomputer, and I don't have one. And even the best current computers can't get 3 figures accuracy.
B: But hypothetically, if you had lots of time on an advanced supercomputer from the future?
A: Then I would be able to explain the proton mass.
B: Let me see if you really mean that. Do you claim that, given QCD, the mass of the proton is logically determined? For example, would it be logically possible for the proton to be massless?
A: Logical certainty seems very strong. What if some other physical force or effect is relevant besides QCD physics?
B: Let us simply assume that only QCD physics is relevant, and that protons exist, and that you have infinite time on a supercomputer.
A: Well, it depends on fundamental parameters like quark masses.
B: Let's hold those fixed. Could you then logically determine the proton mass, using the supercomputer?
A1 [correct physics answer]: In that case mathematically, only one value of the proton mass is possible, and in principle it could be calculated.
A2 [puzzling problematic answer]: No, I think it wouldn't follow logically, but I think QCD would still explain why it had the mass it does.

(Maybe if the theory predicted a range of possible proton masses which was sharply peaked around 938 MeV, you could still say that it explained it even if it didn't logically determine it. But that doesn't seem parallel to the claims being made about Consciousness.)
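[The dialogue's point has a real but computationally trivial analogue: once a theory's laws and parameters are fixed, the predicted quantity is logically determined. Hydrogen's ground-state energy from the Bohr formula, E_1 = -(1/2) m_e c^2 alpha^2, is simple enough to compute without a supercomputer; the constants below are the standard values.]

```python
# Once the theory (the Bohr formula) and its parameters (alpha and the
# electron rest energy) are fixed, exactly one ground-state energy for
# hydrogen is logically possible -- recomputing can only give the same number.

ALPHA = 1 / 137.035999      # fine-structure constant (dimensionless)
ME_C2_EV = 510998.95        # electron rest energy m_e c^2, in eV

def ground_state_energy_ev():
    """Hydrogen ground-state energy E_1 = -(1/2) m_e c^2 alpha^2."""
    return -0.5 * ME_C2_EV * ALPHA ** 2

print(round(ground_state_energy_ev(), 2))   # -13.61 (the Rydberg energy)
```

[This is the A1 situation: given the theory and fixed parameters, the value follows deductively; the calculation merely makes explicit what was already logically determined.]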

You say that the moral of your story would be that "consciousness is a physical process". I'm not sure what you mean by "physical process". Maybe I could even agree with that phrasing, for some definitions, if it didn't imply that the existence of Consciousness could be deduced from the physical facts about the brain alone.

(Your comments about p-zombies would be relevant if you were gesturing towards the possibility that the experimenters correctly identified their subjects' "conscious experiences" as arising out of physics -- but that if they had only thought to do their experiments on you, non-p-zombie that you are, they would have found themselves baffled.)

That wasn't what I meant at all! Assuming the p-zombie thought experiment makes sense, a p-zombie would be indistinguishable from an ordinary person from the outside, so there would be no possibility of them being baffled more by one experimental subject than another.

The scientists find ways to make the processing in their model very efficient and are able to build a simulation with many simulated people in it. These simulated people are purely (simulated-)physical, for sure: they exist inside the scientists' computer and we can watch every step of their execution. And one of them is very ingenious and comes up with much the same arguments as you have made above. "I just know I am conscious", he says. "And this argument shows that it is conceptually impossible for my consciousness to be explained in purely physical terms." He would seem to be wrong about this. Do you have a non-question-begging explanation for why we should find this argument better when you make it than when he does?

I think it is quite plausible that the simulated person would in fact be conscious. But I don't see any way of telling from the outside. (On the other hand, a character in a novel could easily make the same argument, and in that case I'm pretty sure they wouldn't be conscious.)

In fact, you shouldn't find the argument any better when I make it than when he does. When I say "I know for sure I am conscious" you have no way of checking that this statement is true about me. Rather, it is an invitation to make a parallel argument in your own case. I can be 100% sure that I am conscious, while you can be 100% sure that you are conscious (assuming you are).

Once you realize that you are conscious, an argument from analogy suggests that probably other human bodies are also associated with Consciousness. It would be very odd if you were different from all of the rest. But that is merely a strong probabilistic argument, whereas in your own case you can be absolutely certain.
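[The "strong probabilistic argument" can be sketched in Bayesian terms. The numbers below are purely illustrative assumptions, not measurements: a prior for "other human bodies are conscious" and likelihoods for observing consciousness-reporting behaviour under each hypothesis, chosen only to show how such evidence raises the probability without ever reaching the certainty available in one's own case.]

```python
# Toy Bayesian rendering of the argument from analogy. All numbers are
# hypothetical: the point is only the shape of the inference, not its inputs.

def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """P(H | E) via Bayes' theorem for a binary hypothesis H."""
    numerator = p_evidence_if_true * prior
    marginal = numerator + p_evidence_if_false * (1 - prior)
    return numerator / marginal

# Even from a neutral prior of 0.5, evidence far more expected if others are
# conscious (0.99) than if they are not (0.05) yields high confidence.
p = posterior(prior=0.5, p_evidence_if_true=0.99, p_evidence_if_false=0.05)
print(round(p, 3))   # 0.952
```

[However the illustrative numbers are chosen, the posterior stays strictly below 1 unless the evidence is impossible under the rival hypothesis, which matches the claim that the argument is strong but merely probabilistic.]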

83. Mactoul says:

Aron,
Affirmation that other people are conscious is prior to, and indeed a prerequisite for, any sane argument.
The same goes for the affirmation of the external world. These affirmations are not probabilistic, all the more so since the notion of probability itself depends upon prior acceptance of an external world and external minds.

84. Aron Wall says:

Mactoul,
I agree that no sane person denies the existence of other people's consciousness, but that does not imply that we know it with absolute certainty. Nor does it imply that it is unreasonable to make clear some explicit arguments for it.

How does probability theory presuppose the existence of any other minds besides one's own?

85. Aron Wall says:

g,
Oh, and the thing about Functionalism and Causality wasn't intended as a "gotcha" or meaningless pedantry; it's something I've genuinely been thinking about recently.

I don't think it's at all obvious that "nothing changes" if you don't believe that causality is metaphysically real. For example, it might make a difference to whether you regard a chalk drawing (which is extended in space rather than time) as potentially conscious.

In our previous conversation you said that you would be "very reluctant to wipe [a] Life configuration from the computer's memory" if it had "something that in important ways closely resembles consciousness". So it seems that it would make an ethical difference to you whether consciousness can reside in a chalk drawing or not. (Unlike the question of whether the sun goes around the earth or vice versa, which presumably wouldn't make any difference to whether you would be willing to annihilate the sun.)
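[For concreteness: the "pattern of causal relationships" in a Life configuration is exhausted by a single update rule. A minimal sketch (assuming the standard Conway rules) shows that the successive generations it produces could just as well be laid out side by side in chalk as computed one after another in time.]

```python
# One step of Conway's Game of Life. The entire "causal" structure of a Life
# world is this function; whether its outputs unfold in time or sit in a
# static spatial sequence is exactly the distinction at issue in the text.
from collections import Counter

def life_step(live):
    """Advance one generation; `live` is a set of (x, y) live-cell coordinates."""
    # count live neighbours of every cell adjacent to some live cell
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # birth on exactly 3 neighbours; survival on 2 or 3
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

blinker = {(0, 0), (1, 0), (2, 0)}     # horizontal bar; oscillates with period 2
gen1 = life_step(blinker)
print(sorted(gen1))                    # vertical bar: [(1, -1), (1, 0), (1, 1)]
print(life_step(gen1) == blinker)      # True
```

[Nothing in the rule itself says whether generation 1 "causes" generation 2 or merely sits next to it in a static mathematical sequence, which is why the metaphysical status of causality matters here.]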

More generally, I think the intuition that what's really important for Consciousness is a certain pattern of causal relationships gets a lot weaker if you don't really believe in Causality. Saying that a system behaves "as if" it were causal may be strong enough to imply that it behaves "as if" it were conscious, but I don't think that a metaphysically objective fact about the universe can be grounded in a sort of legal fiction, unless it is itself equally fictitious.

I am assuming in this argument that consciousness isn't fictitious: that it is an objective fact whether a system is or is not conscious, rather than being an arbitrary human convention. Maybe you don't agree. (I know you have argued that there may be grey areas or ambiguities in the proper extension of the concept, but I'm not sure that is really relevant, as long as some systems are clearly on one side or the other, and this refers to some objectively true feature of the universe. For example, if we say that there is a lot of red paint on a wall, it may be an ambiguous convention exactly how much paint it takes to count as "a lot", and it may also be ambiguous where the boundary lies between "red paint" and "orange paint", but the existence of the paint itself is still an objective physical fact that can't be defined away. It seems to me that there are similarly objective facts about Consciousness.)

86. Mactoul says:

Aron,
Sanity does not require absolute certainty, which is a mirage whose pursuit leads to insanity.

"How does probability theory presuppose the existence of any other minds besides one's own?"
Well, typically one learns at least some probability theory from a book. So the existence of a book is presupposed. And books are written by somebody other than ourselves, which shows the existence of other minds.

Again, an analysis of the fundamental concepts used in probability theory (things like likelihood, bets, the very meaning of the number assigned to a probability) shows that these presuppose the existence of external objects and minds.

87. Mactoul says:

There was speculation earlier in the thread that
"Scientists even do not know how far it goes down in the universe, starting from human beings. Cats and dogs do have some consciousness for sure. Are bacteria, viruses conscious, are plants, rocks and fundamental particles conscious in some sense? Probably quantum mechanics has something to do with it".

a) Scientists do not have any particular expertise in deciding questions of consciousness, since consciousness is not a matter of science but of primary human intuition.
b) To ascribe consciousness to elementary particles is a category mistake in a way that ascribing it to rocks or sand particles is not. An elementary particle such as an electron or proton is an entity posited in physics whose properties are entirely exhausted by its formal description, e.g. mass, charge, quantum numbers, etc. It cannot have any other property.
A thing like a rock or a sand particle has existence outside physics--it is not a formal object. Certainly a rock is not conscious, but to ascribe consciousness to it is not a mistake in the same way that ascribing it to a proton is.
c) Thus, formal objects cannot be conscious. No computer program, data in computer memory, neural network, or computer drawing can be conscious. Their properties are exhausted by the formalization.
d) Consciousness is a mystery, quantum mechanics is a mystery, so they must be related. This is hardly a valid argument.

88. g says:

Aron, it's not that I don't think you're making any arguments; that's true only of a small fraction of what you're saying, namely where you make the transition from "I am aware of having experiences" to something like "I am aware of having experiences, and of that not merely being a matter of what the physical world does". I'm not completely certain that you are in fact doing that, but it seems like it at some points.

Of course some consistency with current ideas about what consciousness is would be necessary for us to call some future notion "consciousness"; if I've given a contrary impression then I've probably expressed myself badly. But I don't have a clearly delineated set of criteria to offer, and I don't see why anyone should be expected to.

My talk of (as you put it) "a new definition that makes them right by definition" is there because that's pretty much exactly what you demanded. I am still not sure why you consider that a reasonable thing to demand, though. The science-fictional technology is there to provide the sort of evidence that (so it seems to me) would make it downright perverse to deny the physicality of consciousness. (Note: I am aware that that phrase is ambiguous; at least some of the time what you are denying appears to be only the rigorously, watertightly provable physicality of consciousness, or the physicality-by-definition of consciousness, and I just don't see why I should be bothered by the possibility that consciousness might not be rigorously provable to be physical, or might not be physical by definition. The questions that seem more interesting to me are ones like "Is our consciousness dependent on any non-physical phenomena?" and "What would it take for us to be willing to call some entity quite unlike ourselves conscious?".)

Perhaps it will help if I elaborate the story a little to sketch what it might say about consciousness itself. I must stress, though, that in doing so I am not at all claiming that the account of consciousness I sketch my fictional scientists developing will actually turn out to be anywhere near right; just that it has the right sort of "shape".

So. These scientists interview and examine and simulate a large number of people doing many things. They find, first of all, that describing something as consciously experienced appears to be more or less coextensive with having (1) paid attention to it and (2) remembered it, at least briefly, as part of one's internal narrative about the world. (Note 1: here "narrative" should not be taken to imply words, only some kind of coherent temporal sequence of events and experiences. Note 2: you might think that "paid attention to" isn't much of a reduction of "conscious" and involves many of the same difficulties. This isn't yet meant to be a reduction of anything; just some observations.)

And in their experiments and simulations, they find a set of goings-on in the brain that (a) correspond roughly, in ways I'll sketch in a moment, to criteria 1 and 2 in the previous paragraph, and (b) are always associated with experiences people credibly describe as being or having been conscious, and never with ones they credibly deny being conscious of. (Sometimes there is other evidence of consciousness but the subjects deny having consciously experienced the thing in question; the scientists' theories and experiments divide these into "conscious" and "not conscious" in ways that seem plausible. Sometimes consciousness admits of degrees, and these correspond e.g. to some of the goings-on going on and some not.)

What sort of goings-on? Well, for instance, it turns out that there
are a bunch of brain structures that record time-sequenced information
about perceptions and the like. They are found to be active when
experimental subjects are, as evidenced by later questioning, laying
down memories of events. Perceptions described as conscious consistently
occur together with activation of these structures, and a flow of
information from the bits of the brain that register these perceptions
to those brain structures. Later recall produces closely related
patterns of brain activity.

And it turns out that when experimental subjects are asked (or induced
by other means) to pay attention to particular things, there are a bunch
of brain structures -- overlapping with the ones in the previous paragraph --
that are consistently involved. Paying closer attention to a thing
is found to correspond to stronger and "richer" activation of these
bits of brain. Activation of these bits of brain provokes all sorts
of things we would expect to be associated with "paying attention";
language-related subsystems begin to operate, for instance, preparing
to describe the thing attended to; information about the perception
is passed to the "time-sequenced memory storage hardware" discussed
in the previous paragraph; it also produces more memory lookups,
raising activity in neural circuits associated with other things
related to what's being perceived; it sets in motion the "predict
possible consequences" machinery that's a large part of what the
brain is for in evolutionary terms; and doubtless many other things
I haven't space for here.

There are some other configurations of neurons associated in many
experiments with subjects' notion of *themselves*, and these too
are generally (though not universally) active when "conscious
experience" (as judged by other criteria) is going on.

After looking (at great length and of course in vastly more detail
than anything I've said here) at these and other related phenomena,
our scientists say the following things about consciousness. (1) In
humans, a perception (there would of course need to be further discussion
of the other things one can be conscious of, such as memories and
decisions, but this is far too long already) is conscious exactly
when the following classes of neural events happen (insert lengthy
description here); when some do but not all, or when some of them
happen only imperfectly, this produces states we classify as only
partially conscious. (2) In high-level handwavy terms, what this
means is that consciousness is the name we give to perceptions (etc.)
that we at least begin to store in memory, that we dedicate more
of our brains to analysing and seeking associations with, that
we are prepared to turn into words, and so forth. (3) This is all
analysis of what happens in humans, but it suggests that we should
consider perceptions "conscious" in non-human agents when they
are more liable to describe them in language, to recall them
later, to subject them to analysis and association-lookup, etc.
Non-human agents with "architecture" too different from ours --
AIs, aliens, archangels, etc. -- might make such definitions
impossible to apply; for them, there might simply be no fact
of the matter as to whether a given thing is perceived "consciously"
or not.

If all this came to pass, with enough detail and convincing enough
experimental evidence, then I think it would be unreasonable to
claim that there's some ineffable more to our consciousness;
just as unreasonable as to claim that there's some ineffable
wateriness about water that isn't captured by the
scientific definition in terms of hydrogen and oxygen and all that.
We might feel inclined to think that there could be cases where
we were truly conscious of something even though the scientists'
instruments said otherwise, or vice versa; but as they turned out
to be right again and again and again, this would get less and
less credible.

So, in this scenario, what would we say about logical proof?
It would be a matter of empirical observation that every person
not suspected with good reason of trying to mislead the scientists
describes perceptions as conscious just in so far as they are
associated with the activation of neural structures C1 to C538
inclusive. It would be logically demonstrable (though perhaps
only with humanly impossible effort) that in human brains, activation
of these neural structures exactly corresponds to "paying attention"
and "recording time-sequenced memory" and whatever other high-level
notions our scientists identify as being the essence of consciousness --
in so far as those notions are themselves given precise definitions.
Human brains differ; that definition in terms of structures C1..C538
might want tweaking for each individual brain to get a perfect match
to its owner's notion of what is and isn't consciously perceived,
and the scientists' methods would in principle be able to identify
just what tweaks were needed. Anyone would be able, if they so chose,
to adopt different definitions, but the cost of doing so would be
that their definitions wouldn't match quite so well with actual
experiences of consciousness. This is all more complicated than
the definition of "water", but then brains are more complicated
and more diverse than water molecules^[citation needed].

Finally, a few brief words on functionalism and causality and all that.
I agree that there is an intuition that "what's really important" is ...
something at least kinda like causality, and that a process
resembling consciousness but proceeding through space rather than time
"feels" less like a candidate for (e.g.) ethical concern. I don't see
any good reason to think that there is a Metaphysically Objective
Fact Of The Matter about whether such exotic things "are conscious"
or not, and I think that's an entirely separate question from (e.g.)
whether you are really having the experiences you say you are, and
evidence of infallible metaphysical insight into the former will
lead only to confusion. (I think, in fact, that objectivity comes
in degrees, and I suggest that even your experience of being
consciously aware of a thing is just ever so slightly corrigible.
Consider the experience of noticing a sharp pain in your left
knee. Now consider the experience of remembering noticing
a sharp pain in your left knee a time t ago. I claim that
when, say, t = a week, it's eminently possible to have the
memory without actually having had the conscious experience; that
when t = 10 milliseconds, having the memory is having
the conscious experience; and (with less confidence) that one can
interpolate smoothly between the two, which to me suggests that,
perhaps for t around half a second, ordinary uncertainty about
memory blurs into uncertainty about conscious experience.)

Be that as it may, when I said "at least kinda like causality" above,
that's what I meant. Although as you'll have gathered I'm a bit skeptical
about the intuition we're discussing, I think what it says isn't
"what matters is the same pattern of causal relationships" but
"what matters is the same pattern of relationships of whatever sort
we regularly find in the world between events linked like these",
where "these" is shorthand for a whole lot of paradigmatic
event-pairs of the sort we typically describe as causally linked.
That probably sounds awfully pedantic, and I'm sorry, but the point
is that I don't think our intuitions have the sort of logical detail
in them that would justify saying that they are specifically about
causality as distinct from whatever subtler web of relationships
plays the role in the actual world that we sloppily and carelessly
ascribe to causality. (Just as we have intuitions about physical
goings-on that we might describe in terms of things "falling down"
under the "force of gravity", but that doesn't give us the slightest
justification for rejecting general relativity on the grounds that
its picture of the world violates our intuition by not singling out
a special direction as "down" and not exactly making gravity a
"force". GR is just a more accurate description of the things our
intuitions are about and that we sloppily describe in other
language. So, I suggest, with causes.)

89. Ben Crandall says:

Aron,

“Basically, Dennett is unwilling to admit into the realm of scientific theory anything that is not obtained by 3rd person analysis. If the experimental subject says ‘I experience an ache in my arm’, then he would claim that yes you should take that seriously as data that he probably believes he has such a sensation, but that once you have gotten a neurological theory which explains why he said those words, you have done everything you need to do, and there is nothing left to explain. (Whereas Chalmers would say that, since we also all have personal, 1st person experience of aches, we also have to explain our own conscious experiences of them.)”

I think you accept this counterintuitive consequence, but just to reiterate: if science can explain why you say things (including why you claim to have awareness, qualia, conscious experience), then if qualia are an additional non-natural property, we are never caused to say we have qualia because we have them. No claim to conscious experience, etc. would happen as a result of conscious experience. That seems a little hard to believe.

“He believes it is possible that we are grossly mistaken about e.g. the existence of qualia etc.”

To be clear though qualia is a specialized philosophical term, and he denies they exist only in the sense that conscious experiences don’t have the special properties that are claimed to be problematic…. Not that experiences don’t exist.

“Now, I admit that there are neurological experiments which show that human beings may have false beliefs about the nature of their own experiences. For example, a naive person might think that they simultaneously experience everything in their visual field "at once", whereas in fact the eye flits back and forth between different objects and as a result the objects we aren't paying attention to are much more indistinct and may even be unnoticed altogether. [...] The same is not true of e.g. the experience of the qualia of red, which seems to persist no matter how many neurology books I read.)”

I am not sure precisely what you mean… It seems we clearly have information that shows we are deeply mistaken about color perception, namely the Cartesian theater intuition. For example, it seems like we see color on “things out there,” however we know that the color is mediated by our eyes. When we think about that, it often provokes the Cartesian theater intuition that the red is coming through our eyes and being projected in the “theater of the mind.”

But this also doesn’t make sense because the literal “red” wavelengths of light never make it past our eyes, there is nothing “red” in our brain (and it is dark in there anyways). We know that the light waves are converted into neural spike trains (but that red shape doesn’t LOOK like it's made of neural spike trains!). I think there are many thought experiments even (forget about science even) that can lead one to realize they must be mistaken about the feeling that there is a colorful surround sound movie in their head.

“So I hope this makes it clear that I don't believe in the "Cartesian theatre" or "homunculus fallacy" (as if there were a self apart from our processing of experience) yet I still think that there is something important Dennett is missing.”

It seems to me the thing that seems mysterious is what Chalmers’ describes as the “color movie” and “surround sound” that feels like is going on in our heads. I admit that we have this feeling of such a movie running in our heads…. Now if you say that this isn’t what you are referring to, then I am not sure what you are referring to. The Cartesian theater seems to be the only aspect of experience that is intuitively problematic to me.

“Ultimately the question here is just how much of our 1st person experience of consciousness we are prepared to strip away as a result of 3rd person theorizing. Dennett's view is that 1st person introspection shouldn't really count for anything, but I think this is clearly wrong as a matter of first principle. Our 3rd person experience of other people's brains, and their reports, is itself part of our own 1st person experiences, therefore we could not even have 3rd person data without first assuming that the 1st person data exists and is normally reliable. We would have no reason to even believe in other people if we did not experience them in our own consciousness. Data must be experienced by somebody to be data at all.”

But Dennett doesn't dispute that we all have our own experiences. So the above doesn't seem to me to accurately reflect Dennett’s view (in fact it seems to contradict some of the quotes I provided).

“When we study an external object (say, a rock) we can study it though perception, whereby it appears through sensation in our own minds as an object of conscious experience (the "rock" in our minds). We have no direct experience of the rock's own "perspective", if it even has one. But when we turn our attention to study ourselves, there are now two avenues to knowledge, we can either introspect our own minds or observe ourselves "externally" through perception. In this case, and this case alone, we have additional information about the object being studied, apart from its causal effects on the outside world.”

But I, and Dennett, would argue that there aren't these two “worlds.” Certainly we have a certain amount of privileged access to our beliefs and thoughts and feelings, but it isn't unique. For example, have you ever not realized you were agitated, angry, etc., only to have your emotions pointed out to you by someone else, and only then noticed, “oh yeah, I am angry”? Or seen a child fall down, only to look around to see if anyone is reacting with a cringe before deciding whether to cry and respond with pain? Or what about a treatment used for phantom pains? One solution is to place a mirror so that, from your visual field, it appears you have two arms due to the reflection. The reflection tricks your brain (visually) so that seeing it alleviates much of the discomfort… discomfort that feels like it is “in” the arm that doesn't exist. Or people who suffer from overactivity in their mirror neurons, who upon seeing someone hit in the head will feel a strong pain in their own head, for instance.

Or have you read Dennett’s examples in “Quining Qualia?”

“If we observe ourselves externally, we may be confused about how colors like "red" and "blue" can exist in a lump of grey matter sitting in our skulls. But I think that neither the external nor the internal mode of observation should be confused with our self as it exists in itself (as God sees it, I would say), rather both of them are incomplete and leave things out, and so we get our best theory of ourselves by synthesizing data from both perspectives. That is how I see it.”

But as Dennett argues, surely when we see blue nothing in our brain (or our mind?) “turns blue.” A film reel contains frames that have the color being represented: if someone in the movie is wearing a red coat, the frames of the movie will be tinted red. By contrast, a movie stored on a DVD still has the information about the red coat but (like our brains) doesn't need to turn red in order to represent the color or the coat. It seems to me that to think something more happens, that the mind needs to fill in the color with some kind of “mental paint,” just is the idea of the Cartesian theater: that the light waves need to be converted into neural impulses, and then converted again into the substance of “experience,” in order to explain the vivid colors and sounds we seem to experience “in our heads.”

90. TY says:

I see the discussion swinging between Chalmers' and Dennett's views, neither of whom really knows what consciousness is, though each makes certain a priori claims. From what I gather in reading the comments (sorry to simplify complex things -- which is my nature), Chalmers claims there is such a thing as consciousness, while Dennett claims it doesn't exist. Both of them can't be right, of course, but if I had to make a bet, Chalmers' view seems to hold more promise based on various personal accounts and experiences. Dennett's position on consciousness, anchored to a naturalistic belief, means either denying the experiences or explaining them away with naturalistic arguments. This seems more like belief than science.

91. Zhenghu Maolong says:

What's your position on Penrose and Hameroff's Orch-OR theory? Because I think this is the first hypothesis about consciousness that could be tested (and corroborated), and maybe it makes the case for a dualism of some sort.

92. g says:

TY, Dennett doesn't claim there is no such thing as consciousness.

93. TY says:

g.
Would it be fair to say, then, that Dennett denies the "subjective" or "non-physical" notions of consciousness, like the qualia and so on, discussed in the comments? I'm using inverted commas because I'm struggling with his claim that consciousness is an illusion. I'm here more to understand than to challenge.

Thanks.

94. Ben Crandall says:

TY,

Dennett certainly rejects that we need to appeal to anything outside of physics to explain how the atoms of our bodies get into the positions they do (including saying the words that we say, the gestures that we do, etc.).

Now it is tricky because he also doesn't think mental vocabulary can be reduced to physical explanations. He uses the language of "real patterns." When you see a pattern (he would argue) you are seeing something real in a broad sense. Patterns in nature help us predict things, understand the world, etc.

For example, he points to "centers of gravity." If you want a top-heavy vehicle to be more stable you can lower its center of gravity. But what is a center of gravity? It is not a particular atom in the vehicle, nor any physical object, but Dennett would argue that in an important sense centers of gravity are real. They help us explain things in the world, explain phenomena that we observe, etc.

He definitely doesn't believe that minds require any kind of mysterious or immaterial substances.

As for whether Dennett denies the subjective, it depends what you mean. Dennett doesn't deny that you have experiences, and that for most practical purposes you have special access to your own experiences that others don't. He doesn't believe this to be a fundamental and absolute distinction, however. He thinks the subjective can in the end be accounted for from a 3rd person scientific perspective.

As for the claim that "Consciousness is an illusion," are you talking about the TED talk called the Illusion of Consciousness?

Maybe this is an unfortunate title, because I don't think he is arguing that the existence of consciousness is an illusion (he insists in various places, including I believe in that video, that consciousness really exists). What he means to do is contrast consciousness as a "bag of tricks" (in other words, like life, explainable in terms of various physically instantiated processes and systems) with "magic" (something scientifically mysterious and inexplicable).

If you watch that video Dennett starts out talking about a book on Indian street magic written by a friend of his, and talking about people asking him if his book is about "real magic" (in other words miraculous conjuring that doesn't really exist) or is it about "fake magic" (the kind of tricks and magic that magicians really do and really exists).

In other words he would say the kind of physical consciousness made out of a collection of processes is REAL consciousness, but the kind of mysterious and magical consciousness that doesn't exist is not real consciousness.... And the belief that consciousness is magical in this sense is an illusion.

I like your questions and comments; hope that helps clarify.

95. TY says:

Ben,

I think I understand Dennett’s position: Consciousness can only mean the objective types that can be fully explained physiologically, or, for the subjective types, “can in the end be accounted for from a 3rd person scientific perspective.” Not surprising because it’s a very naturalistic view. But, see Aron’s November 29, 2016 @ 10.28 comment on the “objectivity” of third person analysis.

This brings me to the conversion of St. Paul. He testified he was struck down by a blinding light on the road to Damascus and he heard Jesus' voice. A third person, say a physician, would easily confirm that Paul had indeed lost his sight by performing a physical examination. But how would the said physician explain the voice? He would have to make certain assumptions about the reliability of the story, and his 3rd party analysis will in the end be partly coloured by 1st person experience.

I’m not saying that psychiatry is incapable of explaining the fantastical claims by people who say they hear voices (e.g. cases of schizophrenia and various mental disorders).

96. Joe says:

"The illusion which exalts us is dearer to us than ten thousand truths." Alexsandr Pushkin 1799 - 1837

97. Aron Wall says:

g,
I certainly don't claim that (A) "I am aware of having experiences and that not merely being a matter of what the physical world does" can be proven by direct intuition in the same way that (B) "I am aware of having experiences" can be. Thus (A) is not a premise of my argument, only (B) is. My argument goes more like this:

-------------------------
Fact 1. We have conscious experiences.
[For each person, this can be known to be certainly true in our own case, due to being (as you put it) an "unavoidable load-bearing element of my cognitive apparatus". But the fact that it is certain plays no role in my argument, unless someone tries to deny #1 due to it having implausible implications.]

Corollary 2. As a result of #1, the word "Consciousness" has a meaning (at least as applied to human beings), and this meaning can be known and understood prior to having any knowledge of the physical workings of the brain (as indeed, the concept predates neuroscience).
[Note that if this were false, then #1 would be a meaningless statement, so #2 is implicit in #1.]

Hypothesis 3. The contents of our conscious experience happen to be some function of what is physically going on in the brain, so that for every particular kind of conscious experience one can find a corresponding physical process in the brain, so that two people with relevantly similar neural processes will have correspondingly similar experiences.
[This is a concession to possible future scientific discoveries. #3 is plausible but it can probably never be known to be true with absolute certainty due to the limitations of experiment, and possibly also the limits of introspective reports, but this fact does not play any role in my argument! Let us simply assume that #3 is in fact the case.]

Claim 4. Nevertheless, even assuming the scenario described in #3 is the case, the physical activity occurring in the brain (even if specified with infinite precision) would not logically imply the existence of Consciousness when defined as in #2. I believe this Claim for the following set of increasingly general reasons:

a. None of the "stories" I have heard so far seem to allow me to make this logical deduction from the purely physically specified facts (assuming the stories refer only to 3rd person observational facts rather than directly talking about people's experiences).

b. It is hard for me to imagine how a new story would be relevantly different from the ones I can already imagine (assuming it is described with the usual set of physical concepts, and does not e.g. postulate conscious experiences as a fundamental attribute of reality).

c. Logical implication is a very strong sort of necessity, and it exists between two propositions P and Q only if there is an actual proof that P implies Q (modulo the caveat described in Footnote 2 in my main article), but nobody has ever given even a plausible sketch of what such a logical proof would look like, and

d. It is impossible for a set of premises to logically imply a conclusion unless the conclusion and the premises share some sort of terms in common. For example, in syllogistic reasoning, you cannot prove "all X are Y" from premises which do not implicitly or explicitly refer to the property Y. Thus, if a conclusion introduces a fundamentally new concept not found in the premises, then it cannot be logically derived from them. As a somewhat parallel example, it is impossible to deduce from the Peano Postulates of Arithmetic the existence of any physical object in the real world, because physical existence is a new kind of predication. If somebody said, "Mathematicians discover new truths every day, how do you know that they will not one day prove that elephants physically exist?" I would reply that this is an obvious category error, and that I can therefore know right now that no such proof will ever be possible.

(Note that, if #2 is correct, it would be the fallacy of equivocation to use in this logical proof any new definition of Consciousness, unless of course one can first prove that the two definitions are equivalent--but this threatens to be just as hard as the original problem! For example, it would be illegitimate to say "Maybe one day mathematicians will come up with a definition of elephants in terms of prime numbers or something, and then we will be able to mathematically prove that elephants physically exist.")
----------------------

Now what will your response to this be? Obviously that is for you to say, but here are some possibilities.

You don't deny #1, or you would be an Eliminativist.

I am worried that you are going to deny #2, but I don't see how we can coherently assert #1 without implicitly endorsing #2. We cannot accept a proposition as true without simultaneously asserting its meaningfulness in terms of our current set of concepts. Of course, as you say, there are situations in which we have a crude folk concept and future scientific knowledge refines it into something more precise, and in certain edge cases it may become meaningless to ask whether the crude folk concept should be applied. But I don't think that objection is relevant here because (i) normal awake human beings are not an "edge case" for possessing consciousness, and (ii) in non-edge cases, in order for the replacement to occur the scientist must first show that all observable aspects of the folk concept are in fact explained by the scientific concept. (Unless they are prepared to deny the reality of the folk concept altogether, but in the case of Consciousness, that would be Eliminativism.)

I think I've already explained why water is not a parallel case to Consciousness. We have no direct awareness of water, the way we have direct awareness of Consciousness. Water is defined merely as whatever-it-is-that-causes-certain-specific-effects-on-other-things. But Consciousness is not defined merely as whatever-it-is-that-causes-me-to-say-I-am-conscious, rather it is defined by our own direct awareness of its existence.

You might be tempted to say that #3 is all you meant by Consciousness being physical, and that you simply don't care whether #4 is true. But if so then we are not disagreeing with each other. You seem to be saying just that in this paragraph:

My talk of (as you put it) "a new definition that makes them right by definition" is there because that's pretty much exactly what you demanded. I am still not sure why you consider that a reasonable thing to demand, though. The science-fictional technology is there to provide the sort of evidence that (so it seems to me) would make it downright perverse to deny the physicality of consciousness. (Note: I am aware that that phrase is ambiguous; at least some of the time what you are denying appears to be only the rigorously, watertightly provable physicality of consciousness, or the physicality-by-definition of consciousness, and I just don't see why I should be bothered by the possibility that consciousness might not be rigorously provable to be physical, or might not be physical by definition. The questions that seem more interesting to me are ones like "Is our consciousness dependent on any non-physical phenomena?" and "What would it take for us to be willing to call some entity quite unlike ourselves conscious?".)

First let me reiterate that, if I am denying the "rigorously, watertightly provable physicality of consciousness", I do not simply mean that all experimental knowledge is incomplete and that, even with fantastically advanced technology, there could always be something going on in the brain that the researchers have missed. That is true, but it is not my point. My point is that even if they haven't missed anything physically going on in the brain, I still think the existence of consciousness doesn't follow from pure logic/definitions.

Secondly, I'm not sure what exactly is supposed to be "downright perverse" about the claim that the physicality of consciousness does not follow from pure logic and definitions. Do you mean that would be a perverse definition of "physicality"? Because that would be merely a terminological dispute, not a substantive disagreement. The reason why it is reasonable for me to demand a logical proof of Consciousness is because that is the exact question I was asking in my article: whether such a logical identity between physics and mind exists. I suppose it is up to you whether a "no" answer to that question would bother you or not. I would say it should "bother" you in precisely this sense: that if true it is a noteworthy metaphysical fact about the way the universe works, which might potentially have or require an explanation. (But that is beyond the scope of the current conversation.)

Perhaps you might think that the logical proof demanded in #4 is possible after all, but the problem is that I only see your various sci-fi stories as implying #3, not #4. That is because your stories only seem to involve uniformly observed correlations. For comparison, it is a scientific fact that all chickens come from eggs. Not only is it a uniform fact of experience, we have very good reasons from Biology to think that it is physically necessary (apart from miracles or some high-tech incubator, etc.). Nevertheless, it is not a logical truth that chickens always come from eggs, it is a nomological truth---one following from the Laws of Nature, which nevertheless logically could have been otherwise. Just as, when I let go of a stone, physically we know it will go down but logically it could have gone up without any contradiction.

So if you prefer, we could change the terminology of the debate as follows. I am conceding that human consciousness may well be physical in the sense of #3, but I am still interested in the question of why it follows from the physical facts. Does it follow logically, in the same sense that, if a Euclidean polygon has 5 sides, it must have 5 angles? Or does it merely follow "nomologically", in the sense that the chicken implies its egg, as a uniform Law of Nature which nevertheless was not logically inevitable? You are free to find that question "uninteresting" if you like, but that is irrelevant to whether or not it is true. I personally find the question interesting, and the fact that you are arguing with me about it suggests that you probably find it interesting as well.

98. Aron Wall says:

Zhenghu,
I think the Orch-Or Theory is crazy!

First of all, it combines weird minority positions about multiple subjects (quantum gravity, interpretation of quantum mechanics, uncomputability, consciousness, physical effects of platonistic concepts, etc.), making the combined hypothesis have extremely low prior probability. (It would be one thing if the different components of the positions naturally suggested each other, but the main connection between these viewpoints just seems to me to be that Penrose believes them all---the connection between several of these subjects seems to be largely based on wishful thinking, basically assuming that several mysterious subjects are connected in a specific way.)

There are indeed two unproven aspects of the hypothesis which are testable: (1) that there exist large scale quantum coherent superpositions in the brain based on neural microtubules that cause the brain to become a quantum computer, and (2) that a superposition involving sufficiently different distributions of energy will collapse according to a specific formula. Neither of these hypotheses seems very physically plausible to me. But they are testable in principle.

But, even if we assume that all of the testable aspects of the hypothesis were shown to be true, I don't think it would in any way be a solution to the "hard problem of consciousness". Why should a quantum computer be any more likely to have subjective awareness than a classical computer? It doesn't make any sense.

99. Aron Wall says:

Ben writes:

I think you accept this counterintuitive consequence, but just to reiterate, if science can explain why you say things (including why you claim to have awareness, qualia, conscious experience) then that would mean if qualia is an additional non-natural property, we never are caused to say we have qualia because we have it. No claim to conscious experience, etc. would happen as a result of conscious experience. That seems a little hard to believe.

I am prepared to bite this particular bullet, because it seems to me that (a) logical proof of consciousness from physical facts is logically impossible, whereas (b) the counterintuitive result you mention merely seems intuitively implausible, and (c) "Once you eliminate the [logically] impossible, whatever remains, no matter how improbable, must be the truth."

Nevertheless I do not think the situation is quite so bad as it may appear at first sight. I think there are a variety of metaphysical positions which might be adopted which make the apparent paradox less severe. For example:

1) If it is a law of nature, or perhaps a rule of metaphysics, that a physical system of a particular sort always has consciousness, then there would still be a pretty tight connection between, say, the experience of blue and me saying "I see blue". It just wouldn't be quite the same relation you thought held.

2) Somebody might adopt the position of "overdetermination", where a single event might have multiple coordinating causes. In that case while you could explain "I see blue" without reference to the experience of the qualia of the blue, it might still be true that seeing the qualia of blue caused you to say that you saw blue.

3) Suppose one adopts the viewpoint known as "property dualism", in which two kinds of properties (mental and physical) inhere in the same "substance" (say a human being). Then one might think that metaphysically speaking, causality is a relationship between substances, not a relationship between properties. (Although one can talk about the "causal properties" of a thing, this is just a way of speaking about what the thing itself can do.) In that case one would avoid epiphenomenalism, the idea that consciousness has no causal impact on the world, by saying that that-which-is-conscious is in fact identical to that-which-causally-acts. It is merely that this substantial identity could not be deduced from physics, biology, and logic alone, but also requires knowledge of the way metaphysics works (as suggested in 1).

Tentatively I favor some combination of (1) and something like (3). Even though I am not convinced the quasi-Aristotelian language of "substances" and "properties" is necessarily the best way to describe the world in general, it at least provides an "existence proof" that there are metaphysical views on which the situation appears to be a bit more natural than it otherwise would be. (As a general matter, I have made my peace with the idea that there would still be some open metaphysical questions even after we find the "Theory of Everything" in physics.)

As it happens, I have read "Quining Qualia" (and I have recently looked through it again since you mentioned it). While interesting, to be honest I did not find his main argument (distributed through several thought experiments, but I think it boiled down to one main idea) against qualia very convincing.

He pointed out some cases where it is difficult or impossible to know through introspection whether (i) our present qualia have been changed, or (ii) merely our past memories of qualia. But it seems easy enough for a defender of qualia to respond that in these situations one merely does not know which of the two inversions has occurred, not that there is no fact of the matter as to which occurred. Of course, that is because I have no direct access to my past experiences, but only experience them insofar as my memory reliably presents them to my present self.

I know that Dennett does not deny the existence of experiences or consciousness. But in "Quining Qualia" he does choose to use the language of saying that Qualia do not exist. It is true that he contemplates at the beginning of the essay the possibility of merely redefining the term to strip "qualia" of certain qualities which are attributed to them, but he explicitly rejects this possibility on the grounds that any salvaged notion would be so far away from the "pre-theoretical concept" that it wouldn't be worth calling by the same name. So I think Dennett is making a very strong claim here. (Incidentally, since as you say qualia is a "specialized philosophical term", I think it is somewhat odd for Dennett to refer to it as a "pre-theoretical concept". Obviously the experience of color is pre-theoretical, but the term "qualia" is definitely not, because outside of specialized contexts there is no need to distinguish clearly between e.g. Red_1: the propensity of a physical object to reflect a certain wavelength of light, Red_2: that wavelength of light, Red_3: whatever happens in our eyes and brain that encodes redness, e.g. "neural spike trains", and Red_4: the experience in our minds which we have when looking at red light. It is Red_4 which is qualia. These distinctions presuppose "indirect realism" and are therefore post-theoretical (considered as concepts, not as experiences). Pre-theoretical people have no clue about Red_3 and conflate the others together into one concept of "red".)

To me, it seems totally obvious---like so clearly and vividly the case that almost no philosophical argument could possibly convince me it is a delusion---that there is an experience of "what red looks like". That is how I would define the "quale of red". I would not include anything like "ineffability" or "immateriality" in my definition of qualia, which might be a conclusion but certainly should not be a premise. I can say that I have not seen anything which comes anywhere close to being an "effing" of qualia. But I would not assume for that reason that no analysis whatsoever of qualia is possible. There is some sense in which orange "looks like" a mix between red and yellow; so for all I know red is itself composed of multiple ingredients.

(Even if I were to be convinced by a reductionistic philosophy of mind, it would still seem to me (just as a contribution to the "easy problem of consciousness", which is still quite difficult) that each type of qualia must correspond to some fairly specific neural processes in the brain (which are perhaps damaged in cases of blindsight), and I believe that the distinction between the appearance of different sensations is mostly hard-wired at birth, with only a comparatively small part due to learned associations. Obviously learned associations (like the Eastern association of red with good fortune and white with death, or the Western tendency to think of black and red as "evil" colors) do exist, but only as a relatively superficial layer on top of that. Thus, even if I were a reductionist, I would not think that the main difference between the appearance of red and green is due to remembering past experiences of red or green objects.)

When I said I rejected the "Cartesian theater", I meant that I don't think of consciousness as being like a little man sitting inside of my brain sense-perceiving the things on a screen (with eyes and a brain of his own?)---which would seem to threaten an infinite regress. But that does not mean I reject the idea that our consciousness appears to contain a certain spread of colors and sounds etc. I don't think it is useful to refer to anything which contradicts one particular philosophy of mind as being "Cartesian theater". According to Wikipedia this is a "derisive term" coined by Dennett himself in order to critique others. Generally speaking it is better to allow proponents of different views to explain them in the way they think is best, without rounding their positions down to phrases used primarily for parodic critique. Does anyone claim to believe in the Cartesian theater? If not, I think the term may be best avoided in these discussions.

I am not sure precisely what you mean… It seems we clearly have information that shows we are deeply mistaken about color perception, namely the Cartesian theater intuition. For example it seems like we see color on “things out there,” however we know that the color is mediated by our eyes. When we think about that it often provokes the Cartesian theater intuition that the red is coming thru our eyes and being projected in the “theater of the mind.”

But this also doesn’t make sense because the literal “red” wavelengths of light never make it past our eyes, there is nothing “red” in our brain (and it is dark in there anyways). We know that the light waves are converted into neural spike trains (but that red shape doesn’t LOOK like it's made of neural spike trains!). I think there are many thought experiments even (forget about science even) that can lead one to realize they must be mistaken about the feeling that there is a colorful surround sound movie in their head.

I think these paragraphs are conflating two entirely different issues. As an indirect realist, of course I agree that Red_4 (as defined above) is not literally a property of (say) the red truck I see driving outside my hotel window. When I call the truck red I mean it is Red_1 and reflects Red_2, but Red_4 is what I actually perceive. No reasonable philosopher is going to say that Red_1 and Red_2 are literally found in the brain, and pointing this out doesn't even begin to refute the claim that Red_4 is a property of our minds (or brains). The status of Red_1 and Red_2 is irrelevant to the questions of whether Red_4 exists, and of whether it is equivalent in some sense to Red_3.

The argument that Red_4 exists is simply that it appears to exist. A deduction from the appearances can be an illusion, but the appearances themselves cannot be an illusion, because by definition an illusion is something that appears to be one way but actually is another. Illusion presupposes the existence of appearance; it refers to a mismatch between the appearance and reality. If it looks like something is red, then there might not really be Red_1 or Red_2 or even Red_3, but there is definitely Red_4. That is true essentially by definition.

You seem to be arguing that Red_1 and Red_2 are not equal to Red_3, therefore we are mistaken that Red_4 exists. Once one distinguishes these 4 different meanings of red, to me this seems like a non sequitur. But perhaps I have misunderstood your argument.

But as Dennett argues, surely when we see blue nothing in our brain (or our mind?) “turns blue.” A film reel contains frames that have the color being represented, if someone in the movie is wearing a red coat, the frames of the movie will be tinted red. By contrast a movie stored on a DVD still has the information about the red coat (but like our brains) doesn’t need to turn red in order to represent the color or coat. It seems to me to think something more happens, that the mind needs to fill in the color with some kind of “mental paint” just does seem to be the idea of the Cartesian theater. That the light waves need to be converted into neural impulses, and then converted again into the substance of “experience” in order to explain the vivid colors and sounds we seem to experience “in our heads.”

If you open up my brain and it looks grey, that means there is Grey_1 in my brain, Grey_3 in your brain, and Grey_4 in your mind. The fact that my brain is Grey_1 in no way contradicts the presence of Blue_3 or Blue_4 in my brain. The only way it seems to contradict, is if you commit the Cartesian theater fallacy and conflate your experience of Grey_4, with Grey_4 actually existing in my brain. Or else conflate my experience of Red_4, with Red_1 existing in my brain.

The question under dispute is whether Blue_3 = Blue_4, and if so for what definition of "=". As you yourself have said, Red_4 certainly doesn't look like a neural spike train (NST). But maybe that just means it isn't the same as what you perceive when you "look at" an NST. (In quotes because you can't see such a thing with your naked eye; instead you have to make an experiment and then construct a "model" to interpret the experiment, so your conception of it is even more indirect than in the case of the red truck!) This theoretical model of the NST is actually an event in the experimenter's brain/mind, not the NST as it exists in itself. At best a model can be isomorphic to the thing it represents, not identical to it. As the NST exists in itself, how do you know it does not have additional properties like Red_4? Thus, Red_3 and Red_4 may indeed be the same metaphysical entity, but in a way that will remain everlastingly mysterious to the outside observer, since you only have access to the external experience of my brain, whereas I am my brain.

In this context, the mistake of scientism is thinking that the object as it appears in the laboratory is identical to the object as it really is. But if you think about it, that mistake is actually an example of what you are calling the Cartesian theater illusion.

Certainly we have a certain amount of privileged access to our beliefs and thoughts and feelings, but it isn't unique. For example have you ever not realized you are agitated, angry, etc. only to have your emotions pointed out to you by someone else, and only then noticing “oh yeah, I am angry?”

This is not relevant because I am not denying the existence of unconscious, or barely conscious, mental events, nor that such events may be correlated with external behavior, nor that they may become conscious after we start looking for them.

Or seen a child fall down, only to look around and see if anyone is reacting with a cringe before deciding whether to cry and respond with pain?

Again, not relevant. The perception of pain may be sensitive to social cues; that does not mean it does not exist as an appearance. (Or in some cases it may just be the reaction to the pain, rather than the pain itself, which is social; it's hard to tell from the outside.)

Or what about a treatment used for phantom pains? One solution is to place a mirror so that from your visual field it appears you have two arms due to the reflection. The reflection tricks your brain (visually) so that seeing it alleviates much of the discomfort… Discomfort that feels like it is “in” the arm that doesn’t exist.

So the treatment caused Pain_4 to disappear, how is that inconsistent with the existence of Pain_4 beforehand? It just means that the presence or absence of Pain_4 can be affected by subtle psychological cues. Not that Pain_4 is an illusion.

Or people that suffer from over activity in their mirror neurons, and upon seeing someone hit in the head will feel a strong pain in their own head for instance.

So? This just implies Pain_3 (and therefore Pain_4) can be triggered by visual events in certain situations. But I never denied that.

These examples seem to be arguing against some viewpoint which I do not have. None of these incidents contradict anything which I believe about the brain or mind.

100. Mactoul says:

Aron,
You doubt the Aristotelian classification of substances etc. But this classification is inevitable once it is realized that we live in a universe that is "not a haze of indiscriminate particles, but a universe of things, each with its special form," to quote the author Anthony Esolen. If we must have things with natures (i.e. consistent activities), then what other option do you have?

101. Aron Wall says:

Mactoul,
Of course I agree that the universe contains "things", and that those things have "natures" if by that you mean simply that they behave in particular ways. But the common sense definition of "things" includes such objects as cars, houses, rivers, and mountains, which many Aristotelians would not classify as being "substances". So an Aristotelian ontology seems to be far more specific than just what common sense requires. For example, many Aristotelians seem to believe that:

1) There is always one correct "size" of object to identify as being a substance, so that if for example a human being is a substance then a married couple or a liver or a neuron cannot be a substance. But the intuitive common-sense definition of "things" allows them to be made out of other "things". For example, a car is made out of many parts. Common sense seems to say that parts and wholes both exist equally.

2) That there is an absolute, clear-cut distinction between the "essential" properties of a thing and the "accidental" properties, such that if a thing ever loses an essential property then it ceases to be the kind of substance it was before. Implicit in this is the idea that an object can only have a single best definition. But if you look in the dictionary, you typically find that a given object meets the criteria for several different concepts described in the dictionary. For example, something can be simultaneously a "house" and a "boat", or a "lamp" and a "post". A "lamppost" could cease to be a lamp without ceasing to be a post, or vice versa. It does not seem like one of those roles is metaphysically more fundamental than the other.

I think both of these assumptions are highly questionable. They seem to assume that grammatical/conceptual categories like nouns and adjectives and verbs divide reality "at its joints", without allowing for any of the flexibility of grammatical categories in natural languages such as English. (In linguistics, concepts are usually defined by their "centers", i.e. prototypical examples, rather than their "boundaries".)

A related issue is that sometimes the boundaries between categories seem indistinct. For example, if I slowly dilute wine with water, it does not seem like there is any well defined point at which "wine" ceases to exist and "water" takes its place. Nor is there necessarily a sharp distinction between "dinosaur" and "bird". And sometimes the world does seem to contain a "haze of indiscriminate particles", as in the case of a fog.

Thus, while I admit that Aristotelianism can be an extremely helpful language for describing various aspects of the world, I think any functioning metaphysical system needs to have a little bit more tolerance for ambiguity, and lack of sharp divisions between the categories. Our brain is very good at enhancing the "outlines" between different objects we perceive in our visual field, and then categorizing the objects with words according to our own human interests, but that does not mean there are no fuzzy boundaries in Nature!

I do think however that the existence of Consciousness probably has some important implications for Metaphysics. For example, if the arguments in my main article are correct, then that strongly suggests that there can exist properties of a "whole" which are not reducible to the properties of the constituent parts. That seems like a significant blow to reductionistic thinking, which has otherwise been rather successful in the analysis of Nature.

102. David says:

Regarding 1, while some wholes appear to be more than the sums of their parts, most wholes don't. This alone should be sufficient to regard "things" on some particular level of analysis as privileged in some way: eventually, we're going to end up with the largest wholes that are greater than the sums of their parts, and any mereological sums that include such wholes will end up being heap-like in nature. These "maximal wholes" are what we call substances.

To me, it seems quite probable that "interesting" mereological sums including organisms as parts - such as ecosystems - will end up being no more "greater than the sum of their parts" than a glider in Conway's Game of Life. Since the clearest examples of wholes that could be greater than the sum of their parts are conscious organisms, it seems reasonable to conclude that most conscious organisms are substances. I would think that most unconscious organisms are substances as well. When we reach the inorganic world, things become admittedly less clear-cut. But that's to be expected on Scholastic principles. The further down you go on the ladder of Being, the closer you get to raw prime matter, the "less real" - and thus "less one" - the things you deal with become.
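[Editor's aside: the Game of Life analogy can be made concrete. The "glider" pattern is entirely determined by the local birth/survival rule, so there is a clear sense in which it is nothing over and above its parts: a few lines of Python suffice to recover its famous behavior of reappearing, translated diagonally, every four generations. A minimal sketch:]

```python
from collections import Counter

def step(cells):
    """One generation of Conway's Game of Life (rule B3/S23) on a sparse set of live cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next generation if it has exactly 3 live neighbors,
    # or if it is currently alive and has exactly 2 live neighbors.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

def normalize(cells):
    """Translate a pattern so its bounding box starts at the origin."""
    mx = min(x for x, _ in cells)
    my = min(y for _, y in cells)
    return {(x - mx, y - my) for (x, y) in cells}

# The standard 5-cell glider.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)

# After 4 generations the glider is the same shape, translated diagonally.
assert normalize(g) == normalize(glider) and g != glider
```

[Nothing about the glider's "identity" had to be added by hand; its persistence and motion fall out of the rule applied to its parts, which is exactly the sense in which David says such a whole is not greater than the sum of its parts.]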

In any case, there is surely a clear motivation for regarding things "of the right size" as privileged in some way, and regarding them so need not entail that their parts are in any sense "unreal."

Now, I think some words about natural kinds are apropos. To take your example of water and wine, I think that it would be fair to say that at some point near the start of the dilution process you have something that's paradigmatically wine, and that by some point towards the end you have something that's paradigmatically water. This alone should give us reason to think that, eventually, a threshold is reached where the stuff you're diluting stops being wine, and that a threshold is reached where it starts being water. Now, one solution for dealing with the "fuzzy stuff" in the middle of the process would be to say that the two thresholds - the no-longer-wine threshold and the now-water threshold - occur at different points in the process, and that the "very dilute wine" or "slightly alcoholic water" or whatever you want to call it is something other than both water and wine. Provisionally, we could identify the last point at which we were confident that our mixture was wine and the first point at which we were confident it was water, then regard the mixture as some third thing when between those two points.

An alternative solution would be to recognize that even substantial change can be continuous (when dealing with conscious things, this probably won't work - the gap between the barely conscious and the totally unconscious might as well be infinite). Yet another would be to regard solutes present in a solution as accidents of the solvent, in which case wine would simply be a certain kind of water. A fourth would be to say that the only non-living substances are molecules, atoms, or elementary particles, and that higher level inorganic objects are mere heaps. There are any number of ways to possibly solve or ameliorate the vagueness problem, and the only way to figure out which one would be best in any particular case would be a detailed investigation of the particular things in question.

However, this doesn't adequately explain why we should regard "natural kinds" as a necessary feature of our ontology in the first place. And to understand that, it would perhaps be easiest to refer back to the notion of substances as being greater than the sums of their parts. The basic idea is that, since the parts being together in a certain dynamic structure isn't enough to constitute the whole (if it were, then the whole wouldn't be greater than the sum of its parts in the first place), we need some kind of "principle of unity" in addition to the parts and their structure in order to constitute our whole. This principle of unity has the name "substantial form" in Aristotelian philosophy.

Now, from what I have said, it should be clear why, "if a thing ever loses" its form, "it ceases to be the kind of substance it was before." The loss of this principle of unity would simply be the destruction of the whole insofar as it actually was greater than the sum of its parts. Moreover, while the substantial form might indeed be something like a "property instance," it is most certainly not supposed to be a set of "essential properties." Instead, essential properties are supposed to flow from it naturally, in the absence of interference. If interference does occur - perhaps some sufficiently violent perturbation of its parts - the "essential properties" may be prevented from manifesting, or may cease to be manifested. However, unless the parts are so violently perturbed that the whole is destroyed and the substantial form lost, the absence of such "essential" properties will not result in a substantial change.

The distinction between the essence of a thing and its "essential properties" is actually one of the key points of Scholastic essentialism.

The connection between the substantial form - conceived as the principle of unity that makes a whole more than the sum of its parts - and the notion of natural kinds should be easy to elucidate. When we conclude that a thing is greater than the sum of its parts, it is always because it behaves in a way that prompts us to conclude that there is something more to it than a dynamic structure of smaller things. In other words, the form we regard as the principle of unity appears to be a principle of activity as well. Now, the activities of various things seem to exhibit at least three ways for a thing to stand out clearly against the relatively fuzzy background of the inorganic world. Perhaps inorganic substances have their own ways to be more than the sums of their parts, but it's certainly hard for us to spell out precisely how they do so. In any case, each of those three ways has to do with the notion of self-perfective activity, or immanent causation as the Scholastics called it.

The first kind of immanent causation - the kind that sets the living apart from the non-living - is autopoiesis. The term means "self-building," and it is the mark of a living thing that it will not merely incorporate suitable matter that comes into contact with it in the manner of a crystal, but rather will actively process matter to make it suitable for incorporation. It will actively maintain "good conditions" for itself within the limits of its ability to do so, and it has a drive towards its own continued flourishing. There is a clear standard of "health" and "sickness" for living things, independent of human interests or conventions, indicative of the fact that they act for themselves in a way that no inorganic system can. This is perhaps the least clear-cut of the divisions, as my mention of crystals should make clear, but it seems to me that a thing either has a state that counts as "healthiness" or "flourishing," or else it does not. At some point, there is a divide between that which is totally incapable of seeking out its own fulfillment and that which is barely capable of doing so, and as hard as it may be for us human investigators to determine where that line is, we may be confident that it is there.

The second kind of immanent causation is the kind about which the original blog post was written: consciousness. The ability of a thing to be aware of its environment. This is, at least to those with modern sensibilities, the clearest of the three divisions. Either there is something it is like to be something, or else there is not. It may be hard for us to tell whether a thing is barely aware of its surroundings or totally oblivious, but again, we may be confident that the line is there.

The third kind of immanent causation is simultaneously the most significant and perhaps the most subtle to modern sensibilities. It may, for our purposes, be called "reason," and the easiest way to explain how it makes us different may well be to examine the views of a man who claimed we don't have it: David Hume. Hume famously claimed that sensory images are the only items of our mental furniture that can in any sense be said to correspond to "the world," that we can have no idea of "causation" and are limited to mere association, that we can have no idea of a thing that persists through time even in the case of ourselves (we can, at best, think of things as ever changing "bundles" of sensible properties, and ourselves as ever changing bundles of conscious experiences), and that morality is just a shadow cast by our emotions or sentiments. Now, there is indeed a class of things to which these claims apply: the brutes, the conscious animals lacking reason, indeed have no notion of the world save sense images, recognize no constraints on their behavior save for what they find pleasant or unpleasant, and so on. But we appear to be something else entirely. We do have concepts of many things - enduring substances included - that cannot be reduced to sense images, we are capable of seeking out the causes of things, and we are indeed bound by a moral law. In short, we are not only aware of the world around us, but we are also capable of coming to understand it and our place in it, in however limited a fashion. And it is because we have reason that we are capable of doing so. And, as in the other cases, reason appears to be something that is either there (regardless of how limited in degree) or else is not.

This gives us three hard and fast divisions of material reality, yielding four non-overlapping categories with room for every material object and no leftovers. The first division is between those things capable of autopoiesis and those that are not. This divides the living from the non-living. The second division is of living things, dividing those capable of awareness from those incapable of awareness. This divides the "animals" from the "vegetables" (to use the Aristotelian terminology). The third division is of animals, dividing those capable of understanding from those incapable of understanding. This divides man from the brutes.

Now, perhaps there are more specific differences in kind in the natural world, but these certainly seem to be the clearest. Note that actual awareness or understanding isn't the essence of an animal or a human, but is rather an "essential property" that flows from animality or humanity. Like all essential properties, actual awareness can be hindered or eliminated by disease or injury. But the deeper capacity for awareness, the tendency towards acquiring it that manifests as maturation during the earliest stages of development, is an aspect of the essence. Thus the distinction between primary and secondary actualities. Actual awareness is grounded in the capacity to exercise awareness, and the capacity to exercise awareness is an "essential property" grounded in the substantial form "animality."

It's a rather complex picture, but in my view, it's the one that does the most justice to common sense in a complex universe, where the question of the one and the many arises in myriad forms at every level of analysis.

103. Aron Wall says:

David,
Thanks for your long and informative description of the Aristotelian party line. (I've read St. Feser's book on Scholastic Metaphysics, and I had some significant disagreements, but I didn't feel myself qualified to critique it properly after only one reading. Your comment provides a more bite-sized set of arguments to respond to.)

Regarding 1, while some wholes appear to be more than the sums of their parts, most wholes don't. This alone should be sufficient to regard "things" on some particular level of analysis as privileged in some way: eventually, we're going to end up with the largest wholes that are greater than the sums of their parts, and any mereological sums that include such wholes will end up being heap-like in nature. These "maximal wholes" are what we call substances.

Good! I was hoping that somebody would push back on my argument in this direction. Since I do believe that at least sometimes this situation of a "whole greater than the sum of its parts" arises, I am willing to accept this definition of a substance, as long as we are clear that it is a mere definition, and that any further statements about the nature of substances require justification.

However, a few thoughts present themselves. One is that this definition does not make entirely clear what kinds of "wholes" are eligible to be substances. For example, given certain circumstances, a computer program distributed throughout a network of computers on the Internet might possess consciousness. In such cases it may not be entirely clear what should be counted as the "body" of such an entity. Secondly, since "contains" is a partial ordering, it seems possible that some substances might partially overlap with each other, in which case you couldn't divide the universe into disjoint maximal wholes.

It is not very clear what the complete list of not-completely-reducible properties might be. (I am enough of a Platonist to think that beauty might exist objectively in certain structures. Also, certain ethical principles such as harmony or promise-keeping seem to be features of overall situations rather than individual organisms.)

Thus, it seems at least logically possible that the physical universe taken as a whole might have some unifying property which does not reduce to the parts. Even if there is no reason to think this is the case, I see no way to rule it out. In that case, your definition would lead to the surprising consequence that the universe consists of just a single substance! That would make the term useless for classifying everyday objects. This could be averted by defining a substance as any whole that is greater than the sum of its parts, even if it is in turn part of a larger substance.

From the perspective of sacramentalist Christianity, there seem to be good reasons to regard certain unions of individuals (including married couples and the Church) as relevantly similar to the body of a single individual. These unions are explicitly called "one flesh" and the "body of Christ", after all, which should be rather suggestive to anyone who thinks that bodies are metaphysically special! Even from a purely natural (but Aristotelian) point of view, a sexually reproducing couple participates in an essential biological function jointly, not merely as individuals.

However, I do have some additional pushback against the Aristotelian theory of natural kinds:

To take your example of water and wine, I think that it would be fair to say that at some point near the start of the dilution process you have something that's paradigmatically wine, and that by some point towards the end you have something that's paradigmatically water. This alone should give us reason to think that, eventually, a threshold is reached where the stuff you're diluting stops being wine, and that a threshold is reached where it starts being water.

What on earth does "paradigmatically" mean here? I don't see any reason to think there should be a sharp metaphysical distinction here, based on the merely human interest in the degree of alcohol content.

Now, one solution for dealing with the "fuzzy stuff" in the middle of the process would be to say that the two thresholds - the no-longer-wine threshold and the now-water threshold - occur at different points in the process, and that the "very dilute wine" or "slightly alcoholic water" or whatever you want to call it is something other than both water and wine. Provisionally, we could identify the last point at which we were confident that our mixture was wine and the first point at which we were confident it was water, then regard the mixture as some third thing when between those two points.

Surely this is not the right way to go. You are merely replacing one ill-defined boundary with two ill-defined boundaries. One could raise the same question about each of these two sharp boundaries, raising an infinite regress problem. Also "the last point at which we were confident" seems like it is confusing epistemology with metaphysics.

An alternative solution would be to recognize that even substantial change can be continuous

That is more plausible. But note that whether a change is continuous or abrupt can itself depend on the context. For example, if you look at the phase diagram of water, you will see that at low pressure there is a sharp distinction between liquid and vapor, but at pressures above the critical point there is no sharp distinction.

A fourth would be to say that the only non living substances are molecules, atoms, or elementary particles, and that higher level inorganic objects are mere heaps.

Given the above definition of substance, isn't that the obvious default option unless and until you show the existence of a property of inorganic materials which doesn't reduce to the parts?

[UPDATE: 1) I meant the elementary particles, since I don't see any good reason to think that molecules and atoms have properties not reducible to the behaviors of their constituent elementary particles, but 2) modern particle physics usually regards particles as being excitations of fields which suggests a rather different picture of the fundamental "substances"; 3) this is an example of a general phenomenon where progress in physics that makes little difference to our physical predictions about the everyday world may nevertheless suggest a different metaphysics of the everyday world.]

(when dealing with conscious things, this probably won't work - the gap between the barely conscious and the totally unconscious might as well be infinite)

I probably agree, but it might conceivably be the case that every object which processes information is "barely conscious" in some sense, and that animals differ only in that their consciousness is more organized and self-reflective. This is the "panprotopsychist" view. I am not saying I believe it, but it seems at least possibly true.

Moreover, while the substantial form might indeed be something like a "property instance," it is most certainly not supposed to be a set of "essential properties." Instead, essential properties are supposed to flow from it naturally, in the absence of interference. If interference does occur - perhaps some sufficiently violent perturbation of its parts - the "essential properties" may be prevented from manifesting, or may cease to be manifested. However, unless the parts are so violently perturbed that the whole is destroyed and the substantial form lost, the absence of such "essential" properties will not result in a substantial change.

While I admit that I used the term "essential properties" incorrectly, if there is a completely sharp distinction between substances, then there would still have to be a set of necessary and sufficient conditions for a thing to be a particular kind of substance, and that is what I meant to refer to. If substances can be destroyed, then at least some properties are required in this stronger sense!

The first kind of immanent causation - the kind that sets the living apart from the non-living - is autopoiesis. The term means "self-building," and it is the mark of a living thing that it will not merely incorporate suitable matter that comes into contact with it in the manner of a crystal, but rather will actively process matter to make it suitable for incorporation. It will actively maintain "good conditions" for itself within the limits of its ability to do so, and it has a drive towards its own continued flourishing.

It is not at all obvious that these physical phenomena are inexplicable in purely reductionistic terms! Biochemists have done a lot to illuminate the mechanisms by which organisms interact with their world, metabolize, adapt themselves to their environment, and reproduce, according to a malleable set of instructions (DNA). While it is difficult to create an artificial "life form" in a cellular automaton, if somebody did I think I could, in a somewhat fuzzy way, figure out what I meant by that entity flourishing and reproducing itself.
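For what it's worth, the kind of cellular automaton "life form" I have in mind can be sketched very compactly. Here is a minimal, purely illustrative implementation (my own sketch, not anyone's official code) of one step of Conway's Game of Life in Python, in which simple patterns like the "blinker" persist by perpetually regenerating themselves:

```python
from collections import Counter

def step(live):
    """Advance a Game of Life world one generation.
    `live` is the set of (x, y) coordinates of live cells."""
    # Count the live neighbours of every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is live next generation if it has exactly 3 live neighbours
    # (birth or survival), or has 2 live neighbours and is already live.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": a pattern that actively maintains itself, oscillating
# between a horizontal and a vertical bar of three cells.
blinker = {(0, 1), (1, 1), (2, 1)}
```

After two applications of `step` the blinker returns to its original configuration. Whether such self-perpetuation counts as genuine "flourishing" is, of course, exactly the question at issue.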

As I have said several times already, vitalism seems importantly disanalogous to consciousness insofar as we are directly aware of consciousness, whereas we are not directly aware of our own metabolism etc. except insofar as it happens to affect our consciousness. Of course, if consciousness alone is a nonreductionistic principle of unity, then it might be our brains rather than our entire bodies which are substances. (I strongly believe that an emotionally healthy person should regard their body as part of their sense of selfhood, but I do not think I would identify selfhood in this sense with being a metaphysical substance as defined above!)

At some point, there is a divide between that which is totally incapable of seeking out its own fulfillment and that which is barely capable of doing so, and as hard as it may be for us human investigators to determine where that line is, we may be confident that it is there.

1) It is not as obvious to me that "flourishing" is as sharply defined a thing as consciousness is (assuming consciousness is, in fact, totally sharp). Why can't it be a fuzzy thing, defined by its "center" rather than a sharp "boundary"? 2) It is not obvious to me that the concept of flourishing can't be defined in a reductionistic way. For example, "open question arguments" don't seem to apply---if a tree is growing, and bearing fruit, and reproducing, and in every other respect looks like a functioning tree on a microscopic scale, what would it even mean to say that it nevertheless isn't flourishing?

Now, there is indeed a class of things to which these claims apply: the brutes, the conscious animals lacking reason, indeed have no notion of the world save sense images, recognize no constraints on their behavior save for what they find pleasant or unpleasant, and so on.

I don't see how you can possibly know this, given that you aren't a (mere) animal. I am not sure that even an animal could function with pure sense images only, with no ability to compare sensations together and notice their similarity. Obviously, the proto-concepts that animals have would be very crude and concrete compared to human concepts, but note that having concepts is a very different thing from being able to verbalize them, or to manipulate them intelligently. So I see no reason to believe that Hume was correct even about animals.

Now there is no question that human thought is very sophisticated in comparison with animal thought. It seems difficult to explain the behavior of Alex the Parrot, for example, without postulating that he was able to form concepts. And if Alex could get to the level of having concepts, maybe other animals can as well.

104. David says:

Regarding these sorts of topics, while Feser's book is a good starting point, the best development of the relevant notions is to be found in Oderberg's Real Essentialism. Scholastic substance theory and its connection to the notion of essence are defended at length there. His insistence that sentience and locomotion be always connected I found questionable, but other than that, the main arguments all seemed solid.

Regarding the definition of substance, my suggestion that the largest wholes that are greater than the sums of their parts be regarded as substances was less a definition than it was a characterization. The characterization was chosen to push back against somebody like you, who recognized wholes greater than the sums of their parts on the basis of what paradigmatic substances - human persons - are like, yet appeared to doubt the notion of substance. If you want a definition of substance, I will suggest that we use the one offered by Oderberg, who himself got it from Lowe:

"x is a substance =df x is a particular and there is no particular y such that y is not identical with x and x depends for its identity on y."

That a part of a whole that is a substance in this sense is not itself a substance should be clear. Indeed, my organs would seem to be dependent on me for their identity. My hand is MY hand, my liver is MY liver, and so on (I'm not shouting, I just don't know how to use italics in html). More significantly, they contribute to my functioning by their very nature, but (naturally, at any rate - the Holy Spirit when at work in me complicates matters) I don't contribute to anything else by my nature. My parts are functionally subordinated to me, as are the organs of all organisms.

I think that we need to be clearer about where we disagree. Do you think that substances don't exist? What, then, are individual human persons? Are you suspicious of natural kinds? How, then, do you account for human rights? Do you find the notion of substantial change dubious? What, then, are conception and death? I think it clear that the notions of substance and essence apply to us, at any rate.

Moreover, it is far from clear to me that we can think in terms of anything other than individual objects with their own unique identities. Events and processes seem to require existing things undergoing change. Properties and other universals seem to require things to inhere in. Relations seem to require relata. But individual objects seem to have a measure of ontological independence, if not as regards existence, at least as regards identity. Now, perhaps there are individual objects that don't have the strong unity and individuation of a Scholastic substance, and if you have an alternative account of such objects, I'm more than prepared to listen. But it seems clear to me that some things - particularly persons - fit the paradigm quite well.

Initially I thought that the notion of overlapping substances made no sense. Then I thought about conjoined twins, and realized that there might actually be a reason to think it possible. I'm rather more skeptical of a conscious internet. Conscious experience has a high degree of unity to it: I experience quite a few sensations at once (I see and hear and feel all at once, even if my attention can only encompass so much at a time). If it is to be instantiated in a material object, I would expect that object to be more or less an organic unity, just like all the other putatively conscious entities we've found so far. Perhaps my reasons for thinking so are merely inductive in nature, but they are no less real for all that.

If your problem with the notion of substance is connected to sacramental unities, I'm afraid I can't help you. I can't tell you what I hope my girlfriend and I will someday be, nor can I tell you precisely what she, I, you, and some two billion others constitute at this moment. I'm not convinced that the Church can be shoehorned into an ontological category at all, as it appears to be founded in The One Who Transcends all such categories. It is as mysterious as He is. Indeed, here as elsewhere, the term "mystery" is apt in more ways than one. If you want an answer to the question "What exactly is the church?" you may have to wait some time before you will be in an Environment where someone might have an answer, though I can hardly guarantee that the question will retain its urgency once you are There.

As for such things as honesty and beauty, while I agree that they do not necessarily attach to substances alone, it seems to me that they can easily be construed as properties of the way substances relate to one another. Relations are, after all, a category of accident. Moreover, as regards beauty, it is clear that a thing doesn't even need to be a being in order to have it. A suitable process of change in the density of the atmosphere, like that produced by a performance of Handel's Messiah, can have beauty. However we construe the atmosphere, it is clear that the relevant changes do not constitute a substance, or even an object, at least as regards the common-sense definition of object. If beauty can attach to a suitable process of change - which isn't even a being in the first place - why could it not attach to a suitable heap?

If you wish to regard the universe itself as "greater than the sum of its parts," I suppose you may do so. However, to me, it seems clear that I am not functionally subordinated to the operations of the universe as a whole in the same way that my members are functionally subordinated to me. Indeed, viewing the entire universe as a substance in its own right seems to bring many of the paradoxes of pantheism with it, as it throws my identity and autonomy of operation into question. This isn't so much a case of "not having a reason to think that the universe as a whole is more than a heap" as it is a case of "having a reason to think that the universe is not more than a heap." The reason being that my own identity and (natural) autonomy are fairly obvious facts.

Regarding water and wine, when I speak of something being "paradigmatically x," I am saying that it is clearly one of the items that falls under the center of the concept of "x." To say that concepts are defined by centers rather than boundaries is just to say that there are some things to which such concepts definitely apply, correct? And the question is how to deal with those things that do not fall into the center of any concept, correct?

Assuming a Scholastic metaphysics, and assuming that there are some things belonging to the centers of the concepts of water and wine, I don't think we can say much beyond the following: "If (the concepts of water and wine correspond to distinct natural kinds, and the two natural kinds are not related as genus and species), then (there is a fact of the matter as to whether or not a certain liquid is water, there is a fact of the matter as to whether or not a certain liquid is wine, and no liquid is both water and wine at the same time)"

The parentheses identify the antecedent and consequent. Now, this statement doesn't tell us a lot of things. It doesn't tell us whether or not water and wine are distinct natural kinds (one or both might be merely conventional), it doesn't tell us whether or not they are related as genus and species (water might be a genus containing wine as a species), it doesn't tell us whether or not there might be a third natural kind intermediate between water and wine (perhaps swill has a nature of its own), and it doesn't tell us how to figure out exactly what the fact of the matter about the identity of a given liquid is (doing so may be impossible). It just tells us that if they really are distinct natural kinds not related as genus and species, then every liquid either is or is not one of them, and cannot be both at the same time. This is entirely consistent with us being unable to figure out where exactly the demarcation point is.

To once again bring Oderberg into the conversation, "given that there are times in the career of a persisting object where boundaries determinately do NOT exist, why should there be any times where the boundaries are indeterminate?" The question can be rephrased in the context of natural kinds, but in this case, diachronic identity and distinctions of kind both come into play. Since concepts are defined by their centers, it follows that they have centers, and thus there are cases of non-vague non-boundaries. Whence, then, the idea that vague boundaries exist in cases other than those related to human conventions?

Regarding reductionism as a default position, I'm of the opinion that something like a piece of quartz clearly has a determinate synchronic identity in a way that a collection of field excitations does not. It's the whole "not a haze of indiscriminate particles" problem written in hundred-meter-tall neon lights by quantum field theory. Fluids are a bit harder to analyze, but I feel that in most cases similar considerations apply. Perhaps certain rarefied gasses or plasmas might really be such hazes, but most of the objects of everyday experience are not.

Regarding essential properties and necessary and sufficient conditions, the necessary and sufficient condition for being a human is having the substantial form of humanity. I'm not sure what else you're asking the Scholastic to give you.

Regarding whether or not life is capable of reduction, the question ultimately boils down to whether or not immanent causation can, without redefinition, be reduced to transient causation (a doubtful proposition), or if that reduction cannot be achieved, whether or not the sorts of things I identified are really cases of immanent causation (which could perhaps be plausibly argued). However, the question for the time being is, first and foremost, whether substances and natural kinds in the Scholastic sense exist in the first place.

Regarding pan-proto-psychism, I have only two things to say. First, most material objects show no signs of consciousness of any kind. Second, even if I am composed of countless conscious entities, that doesn't seem to do much to explain the unity of my own consciousness. So it's not clear that the view has anything to recommend it to us.

Regarding the idea that my brain might be a substance and the rest of me might be a heap, I will only note three things. First, if my brain is a substance, then substances exist. Which would be sufficient for my purposes. Second, whatever plausibility the view might have comes from the idea that the immanent causation of the merely alive is either reducible to transient causation or else isn't genuine immanent causation in the first place. Third, the counterintuitive nature of the suggestion that my hands, for example, are not a part of me should equate to some kind of probability cost.

Regarding the nature of concepts, it's not obvious to me that the ability to "compare sensations together and notice their similarity" requires abstract, conceptual thought in the first place. It merely requires the ability to store "old" sense images, associate them, and bring them to mind again when necessary. Imagination and memory might be required, but abstract concepts? A notion of being or causation? Morality proper? Not so much.

Regarding "how I know" that brutes lack abstract/conceptual thought, the short answer is that quite a few of them - flies and worms and the like - show no signs of having it, and much the same seems to be the case with respect to dogs and horses and the like.

Regarding Alex, while my first impulse is to think that he was simply a well trained parrot, it is certainly possible that I am wrong, and that for reasons unknown he was endowed with a rational soul. In that case, he would have been human, as he would have been a rational animal. Certainly, he is a more plausible candidate for humanity than any of the other animals of which I am aware, as he was capable of asking questions.

So while I cannot tell you what he was with certainty, I can tell you that he was either a human or a brute. It may be hard for us to tell which, but that doesn't change the fact that he was either one or the other.

105. TY says:

I see Daniel Dennett has a book out, "From Bacteria to Bach and Back". I don't know if I should buy it, but I may end up doing so just to have his world view in one cover, written by himself. One reviewer in The Guardian doesn't think the book adds new knowledge and notes:

"Bacteria to Bach and Back is an infuriating book. It is too long, repetitive, indulgently digressive and self-referential (no fewer than 64 references to his own publications). But underlying it all there is a subtle and interesting argument. The bare bones are these: mind and consciousness are no more and no less mysterious than other natural phenomena, such as gravity."
https://www.theguardian.com/books/2017/feb/02/from-bacteria-to-bach-and-back-by-daniel-c-dennett-review

But as physicist Stephen Barr stated more than a decade ago, naturalists have put themselves in a straitjacket:
"The materialist, by contrast, is in a straitjacket of his own devising. Nothing is allowed by him to be beyond explanation in terms of matter and the mathematical laws that it obeys. If, therefore, he comes across some phenomenon that is hard to account for in materialist terms, he often ends up denying its very existence." Modern Physics and Ancient Faith, page 17.