Blind Brain Theory and Enactivism: In Dialogue With R. Scott Bakker
June 11, 2014
[Image: Hannah Imlach]
Last week I posted a short essay on the question of meaning, style, and aesthetics in the ecological theories of Alva Noë and Jacob von Uexküll. The post resulted in a long and in-depth discussion with science fiction novelist and central architect of the Blind Brain Theory (BBT) of cognition, R. Scott Bakker. Our conversation waded through multiple topics including phenomenology, the limits of transcendental arguments, enactivism, eliminativism, meaning, aesthetics, pluralism, intentionality, first-person experience, and more. So impressed was I with Bakker’s adept ability to wade through the issues — across disciplines, perspectives, and controversies — despite my protests that I felt it worth excerpting our dialogue as a record of the exchange and as a resource for others interested in these debates. Whatever your views on the philosophy of mind, Bakker’s unique position is one you should familiarize yourself with — if only, like me, so that you can find better ways to dispute its unsettling consequences. To provide a little context to the dialogue I am re-stating my central claim and concluding paragraph from the earlier post:
In Noë’s actionist and in Uexküll’s ethological approach space, time, and motion are the variable forms of organismic intuition, an insight which calls forth the aesthetic nature of the ecological arena. Here we can see that Kant’s error was to focus too narrowly on one kind of transcendental ego — the human being — at the expense of all other species. Deepening Kant via Noë and Uexküll, then, we can see that aesthetic formulations of meaning, value, and significance are causal and necessary factors in evolutionary processes, and that any attempt to evacuate the enacted ecology of meaning that surrounds every organism undercuts the very mode by which evolution has transpired since the emergence of life on Earth. In other words, ecology is necessarily about transactions of meaning, translations of value, and transformations of significance, and it is in principle irreducible to mechanical description alone.
R. Scott Bakker: Great piece. Though I occupy the opposite end of the spectrum, Adam, I’m really beginning to appreciate the clarity with which you present your views. I think my overall worry can be best expressed as an answer to Mathew’s question above:
“The attempt of the neurosciences to perform an erasure of 1st person experience and intentionality is self-refuting, isn’t it?”
Only if you assume there’s such a thing as the first person in the first place. I could go through a monstrous list of apparent experiential verities that are quite literally false. Since it is entirely possible that ‘philosophical interpretations of the first-person’ could find themselves on this list, it remains an *empirical* question as to whether there is such a thing. Any approach that takes the first person as ‘a priori’ (a name for metacognitive blindness if there ever was one) or ‘transcendental’ has to be religious in some sense, based on some kind of leap of faith that reflective intellection can deliver anything more than more chronically underdetermined claims. Why, when cogsci is throwing doubt on so much, trust reflective intellection on this one thing?
Kantian approaches are quite simply on the empirical menu. If a thesis like my own Blind Brain account is validated, then such approaches are the artifacts of metacognitive neglect. The question of whether my account is accurate or not is an empirical one.
Since we know that astronomically complex machinations somehow lie at the root of it all, the question really divides into one of whether the peculiarities of the first-person are the product of some kind of *special* emergence (which is to say, one where the resulting functions are systematically incompatible with natural functions), or the product of metacognitive shortcomings, a kind of illusion. Where the latter enjoys parsimony, the former enjoys traditional consensus.
Given the history of science, why should anyone bet on traditional consensus? Transpose this debate onto the 19th century, and Noë, Uexküll, and many others are on the side of design, not Darwin. I think we should bet that the picture that actually prevails will prove disastrous to our prescientific presumptions.
So my question is basically Craig’s: Why bet on the prescientific past?
Adam Robbert: Scott, when you write, “Any approach that takes the first person as ‘a priori’ (a name for metacognitive blindness is there ever was one) or ‘transcendental’ has to be religious in some sense,” you’re really getting at one of the core issues I’m interested in. In that sense I’m absolutely not a phenomenologist (or even a philosopher of any transcendental stripe, really). Kant got away with simply positing a transcendental ego without giving a genetic account of its emergence, and who can blame him? He just didn’t have the empirical information to give such an account. Similarly, Evan Thompson gives a nice summary of Husserl’s transition from a static to a genetic and then to a generative transcendental phenomenology. In the latter phases Husserl basically tries to give an individual, cultural, historical, and intersubjective account of the conditions which give rise to structures of awareness. I think this moves things in the right direction, but obviously it doesn’t go far enough into the depths of evolutionary, and not just cultural, history. Above I try to take this basic premise — that we can give an evolutionary / ecological account of the emergence of these kinds of structures — in a way that also connects them to the (variable) first-person experiences of these structures across species. This requires a great deal of methodological pluralism; moving across disciplines — history, science, philosophy — and across domains of experience — first and third person, etc. 
The account I want to give is not “religious” in this sense of positing an a priori and pre-scientific subject (as you and Craig are right to question), but rather to understand the first-person “I” not just from the perspective of neuroscience (where the category doesn’t even exist), but from the perspective of a performative or enacted “I” living in an evolving world (I think of the way Nietzsche calls reason “an expedient method of falsification” but perhaps with a more positive and stylistic spin). In this view, a structure of experience is not just a condition within which phenomena are disclosed but is also the condition within which the knowing “self” is itself disclosed in particular, partial, historical ways. “I” is not just a phantom process of biological systems but also a capacity for variable self-disclosure; I-making is I-enacting; the self is made and unmade, not given. I don’t think it’s actually possible, in principle, to live out the neuroscientific view within our day-to-day first-person lives, so for me philosophy — as a technology of self-making — remains indispensable in terms of its capacity for the generation of skillful means within lived experience, a point I think has analogs in all species insofar as “cycles of meaning,” to borrow Uexküll’s phrase, are indispensable to the flourishing of all organisms. In that sense I take meaningful-ness to be the driver of physical evolution and where physics places constraints on what kinds of meaning are possible to enact.
Bakker: I agree with you on the impossibility part. I actually think that’s part of our dilemma: evolution stranded us with a subreptive capacity for self-knowledge. And I actually think I can make a pretty good case for why it is overwhelmingly likely that, given evolution’s proclivity to select the ‘fast and frugal’ specialized fixes, this is the kind of capacity we do have.
But even if this is merely a possibility, I think it presents your project with a real dilemma. I too think nature is wild and hairy, that it requires many different kinds of brushes and combs to cognize. The *cognize* part is the problem, because theoretical cognition is *hard* to come by. Obviously you’re not interested in creating esoteric intellectual artifacts, but rather somehow dredging up something important about the human, cognizing something. It’s a cognitive pluralism that you are espousing.
Now what you call ‘I-enacting’ I call (not to put too fine a point on it!) ‘philosopher running afoul of metacognitive illusion’: your ‘self-disclosure’ is my ‘self-deception.’ This self-deception is systematic because the metacognitive illusions it turns on are systematic. Absent any baseline, projects such as Husserl’s are doomed to seem quasi-cognitive as a result. The fact that none of it can be squared with natural cognition, provender of our highest dimensional views, simply evidences this (or so I say).
The question is, *What evidences your view?* What evidences the ‘fact’ that anything ‘self enacts’ or that ‘meaningful-ness’ is a driver of anything, let alone evolution? It can’t be a deliverance of introspection or philosophical reflection. Everywhere you turn in cognitive science you find evidence of metacognitive neglect, reason to doubt any such deliverances. It feels as if we see colour in our periphery, but we don’t. It feels like our intuitions of correctness are rationally grounded, but they’re generally not. It feels like the information that intuition immediately renders to solve a problem is all that’s required, but it often isn’t. It feels like pain and suffering could never be dissociated. It feels like advertising has no effect on our decisions. It feels like we have no race/gender bias… and so on.
If the pluralism you advocate is a deliverance of reflection, then it seems to be an ‘everything goes’ pluralism, one that would have no choice but include innumerable incompatible interpretations, such as Eckhart Tolle, say, and thus provide no real knowledge at all. But introspection or reflection seem to be all you have to go on.
Robbert: Yes, it’s definitely a kind of cognitive pluralism I am espousing, though there are specific constraints and affordances placed on the kinds of plurality that are possible. Outlining these constraints and affordances in more detail is part of what I am interested in pursuing — these include, for example, what we can think of as physical constraints and affordances (no miraculous breaches with known laws, etc.), others are what we can think of as historical constraints and affordances, both in the evolutionary sense that we’re constrained and enabled as a specific kind of organism with a specific kind of embodiment, and in the cultural sense of being members of certain linguistic communities, nations, genders, classes, etc. In other words, pluralism does not imply an infinity of modes, but it does imply a finite diversity of modes that require multiple methods to understand appropriately.
I think our real point of disagreement, however, may be playing out on a more meta-theoretical level (or perhaps just a more basic terminological one). I see two components here: The first is that what I’m arguing for is a nonrepresentational conception of experience and disclosure that is rooted in the body, rather than in an observing cogito (or something of that variety); so things like “reflection” and “introspection” aren’t the right metaphors. Thompson picked up on this recently at UC Berkeley where he pointed out that there’s a certain approach to cognitive experience — and to meditation, which is his area of interest — that views both introspectively, i.e., as the turning inward of observation toward internal mental states (as a stream of consciousness-images passing you by, or what have you). This is a consequence, I think, of the representational theory of mind. The enactive view is different in that it posits an ongoing mode of self-construction rather than self-representation. (Again, not any kind of self can be constructed; in this case the sensorimotor coupling with the environment places constraints on what kind of self-experience emerges from the background of inarticulate processes.)
The second is that I treat the distinction between “self-making” and “self-deception” differently than you do. In my approach there can’t be anything like a general category of self-deception since cognition and experience must always be situated cognitions and experiences. In other words, whatever it is that amounts to self-deception, the criteria for qualifying it *as* self-deceptive is relational. This realization is, I think, what awaits us on the other side of nihilism: The kinds of techne I am pointing towards are judged in relation to ethical, aesthetic, and pragmatic criteria (in addition to scientific or logical ones) lived out in intersubjective and multispecies communities. As I see it, self-deception in the abstract isn’t possible, though specific kinds of self-deception are. The relational character of deception is both what makes it so difficult to identify and what makes it possible to overcome it — though of course the overcoming-of-deception is itself a situated and contestable matter, and thus we are firmly in the realm of politics, or what Isabelle Stengers calls “cosmopolitics”: the unruly decision-making and playing out of creativity, powers, concerns, and values in ecological settings. For me this means philosophy must aim for self-care and not just self-knowledge; we must create a livable system of ideas in addition to pursuing critical denouncements of dogmatism.
Bakker: “The first is that what I’m arguing for is a nonrepresentational conception of experience and disclosure that is rooted in the body, rather than in an observing cogito (or something of that variety); so things like “reflection” and “introspection” aren’t the right metaphors.”
I actually agree entirely with this, but these are the metaphors we use. I take an ecological approach as well, one which views cognition in behavioural, environmentally continuous terms throughout. (If I spend a lot of time talking about the brain, that’s because that’s where most of the yet-to-be-unravelled complexity is.) So I view deliberative metacognition as a matter of the human organism behaviourally comporting itself vis-à-vis its environments vis-à-vis the processing of neurally sourced information.
I actually think this makes the situation more uncomfortable for you, because short of some kind of magic ‘auto-cognition,’ this picture demands you always relativize the problem ecology you’re attempting to solve (in this case, the ‘self’) against whatever adaptive capacities you possess. If you believe in physics you believe work must be done. And I think it’s very clear that the kind of problems Noë, say, is attempting to solve, lie far – far! – outside the adaptive problem ecology of our deliberative metacognitive capacity. That they are ‘unworkable,’ as the ancient skeptics have been claiming these past millennia!
So the register change actually strengthens my position, since it allows me to express the problem in my preferred idiom. Either way, you’re on the hook to evidence the applicability of human metacognition to the kind of problem-ecology you’re suggesting it can solve. I’m saying that intentional phenomena are akin to visual illusions, only absent any baseline. This is why they’ve been philosophically intractable since the beginning.
“The second is that I treat the distinction between “self-making” and “self-deception” differently than you do. In my approach there can’t be anything like a general category of self-deception since cognition and experience must always be situated cognitions and experiences. In other words, whatever it is that amounts to self-deception, the criteria for qualifying it *as* self-deceptive is relational.”
I entirely agree with this as well, and as above, actually think it strengthens my position. The effectiveness of deliberative metacognition is *entirely situational,* and it is precisely the situation here that I am calling into question. I think we have every reason to think that deliberative metacognition is both radically heuristic and fractionate, that it consists of numerous special purpose kluges, and is in no way capable of solving the kinds of problems philosophy or cognitive science is interested in solving.
Think about it in these terms: Our exo-environmental systems have hundreds of millions of years of evolutionary tuning, and even given the relative simplicity of the systems engaged and tremendous amounts of information, we had tremendous difficulty theoretically solving them via deliberative cognition. Our endo-environmental systems, on the other hand, have just a fraction of the evolutionary tuning, are engaged in theoretically solving the most complex system in the known universe on the basis of extremely limited information, and you’re saying that we could solve ourselves *on their basis* via deliberative cognition.
My whole beef with the enactive/embodied cogsci movement is actually that it isn’t embodied enough, that it loses its nerve as soon as things get threatening/interesting, and flees back into the realm of low-dimensional ghosts. It wants to redeem intuitions that are likely artifacts of an exaptation gone wrong, the submitting of metacognitive information to ACH thinking. Like all prior naturalisms, it wants to pick and choose its intentional phenomena, eject representation and content, yet hold onto meaning and subjectivity, never considering whether all of these share the same basic set of conceptual incompatibilities with natural cognition because they are all artifacts of precisely the same problem.
Robbert: We actually agree quite substantially on the insolubility of what you call metacognitive neglect, though I think where we differ is in our understanding of the finitude of knowing as such. I still don’t see why the metacognitive problem — even as it might contain more complex dimensions than other kinds of problems — is different in kind from, say, my attempt to cognize anything else in my environment. As I see it, in both cases knowing is finite and partial in principle, regardless of whether we take a first- or third-person approach. So, when you write:
“Our endo-environmental systems, on the other hand, have just a fraction of the evolutionary tuning, are engaged in theoretically solving the most complex system in the known universe on the basis of extremely limited information, and you’re saying that we could solve ourselves *on their basis* via deliberative cognition.”
No, I take a different position. My goal is not to “solve” myself or anyone else; at bottom I am not — nor is anything else — a “problem” in any ontological sense; this is of course an artifact of our approach. If I do appear as a problem to be solved then this is a consequence of the aesthetic basis of knowing-perceiving that I outline above; I have constructed myself as a problem to be solved. The appearance-construction of a problem is a lure for future engagements, and not an absolute statement about how things are or a challenge to solve what consciousness is once and for all. As an immanent and emergent lure for knowing, problem-posing and problem-pursuing have exactly the kind of intentional structure — construed as a means of directing oneself beyond oneself, a structure Thompson would argue is present in many self-organizing networks — that you claim doesn’t exist. So, maybe I’m still just not grokking your position, but it seems this conversation isn’t even possible within your framework. (I have the sense we’re actually having a different version of Dan Dennett’s debate with Sam Harris over free will, though over the much more basic concept of intentionality rather than over the more advanced question of freedom.)
Again, I think there’s some artifact of representationalism lingering here, of trying to represent the one true world “as it really is.” Within the enactive framework it’s not, strictly speaking, the case that “it wants to pick and choose its intentional phenomena, eject representation and content, yet hold onto meaning and subjectivity,” it’s more that representation is reformulated as subtractive disclosure and content is reframed as a method for attaching oneself to, or coordinating oneself with, a phenomenon (i.e., of learning how to grasp something in a particular way; thinking is a mode of “picking up” phenomena, to use a more tactile metaphor). In some sense, then, I find the reverse criticism applicable to your position: You want Truth without subjects, which is a bit awkward. This is a philosophical thesis and it’s hard to follow your account being that you’ve already undercut the value of the philosophical approach to begin with.
In any case, I totally agree with this: “You’re on the hook to evidence the applicability of human metacognition to the kind of problem-ecology you’re suggesting it can solve.” Yes, though, again, I think the register in which a “solution” emerges is different for us (and I’m not claiming to have already accomplished this by any means). My appeal to the aesthetic above is first and foremost to claim that our situation demands more than the perspective of the neurosciences — one of the most diverse and contested fields of knowledge currently in play — and this means I am asking us to consider more aspects, not fewer, about the phenomena in question (whether that phenomenon be us or another entity in the environment). This is the methodological and epistemological component. My second claim is ontological insofar as I posit that the realm of meaning — long relegated as “secondary quality” — is in fact central to the evolutionary process once we take into account time; the aesthetic is not merely an appearance floating above underlying mechanisms; it is causally efficacious. My primary argument above is: the lures for behavior are necessarily aesthetic and the shape and form of an organism is in some sense a sediment of inherited aesthetic engagements playing out at the level of meaning; to understand the organism we have to understand its relationship to meaning — what it finds significant and valuable — in a way that doesn’t reduce either to what we call the “merely subjective.”
Bakker: Truth is another radically procrustean heuristic. Very useful, but like all such heuristics, prone to jam the works when applied out of school. What I’m curious about is what science will make of our traditional and intuitive self-understandings. The post-human will be post-intentional as well, if I’m right about a handful of key issues. ‘Noocentrism,’ I think, will join biocentrism and geocentrism on the list of ignorance-driven perspectival illusions. ‘Souls,’ like ‘gods,’ are low-dimensional posits adapted to solve specific social problems. The only reason we find seeing around the former more difficult than the latter is that the former is welded to the bridge of our nose: every time we look, it so clearly seems to be there.
There’s actually nothing awkward about my position – once the gestalt clicks it’s very parsimonious. There’s no problem with engaging in intentional cognition so long as you refuse to make the philosopher’s move and assume that *only intentional cognition can solve intentional cognition.* Intentional cognition is not something that intentional cognition can solve, as centuries of gear-grinding and wheel-spinning should make clear. (My latest post tracks this very issue in Brandom.) Tu quoque arguments simply beg the question against my criticism: I’m not saying intentional cognition doesn’t exist, only that it cannot solve itself (in the way philosophers want). I use ‘problem-solving’ the way Darwinians use ‘design,’ as explicit cognitive shorthand. I could talk in terms of convergent sensorimotor feedback loops if you want!
Either way, if you want your solution to be *cognitive* as you say, then it had better be scientific in some respect. And answering scientific questions always involves sourcing and vetting the data driving theory creation. You’re still on the hook. Moving to the aesthetic (my friend Ben Cain makes precisely the same move) is all well and fine, but then the problem of scientific cognition looms large. Anthropomorphism solved innumerable problems for our ancestors, as did lies, war, and epic poetry. If this is the problem-solving you’re after, then I have no argument with your project whatsoever, but this certainly isn’t what enactivism seems interested in more generally.
Robbert: Indeed, I’m waiting for the moment when your gestalt “clicks” for me. Noë’s account of the painting we don’t understand — painted in a style we’ve never seen, evoking content we don’t have a context for, created by someone whose other works we don’t know yet — is apt here; I’m still sitting in front of the painting waiting for it to click. What I do understand, and as I suspected above, is that “cognizing” means different things for us and evidences a certain meta-methodological difference. I should just stop using the word since it’s so science-laden in our contemporary context, and I don’t mean it as an exclusively scientific category. We see the same difference in our approach to enactivism. In my understanding, which I learned from Evan Thompson, there are many analytical dimensions to enactivism, particularly when we include human cognition. The kind of “inside” or “aesthetic” account of evolution I give above is an example of one of these dimensions. Thompson, for his part, talks about meditative experiences as also being “enacted,” or of phenomenological investigation as part of the enactive program, and this points to something like the variable (practice-dependent) nature of first-person experience in, at least, human beings if not other critters as well. So, I see myself as pursuing the domains my training allows me to adequately pursue. These are ontological, epistemological, aesthetic, ethical, artistic, and political modes of understanding constrained and enabled by my ongoing study of the life sciences, but I’m not a scientist and I don’t see properly scientific work as being suited to making the kinds of general claims I am after (even if such general claims have as part of their obligation fidelity to any number of emerging empirical facts); in its speculative capacity philosophy remains indispensable in this regard.
This multiple approach is, again, mirrored in the pluralism of domains and methods accepted by Thompson’s enactivism — in its inclusion of neuroscience, phenomenology, and contemplative practice, for example. In this I find my approach enactive through and through.
Bakker: For Thompson, though, the idea is that the pluralistic motley can be stitched into new, more commodious clothes for the emperor (science). He hammers the explanatory gap from both sides, attempting to make plausible (he’s far more sober than, say, Deacon, in this regard) the possibility that it’s only fundamental because we’ve fetishized a handful of methods over others (I’m going by Mind in Life, which is all I’ve read of his). Cognition is his target, so one always has some kind of a yardstick against which his program can be assessed. I think the guy is brilliant (his dismantling of the Kim debate in the appendices is proof of that!), but for all the estimable ingenuity he invests in his account, it actually brings home the apparently insuperable nature of the gap more than anything.
For me, there is no gap because there is no mental, no intentional, no normative, in any high-dimensional sense. Our brains are simply too complex, and the evolutionary history of sapience too brief, for metacognition to consist of anything but a myriad of ‘divide and conquer’ special purpose systems. Simply make a list of all the amechanical concepts you’re interested in, and ask yourself what kind of information they neglect (mechanical, typically). The more they neglect, the more specialized they are bound to be. The more specialized they are, the more understandable the vast gulf between first-order and second-order applications becomes.
So consider what we do know about conscious deliberation: vast numbers of nonconscious processors churn away; ‘conscious ignition,’ as Dehaene calls it, sips from this astronomical morass through a straw, stabilizes and broadcasts an exceedingly narrow range of this information throughout the brain, where other nonconscious processors take it up and work it over, perhaps to be fed back through the straw once again. Whatever consciousness amounts to, the ‘cognitive bottleneck’ looms large over it. The fact that we systematically neglect this bottleneck, I’m arguing, looms even larger.
So what happens when we turn this straw onto this straw? What is it that our brains are actually doing when we engage in ‘phenomenology’? Blindness to the fact that the straw is a straw is a given. So we should expect deliberative metacognition would systematically suffer ‘straw neglect,’ and that any number of illusions might follow. The fact that the straw has to be used to sample the straw means that the straw cannot be directly sampled, that deliberative metacognition is restricted to short-term memory traces to engage in solving for consciousness, once again, utterly blind to the fact that this is the case, suggesting once again, any number of perspectival illusions.
Blind Brain Theory is simply an account of how the peculiarities afflicting attempts at second order intentional cognition can be understood in terms of these, in many cases inevitable, cognitive illusions – as a peephole glimpse of a peephole view that we think is as wide as the sky. We’re very good at what Friston calls ‘active inference,’ assessing cognitive shortfalls and rummaging our environments for more information. But when it comes to deliberative metacognition? Not at all. We have the merest inkling, maybe. And this actually makes sense, given that our capacity for ‘philosophical reflection’ is an exaptation of preexisting capacities. We quite simply lack any intuitive error-signalling capacity, and so have to offer it up for discursive assessment, turn to other brains also lacking this error-signalling capacity.
Now on this picture, phenomenology is obviously bunk. A different picture is what the intentionalist needs, and I’ve been looking high and low for a couple of years now.
At the very least I would like to convince you that my position is the kind of position you should be concerned to argue against.
Robbert: Well, hopefully this debate is evidence that I am indeed concerned enough to argue against your position. As I am working my way through the meta-theoretical baggage, though, I keep finding less and less to disagree with you on at the level of the constraints you argue for, which I am coming to see I agree with you on to some extent, though these are constraints that I think put you firmly in the skeptical / transcendental tradition you’re at pains to break free from! Anyway, on my end getting through the meta-theoretical layer is important since only then can I learn how to disagree with you better (and rest assured I have a feeling the disagreement will be longstanding!) For example, several of your interlocutors accuse you of a performative contradiction: If there is no “I,” how is it that you’re even “I-enough” to make your claims? I can see this from your perspective as a kind of question-begging — I can even get into the idea that being a walking performative contradiction is very interesting, ontologically speaking — and so I can follow you in needing to establish why we should have to assume an “I” of any sort to begin with (i.e., I agree we shouldn’t start with, “Hey, it seems like ‘I’m’ having an experience; let’s take that as fact and build upwards since otherwise things would be too confusing and uncomfortable!”)
What I find more difficult about your claim is your assertion that the brain can cognize its environment more or less successfully, but for some reason it has no capacity to cognize itself as itself. This seems like a magical assertion and the wrong standard of evaluation. I want to know, really? Why should self-cognition be different in kind from other-oriented cognition? And why are we couching the whole discussion in absolute terms? Our biology is substantially invested in self-examination, and my point here is that developmentally speaking — and I think developmental health is key to this whole conversation — it seems the case that most mature adult cognition includes a layer of reasonably adequate self-cognizance that takes the form of question-asking (e.g., Am I right? What else could be the case? What happens if I do this? What am I not aware of right now? What will this do to so-and-so? etc.); of specific kinds of feelings (e.g., regret, disappointment, belief, hope, despair, etc.); and of existential quandaries (e.g., What am I about as a person? What is the good life? Who are the good people? What should I not do? What is reality? etc.) — All three of these categories are what I think of as quasi-teleological lures for behavior (when oriented towards the future) and enactment (when organizing the present). We need them in the Noë-ian sense to get around just as much as we need sense data (indeed there can be no bifurcation of the two from my view).
So, while I can tentatively go along with you and agree that, hypothetically speaking, we may be able to create some kind of “post-intentional” being that operates in total absence of semantic content, I diverge by saying that that’s different from arguing that we are, “underneath it all,” that kind of being already, or, even worse to my mind, that our goal should be to become that kind of being through technology (An aside: It also raises important questions about the term “post-intentional.” For example: How is it different from “pre-intentional”? Aren’t we actually speaking about “an-intentionality” rather than pre- or post-intentionality? Why is intentionality still the center of your framework? etc.).
My point is that, at the level of the organisms we know about, intentionality and meaning are just as important as occipital lobes, lungs, and femurs; that these seemingly divergent levels of physicality and meaning are always and everywhere integrated in the organism, necessarily (i.e., they are ontologically nondual and are only made dual analytically); and that they play substantive roles in biophysical evolution as such. Again, the essential difference between us seems to be this: you see only self-deception and error in the attempt to metacognize, because your frame of reference is couched in absolute and representational terms; metacognition, construed only as the ability to represent the totality of the (non)self, thereby always results in “medial neglect” — a term you are now, interestingly, bringing *into* our awareness through concept-creation. I, by contrast, see not neglect per se but the aesthetic and dialectical opportunity that prefigures our rather successful ability to rotate around multiple kinds of meaning and re-orient ourselves towards new ecologies of concern through the reciprocal capture of the organism within a field of evolving values.
Bakker: I never said ‘no capacity,’ but rather what you suggested above, ‘situational capacity’: our metacognitive capacities are fractionate and heuristic, consisting of a number of different ways to tackle specific problem ecologies on the basis of limited information.
Self-cognition has to be enormously different for a number of reasons. The first, and most obvious, simply has to do with the astronomical complexity of the problem-ecology: namely, everything that enables cognition. Just think of the tremendous amount of resources we’ve invested in naturally cognizing human cognition. Just think of how almost all that we’ve discovered lies behind the veil of metacognitive neglect. Was philosophical reflection adequate to cognize the structure of the human memory system? Not at all. Plato’s aviary is a far, far cry from the complicated set of *generative* systems revealed by cognitive science. So why should we assume that philosophical reflection will fare better in any other respect? Given the complexity of the human memory system, accurate, high-dimensional metacognitive access to its structure and dynamics was simply not on the evolutionary menu. Instead, we evolved a fuzzy sense that we possess the ability to remember, and nothing more, *including the ability to metacognize this metacognitive blindness.* When Plato reflected on this fuzzy sense, he had no inkling whatsoever that he was groping blindly. Memory was clearly like an aviary. Our metacognitive sense of memory not only fails to provide anywhere near the information required to accurately theorize the structure and dynamics of human memory, it fails to provide information flagging this incapacity. We possess just enough capacity to discharge the kinds of basic tasks our ancestors required, say, to resolve disputes over recounted events.
The second has to do with the fact that our metacognitive systems are themselves *components* of the very problem-ecology they would solve. On the one hand, it is the *functional independence* of exo-environmental systems that allows the Bayesian networks of our brain to solve them: as soon as they become functionally entangled, they become wildly unpredictable. On the other hand, it is our ability to test our environments from multiple positions (to pick the apple, heft it, rotate it, taste it; in short, to continually accumulate more and more information) that allows us to solve ‘apples.’ Such is not the case for ‘red’ or for ‘subjectivity.’
The third has to do with the evolutionary tuning involved: our exo-environmental cognitive systems have hundreds of millions of years of evolutionary tuning behind them. And our metacognitive systems? No matter how optimistic your estimate, it will only comprise a fraction of this.
All this suggests that human metacognition is an opportunistic, fractionate, and specialized affair, adapted, crudely or exquisitely, to the solution of restricted problem-ecologies (as, indeed, the empirical evidence is trending). Since the problem-ecology of philosophy is clearly a cultural artifact, and given the inevitability of neglect, the threat of systematic misapplication of these heuristic capacities to problems they simply cannot solve looms large. The theory even predicts that humans possessing such a metacognitive system would find themselves perpetually perplexed regarding themselves, continually chasing intuitions that seem as clear as can be, yet remain perpetually underdetermined, and even physically impossible.
So, I’m asking you why we should assume that deliberative theoretical metacognition possesses anything like the capacity you seem to assume. Your list of questions actually nicely illustrates the problem I’m raising: Is one metacognitive capacity responsible for answering all of these questions? Or are many? Is it simply a *coincidence* that the ease with which our actual capacities can answer these questions directly tracks the practical, limited nature of the questions? As soon as they become general, suddenly everyone is spinning their wheels (while insisting they must be travelling!). If we have the capacity you suggest (despite the problems of complexity, complicity, and youth described above), then why should the questions of a general nature be the ones that we can never definitively answer (precisely as the problems of complexity, complicity, and youth suggest should be the case)?
Sorry for running so long, but you are mischaracterizing my view if you think I’m running afoul of some kind of residual representationalism. Also, things easily get lost in the pixel fog when you deal with multiple big issues. In fact, I think I can actually explain representations away: why they seem to have the structure they do, even why they have such a grip on the philosophical imagination. (This is just a taste.) Representations are poster boys for neglect, little mini-subjectivities, only flattened into something quasi-mechanical to be scientifically respectable.
You agree that we can have cognition without representation, so you agree that anything does not go. You insist that theoretical cognition of intentional phenomena (which is far different from theoretical cognition using intentional concepts) can be cognitive. This is your commitment to discharge, not mine. The mere fact that I’m saying your first-person speculation is noncognitive does not brand me as a representationalist, just as someone asking how it is you get from A to B… while pointing out all the features your ‘good’ intentional phenomena share in common with representations: things such as naturalistic inscrutability, low-dimensionality, causal incompatibility, and a history of theoretical futility.
Robbert: No worries on running long; this has been a very productive conversation, but you’re right that the issues are multiplying and the comment box is staying the same size. I’ll have to think more on the self-cognizing vs. other-cognizing issue, and let’s just assume I’m wrong about your (non)representationalism as well for the moment, though it does sound like you’re looking for a way to reduce the totality of the human cognitive system to representable cognitive parts that we can talk about, as a means to eradicate our first-person experience.
Anyway, my last question — for the moment anyway! — has to do with two different ways you cash out intentionality. Sometimes you write, “I’m not saying intentional cognition doesn’t exist, only that it cannot solve itself (in the way philosophers want),” while at other times you claim that BBT “explains intentionality away.” This leaves me confused, and I’m wondering, as a way towards some kind of clarification, if you could comment on the following short lecture given by Evan Thompson. What I’m really interested in is how you construe endogenously created mental states (first-person), initiated volitionally, that change brain states (third-person) in a directed way (dmf and I have been discussing how “levels,” as Thompson sometimes construes them, is not really the right language, and I’m already on record as saying that “first-” and “third-person” are only analytical distinctions, but I think the example still raises important questions about your position).
The main question is this: what do you make of what people like Thompson call the “top-down” effect of intentional attention on brain dynamics—a capacity Thompson claims is flexible and trainable (a view with which I agree). Anyway, here’s the video if you get a moment in the next few days / weeks:
Bakker: Intentional cognition is real; there’s just nothing intrinsically intentional about it. It consists of a number of powerful heuristic systems that allow us to predict/explain/manipulate in a variety of problem-ecologies despite the absence of causal information. The philosopher’s mistake is to try to solve intentional cognition via those self-same heuristic systems, to engage in theoretical problem solving using systems adapted to solve practical, everyday problems, even though thousands of years of underdetermination pretty clearly show that the nature of intentional cognition is not among the things intentional cognition can solve!
My response to Thompson is basically the same as my response to Deacon: emergence in nonlinear systems is entirely to be expected, and is mechanical through and through. What needs to be demonstrated by the *spooky* emergentist isn’t the way dynamic complexity generates novel effects, but the contention that apparent intentional phenomena ARE just such novel effects. Thompson entirely admits that he fails to do this, which is why I think he’s the more sober and careful of the two thinkers. (That said, Deacon is the more entertaining writer!)
Attention is a great way to look at the issue. We now know quite a bit about attention: how it is not, as many once thought, coextensive with conscious awareness (metacognitive reportability), but nonconscious as well. Conscious experience toggles between ‘attentions’ (on the basis of winner-take-all competitions between neuronal populations) in a manner entirely invisible to conscious experience. We now know this as a matter of empirical fact.
On BBT, the same way the lack of information regarding the earth’s motion fooled us into affirming geocentrism, the lack of information regarding the biomechanics of conscious experience fools us into affirming ‘noocentrism,’ the sense that we are some kind of self-interpreting rule and self-moving soul. Thanks to neglect, consciousness cannot even intuitively distinguish itself from itself (thus the illusion of the abiding now), let alone cognize precursors like those above. Thompson, who is bent on vindicating these metacognitive intuitions of spontaneity and autonomy, can only draw our attention to the *global* nonlinearity of brain function and suggest that the causal distinction between levels of description may somehow explain spontaneity and autonomy.
But as the local picture I sketched above shows, human attention is *plainly* a biomechanical affair. Conscious experience arises as a discrete downstream moment, and can be manipulated in a remarkable number of ways (via masking and so forth) once one knows how the machinery works. The kinds of associations between intentional phenomena and process-oriented nonlinearity that Thompson tries to make are metaphoric at best, which is why they break down as soon as we begin looking at the actual machinery involved.
And really, how could it be otherwise? We’re just more nature, after all. Aside from the (totally unreliable) deliverances of theoretical metacognition, we have no reason to think we’ll find some kind of intuition-redeeming spark or twist. (The upshot of the above, of course, is that there’s *nothing special* dividing the ‘endo’ from the ‘exo’ in BBT, and in this sense it is actually more enactive than enactivism. We really are just more nature, on my account, as opposed to ‘nature +’ as intentionalists would have it.)