The following is one of the first conversations I've had about the theory. It was wide-ranging and thorough, and so with Graeme's permission, I thought it would make a good beginning to this forum.
Jun 14, 2017
Edited: Jun 14, 2017
Conversation with Graeme M
I think your latest insight is a powerful one, and one worth meditating on. It is a powerful way to look at one's own life, I think. I have been creating my own delusions, my whole life, without knowing it. In truth, I don't really know who this organism that I call 'me' is. I have been experiencing/remembering this so-called self in a biased, motivated way (not always positive), and really don't know what the actual reality of my life has been. I find now that it's best to be quite humble about this organism, and not assume I understand him.
I'm at work, so I can only write a quick reply right now. Primarily I want to say how much I agree with what you wrote. I think you're right that we take our experience for granted, and so don't recognize when it is diminished, as compared to others. We do recognize, while on psychedelics, that everything has gone strange, but the anosognosics and hemispatial neglect patients have no clue (at least at first) that they are missing obvious aspects of reality. A full 50% of stroke victims have symptoms that they are unaware of, until their brains catch up enough to include that new knowledge in their experience.
But that leaves the rest of us with a haunting problem: what is it that WE are missing? We think we see reality, but of course we're building up a construct, based upon certain heuristics and our limited senses. Who knows what the bat knows about the world, that we are still missing?
There's a lot more to say, but I have to go now. Great insights, Graeme!
Thanks Matt, that's well explained - I see what you are getting at. This kind of cuts across some other thoughts I've had about the idea of experience, or more exactly the "what it feels like". I take your point about how there is no "I" in here - I am the brain doing its thing and the awareness that we have is a summary of events that includes the sense of self and context.
This rather suggests to me that we don't really HAVE a sense of what it feels like. When we think like this, we are mistaken about what we experience and the quality of that experience. I may have mentioned this before.
I am probably a materialist, if I understand the meaning of that term. I don't believe there is any mind stuff at all - no "mental" events. There is just the brain doing things. Our total experience is, as you point out, largely unknown to us. All of these nodes and subnets do their thing, and it's only when this control model or simulation is formed that we do have experience. But because there is no "me" inside, my experience is all there can be. Regardless of its form.
In other words, I doubt we are aware of when our experience is reduced in extent, such as with certain lesions or in cases such as HM. Because we cannot take some transcendental stance, we can never know when what we experience is lesser or more constrained. The majority of inner processes happen without awareness anyway, as you show. Of course people with brain injury or some kind of disability can know that their cognitive function is not the same as others', but here I think they are just aware of being less smart. Subjectively, whatever they do experience is just what it is to "be" them. So we can never actually know anything at all about what it is to be us. I'd go further and say that there simply is nothing that it is like to be us. Here is where I think Nagel is mistaken in his famous "What Is It Like to Be a Bat?" paper. He is mistaking the fact that we can offer a description of what we experience for a description of what it IS to experience.
We respond behaviourally to the world, we have experience, and we can use recall to adapt future behaviours but that's an operational description. When we see the world visually, there is nothing at all we can say about what that experience is like. We can say WHAT we perceive, not what it is to have perception.
Consider the paradigmatic case of colour. Julian Jaynes made the astute observation that much of our consciousness, our sense of self and relational experience is rooted in language, as you observe also. He notes that it is the use of metaphor that allows us to build up a rich recursive view of experience. Without this, the world just is. We use metaphor to build an overlay on top of that. Take the colour red. We can say that something is red, because we have a label for that colour. But we can only have this because we know many objects share this property, and we expose this via metaphor. That bucket is as red as a rose. Or, her lipstick is blood-red. Red itself, I suspect, has no actual quality to it at all. Without metaphor - linguistic description - what is red? If all the world were red, what actual feeling could we assign to it? It seems to me, none at all. It just is.
The same applies to a bat. Its perception is rooted in sonar imagery, but from its perspective the world operates the same as it does for us. Of course, we can describe what it is to experience visual imagery (eg shapes, colours etc), but there is nothing about the experience from our perspective that differs in substance from that of the bat. In effect, a bat perceives its world, builds up a picture of structure and placement, and behaves accordingly. Which is what we do. If we were a bat, we would not be aware that our experience differs from that of a man. There is nothing at all that it is to be a bat, or a person.
And so when we come to HM, it seems to me that it might be questionable whether in fact he had any of the kind of subjective experience that those with an intact hippocampus might experience (if your theory is correct). I don't think we can necessarily infer anything of inner experience from outer behaviour, including spoken words (which are after all simply motor actions). Perhaps as you've suggested HM used the underlying buffer contents for awareness, but perhaps not?
The trouble with that line of thought for me is the same one as the bigger question of "why". If hippocampal simulation generates NSE, why should upstream processes also generate awareness? Why draw the line there? Perhaps all neural processes generate awareness, and as we subtract successively upstream components we reduce experience? Or is it simpler to postulate that NSE is only generated by the hippocampus?
Thanks for both e-mails.
Very good insight about the prediction side of this. When I discussed this with hippocampus expert Gyorgy Buzsaki, he pushed the predictive side of the model, saying that hippocampus drives the conversation with the neocortex, which sounds counter-intuitive, since memory should come second.
I'm approaching the limits of my own understanding here, but my speculation is that there is a push/pull relationship between the hippocampus and neocortex. Of course, as described in the paper, news reports from all over the brain meet at the entorhinal cortex to be prepared for binding within the hippocampus. Field CA1 is the binding engine, nesting all the news reports within the theta wave output.
Upstream from CA1, field CA3 and the dentate gyrus also receive inputs from the entorhinal cortex, but they serve a complementary function to CA1. CA3 is the auto-associative engine that retrieves codes of earlier memories for memory recall. But during normal memory encoding, I think it also uses previous memories to provide predictive coding.
When CA1 generates its theta wave output, it nests entorhinal input into the up-cycle of the theta, but then nests the CA3 input in the theta down-cycle. I believe that the CA3 portion is probably the predictive side of the communication with the neocortex, whereas the up-cycle portion is probably the finished memory, which gives rise to NSE. This would allow each theta wave to have both push and pull with the neocortex, creating a multiple drafts-like revision process.
Finally, as to the 'hard problem' of why we have experience at all: prior to language, the only mechanism for recalling an episode would be through re-experiencing the original event. A rat cannot recall a fact in the same way we do, using linguistic concepts. Instead, its HF plays back the previous experience of going through the maze and arriving at the reward, and that drives present behavior. It feels (more or less) like it has gone back in time for a moment, and felt the original experience.
That said, the re-experiencing of the previous memory would not be meaningful, if there were not an original experiencing first. An 'offline' experience that is not familiar is not perceived as recall, but as fantasy. Also, the original experiencing has to be of the same type as the recall. It has to feature a unitary self, not a constellation of nodes. The memory recall needs a unitary self in order to keep the memory elegant and clear, and so the original experiencing also needs a unitary self.
Since the actual awareness of the brain happens at the nodal level, we can ask why there is no experience of that. It is hard for me to guess what a node 'experiences', since my own memories of experience are all at the global, unitary level. So I have a hard time being sure what HM was experiencing, if anything. I would say he was aware, on a nodal level, but did not have the kind of subjective experience that we have.
I'm guessing that HM did indeed have a (simple) inner life, and still continued to be aware in a way that is not truly strange to us. But if you or I were to wake up tomorrow after the same surgery that HM had, I think we would be very confused indeed. We could probably function fine for stereotypical tasks, but anything complicated would leave us befuddled, because we are so used to relying upon experience (and memory) as our way of connecting to the world. Over time, however, I think the neocortex would adapt, and be able to handle more and more complex situations, even though patients like HM could probably never live fully independent lives.
A lot of this is on the outer edges of my theory and research, so I'm giving you my best guess. I hope it makes sense.
Just one more thing that occurred to me through the night! :)
I had kind of been thinking of NSE as described in your theory as a post-event thing - that is that we do stuff and then we generate NSE as a part of the memory formation process. But I think what you are really driving at is that the HC serves to create a simulation of experience/events that is deeply embedded in predictive priming and behavioural responses. That is, we are still aware of things and responding to these things, but the "world" we experience in doing so is simply generated at the HC level. Sure, it is to an extent "post-event", but so closely after the event that in gross behavioural terms we could see them as simultaneous.
Memory is critical to the ongoing experience of "self in the world", of context and narrative and continuity and so on, so it should be no surprise that "memory" is what serves to knit the whole thing together. The distinction between NSE and memory that most people observe is an artificial distinction - in reality (according to the HCS theory), memory processes are all part of a continuum of experience.
Is that more or less it?
Thanks for the great reply. While as I've said I'm just somebody with an interest in this stuff, I do rather like the explanation proposed in your paper.
One of the problems with this matter of consciousness is why even have it (if we think of "consciousness" as NSE)? All of the processes of a typical human brain appear to me to be possible without any experience at all (though perhaps I am too ignorant of the detail to be able to make that claim). And that still stumps me somewhat even with the HC Simulation theory - why is there any need to have this NSE at all, if as in HM's case we see that he can get by to an extent without it. Is it some kind of amalgam of this idea and some others such as Graziano's control model or Dennett's multiple drafts model? That is, if NSE contributes in some way to future planning, we need to use a sparse version for simple tracking and prediction - if we were aware of all that's going on then we'd be overloaded and unable to make sense of things (though why even need to make sense of anything anyway?).
Presuming the HCS theory is correct, I'd tend to think that cases such as HM probably aren't aware in an NSE sense, and perhaps aren't even aware at all. I like what you say about him having just a few minutes of functional memory, as that jibes with the idea that memory is critical to an ongoing conscious awareness of self in the world. But would he even be aware of what you've called the buffer contents of processing? For example, if we think about pain, it seems to have two components - the automatic bit where we simply recoil from a painful stimulus and perhaps utter a sound, and the subjective sense which seems not as bound to the stimulus. Presumably the subjective element is useful in learning and later memory-based behaviours built upon that learning, whereas the automatic element just happens regardless (as it's earlier in the processing stream). If painful experiences are bound up in NSE, then we could ask: while HM would react to pain, would he say that he felt it? That would be interesting to know. My guess is that he might say that he did, because the motor responses that lead to vocalisations would, I think, tend to belong to pre-NSE activity, but the subjective element (ie how much it hurt, or what the hurt was like) would be missing.
But I might still be not quite getting the nuances of the theory. Perhaps NSE contributes a more complete experience as you say, and pre-NSE awareness is still conscious but less complete, and later overwritten by the NSE process (which I assume happens so quickly by our standards that it isn't recalled - I am very aware of how much our conscious experience of the world depends on what it is that we remember!). In fact that might explain what I've mentioned to you before - the fact that sometimes I am aware of something shadowy which later takes a more definite form (eg a movement that I catch and see as a dog, but then a moment later it becomes clear that it's actually a rock). Perhaps the initial awareness of sudden and novel stimuli is apprehended pre-NSE, before the complete scene is filled in by NSE.
Still, all fascinating stuff and I am hoping to hear more about critical response to the HCS. And hoping that you get the doco finished too - is that still likely?
Thanks for your e-mail, and for the reply on the Brains blog. Well written.
As for the question about "zombies", I think that formulation is dreamt up by philosophers, assuming an either/or situation. They figure that if someone doesn't have neurotypical subjective experience, then there is no experiencing at all. I can't say for sure what HM's inner experience was, but I'm sure he would say that he sees, hears, etc. I think that in us neurotypicals, because of the hippocampal loop that generates NSE, the new memory experience eclipses the neocortical interaction with the outside world. If it did not, we would experience life as a perpetual echo.
Also, the neocortex entertains many simultaneous possible interpretations of noisy inputs, but only sends its final conclusions, the newsworthy output, to the HF for memory encoding. So HM presumably lives in the noisy neocortex, and misses the relatively clean but contextualized experience that arises from the HF. He probably has a different experience, with less of a sense of self (in an autobiographically extended sense), of psychological continuity, of environmental context, and of a cognitive buffer/mind's eye, and of course, no enduring flow beyond a few minutes at a time.
So I think that non-NSE animals probably live like HM (albeit without the conceptual self that he learned pre-surgery). That said, I think that most mammals probably do have the same basic structure and process as us. This is because they use episodic memory in many of the same ways we do (way-finding, reward/punishment learning, social learning), and need to have at least a bodily self in memory and in future planning. Indeed, they probably have something like a cognitive self as well, for problem solving, even if that aspect of self is probably far less developed than in humans. This is especially true of social mammals, because the concept of self is necessary for interacting with social hierarchy, etc.
Does this make sense?
I saw your precis on the Brains blog. I think that's a nice summary. Someone commented (and I replied) and both the summary and the comment gave me a flash of insight that I'd missed before (or maybe I've just forgotten). The insight that occurred to me is simply that everyone could be mistaking "consciousness" for the events themselves as you propose. If we had no memory formation it might limit how functional we can be in the world, but it doesn't mean that we can't still do stuff. Because the processes that actually cause us to respond behaviourally remain intact and functional even without the HC.
One other thing. If you are right, creatures will only be "conscious" if they have the requisite memory formation (or at least, consciousness would lie on a gradient constituted by the extent to which memory processes operate in this manner). I have no idea which animals have such kinds of memory, but let's assume for argument's sake that a dog does not. Does this mean that all of such a dog's experience would be hidden from it? That is, it doesn't actually experience pain or joy or sadness? I'm not saying this IS the case for dogs, just asking if you think that creatures without memory processes of this nature simply ARE zombies?
I think your intuition that "consciousness and memory constitute one spectrum event" is about right. Yes, it's the same neural coded information that gives rise to both NSE and memory recall, but of course we experience them slightly differently, since NSE is so vivid and present. So yes, I mean what I say when NSE is caused by a "brand new episodic memory", but it's also valid to say that they "constitute one spectrum event".
But of course, I prefer the more "extreme" phrasing, because whenever the word "consciousness" gets employed, things get muddy. I want my language to reflect my schism with the old way of thinking, so it's hopefully less confusing.
Your last paragraph question gets into the same linguistic difficulty as previous ones. Not only are the words "conscious", "unconscious" and "I" all problematic, so too is "awareness". If you think about it, "awareness" is just another word for "consciousness". What I'm claiming is that "awareness" is every bit as much an illusion as "consciousness". There is only memory. Memory persists, and so we call it "awareness". Pre-memory processes exist, but do not persist, and so we cannot be aware of them.
This relates to the I/micro-agent schism. Micro-agents all around the brain are "aware", meaning that they take in stimulus, process it and respond. That is the only true meaning of aware in the brain. From their reports to the HF, memory is formed, and that memory activates a global experience, NSE. We think of it as awareness, because it includes a fictional "me" who seems to be aware of things. But it is no more true awareness than the micro-agent awareness before it. The only difference is: it is globally experienced, and it is persistent.
Again, remember that the micro-agents' processing cannot be experienced globally, because they are all local actors. The only global experiencing has to come from the neural newscast, which represents all the newsworthy processing before it. The loop of NSE takes disparate reports from all over the brain, stitches them together into a highly contextualized news report, and then broadcasts that report back, all around the brain. It is the only thing experienced, because it is the only thing that is unified.
Plus, experience is specifically a strategy of the episodic memory system (again, for off-line learning), and having separate pre-memory and post-memory experiences would be echoing and confusing, so it is not surprising that the remembered global experience only reflects the output from the "news media outlet", the one organ of the brain which stitches the entire event together.
If you surrender "we experience this summary newscast" and instead think of "the brain creates a newscast which will globally share a big picture story of events, including the self, and which can be recalled later for offline learning" then I think the language makes more sense.
And one last thing about the self: think about dreaming. In dreams, "you" seem to exist, think, feel, choose and act, even though none of that is happening. This is due to the confabulatory power of the hippocampus, and the fact that it has a self-model in the middle of it, for the sake of memory. But that dream-self is not a real self, even though it sure feels that way. Nor is the experienced self; it is also a confabulated construct, based upon news reports molded on an armature of habitual self-image.
The self in experience is confabulated, which is why people are not even aware when they overuse filler words like "like" and "um". People with very bad posture do not experience their own bad posture, until someone calls attention to it. They only experience their habitual body schema, albeit informed by newsworthy reports. Racists do not notice most of their racism. The very fact that some gay people can be closeted to themselves shows the amount of fiction that can go into the self-model. Not to mention multiple personality disorder. The self is largely assumed and filled in, including its motivations and feelings. It is also extremely hard to surrender the habit of believing in one's self, but I think it's worthwhile.
Thanks Matt. I still need to digest some of this. I agree entirely about there being no "I" - having read many arguments for this claim, I have come to accept it as right. Yet I find it hard to explain this to people because as you say we are all stuck in this idea that we have some independent form. So it becomes hard to talk about things.
For example, I've been discussing with people on that science chat forum the condition of "blindsight", and there's been a bit of confusion about the notion that "we" can "see" things. I don't think that we do, at least not how most people think of it. The process of seeing is part of what it is that we think of as "we". It's one stream of activity, what I think you refer to as an agent process. And the conscious experience of that process is a kind of summary of the process. I have to say though that it never occurred to me to think that this summary might simply be a memory, though I had certainly come to realise that memory was somehow critical. I was more or less thinking that there is only memory and that consciousness and memory constitute one spectrum event. But your version is even more ummmm... "extreme" than that!
Still, we are not properly tackling what I take to be the hard problem. Why, even if NSE is a memory, do we have awareness of it? How can there be a qualitative experience of a physical mechanism, when all there can be is the physical mechanism? What is the distinguishing feature of this mechanism that makes this happen? I understand what you mean about the fictional "I", but even disregarding this sense of self or agency, there is still the fact that we experience this summary newscast, but nothing else...
All the best
More good questions.
The really tricky thing is in the language. You ask what makes "us" aware of the hippocampal output, when "we" are not aware of other brain activity? Do you see the problem in the question?
The thing is, "I" am not aware of my new memory. Actually, "I" am aware of nothing. Rather, "I" only exist as part of the new memory. That sense that there is an "I" here who is aware of this but not that, that sense is an illusion.
There is no "I". There is only the brain. And the brain is busy communicating between various micro-agents, each sending queries and responses. None of it is "conscious" nor "unconscious"; it's just doing its job.
Part of the brain's job is creating memory. It has evolved to do this, because memory is useful for learning and learning is useful for survival. Because off-line learning (i.e. learning after the event, from a safe location) needs a 'self' in the middle of the memory, as context for the sake of comprehensible replay, memory tells the brain a fiction: that this "I" that is in experience is the actual being that had the experience and thought the thoughts, etc. But the "I" of memory is only a construct (albeit informed by newsworthy reports from body and cognitive structures). It is not an entity or emergence or process. It is only a memory.
And because memory has evolved to be "replayed" in the brain, it needs to be pre-played as well, in the form of immediate experience. And because memory has a limited capacity, only the newsworthy activity of the brain is sent to the hippocampus. These are all strategies of efficiency and efficacy.
This is all extremely counter-intuitive, I know. We are so used to "being" this "I" that consciousness researchers are looking for how this "I" came to be. But the "I" is not a being; it is only an expression of neural information. It is the brain telling a just-so story to itself, as part of an evolved survival strategy.
The brain processes a ton of stuff, and can learn without subjective experience. But memory is needed to hold on to anything complex or contextual. Because memory is the only thing that is experienced, it seems to be more important than the non-experienced part of the brain. Indeed, it seems to be consciousness. But it is actually just the brain's strategy for holding onto events for more than a few seconds. That's what makes the hippocampus important: its buffering, storage and replay functions. But it is no more "conscious" than any other part of the brain. And it is not "aware" of anything; it is just a holodeck for creating experiences.
I know this is headache-inducing stuff. I hope it makes sense.
Thanks for these comments; as I said, I will reread the paper and our emails to date and see if I can get a more complete picture of the theory. I still really like it, though whether it will fly more generally remains to be seen, I guess. Yet to my eye, it just answers so many questions.
As far as what you say below goes, I was more getting at physical realisers. Why should a hippocampal activation of neural assemblies be "conscious" when so many other neural assemblies aren't? I mean this mechanically, rather than either conceptually or causally. What is it about that kind of neural arrangement that makes us "aware" of it, when so much else goes on in our heads of which we aren't aware? I understand what you say about it being a sort of news report, or summary, or perhaps even a control model ala Graziano, but that doesn't answer the question of why, or indeed how. We still end up back at the original hard problem, ie why some brain processes are attended by qualia and others aren't.
My best guess is something like what Prinz proposes when he talks of "maintenance". That is, when certain neural activity is maintained for long enough, we become aware of it (perhaps this is rather like what I meant earlier when I suggested that the "present moment" appears to be around a half second long).
Do you have any thoughts about this?
These are very good questions. I can only speculate on them, because they sort of ask the 'why' of evolution.
My assumption as to why we have only post-hippocampal experience is as follows:
It would be too confusing to have both pre- and post-hippocampal experience, as it would be like a constant echo.
Since experience is a strategy of the memory system, it conforms to the needs of episodic memory. Recalled memory needs a singular unitary event, one coherent history, rather than one neocortical event and one hippocampal one.
Memory exists mostly for the sake of offline learning, playing back an experience to learn from it. Most learning is about conditions in the environment, and how I dealt with them, so memory doesn't need information in it that doesn't potentially represent those elements. Any additional information just muddies the memory.
Also, the neocortical event (which is the precursor to the hippocampal one) is parallel and distributed, so it cannot be experienced globally. Only the new memory is unified, contextualized and has keys to activate a global experiencing.
What distinguishes the neocortical representation from the hippocampal one? That's especially hard to address, because I don't have access to the neocortical representation. But presumably the hippocampal one has all the context already bound in, unlike the neocortical one. For example, dorsal visual information is egocentric, which is vision like a camera, but the hippocampal allocentric view has the context of the cognitive map of space included, so we always see things in context of our understanding of the environment that is currently unseen (e.g. if you look only at one corner of your room, you continue to be aware of the context of the rest of the room, and could probably navigate through it without looking). Hippocampal experience is also only the newsworthy parts, so a lot of the neocortical stuff is left out. I speculate that there is probably less coherence in H.M.'s perception, although I'm not sure how that would be tested. I think it is likely that he is aware of his scotomas, at least if prompted.
Why recall is so much less vivid and real-seeming than immediate experience: memory works most efficiently as a lesson, rather than a movie, because a lesson can efficiently drive future behavior. So the movie of experience gets quickly examined and boiled down to a gist. The original experience is full of qualia, because the brain doesn't yet know which qualia will be important in the memory. It is only upon consolidation that the lesson emerges and the unnecessary qualia are discarded. Some qualia are still useful in the memory, however, because the original lesson might not have been the right one, and the qualia in memory may be needed to reassess the original lesson. For example, if I associate a certain color berry with poison, I need to keep the qualia of that color in the memory to inform my lesson. But if later I realize that the berry color was not the signifier that I thought it was, the shape of the leaf may serve me better, so it helps if I can recall that detail. Eventually, once the lesson of that leaf shape has been fully learned, it is processed only in the neocortex, and the episodic memory of the original event is no longer needed, and likely forgotten. In fact, as we become familiar with things, they are represented less and less in memory, and even in immediate experience (e.g. the view of our noses, but also familiar pathways, smells and sensations, my messy desk, the pictures on my walls, all fade from even immediate experience because they are no longer useful for memory).
I think these are reasonable possible explanations as to why experience happens the way it does.
Well, I think it's a great example of lateral thinking, even if it turns out not to be right. Anyways, I'll try to get to and reread the paper and all our emails and I'll come back to you in a week or so with my final opinion for whatever that's worth. All the best with it just the same.
One thing that occurs to me, and it's likely due to me not having a clear enough idea of how the actual physiology and processes hang together (or maybe it's in the paper and I've forgotten!). If NSE is the first activation of the memory, which is in effect just a reactivation of the neural states that contributed to the memory, how come this activity is conscious and not the original activity? What distinguishes, say, the neural pattern of vision as it is reported TO the hippocampus versus the neural pattern of vision as activated BY the hippocampus? I understand what you mean about the HC binding these reports into NSE, but why is that conscious (or even how does it become conscious) rather than the earlier states? And why is this first activation of a memory more vivid than subsequent activations? I understand why that is desirable in behavioural terms, but what is the distinguishing physiological feature?
All the best
When the idea first came to me 3 years ago, the first thing I did was make a list of the major mysteries of subjectivity, like the hard problem, anosognosia, driving mind, etc. and saw that they all lined up within the mechanism, without hand waving. That was my first confirmation that the idea might make sense. Those mysteries also made certain demands upon the theory, which created my first testable hypotheses; I'd go do the research to make sure the anatomy fit those demands. The further I go along this path, the more clearly it seems to align with the evidence, which is why I was willing to put all that time into it, and have rested my doc's credibility on the idea.
I am very grateful for your feedback, precisely because you are an interested party, but have no dog in the fight. The response from academia is more likely to be freighted with preconceptions and the inflexibility that sometimes comes with being an expert. So it's good to hear from someone who has been asking similar questions, but who hasn't yet made up their mind. Thanks again!
Just a quick one. I haven't had a chance to really think about what you say below, but I did quickly scan it through. I've been thinking a lot about your theory the past few days while out walking and I am really starting to like it. It hits so many marks. As I said, I'm not sure I have any right to go saying things like that given I have no great knowledge about the topic, but gee... it makes sense!!
I think I'll go over that last email more thoroughly and maybe reread the theory paper. What a fascinating idea!!
Thanks for your latest e-mails. Again, I am responding at work, so I'll be brief.
I think it really is important to drop the word "conscious" because it continues to muddy the conversation. It means too many things, and is thus ultimately useless.
What I'm describing with the HST is merely NSE. I also assert that NSE creates the persistent illusion of a unified self, with unified will, etc. This gives rise to the illusion that there is a 'mind' (i.e. consciousness) which decides and does things, but that is an illusion. There is only a brain. The brain senses the world and responds to it, and then creates an after-the-fact internal movie about that interaction, primarily for the sake of recalling later. Part of that movie is what we call mind.
There is also cognitive activity, but it is nothing like what we experience as mind. It is synaptic firing of various regions communicating with each other. However, since the cognitive DMN uses the HF as its buffer and workspace, the most newsworthy cognitive activity becomes part of the movie. That is what we experience as mind.
Absolutely, the extrinsic networks are 'responsive' to the outside world (responsiveness and the ability to initiate behavior are often used as synonyms for 'consciousness'). We should not call these consciousness, however, because the philosophical term only means subjective experience (i.e. what it's like to be). Also, they are the work of various parallel agents, and not unified. Also, most of the responsive activity of the extrinsic networks never makes it to awareness, and so those aspects cannot be called "conscious". Only the newsworthy activity gets reported and thus becomes part of the global awareness of NSE.
Another problematic term is "unconscious" because it also means too many things. That's why I prefer pre-memory. Of course the neocortex is not "unconscious" in the sense of being asleep or in a coma. You could even say that the neocortex is 'aware', but it is not the same kind of awareness that makes up NSE. It is micro-agent ground-level type of awareness, instead, i.e. processing of sensation, decision and motor action.
This granular, multi-agent type of awareness is what H.M. relied upon. We neurotypicals also have that level of processing, but it is eclipsed by NSE, so we have no direct awareness of the neocortical agents and their processes. We only have the gestalt awareness, which has bound together only the most newsworthy reports from the pre-memory processes.
So yes, the brain can sense, process and respond to external stimuli without an HF, as H.M. displayed. That said, the output from the HF is nonetheless useful, and is still involved in choosing behavior, just much less than it seems. Primarily it works by informing the default mode network of the big picture, which allows for extrapolating beyond the current scene, guessing what's going on in others' heads, and future planning.
But it also works by sharing one common story with the extrinsic networks. This is significant, because the 'one common story' is much more context-laden than most immediate processing. A lot of that context comes together in the perirhinal area (part of the HF), which is an association hub that links together the conceptual aspects of objects, etc. It connects how the object looks, how it feels, smells, sounds, and even how I feel about it. Then more context comes together in the entorhinal cortex, the main hub into and out of the hippocampus, which brings together data from all over the brain. Thus, the HF does more than create a news story; it creates a news story that is more context-rich than exists elsewhere in the brain. That context does inform decision-making and behavior, and so is part of the (less-immediate) behavioral loop.
So NSE probably doesn't inform behavior RIGHT NOW. But it will inform emotional response and behavior half a second from now. And even more, it informs the DMN about how to plan for future behavior. In that sense, NSE is a very useful (but not necessary) piece of information for normal interaction with the world.
To sum up, the brain is still a government of many agents, each doing their job, each 'aware' in their own way, but only aware of their own necessary inputs, not of the global state of the brain. There is no central node that keeps track of the global state of the brain, not even the prefrontal cortex. It's just semi-autonomous nodes, receiving and analyzing input, and outputting reports for other nodes/departments to work with.
The only (semi-) global event is NSE, and that's because of the binding action of the HF, weaving input reports into one gestalt experience. I call it semi-global, because it leaves out so much of the (non-newsworthy) processing that led up to the event. It is a good-enough system that keeps data lean, lets the neocortex remain pliable by forgetting quickly, and creates a chain of memories for psychological continuity and off-line learning.
I hope this makes sense, and helps clarify some of the outstanding issues.
Matt, I said I wanted to tackle the quoted para below. I think it reflects something I might not have fully grasped in your paper. You are actually arguing that we do NOT have a "conscious" experience of the moment that directly informs behaviour. Rather, you are saying behaviour is a consequence of entirely unconscious processes. What we take for moment by moment consciousness is in fact memory - that is, we generate a memory, or summary report of input/output, of what actually did happen in the moment and this first activation of memory is used for further predictive priming and also background for future episodes. Is that correct?
If so, we are in fact a kind of "zombie" - we do stuff, but without any actual awareness of what we do, nor any "qualia", in the moment of behaviour. Qualia and all that entails arises a fraction of a moment later when the summary report of what just happened generates the memory. The memory gets a whole slew of qualitative properties such as emotional salience, sensory properties ("quales") and so on that help place it in a context.
This sounds pretty good to me but I do have a couple of problems with it. The first concerns what we've been talking about with driving mind. As far as I can see from what I've read, we do have to be conscious to produce behavioural responses. What I mean by "conscious" is the second state I described earlier. This seems to inform the cognitive form of consciousness when we want to direct behaviour using top-down cues. I agree that behaviour based on bottom-up cues is non-conscious in the directive sense, but even here we still need the second kind of consciousness - awareness of qualia - to prime the response.
A simple example. If a person exhibiting normal NSE suddenly sees a ball flying towards their face, they will react using bottom-up cues and flinch. The flinch avoidance behaviour is unconscious in that it is not directed. However, if the ball is part of a game of tennis or cricket or whatever, and the person has been following the game and the ball's progress, the resulting behaviour might be different - top-down cues direct a flexible behavioural response that overrides the instinctive flinch. On the other hand, if the person has no conscious awareness of the scene - that is, they do not perceive some qualitative property of the scene - there will be no behavioural response at all. This is where we can use blindsight as an illustration of what I mean.
Blindsighted people can detect objects and make assessments about those objects at better than chance levels. They can also detect movement and may even have some residual subjective experience of the movement. But they seem to need a cue to do so, which is usually provided by the experimenter. Left to their own devices, a blindsighted person will not flinch from that ball, as I read the literature.
So it seems to me that we have to have awareness of a scene in order to offer more than the most basic of responses to stimulus. As Prinz argues, this kind of consciousness - my second kind - is required to fill the role of a menu for action. However, you argue that we don't actually have this kind of consciousness at all in such scenarios. All of behaviour is generated unconsciously and what we experience is an after-the-fact record. I find it difficult to reconcile this apparent contradiction.
One way out of this dilemma perhaps is to appreciate the speed at which all of this happens. Given the massive number of parallel processes and the extraordinary number of connections active during everyday neural processing, much of what we've just described must take place at millisecond intervals. Indeed, the value in generating more salient predictive scenarios is faster reactions to external events so the quicker we can put all of this together the better. Perhaps the fact that NSE is a memory, but a predictive priming memory, isn't so much of an issue in experiencing scenes. The fact that it so closely succeeds the actual moment AND primes future behaviours means that at the gross level of experience the two are practically indistinguishable, although as you note Libet's experiments perhaps shed light on this phenomenon (though consider Schurger's recent paper on this topic).
If though at any point in the processing hierarchy there is a loss of signal, such as V1 damage in blindsight, we simply cannot process information well enough to generate behavioural responses. And thus our memory, our NSE, reflects that fact. Whatever NSE the HC generates includes whatever reports it has to hand, and lacking anything from the visual stream it just doesn't include that in the report.
Hmmm... if any of this is largely what you are proposing, I think I rather like it. When we try to make sense of "consciousness", we are immediately struck by the problem of the neural correlate for consciousness. Where is it? We only have three possibilities, it seems to me:
1. There is actually no such thing so it doesn't arise anywhere. I guess this is the eliminativist position, and it's not so easily dismissed as some think. Still, it's hard to ignore that something is going on that we seem to be aware of.
2. Consciousness arises in widely connected networks in some kind of memory buffering process. Susan Greenfield's work seems to suggest this and it's backed by considerable research into anaesthesia effects. Prinz's AIR is typical of this school of thought - consciousness arises in each sense modality through intermediate representations being modulated by attention. But why exactly these neural assemblies should cause consciousness to occur is harder to see, in my view. I think I can come up with plausible scenarios but these require a bit of sleight of hand as it were.
3. There is some central location where consciousness occurs. Dennett would oppose this idea as he sort of straddles both the 1st and 2nd camps. But it seems the most obvious to me because it answers neatly why it is that only some neural arrangements are conscious and not others. Why is there some privileged set of arrangements that cause consciousness?
I know that in the face of detailed reasoning we can probably make good claims for any of the above, or dismiss all of them in turn. But I like simplicity of explanation, and at this point I rather think the HST could be a great candidate... I'll have to think some more on this. But I need to know if what I now understand of your theory as explained in my opening few paragraphs reflects your proposal?
I just got your latest e-mail, at the same time I finished my hallucination response. So here's that for now. About hallucinations: I think we can understand hallucinations better if we look at what causes REM dreams (as I understand it). During REM sleep, the neocortical extrinsic networks are consolidating semantic (knowledge) and procedural (skill) learning. It's the brain's sleep rehearsal. The hippocampus remains active during that time, in order to help provide spatial, sequence and other memory cues, which help inform the rehearsal. As during waking, the connection between the HF and neocortex is a loop; the HF informs the neocortex and vice-versa. The extrinsic neocortex sends its reports to the HF, as always, and the HF, as experience generator, confabulates an experience out of it. This confabulation just takes whatever near-random information is coming in and turns it into a compelling experience, strikingly similar to waking experience, even if it doesn't make any sense.
Even though the memory generator (the HF) is producing the dreams, the brain doesn't need them to make sense, because 1. behavior is almost impossible, and 2. most of them won't be remembered. (It's what we remember that defines reality for us.) Hallucinations may also reflect confabulations. The error is likely within the prediction system of the extrinsic neocortex, which is why it tends to predict motion in static objects, or faces in walls, etc. The brain nodes that are altered include the predictive sensation systems (mostly hearing for schizophrenics, vision for psychedelics), the error correction circuits (especially orbital-frontal), and some feedback loops. This allows errors to be made, and to remain uncorrected. The HF confabulates the faulty information into an absurd but compelling experience.
The voices 'heard' by schizophrenics are likely verbal thought that is missing the metadata tag indicating that it is self-generated. Just as alien hand syndrome subjects are missing the metadata tag saying "I did this" and therefore perceive the hand movements as being made by someone else, so too schizophrenics mistake the thought as being other-generated. This is an important issue when considering what "will" is, because some node in the brain is generating the willful movement or thought, but it only feels like "me" if it includes the metadata. And then there may be hallucinatory-type problems that stem from the predictive powers of the HF itself. If you remember my paragraph about anosognosia for hemiplegia, that's one such case. The HF body model predicts movement to accompany reports from the pre-motor area and, missing an error correction from the motor cortex (precisely because of the damage to the motor cortex), the HF prediction remains unchallenged, and becomes what is subjectively remembered/experienced as what happened. Again, the metadata tags of "I did this" or "I didn't do this" are very important for the sense of agency and ownership.
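(An aside, in case it helps make the metadata-tag idea concrete: here is a purely illustrative toy sketch, not part of the theory's actual machinery. All names here are my own invention. The point is just that a report either carries a self-generated tag or it doesn't, and attribution of agency follows the tag, so a missing tag defaults to "other-generated".)

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Report:
    """A toy 'report' delivered to the experience generator."""
    content: str
    # The hypothesized 'metadata tag'; None models a missing tag,
    # as in heard voices or alien hand syndrome.
    self_generated: Optional[bool] = None

def attribute(report: Report) -> str:
    """Attribute agency purely from the metadata tag.

    A missing tag is treated the same as 'not mine', so the
    content is experienced as other-generated.
    """
    if report.self_generated:
        return "I did this"
    return "someone else did this"

# A tagged inner-speech report is experienced as one's own thought:
print(attribute(Report("verbal thought", self_generated=True)))   # → I did this

# The same content, minus the tag, is misattributed:
print(attribute(Report("verbal thought")))                        # → someone else did this
```

In this sketch the experienced "voice" and the self-generated thought are identical in content; only the tag differs, which is the whole point of the analogy.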
I think there's a good argument for HF involvement in the out-of-body hallucination that some subjects have when their TPJ is hit with transcranial stimulation (TCS). I hypothesize that the TPJ is a comparison site for various models of the body-in-space, as generated by the cerebellum, dorsal visual stream, right hemisphere temporal lobe, motor cortex, somatosensory cortex and yes, the hippocampus. All these streams converge at the TPJ, and I suspect the differences in calculations/predictions/representations of body-in-space are worked out there. When TCS is applied there, the hub is disrupted, and the hippocampal representation of body-in-space is no longer constrained by comparison with the more body-bound representations. Free of error correction, but still possessed of a cognitive map of the space, and still receiving audio input, it's possible for the HF body model to feel unmoored and apparently move about the space. There's a lot of speculation in this last argument, but I think it's anatomically reasonable. best, matt faw
Thanks for continuing to discuss this too, I enjoy thinking about the whole topic even though I am just a beginner.
I think what you've said below makes sense in terms of a fundamental disunity, although I guess this is really more a case of multiple parallel processes that drive the organism (which in this case is us). My guess is that temporal encoding of neural activations ensures some kind of unity but one in which any specific process is dissociable from the whole. But that's just a naive guess. It just reflects what I said earlier - my experience of the world seems to be of many things going on but which have temporal coincidence. I have a wonderful example from just this morning. I was watching a documentary and there was a narrator giving a running commentary of the story. But interwoven with this was the occasional scene in which one character or another offered their own comment. In this case, the character had a male voice very similar in pitch and timbre to the narrator. At one point while I wasn't paying a great deal of attention they switched to this character and I didn't immediately pick up that he was speaking. I assumed the narrator was still speaking, until it occurred to me that the words and the character's lips coincided temporally. At that point the experience of the character saying the words rather than the narrator, just "popped" into awareness!
I agree about using the word "conscious" in these kinds of discussion because this term seems to get all mixed up in various contexts. At one extreme we have the idea that it equates to subjective experience which seems to boil down to experiencing qualia, at the other extreme is say Julian Jaynes' idea of a metaphorical "I" space founded upon language. I'm never quite sure what anyone is actually talking about! That said, I think your point regarding memory gets to the core of it. A lot of what we take to be consciousness is what we remember - if we had no memory moment to moment I suspect we'd have no qualia at all.
Perhaps we can distinguish between these various takes on consciousness?
When I speak about consciousness I think mostly I mean cognitive or psychological consciousness which is largely about the physical operation of the brain and how an organism behaves in response to stimulus. I think I just mean, does an organism sense its environment and use that sensory information to adapt behaviour, or respond to novel situations, or learn from experience. Such consciousness doesn't necessarily need "qualia", but perhaps as per Graziano it is impossible to do any of these things without an internal model that the organism has access to. By contrast, I get the feeling that often, everyone else talks of consciousness as qualia, or the private, ineffable essence of experience. So there are two senses of the word - one means to be a sentient, flexibly behaving system, the second means to have an inner experience of the world that has a "feeling" to it. It is what it is like to be that system.
I think the former is possible in any physical system that obeys physical laws and produces agreed outputs, so in that respect I would say I am a functionalist. But that's at a sort of philosophical level - I have no idea whether that is possible. Perhaps only organic systems can do this. The latter though... this seems to be the real point of contention. Why does red feel like, well... red? The trouble with this particular view is that it seems we can never explain it. If something only has whatever qualities we ascribe to it but which we can never identify, describe or point to externally, I think we are in trouble. My guess is that they don't actually exist.
This leads us to my claim about being conscious to experience driving mind. Here, I mean conscious in the second sense - that is, we have an awareness of qualia (I know, I just said they don't exist, but what I mean is that we do have the experience, it's just that this does not constitute some separate mental state that has some causal relationship to a physical state). This is just guessing, but I am pretty confident that we have to have the state of awareness (or whatever we want to call experiencing qualia) to behave in the form that I used in my definition above for "cognitive" consciousness. I maybe don't know enough about sleep walking/driving/speaking, but I am sceptical that anyone in such a condition could do any of the things I claimed as definitional. Someone who is genuinely "asleep" and hence not conscious or aware of external sensory input isn't going to be able to behave flexibly or to learn. If they can, then by definition they are aware. They may not recall that fact, but they must be.
I make this claim on the basis that awareness or consciousness (or whatever this form of experience can be called) clearly evolved for a purpose and I think, like Prinz and Graziano, that it is intrinsically tied to behaviour. As Prinz calls it, it is a "menu" for action. Take it away, or reduce its field through attentional focus, and you lose flexibility of behaviour. Because we don't exist as a real subjective "I" but are rather simply the operation of the brain, we don't get to call the shots. We are conscious when we make behavioural decisions because that's what brains do.
I see consciousness therefore as some kind of continuum. If we are sleep walking and we can actually interact with our environment in some meaningful way, produce flexible behaviours and so on, then we are conscious in the second way - we have awareness or qualia, regardless of whether we remember that fact. If we produce random behaviours that do not correlate with external conditions but which require that our brains are still doing stuff and producing motor actions, then we are conscious in the first sense but cannot have awareness of qualia.
I would say typical driving mind in which we make it home with no incidents suggests that we are conscious in both senses. If we lacked the second case, we'd either have an accident, or an incident in which attention would be quickly brought to bear on whatever external stimulus demanded it. What I mean by this is that if we successfully navigate the drive home but don't recall it, then by definition we are conscious in both senses. The failure to recall the drive is just that - a failure to recall...