The following is one of the first conversations I've had about the theory. It was wide-ranging and thorough, and so with Graeme's permission, I thought it would make a good beginning to this forum.
Hi Graeme,
I think your latest insight is a powerful one, and one worth meditating on. It is a powerful way to look at one's own life, I think. I have been creating my own delusions, my whole life, without knowing it. In truth, I don't really know who this organism that I call 'me' is. I have been experiencing/remembering this so-called self in a biased, motivated way (not always positive), and really don't know what the actual reality of my life has been. I find now that it's best to be quite humble about this organism, and not assume I understand him.
best,
matt faw
Hi Graeme,
I'm at work, so I can only write a quick reply right now. Primarily I want to say how much I agree with what you wrote. I think you're right that we take our experience for granted, and so don't recognize when it is diminished, as compared to others. We do recognize, while on psychedelics, that everything has gone strange, but the anosognosics and hemispatial neglect patients have no clue (at least at first) that they are missing obvious aspects of reality. A full 50% of stroke victims have symptoms that they are unaware of, until their brains catch up enough to include that new knowledge in their experience.
But that leaves the rest of us with a haunting problem: what is it that WE are missing? We think we see reality, but of course we're building up a construct, based upon certain heuristics and our limited senses. Who knows what the bat knows about the world, that we are still missing?
There's a lot more to say, but I have to go now. Great insights, Graeme!
best,
matt faw
Thanks Matt, that's well explained - I see what you are getting at. This kind of cuts across some other thoughts I've had about the idea of experience, or more exactly the "what it feels like". I take your point about how there is no "I" in here - I am the brain doing its thing and the awareness that we have is a summary of events that includes the sense of self and context.
This rather suggests to me that we don't really HAVE a sense of what it feels like. When we think like this, we are mistaken about what we experience and the quality of that experience. I may have mentioned this before.
I am probably a materialist, if I understand the meaning of that term. I don't believe there is any mind stuff at all - no "mental" events. There is just the brain doing things. Our total experience is, as you point out, largely unknown to us. All of these nodes and subnets do their thing, and it's only when this control model or simulation is formed that we do have experience. But because there is no "me" inside, my experience is all there can be, regardless of its form.
In other words, I doubt we are aware of when our experience is reduced in extent, such as with certain lesions or in cases such as HM. Because we cannot take some transcendental stance, we can never know when what we experience is lesser or more constrained. The majority of inner processes happen without awareness anyway, as you show. Of course people with brain injury or some kind of disability can know that their cognitive function is not the same as others', but here I think they are just aware of being less smart. Subjectively, whatever they do experience is just what it is to "be" them. So we can never actually know anything at all about what it is to be us. I'd go further and say that there simply is nothing that it is like to be us. Here is where I think Nagel is mistaken in his famous "What Is It Like to Be a Bat?" paper. He is mistaking the fact that we can offer a description of what we experience for a description of what it IS to experience.
We respond behaviourally to the world, we have experience, and we can use recall to adapt future behaviours but that's an operational description. When we see the world visually, there is nothing at all we can say about what that experience is like. We can say WHAT we perceive, not what it is to have perception.
Consider the paradigmatic case of colour. Julian Jaynes made the astute observation that much of our consciousness, our sense of self and relational experience is rooted in language, as you observe also. He notes that it is the use of metaphor that allows us to build up a rich recursive view of experience. Without this, the world just is. We use metaphor to build an overlay on top of that. Take the colour red. We can say that something is red, because we have a label for that colour. But we can only have this because we know many objects share this property, and we expose this via metaphor. That bucket is as red as a rose. Or, her lipstick is blood-red. Red itself, I suspect, has no actual quality to it at all. Without metaphor - linguistic description - what is red? If all the world were red, what actual feeling could we assign to it? It seems to me, none at all. It just is.
The same applies to a bat. Its perception is rooted in sonar imagery, but from its perspective the world operates the same as it does for us. Of course, we can describe what it is to experience visual imagery (eg shapes, colours etc), but there is nothing about the experience from our perspective that differs in substance from that of the bat. In effect, a bat perceives its world, builds up a picture of structure and placement, and behaves accordingly. Which is what we do. If we were bats, we would not be aware that our experience differs from that of a man. There is nothing at all that it is to be a bat, or a person.
And so when we come to HM, it seems to me that it might be questionable whether in fact he had any of the kind of subjective experience that those with an intact hippocampus might experience (if your theory is correct). I don't think we can necessarily infer anything of inner experience from outer behaviour, including spoken words (which are after all simply motor actions). Perhaps as you've suggested HM used the underlying buffer contents for awareness, but perhaps not?
The trouble with that line of thought for me is the same one as the bigger question of "why". If hippocampal simulation generates NSE, why should upstream processes also generate awareness? Why draw the line there? Perhaps all neural processes generate awareness, and as we subtract successively upstream components we reduce experience? Or is it simpler to postulate that NSE is only generated by the hippocampus?
Cheers
Graeme
Hi Graeme,
Thanks for both e-mails.
Very good insight about the prediction side of this. When I discussed this with hippocampus expert Gyorgy Buzsaki, he pushed the predictive side of the model, saying that the hippocampus drives the conversation with the neocortex, which sounds counter-intuitive, since memory should come second.
I'm approaching the limits of my own understanding here, but my speculation is that there is a push/pull relationship between the hippocampus and neocortex. Of course, as described in the paper, news reports from all over the brain meet at the entorhinal cortex to be prepared for binding within the hippocampus. Field CA1 is the binding engine, nesting all the news reports within the theta wave output.
Upstream from CA1, field CA3 and the dentate gyrus also receive inputs from the entorhinal cortex, but they serve a complementary function to CA1. CA3 is the auto-associative engine that retrieves codes of earlier memories for memory recall. But during normal memory encoding, I think it also uses previous memories to provide predictive coding.
When CA1 generates its theta wave output, it nests entorhinal input into the up-cycle of the theta, but then nests the CA3 input in the theta down-cycle. I believe that the CA3 portion is probably the predictive side of the communication with the neocortex, whereas the up-cycle portion is probably the finished memory, which gives rise to NSE. This would allow each theta wave to have both push and pull with the neocortex, creating a multiple drafts-like revision process.
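As a purely illustrative aside (my own toy caricature, not anything from Matt's paper): the push/pull idea above can be sketched as a phase gate, where the "news" (entorhinal/sensory) signal passes on the theta up-cycle and the CA3 "prediction" signal passes on the down-cycle, so each theta cycle carries one push and one pull. All names and numbers here are invented for the sketch.

```python
import math

def theta_gated_stream(sensory, prediction, steps_per_cycle=20, cycles=3):
    """Toy caricature of CA1 phase-gating: route the entorhinal ('news')
    signal during the theta up-cycle and the CA3 ('prediction') signal
    during the down-cycle, giving one push/pull pair per theta cycle."""
    stream = []
    for i in range(steps_per_cycle * cycles):
        phase = math.sin(2 * math.pi * i / steps_per_cycle)
        if phase >= 0:
            stream.append(("sensory", sensory))        # up-cycle: bind new input
        else:
            stream.append(("prediction", prediction))  # down-cycle: predictive recall
    return stream
```

The point of the sketch is only the alternation itself: within every cycle the two halves carry different kinds of content, which is the structural precondition for a revision loop between finished memory and prediction.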
Finally, as to the 'hard problem' of why we have experience at all: prior to language, the only mechanism for recalling an episode would be through re-experiencing the original event. A rat cannot recall a fact in the same way we do, using linguistic concepts. Instead, its HF plays back the previous experience of going through the maze and arriving at the reward, and that drives present behavior. It feels (more or less) like it has gone back in time for a moment, and felt the original experience.
That said, the re-experiencing of the previous memory would not be meaningful, if there were not an original experiencing first. An 'offline' experience that is not familiar is not perceived as recall, but as fantasy. Also, the original experiencing has to be of the same type as the recall. It has to feature a unitary self, not a constellation of nodes. The memory recall needs a unitary self in order to keep the memory elegant and clear, and so the original experiencing also needs a unitary self.
Since the actual awareness of the brain happens at the nodal level, we can ask why there is no experience of that. It is hard for me to guess what a node 'experiences', since my own memories of experience are all at the global, unitary level. So I have a hard time being sure what HM was experiencing, if anything. I would say he was aware, on a nodal level, but did not have the kind of subjective experience that we have.
I'm guessing that HM did indeed have a (simple) inner life, and still continued to be aware in a way that is not truly strange to us. But if you or I were to wake up tomorrow after the same surgery that HM had, I think we would be very confused indeed. We could probably function fine for stereotypical tasks, but anything complicated would leave us befuddled, because we are so used to relying upon experience (and memory) as our way of connecting to the world. Over time, however, I think the neocortex would adapt, and be able to handle more and more complex situations, even though patients like HM could probably never live fully independent lives.
A lot of this is on the outer edges of my theory and research, so I'm giving you my best guess. I hope it makes sense.
best,
matt faw
Just one more thing that occurred to me through the night! :)
I had kind of been thinking of NSE as described in your theory as a post-event thing - that is that we do stuff and then we generate NSE as a part of the memory formation process. But I think what you are really driving at is that the HC serves to create a simulation of experience/events that is deeply embedded in predictive priming and behavioural responses. That is, we are still aware of things and responding to these things, but the "world" we experience in doing so is simply generated at the HC level. Sure, it is to an extent "post-event", but so closely after the event that in gross behavioural terms we could see them as simultaneous.
Memory is critical to the ongoing experience of "self in the world", of context and narrative and continuity and so on, so it should be no surprise that "memory" is what serves to knit the whole thing together. The distinction between NSE and memory that most people observe is an artificial distinction - in reality (according to the HCS theory), memory processes are all part of a continuum of experience.
Is that more or less it?
Cheers
Graeme
Hi Matt,
Thanks for the great reply. While as I've said I'm just somebody with an interest in this stuff, I do rather like the explanation proposed in your paper.
One of the problems with this matter of consciousness is why even have it (if we think of "consciousness" as NSE)? All of the processes of a typical human brain appear to me to be possible without any experience at all (though perhaps I am too ignorant of the detail to be able to make that claim). And that still stumps me somewhat even with the HC Simulation theory - why is there any need to have this NSE at all, if as in HM's case we see that he can get by to an extent without it. Is it some kind of amalgam of this idea and some others such as Graziano's control model or Dennett's multiple drafts model? That is, if NSE contributes in some way to future planning, we need to use a sparse version for simple tracking and prediction - if we were aware of all that's going on then we'd be overloaded and unable to make sense of things (though why even need to make sense of anything anyway?).
Presuming the HCS theory is correct, I'd tend to think that cases such as HM probably aren't aware in an NSE sense, and perhaps aren't even aware at all. I like what you say about him having just a few minutes of functional memory, as that jibes with the idea that memory is critical to an ongoing conscious awareness of self in the world. But would he even be aware of what you've called the buffer contents of processing? For example, if we think about pain, it seems to have two components - the automatic bit where we simply recoil from a painful stimulus and perhaps utter a sound, and the subjective sense which seems not as bound to the stimulus. Presumably the subjective element is useful in learning and later memory-based behaviours built upon that learning, whereas the automatic element just happens regardless (as it's earlier in the processing stream). If painful experiences are bound up in NSE, then while HM would react to pain, would he say that he felt it? That would be interesting to know. My guess is that he might say that he did, because the motor responses that lead to vocalisations would, I think, tend to belong to pre-NSE activity, but the subjective element (ie how much it hurt, or what the hurt was like) would be missing.
But I might still be not quite getting the nuances of the theory. Perhaps NSE contributes a more complete experience as you say, and pre-NSE experience is still conscious but less complete, later overwritten by the NSE process (which I assume happens so quickly by our standards that it isn't recalled - I am very aware of how much our conscious experience of the world depends on what it is that we remember!). In fact that might explain what I've mentioned to you before - the fact that sometimes I am aware of something shadowy which later takes a more definite form (eg a movement that I catch and see as a dog, but then a moment later it becomes clear that it's actually a rock). Perhaps the initial awareness of sudden and novel stimuli is apprehended pre-NSE, before the complete scene is filled in by NSE.
Still, all fascinating stuff and I am hoping to hear more about critical response to the HCS. And hoping that you get the doco finished too - is that still likely?
Cheers
Graeme
Hi Graeme,
Thanks for your e-mail, and for the reply on the Brains blog. Well written.
As for the question about "zombies", I think that formulation is dreamt up by philosophers assuming an either/or situation. They figure that if someone doesn't have neurotypical subjective experience, then there is no experiencing at all. I can't say for sure what HM's inner experience was, but I'm sure he would say that he sees, hears, etc. I think that in us neurotypicals, because of the hippocampal loop that generates NSE, the new memory experience eclipses the neocortical interaction with the outside world. If it did not, we would experience life as a perpetual echo.
Also, the neocortex entertains many simultaneous possible interpretations of noisy inputs, but only sends its final conclusions, the newsworthy output, to the HF for memory encoding. So HM presumably lives in the noisy neocortex, and misses the relatively clean but contextualized experience that arises from the HF. He probably has a different experience that has less a sense of self (in an autobiographically extended sense), less a sense of psychological continuity, less a sense of environmental context, less a sense of a cognitive buffer/mind's eye, and of course, no enduring flow beyond a few minutes at a time.
So I think that non-NSE animals probably live like HM (albeit without the conceptual self that he learned pre-surgery). That said, I think that most mammals probably do have the same basic structure and process as us. This is because they use episodic memory in many of the same ways we do (way-finding, reward/punishment learning, social learning), and need to have at least a bodily self in memory and in future planning. Indeed, they probably have something like a cognitive self as well, for problem solving, even if that aspect of self is probably far less developed than in humans. This is especially true of social mammals, because the concept of self is necessary for interacting with social hierarchy, etc.
Does this make sense?
Thanks again,
matt faw
Hi Matt,
I saw your precis on the Brains blog. I think that's a nice summary. Someone commented (and I replied) and both the summary and the comment gave me a flash of insight that I'd missed before (or maybe I've just forgotten). The insight that occurred to me is simply that everyone could be mistaking "consciousness" for the events themselves as you propose. If we had no memory formation it might limit how functional we can be in the world, but it doesn't mean that we can't still do stuff. Because the processes that actually cause us to respond behaviourally remain intact and functional even without the HC.
One other thing. If you are right, creatures will only be "conscious" if they have the requisite memory formation (or at least, consciousness would lie on a gradient constituted by the extent to which memory processes operate in this manner). I have no idea which animals have such kinds of memory, but let's assume for argument's sake that a dog does not. Does this mean that all of such a dog's experience would be hidden from it? That is, it doesn't actually experience pain or joy or sadness? I'm not saying this IS the case for dogs, just asking if you think that creatures without memory processes of this nature simply ARE zombies?
Cheers
Graeme
Hi Graeme,
I think your intuition that "consciousness and memory constitute one spectrum event" is about right. Yes, it's the same neural coded information that gives rise to both NSE and memory recall, but of course we experience them slightly differently, since NSE is so vivid and present. So yes, I mean it when I say that NSE is caused by a "brand new episodic memory", but it's also valid to say that they "constitute one spectrum event".
But of course, I prefer the more "extreme" phrasing, because whenever the word "consciousness" gets employed, things get muddy. I want my language to reflect my schism with the old way of thinking, so it's hopefully less confusing.
The question in your last paragraph gets into the same linguistic difficulty as previous ones. Not only are the words "conscious", "unconscious" and "I" all problematic, so too is "awareness". If you think about it, "awareness" is just another word for "consciousness". What I'm claiming is that "awareness" is every bit as much an illusion as "consciousness". There is only memory. Memory persists, and so we call it "awareness". Pre-memory processes exist, but do not persist, and so we cannot be aware of them.
This relates to the I/micro-agent schism. Micro-agents all around the brain are "aware", meaning that they take in stimulus, process it and respond. That is the only true meaning of aware in the brain. From their reports to the HF, memory is formed, and that memory activates a global experience, NSE. We think of it as awareness, because it includes a fictional "me" who seems to be aware of things. But it is no more true awareness than the micro-agent awareness before it. The only difference is: it is globally experienced, and it is persistent.
Again, remember that the micro-agents' processing cannot be experienced globally, because they are all local actors. The only global experiencing has to come from the neural newscast, which represents all the newsworthy processing before it. The loop of NSE takes disparate reports from all over the brain, stitches them together into a highly contextualized news report, and then broadcasts that report back, all around the brain. It is the only thing experienced, because it is the only thing that is unified.
Plus, experience is specifically a strategy of the episodic memory system (again, for off-line learning), and having separate pre-memory and post-memory experiences would be echoing and confusing, so it is not surprising that the remembered global experience only reflects the output from the "news media outlet", the one organ of the brain which stitches the entire event together.
If you surrender "we experience this summary newscast" and instead think of "the brain creates a newscast which will globally share a big picture story of events, including the self, and which can be recalled later for offline learning" then I think the language makes more sense.
And one last thing about the self: think about dreaming. In dreams, "you" seem to exist, think, feel, choose and act, even though none of that is happening. This is due to the confabulatory power of the hippocampus, and the fact that it has a self-model in the middle of it, for the sake of memory. But that dream-self is not a real self, even though it sure feels that way. Nor is the experienced self; it is also a confabulated construct, based upon news report molded on an armature of habitual self-image.
The self in experience is confabulated, which is why people are not even aware when they overuse filler words like "like" and "um". People with very bad posture do not experience their own bad posture, until someone calls attention to it. They only experience their habitual body schema, albeit informed by newsworthy reports. Racists do not notice most of their racism. The very fact that some gay people can be closeted to themselves shows the amount of fiction that can go into the self-model. Not to mention multiple personality disorder. The self is largely assumed and filled in, including its motivations and feelings. It is also extremely hard to surrender the habit of believing in one's self, but I think it's worthwhile.
best,
matt faw
Thanks Matt. I still need to digest some of this. I agree entirely about there being no "I" - having read many arguments for this claim, I have come to accept it as right. Yet I find it hard to explain this to people because as you say we are all stuck in this idea that we have some independent form. So it becomes hard to talk about things.
For example, I've been discussing with people on that science chat forum the condition of "blindsight", and there's been a bit of confusion about the notion that "we" can "see" things. I don't think that we do, at least not how most people think of it. The process of seeing is part of what it is that we think of as "we". It's one stream of activity, what I think you refer to as an agent process. And the conscious experience of that process is a kind of summary of the process. I have to say though that it never occurred to me to think that this summary might simply be a memory, though I had certainly come to realise that memory was somehow critical. I was more or less thinking that there is only memory and that consciousness and memory constitute one spectrum event. But your version is even more ummmm... "extreme" than that!
Still, we are not properly tackling what I take to be the hard problem. Why, even if NSE is a memory, do we have awareness of it? How can there be a qualitative experience of a physical mechanism, when all there can be is the physical mechanism? What is the distinguishing feature of this mechanism that makes this happen? I understand what you mean about the fictional "I", but even disregarding this sense of self or agency, there is still the fact that we experience this summary newscast, but nothing else...
All the best
Graeme
Hi Graeme,
More good questions.
The really tricky thing is in the language. You ask what makes "us" aware of the hippocampal output, when "we" are not aware of other brain activity? Do you see the problem in the question?
The thing is, "I" am not aware of my new memory. Actually, "I" am aware of nothing. Rather, "I" only exist as part of the new memory. That sense that there is an "I" here who is aware of this but not that, that sense is an illusion.
There is no "I". There is only the brain. And the brain is busy communicating between various micro-agents, each sending queries and responses. None of it is "conscious" nor "unconscious"; it's just doing its job.
Part of the brain's job is creating memory. It has evolved to do this, because memory is useful for learning and learning is useful for survival. Because the off-line learning (i.e. learning after the event, from a safe location) needs to have a 'self' in the middle of the memory, as context for the sake of comprehensible replay, memory tells the brain a fiction: that this "I" that is in experience is the actual being that had the experience and thought the thoughts, etc. But the "I" of memory is only a construct (albeit informed by newsworthy reports from body and cognitive structures). It is not an entity or emergence or process. It is only a memory.
And because memory has evolved to be "replayed" in the brain, it needs to be pre-played as well, in the form of immediate experience. And because memory has a limited capacity, only the newsworthy activity of the brain is sent to the hippocampus. These are all strategies of efficiency and efficacy.
This is all extremely counter-intuitive, I know. We are so used to "being" this "I" that consciousness researchers are looking for how this "I" came to be. But the "I" is not a being; it is only an expression of neural information. It is the brain telling a just-so story to itself, as part of an evolved survival strategy.
The brain processes a ton of stuff, and can learn without subjective experience. But memory is needed to hold on to anything complex or contextual. Because memory is the only thing that is experienced, it seems to be more important than the non-experienced part of the brain. Indeed, it seems to be consciousness. But it is actually just the brain's strategy for holding onto events for more than a few seconds. That's what makes the hippocampus important: its buffering, storage and replay functions. But it is no more "conscious" than any other part of the brain. And it is not "aware" of anything; it is just a holodeck for creating experiences.
I know this is headache-inducing stuff. I hope it makes sense.
best,
matt faw
Hello Matt,
Thanks for these comments. As I said, I will reread the paper and our emails to date and see if I can get a more complete picture of the theory. I still really like it, though whether it will fly more generally remains to be seen, I guess. Yet to my eye, it just answers so many questions.
As far as what you say below goes, I was more getting at physical realisers. Why should a hippocampal activation of neural assemblies be "conscious" when so many other neural assemblies aren't? I mean this mechanically, rather than either conceptually or causally. What is it about that kind of neural arrangement that makes us "aware" of it, when so much else goes on in our heads of which we aren't aware? I understand what you say about it being a sort of news report, or summary, or perhaps even a control model à la Graziano, but that doesn't answer the question of why, or indeed how. We still end up back at the original hard problem, ie why some brain processes are attended by qualia and others aren't.
My best guess is something like what Prinz proposes when he talks of "maintenance". That is, when certain neural activity is maintained for long enough, we become aware of it (perhaps this is rather like what I meant earlier when I suggested that the "present moment" appears to be around a half second long).
Do you have any thoughts about this?
Regards
Graeme
Hi Graeme,
These are very good questions. I can only speculate on them, because they sort of ask the 'why' of evolution.
My assumption as to why we have only post-hippocampal experience is as follows:
It would be too confusing to have both pre- and post-hippocampal experience, as it would be like a constant echo.
Since experience is a strategy of the memory system, it conforms to the needs of episodic memory. Recalled memory needs a singular unitary event, one coherent history, rather than one neocortical event and one hippocampal one.
Memory exists mostly for the sake of offline learning, playing back an experience to learn from it. Most learning is about conditions in the environment, and how I dealt with them, so memory doesn't need information in it that doesn't potentially represent those elements. Any additional information just muddies the memory.
Also, the neocortical event (which is the precursor to the hippocampal one) is parallel and distributed, so it cannot be experienced globally. Only the new memory is unified, contextualized and has keys to activate a global experiencing.
What distinguishes the neocortical representation from the hippocampal one? That's especially hard to address, because I don't have access to the neocortical representation. But presumably the hippocampal one has all the context already bound in, unlike the neocortical one. For example, dorsal visual information is egocentric, which is vision like a camera, but the hippocampal allocentric view has the context of the cognitive map of space included, so we always see things in context of our understanding of the environment that is currently unseen (e.g. if you look only at one corner of your room, you continue to be aware of the context of the rest of the room, and could probably navigate through it without looking). Hippocampal experience is also only the newsworthy parts, so a lot of the neocortical stuff is left out. I speculate that there is probably less coherence in HM's perception, although I'm not sure how that would be tested. I think it is likely that he is aware of his scotomas, at least if prompted.
Why recall is so much less vivid and real-seeming than immediate experience: memory works most efficiently as a lesson, rather than a movie, because a lesson can efficiently drive future behavior. So the movie of experience gets quickly examined and boiled down to a gist. The original experience is full of qualia, because the brain doesn't yet know which qualia will be important in the memory. It is only upon consolidation that the lesson emerges and the unnecessary qualia are discarded. Some qualia are still useful in the memory, however, because the original lesson might not have been the right one, and the qualia in memory may be needed to reassess the original lesson. For example, if I associate a certain color berry with poison, I need to keep the qualia of that color in the memory to inform my lesson. But if later I realize that the berry color was not the signifier that I thought it was, the shape of the leaf may serve me better, so it helps if I can recall that detail. Eventually, once the lesson of that leaf shape has been fully learned, it is processed only in the neocortex, and the episodic memory of the original event is no longer needed, and likely forgotten. In fact, as we become familiar with things, they are represented less and less in memory, and even in immediate experience (e.g. the view of our noses, but also familiar pathways, smells and sensations, my messy desk, the pictures on my walls, etc. all fade from even immediate experience, because they are no longer useful for memory).
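The berry example can be made concrete with a toy sketch (my own construction, assuming nothing beyond the text above): an episode is stored rich in detail, and "consolidation" is just filtering it down to the features currently believed to carry the lesson, which is why a retained detail like leaf shape can later rescue a wrong first lesson. The feature names are invented for illustration.

```python
def consolidate(episode, signifiers):
    """Toy consolidation: boil a detail-rich episodic record down to a
    'lesson' by keeping only the features currently thought to be
    predictive; everything else is discarded."""
    return {k: v for k, v in episode.items() if k in signifiers}

# A detail-rich episode, as originally experienced.
episode = {"berry_color": "red", "leaf_shape": "oval",
           "taste": "bitter", "outcome": "sick"}

# First lesson: the berry color seems to be the signifier.
lesson_v1 = consolidate(episode, {"berry_color", "outcome"})

# Later reassessment: leaf shape serves better - possible only because
# that detail was still recallable from the original episode.
lesson_v2 = consolidate(episode, {"leaf_shape", "outcome"})
```

The design point matches the paragraph above: if the episode had been reduced to lesson_v1 alone, the leaf-shape detail needed for the second lesson would already be gone.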
I think these are reasonable possible explanations as to why experience happens the way it does.
best,
matt faw
Well, I think it's a great example of lateral thinking, even if it turns out not to be right. Anyways, I'll try to get to and reread the paper and all our emails and I'll come back to you in a week or so with my final opinion for whatever that's worth. All the best with it just the same.
One thing that occurs to me, and it's likely due to me not having a clear enough idea of how the actual physiology and processes hang together (or maybe it's in the paper and I've forgotten!). If NSE is the first activation of the memory which is in effect just a reactivation of the neural states that contributed to the memory, how come this activity is conscious and not the original activity? What distinguishes say the neural pattern of vision as it is reported TO the Hippocampus versus the neural pattern of vision as activated BY the hippocampus? I understand what you mean about the HC binding these reports into NSE, but why is that conscious (or even how does it become conscious) rather than the earlier states? And why is this first activation of a memory more vivid than subsequent activations? I understand why that is desirable in behavioural terms, but what is the distinguishing physiological feature?
All the best
Graeme
Thanks, Graeme.
When the idea first came to me 3 years ago, the first thing I did was make a list of the major mysteries of subjectivity, like the hard problem, anosognosia, driving mind, etc. and saw that they all lined up within the mechanism, without hand waving. That was my first confirmation that the idea might make sense. Those mysteries also made certain demands upon the theory, which created my first testable hypotheses; I'd go do the research to make sure the anatomy fit those demands. The further I go along this path, the more clearly it seems to align with the evidence, which is why I was willing to put all that time into it, and have rested my doc's credibility on the idea.
I am very grateful for your feedback, precisely because you are an interested party, but have no dog in the fight. The response from academia is more likely to be freighted with preconceptions and the inflexibility that sometimes comes with being an expert. So it's good to hear from someone who has been asking similar questions, but who hasn't yet made up their mind. Thanks again!
best,
matt faw
Hi Matt,
Just a quick one. I haven't had a chance to really think about what you say below, but I did quickly scan it through. I've been thinking a lot about your theory the past few days while out walking and I am really starting to like it. It hits so many marks. As I said, I'm not sure I have any right to go saying things like that given I have no great knowledge about the topic, but gee... it makes sense!!
I think I'll go over that last email more thoroughly and maybe reread the theory paper. What a fascinating idea!!
Cheers
Graeme M
Hi Graeme,
Thanks for your latest e-mails. Again, I am responding at work, so I'll be brief.
I think it really is important to drop the word "conscious" because it continues to muddy the conversation. It means too many things, and is thus ultimately useless.
What I'm describing with the HST is merely NSE. I also assert that NSE creates the persistent illusion of a unified self, with unified will, etc. This gives rise to the illusion that there is a 'mind' (i.e. consciousness) which decides and does things, but that is an illusion. There is only a brain. The brain senses the world and responds to it, and then creates an after-the-fact internal movie about that interaction, primarily for the sake of recalling later. Part of that movie is what we call mind.
There is also cognitive activity, but it is nothing like what we experience as mind. It is synaptic firing of various regions communicating with each other. However, since the cognitive DMN uses the HF as its buffer and workspace, the most newsworthy cognitive activity becomes part of the movie. That is what we experience as mind.
Absolutely, the extrinsic networks are 'responsive' to the outside world (responsiveness and the ability to initiate behavior are often used as synonyms for 'consciousness'). We should not call these consciousness, however, because the philosophical term only means subjective experience (i.e. what it's like to be). Also, they are the work of various parallel agents, and not unified. Also, most of the responsive activity of the extrinsic networks never makes it to awareness, and so those aspects cannot be called "conscious". Only the newsworthy activity gets reported and thus becomes part of the global awareness of NSE.
Another problematic term is "unconscious" because it also means too many things. That's why I prefer pre-memory. Of course the neocortex is not "unconscious" in the sense of being asleep or in a coma. You could even say that the neocortex is 'aware', but it is not the same kind of awareness that makes up NSE. It is instead a micro-agent, ground-level type of awareness, i.e. the processing of sensation, decision and motor action.
This granular, multi-agent type of awareness is what H.M. relied upon. We neurotypicals also have that level of processing, but it is eclipsed by NSE, so we have no direct awareness of the neocortical agents and their processes. We only have the gestalt awareness, which has bound together only the most newsworthy reports from the pre-memory processes.
So yes, the brain can sense, process and respond to external stimuli without an HF, as H.M. displayed. That said, the output from the HF is nonetheless useful, and is still involved in choosing behavior, just much less than it seems. Primarily it works by informing the default mode network of the big picture, which allows for extrapolating beyond the current scene, guessing what's going on in others' heads, and future planning.
But it also works by sharing one common story with the extrinsic networks. This is significant, because the 'one common story' is much more context-laden than most immediate processing. A lot of that context comes together in the perirhinal area (part of the HF), which is an association hub that links together the conceptual aspects of objects, etc. It connects how the object looks, how it feels, smells, sounds, and even how I feel about it. Then more context comes together in the entorhinal cortex, the main hub into and out of the hippocampus, which brings together data from all over the brain. Thus, the HF does more than create a news story; it creates a news story that is more context-rich than exists elsewhere in the brain. That context does inform decision-making and behavior, and so is part of the (less-immediate) behavioral loop.
So NSE probably doesn't inform behavior RIGHT NOW. But it will inform emotional response and behavior half a second from now. And even more, it informs the DMN about how to plan for future behavior. In that sense, NSE is a very useful (but not necessary) piece of information for normal interaction with the world.
To sum up, the brain is still a government of many agents, each doing their job, each 'aware' in their own way, but only aware of their own necessary inputs, not of the global state of the brain. There is no central node that keeps track of the global state of the brain, not even the prefrontal cortex. It's just semi-autonomous nodes, receiving and analyzing input, and outputting reports for other nodes/departments to work with.
The only (semi-) global event is NSE, and that's because of the binding action of the HF, weaving input reports into one gestalt experience. I call it semi-global, because it leaves out so much of the (non-newsworthy) processing that led up to the event. It is a good-enough system that keeps data lean, lets the neocortex remain pliable by forgetting quickly, and creates a chain of memories for psychological continuity and off-line learning.
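Here's a toy sketch of that "government of agents" picture, if a concrete analogy helps. The class names, the numeric newsworthiness scores, and the threshold are all invented for illustration; nothing below is the actual neural mechanism, just the shape of the idea: parallel nodes each emit a local report, and a binder weaves only the newsworthy ones into a single gestalt.

```python
# Toy sketch: semi-autonomous agents emit local reports; a binder keeps
# only the newsworthy ones and weaves them into one gestalt "engram".
# The scoring scheme and threshold are invented for illustration.
from dataclasses import dataclass

@dataclass
class Report:
    agent: str             # which semi-autonomous node produced this
    content: str           # the node's local summary
    newsworthiness: float  # how surprising/salient the node judged it

def bind_experience(reports, threshold=0.5):
    """Bind above-threshold reports, most newsworthy first."""
    kept = sorted((r for r in reports if r.newsworthiness >= threshold),
                  key=lambda r: -r.newsworthiness)
    return [f"{r.agent}: {r.content}" for r in kept]

reports = [
    Report("vision",   "ball approaching fast", 0.9),
    Report("audition", "steady traffic hum",    0.1),  # habituated, dropped
    Report("soma",     "left foot itches",      0.6),
]

engram = bind_experience(reports)
```

The habituated traffic hum never makes it into the engram, which is the semi-global part: the bound experience leaves out most of the processing that led up to it.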
I hope this makes sense, and helps clarify some of the outstanding issues.
best,
matt faw
Matt, I said I wanted to tackle the quoted para below. I think it reflects something I might not have fully grasped in your paper. You are actually arguing that we do NOT have a "conscious" experience of the moment that directly informs behaviour. Rather, you are saying behaviour is a consequence of entirely unconscious processes. What we take for moment by moment consciousness is in fact memory - that is, we generate a memory, or summary report of input/output, of what actually did happen in the moment and this first activation of memory is used for further predictive priming and also background for future episodes. Is that correct?
If so, we are in fact a kind of "zombie" - we do stuff, but without any actual awareness of what we do, nor any "qualia", in the moment of behaviour. Qualia and all that entails arises a fraction of a moment later when the summary report of what just happened generates the memory. The memory gets a whole slew of qualitative properties such as emotional salience, sensory properties ("quales") and so on that help place it in a context.
This sounds pretty good to me but I do have a couple of problems with it. The first concerns what we've been talking about with driving mind. As far as I can see from what I've read, we do have to be conscious to produce behavioural responses. What I mean by "conscious" is the second state I described earlier. This seems to inform the cognitive form of consciousness when we want to direct behaviour using top-down cues. I agree that behaviour based on bottom-up cues is non-conscious in the directive sense, but even here we still need the second kind of consciousness - awareness of qualia - to prime the response.
A simple example. If a person exhibiting normal NSE suddenly sees a ball flying towards their face, they will react using bottom-up cues and flinch. The flinch avoidance behaviour is unconscious in that it is not directed. However, if the ball is part of a game of tennis or cricket or whatever, and the person has been following the game and the ball's progress, the resulting behaviour might be different - top-down cues direct a flexible behavioural response that overrides the instinctive flinch. On the other hand, if the person has no conscious awareness of the scene - that is, they do not perceive some qualitative property of the scene - there will be no behavioural response at all. This is where we can use blindsight as an illustration of what I mean.
Blindsighted people can detect objects and make assessments about those objects at better than chance levels. They can also detect movement and may even have some residual subjective experience of the movement. But they seem to need a cue to do so, which is usually provided by the experimenter. Left to their own devices, a blindsighted person will not flinch from that ball, as I read the literature.
So it seems to me that we have to have awareness of a scene in order to offer more than the most basic of responses to stimulus. As Prinz argues, this kind of consciousness - my second kind - is required to fill the role of a menu for action. However, you argue that we don't actually have this kind of consciousness at all in such scenarios. All of behaviour is generated unconsciously and what we experience is an after-the-fact record. I find it difficult to reconcile this apparent contradiction.
One way out of this dilemma perhaps is to appreciate the speed at which all of this happens. Given the massive number of parallel processes and the extraordinary number of connections active during everyday neural processing, much of what we've just described must take place at millisecond intervals. Indeed, the value in generating more salient predictive scenarios is faster reactions to external events so the quicker we can put all of this together the better. Perhaps the fact that NSE is a memory, but a predictive priming memory, isn't so much of an issue in experiencing scenes. The fact that it so closely succeeds the actual moment AND primes future behaviours means that at the gross level of experience the two are practically indistinguishable, although as you note Libet's experiments perhaps shed light on this phenomenon (though consider Schurger's recent paper on this topic).
If though at any point in the processing hierarchy there is a loss of signal, such as V1 damage in blindsight, we simply cannot process information well enough to generate behavioural responses. And thus our memory, our NSE, reflects that fact. Whatever NSE the HC generates includes whatever reports it has to hand, and lacking anything from the visual stream it just doesn't include that in the report.
Hmmm... if any of this is largely what you are proposing, I think I rather like it. When we try to make sense of "consciousness", we are immediately struck by the problem of the neural correlate for consciousness. Where is it? We only have three possibilities, it seems to me:
1. There is actually no such thing so it doesn't arise anywhere. I guess this is the eliminativist position, and it's not so easily dismissed as some think. Still, it's hard to ignore that something is going on that we seem to be aware of.
2. Consciousness arises in widely connected networks in some kind of memory buffering process. Susan Greenfield's work seems to suggest this and it's backed by considerable research into anaesthesia effects. Prinz's AIR is typical of this school of thought - consciousness arises in each sense modality through intermediate representations being modulated by attention. But why exactly these neural assemblies should cause consciousness to occur is harder to see, in my view. I think I can come up with plausible scenarios but these require a bit of sleight of hand as it were.
3. There is some central location where consciousness occurs. Dennett would oppose this idea as he sort of straddles both the 1st and 2nd camps. But it seems the most obvious to me because it answers neatly why it is that only some neural arrangements are conscious and not others. Why is there some privileged set of arrangements that cause consciousness?
I know that in the face of detailed reasoning we can probably make good claims for any of the above, or dismiss all of them in turn. But I like simplicity of explanation, and at this point I rather think the HST could be a great candidate... I'll have to think some more on this. But I need to know if what I now understand of your theory as explained in my opening few paragraphs reflects your proposal?
Cheers
Graeme
Hi Graeme,
I just got your latest e-mail, at the same time I finished my hallucination response. So here's that for now. About hallucinations: I think we can understand hallucinations better if we look at what causes REM dreams (as I understand it). During REM sleep, the neocortical extrinsic networks are consolidating semantic (knowledge) and procedural (skill) learning. It's the brain's sleep rehearsal. The hippocampus remains active during that time, in order to help provide spatial, sequence and other memory cues, which help inform the rehearsal. As during waking, the connection between the HF and neocortex is a loop; the HF informs the neocortex and vice-versa. The extrinsic neocortex sends its reports to the HF, as always, and the HF, as experience generator, confabulates an experience out of it. This confabulation just takes whatever near-random information that's coming in and turns it into a compelling experience, strikingly similar to waking experience, even if it doesn't make any sense.
(Even though the memory generator (the HF) is producing the dreams, the brain doesn't need them to make sense because 1. behavior is almost impossible, and 2. most of them won't be remembered. It's what we remember that defines reality for us). Hallucinations may reflect confabulations as well. The error is likely within the prediction system of the extrinsic neocortex, which is why it tends to predict motion in static objects, or faces in walls, etc. The brain nodes that are altered include the predictive sensation systems (mostly hearing for schizophrenics, vision for psychedelics), the error correction circuits (especially orbital-frontal), and some feedback loops. This allows errors to be made, and to remain uncorrected. The HF confabulates the faulty information into an absurd but compelling experience.
The 'heard voices' of schizophrenics are likely verbal thought that is missing the metadata tag which indicates that it is self-generated. Just as alien hand syndrome subjects are missing the metadata tag saying "I did this" and therefore perceive the hand movements as being made by someone else, so too the schizophrenics take the thought to be other-generated. This is an important issue when considering what "will" is, because some node in the brain is generating the willful movement or thought, but it only feels like "me" if it includes the metadata. And then there may be hallucinatory-type problems that stem from the predictive powers of the HF itself. If you remember my paragraph about anosognosia for hemiplegia, that's one such case. The HF body model predicts movement to accompany reports from the pre-motor area and, missing an error correction from the motor cortex, precisely because of the damage to the motor cortex, the HF prediction remains unchallenged, and becomes what is subjectively remembered/experienced as what happened. Again, the metadata tags of "I did this" or "I didn't do this" are very important for the sense of agency and ownership.
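The metadata-tag idea is simple enough to put in a toy sketch, if that's useful. Again, the function and dictionary keys below are invented for the analogy; the point is only that attribution of agency depends on whether the event arrives with an "I did this" tag, not on who actually generated it:

```python
# Toy illustration of the 'metadata tag' point: an event is experienced
# as self-generated only if it carries an "I did this" tag. A missing
# tag (as in heard voices or alien hand syndrome) gets attributed to an
# outside source, even though the brain generated the event itself.
def attribute_agency(event):
    """Return who the experience credits for the event."""
    return "me" if event.get("self_generated_tag") else "someone else"

inner_speech = {"kind": "verbal thought", "self_generated_tag": True}
heard_voice  = {"kind": "verbal thought"}   # same content, tag missing
alien_hand   = {"kind": "hand movement"}    # tag missing

attributions = [attribute_agency(e)
                for e in (inner_speech, heard_voice, alien_hand)]
```

Note that `inner_speech` and `heard_voice` have identical content; only the tag differs, which is the whole point about "will" feeling like "me".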
I think there's a good argument for HF involvement in the out of body hallucination that some subjects have when their TPJ is hit with trans-cranial stimulation. I hypothesize that the TPJ is a comparison site for various models of the body-in-space, as generated by the cerebellum, dorsal visual stream, right hemisphere temporal lobe, motor cortex, somatosensory cortex and yes, the hippocampus. All these streams converge at the TPJ, and I suspect the differences in calculations/predictions/representations of body-in-space are worked out there. When TCS is applied there, the hub is disrupted, and the hippocampal representation of body in space is no longer constrained by comparison with the more body-bound representations. Free of error correction, but still possessed of a cognitive map of the space, and still receiving audio input, it's possible for the HF body model to feel unmoored and apparently move about the space. There's a lot of speculation in this last argument, but I think it's anatomically reasonable.
best,
matt faw
Hi Matt,
Thanks for continuing to discuss this too, I enjoy thinking about the whole topic even though I am just a beginner.
I think what you've said below makes sense in terms of a fundamental disunity, although I guess this is really more a case of multiple parallel processes that drive the organism (which in this case is us). My guess is that temporal encoding of neural activations ensures some kind of unity but one in which any specific process is dissociable from the whole. But that's just a naive guess. It just reflects what I said earlier - my experience of the world seems to be of many things going on but which have temporal coincidence. I have a wonderful example from just this morning. I was watching a documentary and there was a narrator giving a running commentary of the story. But interwoven with this was the occasional scene in which one character or another offered their own comment. In this case, the character had a male voice very similar in pitch and timbre to the narrator. At one point while I wasn't paying a great deal of attention they switched to this character and I didn't immediately pick up that he was speaking. I assumed the narrator was still speaking, until it occurred to me that the words and the character's lips coincided temporally. At that point the experience of the character saying the words rather than the narrator, just "popped" into awareness!
I agree about using the word "conscious" in these kinds of discussion because this term seems to get all mixed up in various contexts. At one extreme we have the idea that it equates to subjective experience which seems to boil down to experiencing qualia, at the other extreme is say Julian Jaynes' idea of a metaphorical "I" space founded upon language. I'm never quite sure what anyone is actually talking about! That said, I think your point regarding memory gets to the core of it. A lot of what we take to be consciousness is what we remember - if we had no memory moment to moment I suspect we'd have no qualia at all.
Perhaps we can distinguish between these various takes on consciousness?
When I speak about consciousness I think mostly I mean cognitive or psychological consciousness which is largely about the physical operation of the brain and how an organism behaves in response to stimulus. I think I just mean, does an organism sense its environment and use that sensory information to adapt behaviour, or respond to novel situations, or learn from experience. Such consciousness doesn't necessarily need "qualia", but perhaps as per Graziano it is impossible to do any of these things without an internal model that the organism has access to. By contrast, I get the feeling that often, everyone else talks of consciousness as qualia, or the private, ineffable essence of experience. So there are two senses of the word - one means to be a sentient, flexibly behaving system, the second means to have an inner experience of the world that has a "feeling" to it. It is what it is like to be that system.
I think the former is possible in any physical system that obeys physical laws and produces agreed outputs, so in that respect I would say I am a functionalist. But that's at a sort of philosophical level - I have no idea whether that is possible. Perhaps only organic systems can do this. The latter though... this seems to be the real point of contention. Why does red feel like, well... red? The trouble with this particular view is that it seems we can never explain it. If something only has whatever qualities we ascribe to it but which we can never identify, describe or point to externally, I think we are in trouble. My guess is that they don't actually exist.
This leads us to my claim about being conscious to experience driving mind. Here, I mean conscious in the second sense - that is, we have an awareness of qualia (I know, I just said they don't exist, but what I mean is that we do have the experience, it's just that this does not constitute some separate mental state that has some causal relationship to a physical state). This is just guessing, but I am pretty confident that we have to have the state of awareness (or whatever we want to call experiencing qualia) to behave in the form that I used in my definition above for "cognitive" consciousness. I maybe don't know enough about sleep walking/driving/speaking, but I am sceptical that anyone in such a condition could do any of the things I claimed as definitional. Someone who is genuinely "asleep" and hence not conscious or aware of external sensory input isn't going to be able to behave flexibly or to learn. If they can, then by definition they are aware. They may not recall that fact, but they must be.
I make this claim on the basis that awareness or consciousness (or whatever this form of experience can be called) clearly evolved for a purpose and I think, like Prinz and Graziano, that it is intrinsically tied to behaviour. As Prinz calls it, it is a "menu" for action. Take it away, or reduce its field through attentional focus, and you lose flexibility of behaviour. Because we don't exist as a real subjective "I" but are rather simply the operation of the brain, we don't get to call the shots. We are conscious when we make behavioural decisions because that's what brains do.
I see consciousness therefore as some kind of continuum. If we are sleep walking and we can actually interact with our environment in some meaningful way, produce flexible behaviours and so on, then we are conscious in the second way - we have awareness or qualia, regardless of whether we remember that fact. If we produce random behaviours that do not correlate with external conditions but which require that our brains are still doing stuff and producing motor actions, then we are conscious in the first sense but cannot have awareness of qualia.
I would say typical driving mind in which we make it home with no incidents suggests that we are conscious in both senses. If we lacked the second case, we'd either have an accident, or an incident in which attention would be quickly brought to bear on whatever external stimulus demanded it. What I mean by this is that if we successfully navigate the drive home but don't recall it, then by definition we are conscious in both senses. The failure to recall the drive is just that - a failure to recall...
Cheers
Graeme
Hi Graeme,
Thanks for your e-mail. I'm at work, waiting for an automatic process to complete, so I'll be brief.
I agree with you about attention, and how driving mind can only happen when attention is divided. I think the only reasonable way to look at attention is not as a single faculty, but as several parallel channels. If you go for a walk (or drive) while having a conversation with a friend, then certain attention agents will be focusing on the physical activity, while the other agents are paying attention to theory of mind, hearing your friend's meaning, word choice, etc. These all happen simultaneously, and don't seriously compete, so they must be handled by multiple, parallel agents.
Most of the attentional systems work in the neocortex extrinsic network, but there are likely default mode network versions as well, which decide what simulations to build. Also, the hippocampal network probably has something similar to an attentional system, which allows some things in, while leaving others out. That way, during driving mind, the HF can serve primarily as a holodeck for the intrinsic simulations. It is very likely that all the 'news reports' from various extrinsic departments are queued up, ready to enter memory should an attentional system demand it. You notice that driving mind only seems to happen when nothing goes wrong. If things were to go wrong on the road, not only would the driver be roused from their reverie, they'd probably also remember the moment before, which was buffered, but not included in the memory formation.
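That queuing idea lends itself to a toy sketch. Everything below is an invented analogy (the class, the buffer size, the 'alarm' flag are all mine, not the theory's actual mechanism): moment reports sit in a short rolling buffer and are only committed to memory when an attentional alarm fires, in which case the buffered lead-up comes along too.

```python
# Toy ring-buffer sketch of driving mind: moments are queued but only
# committed to memory when an attentional alarm fires, at which point
# the moments just before the alarm are remembered as well.
from collections import deque

class ExperienceBuffer:
    def __init__(self, span=3):
        self.buffer = deque(maxlen=span)  # last few un-committed moments
        self.memory = []                  # what will be recallable later

    def moment(self, report, alarm=False):
        self.buffer.append(report)
        if alarm:
            # something demands attention: commit the buffered lead-up too
            self.memory.extend(self.buffer)
            self.buffer.clear()

drive = ExperienceBuffer()
for t in ["merge", "cruise", "cruise", "cruise"]:
    drive.moment(t)                       # uneventful: nothing committed
drive.moment("car brakes ahead", alarm=True)
```

After the alarm, `drive.memory` holds the braking moment plus the couple of uneventful moments just before it; the earlier stretch of the drive was buffered but never committed, which is why the uneventful drive home leaves no recallable trace.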
So there are buffers and switches and attentional demons at many stages of hierarchy, which explain the complexity of parallel processing, when doing multiple simultaneous tasks.
I try not to use the words "conscious" or "consciousness" much, because they are classical tropes that just confuse things. We cannot say we are conscious of this or that during the driving mind; rather we have memory built of some stimulus, not others. That memory is what we are subjectively aware of, but to say we are "not conscious" implies that the brain is not busy doing its job, which is the opposite of the truth.
That's the most difficult conceptual bit to rethink: there is no consciousness that does things. There is just a brain that does things, and some of those things are remembered. Only the parts that are remembered are part of NSE, and so we say we are "conscious" of them.
When you describe a fundamental disunity, I think you are describing the parallel nature of the brain. The brain is indeed a disunity of processes. They are relatively combined at various hubs (prefrontal for cognition, TPJ for body-in-space awareness, and the HF for episodic memory, among others) but there is no master unity in the brain, no global thinking. That's why I wrote of the brain as the government of the body, to emphasize that it is a whole group of semi-autonomous agents (with a powerful communication scheme) rather than one global brain with one global process or self or consciousness. It is much more mechanical than our language and intuitions suggest. Our intuition about the self and consciousness are based upon episodic memory, which IS unified, and so it fools us into thinking that cognition and perception and attention are also unified.
I disagree with the notion that we have to be "conscious" in order to respond behaviorally. I offer as anecdote: sleep walking/driving/speaking. The brain doesn't even need to be awake in order to behave, not to mention needing memory to be active.
We are fooled into thinking that consciousness must precede behavior, because stimulus and response usually both make it to memory; they are what memory is for. Our brain creates a story of a stimulus happening to 'me' and 'I' respond to it, but the awareness of all that is after-the-fact, like in Libet's will experiment. I think it is worthwhile considering that attention is a correlation with awareness (and an imperfect one at that), rather than the cause of it.
I hope this makes sense. Thanks for taking the time to consider this thoroughly and have this conversation with me.
best,
matt faw
Hi Matt,
Sorry to take so long to get back to you - I've been caught up with quite a few other things just now. I rather enjoyed reading your last email - it is an excellent summary response, thanks for that. I think it definitely fills in some of the gaps or inconsistencies I was getting at (which are more about my own sketchy knowledge of the subject). Overall, much of what you say accords with both the general idea I've drawn from what reading I have done plus my own thinking about it.
One of the things I've observed is very much akin to what you describe below as semi-agents or demons, and which I alluded to in terms of experience being less unified than we might think. As I said I tend to feel that my own experience is of perceptual representations that coincide temporally and that coincidence allows me to draw inferences about my relationship to the world. You seem to agree when you say that "I" am not "unified at all", but then you later talk of binding reports into one "unified structure", so I'm not quite sure whether you accept a level of fundamental disunity in what we call experience, or whether you argue that unity is a necessary feature of experience and that it arises from binding.
I'm also not sure about driving mind. It seems to me that driving mind can only occur when attention is divided - for example, when we attend to driving we are conscious of it and can remember it but this fades as we attend to some other feature of the internal/external environment. However, we clearly need to be conscious or subjectively aware of environmental features in order to respond behaviourally and that requires attention. That sort of implies to me that both conscious awareness and memory are affected by attention.
I must say that I am not at all convinced that consciousness actually fades in driving mind; I tend to the view that we are indeed conscious in the moment, but a moment by moment representation is not maintained (in other words, we ARE conscious of driving, we just don't remember the fact). This points to the capacity for attentional processing to be spread across two or more separate environmental presentations and that this impacts what is passed to memory. Perhaps there is some threshold value which, when exceeded, causes the representations that constitute NSE to be encoded into memory. That is probably at odds with your notion of NSE being generated by the HC, but maybe not.
Still, I am simply saying that I think we absolutely have to be conscious in the moment in order to generate the motivation to respond behaviourally to environmental stimuli. I agree with Prinz that conscious awareness is critical to a capacity for dynamic behavioural responses. It may very well be that in blindsight cases, people do exhibit some kind of response to unconscious perceptual representations, but without conscious awareness they, like us, lack motivation to spontaneously attend to objects that they cannot perceive consciously. It is only when cued to attend to objects not perceived consciously that they can show behavioural awareness of these objects, which again points to attention as a key process in both consciousness and memory encoding.
How all of that squares with HST I am not sure.
All the best
…
Graeme
Hi Graeme,
I’ve written part of the response to your initial e-mail. Since work is pretty busy, I’ll send that now, and address the rest when I get a chance.
As you remarked, the distinction between NSE and ‘consciousness’ is important to understanding the theory. Specifically, I am not trying to explain any classical idea of consciousness which reifies subjective experience into an ersatz self, a decider, an agent, a thinker, or anything like that. I think that the classical definition of consciousness conflates many neural processes: wakefulness, alertness, ability to respond, thinking, imaging, decision-making, etc.
I think that subjective impression we have, of our own consciousness, is an illusion. There is indeed a workspace, the hippocampal formation, which can buffer all the contents of perceptual awareness, and the imaginative contents from the default mode network, and bind them together into one master representation of mind and body in the world. But what we experience is the output from that workspace. This output gives us the experience of thinking, feeling, acting in the world, but none of those functions are performed by a process or entity called consciousness. Instead, all those processes are performed by pre-memory (i.e. ‘unconscious’) brain structures, and news reports from those structures are bound together in the HF, for possible long-term storage as a memory, and as immediate experience.
This is hard to square with the subjective experience of being a mind, because it is so clearly ‘me’ that is thinking, etc. Except, of course, if we analyze our own minds, we realize that this ‘me’ is not unified at all, but reflects many distinct, even contradictory cognitive semi-agents, what Dan Dennett called “demons”. Some internal agents try to tell a self-aggrandizing story, while others tell the opposite. Some agents want me to stick to my diet while others push me to break it. These agents are algorithmic structures, running if/then computations upon complex input to decide emotional response and behavior. There are a multitude of them, because they have evolved to balance each other out. They are upstream of the HF, in the default mode network, and they send their messages, in the form of language thought, into the HF for buffering, so that different agents’ recommendations can be weighed against each other. This is the process that we know as ‘conscious deliberation’, but it is only ‘conscious’ because it is part of the loop that creates NSE. And because all the various deciding agents use my own language center to create thought, they all sound like ‘me’. It is only when our thoughts stray into surprising conclusions and reflections that the agents sound like ‘not me’. We speak about those moments as: “I wasn’t myself” or “I was listening to my inner demons”. But in the HST, all experience of mind is via the binding of various agents’ reports into the brand new memory engram.
I agree with you that Prinz and Graziano are primarily sketching out ‘consciousness’ as a model of attention. And indeed, I think that’s probably largely correct. Attention is a pre-memory process that biases one input over another, based upon the needs of the moment. It is not surprising that attention should lead to some news reports being more salient than others, which means that NSE will feature more of those reports. There are some limitations to the attention/awareness corollary, however, as demonstrated by Robert Kentridge, whom I also interviewed. He has performed many experiments which showed it is possible to separate attention from awareness.
And my own example of driving mind, I think, also separates the two demonstrably. In driving mind, the visual, somatosensory and motor parts of the brain are attentive to the road and the body's connection to the car (steering wheel, pedals, mirrors). However, during driving mind, there is no NSE of the drive, and no memory formed. This suggests that we are able to attend to multiple things simultaneously, but only a limited amount of that attended activity can be represented in NSE. In the case of driving mind, the deep hippocampal simulation of daydream, memory recall or theory of mind actually prohibits the extrinsic reports from the drive from making it into the new memory, and hence into NSE. But in many lesser cases, we can drive and be semi-aware of the drive, while most of our hippocampal processing is of the conversation we are having with a passenger, or of the commentators on the radio.
HST is primarily a theory about the binding of reports into one unified structure, which allows for global, simultaneous experiencing. Now, as you point out, many theorists, including Prinz but also Dennett, Tononi, Koch, etc. have asserted that the binding problem does not need to be answered, that global synchrony may be enough to give the sense of unification. However, with all due respect, I think that is hand-waving. When I look at the world, I definitely do not see it as two 2D visual streams, or as two separate left and right visual fields, but as one global 3D world, which surrounds me. I do not see location separate from objects, although they are processed separately in the dorsal vs. ventral streams, respectively. All objects in the world immediately have not only their color and shape inherent in them, but also their meaning, aesthetic, and emotional association, already bound together with the item, even though all these latter details exist solely in my brain, rather than in the objects themselves. In this way, NSE is highly contextualized, associated and bound together. The processes that bind these various aspects must all be pre-conscious, because I have no experience of assigning meaning to the objects I see (unless they are very novel or disguised). The whole world just seems instantly available to me, as part of my perception, which implies that perception must happen very late in the processing chain. It is only when a brain is extremely fatigued or altered that it starts to experience things as less than unitary.
I do think that global synchrony is highly correlated with attention, and the more salient a stimulus, the more synchrony the brain undergoes. You can think of brain waves as carrier signals for neural information. Since one structure sends information to another via a 'spike train', a precisely timed series of synaptic activations and pauses, the brain wave acts as the time signal which allows the receiving structure to decode the incoming message. In normal, inattentive life, the brain sends messages back and forth on many slightly different versions of gamma, which are not fully synchronized, and that's sufficient for low-level communication. However, as saliency grows for an input (like a snake in the path ahead), more and more structures synchronize their brain waves, which allows for very rapid communication and response.
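The 'time signal' idea can be illustrated with a toy sketch (purely illustrative, not a neural model; the cycle length, encoding scheme and function names are all invented): a sender emits spikes timed against a shared oscillation, and a receiver that shares the same clock can bin those spike times back into a message, while a receiver whose clock has drifted out of phase garbles it.

```python
# Toy illustration of a shared oscillation acting as a decoding clock.
# All parameters and names here are invented for illustration only.

CYCLE = 10.0  # hypothetical ms per gamma-like cycle

def encode(bits, cycle=CYCLE):
    """Emit a spike mid-cycle for every 1 bit; silence for every 0."""
    return [i * cycle + cycle / 2 for i, b in enumerate(bits) if b]

def decode(spikes, n_cycles, cycle=CYCLE, phase=0.0):
    """Bin spike times by the receiver's clock to recover the bits.
    If the receiver's phase has drifted, spikes land in the wrong bins."""
    bits = [0] * n_cycles
    for t in spikes:
        idx = int((t - phase) // cycle)
        if 0 <= idx < n_cycles:
            bits[idx] = 1
    return bits

message = [1, 0, 1, 1, 0, 0, 1]
spikes = encode(message)
print(decode(spikes, len(message)))             # synchronized clock: message recovered
print(decode(spikes, len(message), phase=6.0))  # drifted clock: message garbled
```

Run as-is: the synchronized receiver recovers the original message and the phase-shifted one does not, loosely mirroring the claim that synchronized oscillations enable rapid, reliable communication between structures.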
It is easy to see then how synchrony is related to awareness. As attention is placed on an input, synchrony grows in the structures which process that input and which prep a response. And all those structures send reports to the HF which binds them into an experience that closely maps to the synchrony that preceded it. But I think this is a correlation, not a causation, since NSE also occurs when synchrony is low and even during REM sleep. We are more likely to remember synchronous moments, because the saliency of the input fires off enough emotional alarms to make the event salient for memory, as well. But NSE also occurs during deep thought, when synchrony in the extrinsic network is very low. It’s just NSE of thought, rather than NSE of the world.
So yes, as you asked, I do indeed think that attentional processes are upstream of the HF, and happen prior to perception. Attention helps influence what makes it into the HF, and hence into NSE, but NSE can also ignore a good deal of what some of our agents are attending to.
There are some other glaring gaps in the synchrony models of consciousness, as well, which I think the HST answers well. In particular, there is no mechanism in synchrony models for ‘filling in’ our scotomas, saccades, etc. The HST provides that mechanism, via the pattern-completion processes of hippocampal field CA3. As you’ll recall, the default mode network uses the HF as its mind’s eye / holodeck for simulating imaginary scenarios, and dreams are also simulated within the HF. So it is not surprising that the HF can also simulate ‘filling in’ for the scotomas, etc., and erase our view of our nose, etc. None of that is explained in the synchrony models.
Also, vision at the neocortical level is egocentric, meaning that the world is viewed in relationship to the eyes. If perception were egocentric, it would display the world swimming about us, rather than reveal us moving through a static world (as per the example I gave of strapping a camera to your head). However, our perception is allocentric, meaning the world seems to be stable while we move through it, a neural translation that only happens in the HF (at the parahippocampal gyrus). In the HST, perception is made allocentric in order to create memory that is more contextual than just where things were in relation to me. Rather, memory needs a cognitive map in order to make sense of where things were in the past, and how my current perception relates to my previous experience. In my paper, I gave the example of the hippocampal patient who perceived the world without a cognitive map, and therefore, whichever direction she looked, it all looked the same to her.
As for your question about the 'final edit' of NSE, this is related to Dennett's 'multiple drafts' model. And indeed, I think the science shows a feedback error correction for brand new memories which can provide revisions upon stimulus, answering Dennett's concern. This also implies that the 'nowness' of perception is something of an illusion, and that some buffering and revision may happen to an event, before it becomes part of NSE. In theory, there is a 'final edit' from memory, but memories become labile upon recall, which means that every time we recall an episode, we may alter it somewhat. And since the revised memory is what is subsequently recalled, there really is no truly final edit; rather, perception can continue to change, years after an event.
I’ll have more to say about hallucinations, etc. later.
best,
matt faw
Hi Graeme,
I agree that the human experience of the self is pretty language bound. But I'm pretty sure that 'self' exists in other species as well; we've just added language to the pre-existing architecture.
Episodic memory is there so we can analyze an event, after it's happened, and try to make sense of it. If an animal is surprised by a predator, there is no time in the moment to figure out how to avoid it next time, but memory allows the animal to remember it later, and make sense of it from relative safety. But the memory must have a 'self' in it, or it will lack context. How was I feeling? What was I doing? All this must be included in the memory, in order to be able to learn from it.
Likewise, future projection needs a self. If an animal is going to try to ready itself for a physical goal, like squeezing through a narrow gap, it needs at least to have some sense of its own body first. This is a more primitive model of self than our own, but it is probably the basis upon which our own selves are created. Language just makes the self more concrete, by including notions of is/isn't, adjectives, etc. Granted, language changes reality drastically, as exemplified by Helen Keller when she started learning it, but I doubt language is the whole story.
I do think that the memory self is essentially the same as the perceived self. This is only demonstrably so upon the recall of exceptionally salient and vivid episodes, in which the re-experiencing is so deep that one seems to be actually living it, as if for the first time. This is also experienced in episodic memories that are only a couple of seconds old, when the self-perspective still feels very true and current. It is true, however, that most of our episodic memories quickly fade, and become less experiential and more narrative over time. Every night during slow wave sleep, we consolidate our episodic memories, shedding unimportant details but reinforcing the gist. This is also true when we ruminate over memories, or tell stories about our episodes; we boil things down to the gist and keep only the important details.
There are probably also some meta-tags that the brain uses to distinguish between memory recall and new experience, just as it distinguishes between imagination and real stimulus. These make the memory (and the self within it) seem less immediate, less real.
For me, at least, the nowness of the remembered me is most clear in memories that cause me to feel shame. If I allow myself to get lost in recall of a shameful episode, I can find myself powerfully projected into the earlier episode, feeling all the feelings over again. I've even caught myself speaking out loud in response to the recalled episode, and kind of 'wake up' out of the reverie. Those (and recollections of near misses on my motorcycle) are probably the two most immersive recollections I have.
But yes, I think you're right: the recalled episode is mostly a prime for the current moment, a way of expediting predictive processes.
As for the hard problem: I agree this much with Chalmers, that experience is not a physical process in the same way as digestion. Experience is rather an informational process. It is the decoding of a piece of information, the episodic memory engram, by the receiving cortices. The activation of the receiving cortices we know as experience, but it is just the first instantiation of the episodic memory information upon the brain.
Actual player pianos are made to uniform specs, so one roll can play in many players. Every brain is unique, however, so the memory episode that plays in my brain will not work in yours. Therefore, the brand new memory must be expressed in the brain in order to make sure it fits. The original experience of the episode makes sure the memory is correct, and conforms to reality as I've known it. This keeps the memory from being encoded with false information, which would make it a confounding influence in the future. It also makes the later recollection familiar.
Why does the subjective experience of tasting one berry differ from another one? Because later, if I get sick, I need to be able to remember that taste, associate it with the sickness, and thereby avoid those berries in the future. If I can't re-experience the previous episode, then all my learning is due to immediate interaction with the world and is low-level Pavlovian, like an amoeba. With memory experiences and simulated imaginary experiences, I'm able to feel things that are not immediate, and come to conclusions in the abstract. This is part of the hippocampal/ventromedial prefrontal cortex connection: the hippocampus creates the 'situation', the full experience of right now, and the vmpfc uses that situation as a complex algorithmic if/then conditioning response, firing off amygdala response and appropriate behavior.
I don't believe there is a hard problem, really. Explaining the experience of red doesn't seem any harder to me than explaining how the brain coordinates muscles or does theory of mind. They are all just products of strategies the brain uses to cope with the world. It's just because we've given consciousness this huge importance that there seems to be some big mystery to it. But no one doubts that memory is useful to the brain, or that there is a reason for an episodic memory to feel like something. As long as we recognize that memory has a purpose in the brain, then experience is just a strategy of memory.
I'm not a big fan of the Mary thought puzzle. Of course knowing facts about physical phenomena that lead to sensation is not the same as actually experiencing; it's a straw man argument to conflate the two. It's like thinking that because I know water is made up of H2O, therefore I know what 'wet' feels like. Only a philosopher could confuse facts with experience. A wine sommelier can expand her palate by learning different categories of esoteric qualia, but she must actually experience the different wines, not just read about them. Animals are first and foremost experiential learners because we are organisms, not computers. We learn knowledge, but most of what we learn is actually experience. Experience is what allows you to put your key in the keyhole without struggle; no amount of book learning will change that.
As for the question about computers, the way I think about it is: does the computer keep a news report of its processing for later reference, is that news report confabulated to have a thinking & feeling 'me' as the hero of the story, and does the computer refer back to its news report as if it were an accurate portrayal of reality? That is what would be needed for computers to have the same kind of 'experience' as us. Because the computer is not an organic being, it does not feel, but given enough data and a useful 'self' schema, I don't see why a machine couldn't be programmed to have a self-reporting loop which would be as experiential for it as our NSE is for us. Because that's all it is, an informational loop which is useful for survival.
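To make the loop concrete, here is a deliberately minimal toy sketch of such a self-reporting machine (all class and method names are invented for illustration; this is not a claim about how a real system would be engineered): the agent stores a confabulated 'news report' with a thinking, feeling 'me' as the hero of the story, and later consults that report as if it were an accurate record of reality.

```python
# Toy sketch of the 'self-reporting loop' described above.
# Names are invented; this is an illustration, not an engineering proposal.

class SelfReportingAgent:
    def __init__(self):
        self.episodic_log = []  # the stored 'news reports'

    def act(self, stimulus, response):
        # Whatever processing produced the response stays 'unconscious';
        # what gets kept is a confabulated story with 'me' as the hero.
        report = f"I noticed {stimulus} and I decided to {response}"
        self.episodic_log.append(report)
        return response

    def recall(self):
        # The agent treats its own reports as an accurate record of reality.
        return list(self.episodic_log)

agent = SelfReportingAgent()
agent.act("a snake on the path", "step back")
print(agent.recall()[0])  # the confabulated first-person report
```

The point of the sketch is only structural: nothing in the loop requires feeling, just a report that is stored, narrated around a 'self', and referred back to later.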
I hope this helps.
matt faw
Hi Matt
Thanks for your reply about the "hard problem". I don't think my understanding of the hard problem quite matches either of the two cases you've described, but I'll come back to that. What you've said, though, does give me a better sense of aspects of the theory that I still need to think through a bit more.
In terms of subjectivity, what you describe sounds a lot like Graziano's social brain concept. I guess you are generally in agreement with his ideas about that then? Except that you disagree about where and how that becomes instantiated into an actual experience. While I like Graziano's model in this respect, I can't help but feel that Jaynes comes closer with his idea of an introspective "I" that arises from linguistic foundations. This just fits better with my own sense of what an "I" is. I am of course hardly in any position to offer substantial insights, I'm just reporting how it seems to me. Mechanically I think Graziano is largely on the right track and his description captures the sense that any social self-aware organism must have. But when I think about this, it's language that, it seems to me, better facilitates the introspective mind-space that we tend to think of as "I".
I must ask something I haven't quite understood from the paper. Do you think memories have the same sense of self as do immediate experiences? It doesn't seem so to me - when I introspect, memories have more of an historical narrative sense than a current perspectival sense. That is, I am not at all sure I feel the same emotional flavour to memories. For example, a very frightening episode will usually contain powerful and undeniable emotional responses (eg fight or flight), yet replaying the memory doesn't contain anything like that. It contains the sense of it to be sure, but I can choose my emotional response. I think that my memories don't really have a sense of self at all other than that I know they are mine (put another way, memories seem more like a prime for current experience).
How in the HST do you account for the vividness and emotional salience of the current moment versus the somewhat flatter form of a remembered episode? Or maybe to be clearer, the engram activates contributing departments in order to produce the current experience, but what happens that constrains the engram in reactivating those departments in later memory retrieval such that they don't regenerate the entire current moment in all its vividness?
In regard to your comments about the question of why we even have experience, I think some of that goes over my head. I'll have to reread that and the relevant part of the paper and try to figure it out. I think you are saying that we need to have a conscious experience in order to create a canonical stereotype of the experience, against which the recalled experience has contextual value (or is it more the reverse?). I'm not sure I quite get the bit about the usefulness of this mechanism in learning, but it does raise for me the question of how this process distinguishes between playing out mental simulations while also entertaining the current moment. As "driving mind" illustrates, the more engaging our simulations the more the present moment fades, so what exactly is happening there?
Coming back to the hard problem, from what I've read and understood, I think the hard problem as framed by Chalmers is about the physical basis for experience not being amenable to scientific inquiry. That is, while we can observe neural processes in action and draw inferences about how those contribute to conscious experience, none of this explains how it is that conscious experience can arise in the first place. We cannot observe experience empirically and yet we can note that it has qualitative presence. Nothing of the mechanics of neural processes can tell us what the colour green or a sharp pain "feels" like. I suppose Chalmers and those who argue for there being a hard problem think that the feeling of qualitative experience is simply not a physical process - how can the interaction of cells and molecules and ions and so on generate something that we all have and yet which we simply cannot quantify?
When I read about Prinz's AIR, or Graziano's Attention Schema, or Matt Faw's Hippocampal Simulation Theory, I am reading about attempts to identify where in the brain, and by what processes, consciousness can arise. Each in its own way endeavours to identify the neural correlate for consciousness. But none can explain why it is that these mechanisms can cause me to know that green looks like that, or that coldness feels like this.
Graziano tries a little sleight of hand by attempting to paint it as entirely epiphenomenal - that is, we only imagine that experience has subjective quality because the attention schema model says it does. And while that might be true, it still avoids the issue. Whether we directly apprehend the qualia of experience, or whether we only indirectly apprehend these as properties of a model, the "hard problem" remains.
There is nothing about mapping out how a schema of attentional processes is instantiated in the brain, nor about how the hippocampus activates the neocortex, that can tell us that we experienced a colour or a feeling of heat or cold. If we built a computer that can generate an attention schema and which can use that to output a report of its inner experience, as Graziano is fond of arguing, should we believe that computer is really feeling heat or experiencing green? Graziano thinks so, many others do not. Frank Jackson's thought experiment with Mary the super scientist attempts to capture the flavour of this whole matter of "qualia as distinct from physical process" - there is something about the quality of experience that is irreducible to physical matters.
Jackson says:
"The trouble for physicalism is that, after Mary sees her first ripe tomato, she will realize how impoverished her conception of the mental life of others has been all along. She will realize that there was, all the time she was carrying out her laborious investigations into the neurophysiologies of others and into the functional roles of their internal states, something about these people she was quite unaware of. All along their experiences (or many of them, those got from tomatoes, the sky...) had a feature conspicuous to them but until now hidden from her... But she knew all the physical facts about them all along; hence, what she did not know until her release is not a physical fact about their experiences. But it is a fact about them. That is the trouble for functionalism."
The hard problem, therefore, is how to explain the presence of intangible and unquantifiable "mental" properties that carry semantic and functional value and seem to supervene upon physical properties, but for which no mechanism has been identified by which such an inexplicable thing could happen.
Cheers
Graeme
Hi Graeme,
Here is my response about the hard problem:
If the hard problem is "why do we even have subjectivity at all?" then the answer is, because subjectivity is contextual data needed to make sense of memories. If there is no sense of 'me' in the memory, then I will not have the context necessary to make sense of it. The perceived/remembered self allows the memory to contain 'what I did' and 'how I felt'. Without these, memory cannot serve its primary purpose, allowing the brain to make sense of and learn from past events.
Any brain that makes sense of the past or projects into the future must have a simulated sense of 'self', because all planning and episodic recalling are done in relation to the self. At the very least, this must include a bodily self, including emotions, pain, pleasure, nausea, etc. so that we can learn from those inputs. And any social animal also needs a social self model, in order to predict and learn from social interaction. If instead, you read the hard problem as "why do we even have experience?" then the answer is also the needs of memory. It is important that the new episodic memory activates the neocortex, creating experience, because upon recall, the memory needs to be familiar. If that hippocampal output has not been shared with the rest of the brain when it's first created, then upon recall it will not trigger familiarity sensors, and it will seem like a daydream, rather than an experience I once had. So the original experiencing of the new episodic memory also lays the groundwork for making sense of it later. As the phenomenon of 'driving mind' reveals, if I don't have an experience of something now, I won't be able to remember it later.
The current experience serves other purposes, as well. The immediate feedback of the new episodic memory from the hippocampal complex provides error correction for the new memory, primes the receiving neocortex for its predictive processing, and shares one common story of 'what just happened' around the brain. This allows for cognitive continuity over time, as a chain of memories keeps the current moment relevant to past goals. The chain of memories is part of the context of the current moment experience (see the section of the paper on field CA3).
Furthermore, the default mode network (especially the ventromedial prefrontal cortex) uses the complete hippocampal output as a complex code to trigger emotional response and behavior. We all know about simple Skinnerian conditioning, but the vmpfc uses the hippocampal output for complex conditioning, whereby the whole situation contributes to the response learning.
The default mode network also simulates on top of the current moment experience. For example, a chess player may mentally move around virtual pieces on the chess board, or someone might figure out how to pack boxes into their car's trunk. Both use the current moment experience as the ground for simulating the virtual actions.
I hope this helps resolve that question. I'll deal with the other questions soon.
best,
matt faw
Hello again Matt,
Thanks for that update on progress with the paper. I am a little disappointed to hear that people haven't read it through; I wondered at what the professional researchers might make of it. Good to hear that hippocampus experts seem sympathetic. As I said I am just an interested person with a smattering of knowledge and ideas so my thoughts on the paper are of little consequence, nonetheless I think the fundamental idea is unlike anything I've read so far and it offers intriguing insights to someone like me who is trying to get a handle on things.
I mentioned being fascinated by the subject. I really only began thinking seriously about consciousness a couple of years ago. Like most people I was of the view that we exist as actual observers inside our heads. I was probably very much a believer in some form of dualism in which the mind exists as a separate entity.
The trigger that set me off on a quest for knowledge about this subject was Daniel Dennett's book, Consciousness Explained. I had bought it years ago, read it, didn't understand a word of it and forgot about it! I found it again in 2014 and gave it a more serious read. I sort of followed what he was saying but didn't really think it was on the money until I started to introspect on exactly what was happening in my own consciousness. I posted on some science and philosophy forums and had some great discussions and as time passed I completely changed my view to become a strict physicalist.
I tried to dig my way through a variety of papers and so on from people like Block and Chalmers etc, but it was Graziano's book "Consciousness and the Social Brain" that really solidified the ideas I had running around in my head. Prinz's book "The Conscious Brain" was even better as it gave me a great and comprehensive review of the current state of the research. I also read Julian Jaynes' book "The Origin of Consciousness in the Breakdown of the Bicameral Mind" which I think is quite brilliant, even if I don't agree with his central premise.
The memory connection came about when I first came to understand how critical memory is to our ongoing sense of self. I read an article ("What our memories are made of") in the New Scientist from November last year and then went off and read up some more. I had an interesting discussion on the Physics Forum in June this year about memory (https://www.physicsforums.com/threads/are-memories-made-of-this-or-that.879861/) and it was in the course of researching how to answer various comments that I think I came across your thread on one of the science forums dating back to February. I downloaded the paper and finally got around to reading it in October.
And here I am, asking you questions! I hope I am not imposing on you too much by doing so. There actually is a lot more I'd like to ask but I restricted myself to the more obvious things that grabbed me straight up. So, I look forward to hearing more from you, and especially hearing if any consciousness experts do give it a fair reading and offer feedback.
Best regards
Graeme
Hi Graeme,
I started writing a response to your longer e-mail when I was at work, but then left that response there, so it might take a little longer to finish that.
Regarding your statement about memory: I think you're right; memory is a data-lean information structure that points toward re-activating the originally reporting cortices. This is known as the indexing theory, proposed by hippocampal researchers Teyler and DiScenna. That's the basis for my 'player piano' metaphor.
The memory engram is a complex, bound-together collection of pointers, and so hints at the "totality of the network state". But in our theory, if experience is the brand new memory, then it is clear that much of, probably the majority of, the network state is not included in the memory. That's just too complex a data load, which would quickly overload the system. Instead, only the salient bits of cortical experience, the 'news reports', are sent to the HF for inclusion in the new memory. That is why the persistent smells, air conditioning hum, feeling of our clothes and view of our noses are left out of experience (and memory): they are all non-newsworthy.
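The pointer idea can be sketched as a toy (the keys and 'reports' here are invented for illustration; the actual indexing theory concerns hippocampal-neocortical circuitry, not dictionaries): the engram stores only keys to the newsworthy cortical reports, and recall re-activates just those, leaving the persistent, non-newsworthy inputs out of both experience and memory.

```python
# Toy sketch of memory-as-index: the engram holds pointers to salient
# cortical 'news reports', not the full network state. Names invented.

cortex = {
    "vision": "snake on the path",
    "audition": "rustling grass",
    "smell": "same old pine smell",      # persistent, non-newsworthy
    "somatosensory": "feel of clothes",  # persistent, non-newsworthy
}

def encode_engram(cortex, salient_keys):
    """The engram keeps only pointers to the salient reports."""
    return tuple(k for k in salient_keys if k in cortex)

def recall(engram, cortex):
    """Recall re-activates the pointed-to cortices (the 'player piano')."""
    return {k: cortex[k] for k in engram}

engram = encode_engram(cortex, ["vision", "audition"])
print(recall(engram, cortex))  # only the newsworthy reports come back
```

The data-lean character of the index is the point: recall reconstructs the episode from the reporting cortices themselves, so nothing non-newsworthy ever needs to be stored.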
As for the response to the paper, I don't have much to report. Most of the feedback I've received thus far has come from people who have only read the abstract, or seen the explainer video. This was true of the response I got from interviewees, when I released the paper to them earlier this summer. None of the respondents claimed to have read the entire paper, so it's hard to know how to gauge their reactions. So far, there's been more of a visceral response against the idea that I, as a filmmaker, should get so involved in the subject I'm covering.
I do have two high-profile endorsements, however, and they're both hippocampus experts. One is Lynn Nadel, who I interviewed in 2014 and who ended up publishing my paper. The other is Gyorgy Buzsaki, whom I interviewed this year, and who read the paper shortly thereafter. Plus Demis Hassabis, whose work with hippocampal patients first gave me the idea, also recently responded that the theory was very likely true. But I'm still waiting to hear from consciousness experts who have actually given a fair reading to the paper, the way you have. I'll try to get a more complete response out in a couple days.
best,
matt faw
Hi Matt,
Thanks for the prompt response, and I am in awe that you've interviewed both Prinz and Graziano (and of course all the other big names in the field). The documentary promises to be well worth watching so I am keen to see it when you get it finished.
By the way, I mentioned I'd been thinking about consciousness and memory and that was how I came to find your paper. The question of memory jumped out at me when I thought about hemispatial neglect and similar cases. I wondered whether memory of an event was storage of the totality of the network state, or was rather an encoded pointer to the structures that were then reactivated. After some research I learned it was largely the latter, which I felt pointed to experience being in some way identical to, or at least very similar to, memory activations. While doing that research a few months ago I found your thread on a forum somewhere, which led me to your paper and Facebook page!
Look forward to hearing further from you.
Regards
Graeme M
Hi Graeme,
Thanks for your e-mail. Just upon cursory reading, I think you have a good sense of the HST, and your questions are excellent. I'm also a big fan of Prinz and Graziano, and interviewed them both for the documentary.
There's a lot here, and I have to go to work now, but I'll try to craft a more thorough response in the next couple days.
best,
matt faw
Hi Matt,
My name is Graeme; I commented on your Facebook page a little while back and you suggested I write directly to you. I have finally made the time to do exactly that! :)
I found your paper very thought provoking and now that I've read it again I think I have a better sense of what you are proposing. Up front I have to say I am just an interested person with no special background in the field - I've read a few books and given it all some thought but lay no claim to any great depth or breadth of knowledge. Still, I find it all absolutely fascinating and I am always keen to hear new ideas and theories about the topic.
In reading your paper, I was struck by a few things that didn't quite make sense, or that didn't seem to fit with some other ideas I've read. I suppose I am interested in further clarification though of course I don't mean to intrude on your time by asking. However I rather like what you are proposing and I'd like to make sure I properly understand it, so this is rather a long letter I'm afraid. I hope you can find time to have a read and maybe offer a few thoughts in reply. I apologise if some of what I say doesn't make sense - I can only work with the relatively limited understanding I've developed over the past couple of years of very part time reading!
Now, before I ask any questions, I should first check that I understand what you are suggesting. That is, do I have a fair grasp of what your Hippocampal Simulation Theory (HST) means. So, here's my take on what your theory offers. Simply put, you are proposing that the hippocampus (HC) generates a model of the relationship between self and world based upon input from a variety of networks and control processes. This unfolding model, or simulation, is committed to memory for later recall. However, upon first creating the model, the HC reactivates the structures that contributed to the model in the first place, and this reactivation is what we call subjective experience (neurotypical subjective experience, or NSE). This 'reactivation' contributes to predictive scene generation processes and thus generates NSE.
Because the model is not a direct representation of the world as it is but is rather a composed experience, NSE exhibits a range of discrepancies from a direct representation that is best explained by this idea of a hippocampal simulation.
While the HST may explain how NSE arises, you are not explaining the "hard problem" as such, but rather identifying a neural correlate for subjective experience. However, you also define NSE in a particular way, which is to say you define it as the complete, unified, and recollected experience of self in the world, in which the self is aware and able to use NSE to make decisions about behavioural responses. While this experience is generated in the HC, some elements of awareness may be found in network buffers upstream of the HC, and these can contribute to whatever awareness that people without an HC, such as the patient HM, might experience.
Is that a reasonable approximation of your proposal? I hope so, otherwise some of the questions that follow might be way off beam!!
Moving on to my areas of uncertainty, the first concerns "attention". I don't see much mention of the contribution of attention to the process of consciousness, and so I gather that you see the attentional system as a part of the overall data flow of neural processing but not an enabler of consciousness itself. However, many researchers see attention as essentially fundamental to consciousness.
For example, in Jesse Prinz's book "The Conscious Brain" he makes much of the modulation of intermediate level representations by attentional processes and he is of the view, if I follow him correctly, that it is this modulation that leads to representations becoming conscious.
His view appears to be that all of consciousness is grounded in perception and that we do not have conscious access to cognitive functions. And it is when we attend to perceptual representations that we become conscious, hence his "Attended Intermediate Representations", or AIR, theory. Prinz argues (quite cogently it seems to me) that representations need merely be available to working memory buffers rather than specifically encoded within any kind of global workspace. He seems to be suggesting that consciousness arises through synchrony and resonance of those structures representing perspectival features of the environment (which I suppose includes the sense of self in the world).
He also proposes that unity by way of binding may not be the appropriate way to consider how these attended representations become conscious - rather, that unity may not be as notable a property of consciousness as we think. He says "unified experiences are those that resonate causally under attentional modulation... (and) are phase locked in the gamma band" and that "disunity may be a common aspect of experience". I am sympathetic to that last point as it seems to me that unity is not so much an integral part of experience as it is an inferred property. On introspection I observe that various inputs across modalities simply occur with temporal coincidence and I can draw inferences from that, but none are necessarily bound to each other.
Graziano proposes something quite similar. He argues (as I understand it) that various representations of sensory experience compete for processing resources in the brain and those that "win" the competition are boosted, in what is known as the biased competition framework. In effect, this process is the allocation of attention to a signal or signals, and thus attended signals (representations) are passed forward for further processing.
Both Graziano and Prinz propose mechanisms for NSE - Graziano suggests specific places in the brain that facilitate both social awareness and attentional processing such as the superior temporal sulcus and the temporo-parietal junction, while Prinz tends to focus more on neural mechanisms such as synchrony and resonance in network buffers. Both therefore are proposing that consciousness depends upon modulation of representations by attention. However, in the HST, you are saying that it is actual encoding of representations and the immediate reactivation of those structures that delivered up the representations that constitutes conscious experience. This seems to me to be a definite step further along the neural processing path than either Graziano or Prinz are suggesting.
In drawing out the idea of a hippocampal simulation, are you suggesting that attentional processes occur upstream of the hippocampus and that the mechanisms and structures Prinz and Graziano refer to simply pass their contributions to the HC (à la the idea of departmental reports)? More simply, are you saying that consciousness arises at a later mechanical stage than Prinz and Graziano suggest?
On further consideration it seems to me that perhaps another way of looking at this question is to draw a distinction between what Prinz considers consciousness and what you are defining as consciousness. For example, you note that people such as HM are not zombies and have some kind of subjective experience or awareness. You suggest this is because they can access network buffers to draw upon perceptual representations, and so do not lack consciousness, but rather lack a more complete form of consciousness. If I have understood your point here, Prinz and others are talking about what it is to have awareness rather than the more complete form of consciousness you are defending. That is, Prinz seems to be trying to uncover why it is that we can experience red, and for red to have a "feel" that is different from green or hot or pain. You are not explaining how it is that we can have awareness, so much as explaining how awareness proceeds through the processing hierarchy to finally be instantiated in a complete experience of self in the world. Does this seem a fair approximation of your case in this respect?
My next area of uncertainty concerns the fact that you are arguing for NSE as being intrinsically related to episodic memory. Once the HC has generated the engram of what just happened, it broadcasts a summary model of experience by using the engram to activate those structures which contributed to the hippocampal simulation. So NSE itself, as a unified and experienced moment, does not occur until after encoding by the HC. To me this implies a certain sense of a "final edit". That is, what we experience is the representation of what just happened immediately after all relevant error correction and predictive analysis has been completed.
However, if that is so, how does it explain such experiences as hallucinations, or even momentary and fragmentary edit sequences? By the former, I mean cases wherein people experience entirely unrepresentative visual or auditory imagery; by the latter, I mean the fact that we occasionally see things that aren't there, or which, on further consideration, become clearer.
In the former, we have reports of people whose experience seems to be proceeding perfectly well but which suddenly take on a quite different form. For example, observing a car driving towards one, which suddenly transforms into a dragon and flies off into the sky. Do you mean to say that such transformative processes occur upstream of the HC?
In the latter, one can occasionally perceive something that isn't there, usually in response to a momentary but not fully perceived stimulus. For example, while driving I might catch a glimpse of a tree branch as I traverse the scene visually, and for a moment I think it is an animal (it usually seems that I have an actual perception of the animal, even to the extent of shape, colour or even species), but when I orient to that trigger stimulus I see it is a branch. Clearly my brain has predicted, on the basis of information to hand, a possible object for further consideration. But it isn't really an animal at all, so the brain fills it in with the actual object once more data is to hand. But the fact I am conscious of that momentary representation seems at odds with the idea of a "final edit" constituting NSE. Is this an example of the network buffers contributing to experience before the complete NSE edit is available, or have I misunderstood exactly what the engram and subsequent activation comprises?
Just as an aside, and on the idea of an editing process and the final edit being NSE, I have some sympathy for the idea though I am not entirely sure it is what you are describing. I have noticed that sometimes experience lags objective events. For example, you might be familiar with the experience of hearing a noise and then dreaming about that noise. I've examined this quite closely and what seems to happen, at least for me, is that the noise is woven into a narrative which includes an explanation for the noise, however the noise itself occurs prior to the narrative (and the noise) becoming conscious. For example, I might hear a door close unexpectedly as I am dozing, but in my consciousness there is a dream sequence in which someone opens the door and then slams it in anger. The interesting thing though is that my body jerks in response to the noise noticeably BEFORE the dream sequence begins. In such cases, it seems to me that the narrative is generated to explain the noise however without visual input to help explain things the brain has to generate an entire visual scene, post hoc. The thing is, the visual scene so generated is consciously perceived AFTER bottom up processes have responded behaviourally!
My last question again centres around the idea that NSE is a final edit event. If I have misunderstood the proposition and NSE is NOT such an event then this question may be unrelated or even unfounded.
On introspecting about my own experience of NSE, I observe the interesting fact that I am conscious of movement. That is, if a car drives past me, I see it moving. Yet, if I were simply generating a moment by moment scene, it seems to me that I shouldn't be aware of movement as such; rather, I would have a series of still images from which I could infer movement.
An illustration of this is perhaps what happens when I examine actual experience and imagination. Experience has the quality of actual movement; imagination does not (at least in my mind it doesn't). By this I mean that in imagination, I see static images that I connect together with the "idea" of movement. For example, if I imagine a man running along a street, at first blush it most certainly appears that he is running. But on closer examination, there is no sense of fluid movement. Rather, I see static images of each element of the experience to which I attach a sense of movement. That is, I do not see him passing by the background, nor do I see his legs moving rhythmically, or his hair blowing in the wind. What I do see is each of those elements when I think of them, actually separate from the complete image, and to each I assign a sense of motion. Perhaps I even have a series of static images. It *seems* like a man running, but it isn't really; it is merely the idea of such (something similar also happens with, say, colour or multiple items in a scene).
Getting back to actual "live" experience of the world, it seems to me upon examination that the current moment is around 0.5 to 1 second in length. I can actually see movement because around half a second or so of the scene is actually maintained in consciousness, rather than simply being presented as a series of static images. Once that extended moment fades I can recall the scene via normal memory, but the half second of present experience seems not amenable to that kind of retrieval. Prinz argues for a form of neural maintenance. He suggests that "mental maintenance... (comprises) mechanisms in working memory that sustain perceptual states rather than representing them". Now here he is arguing for something somewhat different from what I am describing, as he is pointing to the possibility that a percept might remain available via maintenance of the representation of a stimulus in working memory. He says "A working memory encoding is an internal state that causes a sensory representation to remain active after the stimulus that caused it has been removed".
However, it doesn't seem at odds with this notion, as Prinz describes it, to suggest that some similar maintenance process might cause around a half second of representation of sensory input to be maintained, perhaps again through the mechanism of some kind of neural resonance. This is pure speculation on my part and might be completely unlike what is happening. However, if it is at all close to what is happening, it points to the current moment's experience not being that generated by a final edit of a scene à la HST, but rather some process within the network buffers upstream of the HC. This is more in line with what HM might experience in his moment by moment awareness of a scene.
If this kind of situation were true, it might suggest that in all of us, the current moment is generated both by the availability of representations to working memory buffers and by some kind of resonant maintenance of those buffer representations. However, almost immediately thereafter, the generation of the engram representation activates a fixed view of what just happened, upon which further processing can build. That is, might the present moment of conscious awareness be more in keeping with Prinz's suggestion of availability and maintenance in network buffers, while ongoing subjective awareness of self in the world makes use of the Hippocampal Simulation process? (After all, our actual sense of the moment is somewhat ephemeral, whereas the ongoing sense of self, which seems rather more solid, depends entirely upon remembering that moment...)
I hope that this hasn't been too long or too rambling; on the whole I think I grasp what you are proposing via the Hippocampal Simulation Theory and while I am in no position to actually judge its validity, I can't help but think the whole thing turns on how we define "consciousness", or as you describe it, "Neurotypical Subjective Experience". I look forward to hearing more of this theory and how it is received by other researchers. I am also hoping someone will finally solve the hard problem one of these days!!
All the best,
Graeme M.