Ep. 10 - Awakening from the Meaning Crisis - Consciousness
The last episode introduced the relationship between mindfulness, the structure of attention, altered states of consciousness, and the cultivation of insight. To really understand the cultivation of insight, we need a better grasp of the machinery behind altered states of consciousness. And to do that, we first need a cursory treatment of consciousness itself.
Nailing down the nature and function of consciousness is probably one of the hardest philosophical and scientific challenges we know of. There's even something called the Hard Problem of Consciousness. Fortunately, it seems we don't need to solve consciousness in its entirety to understand meaning making in our cognition.
There are two ways to explore consciousness:
What is the nature of consciousness? For example, how does it emerge from the brain?
What is the function of consciousness? What does it do? Or rather, what’s it there for? What purpose does it serve?
We can somewhat intuit the answers to these questions. For example, most people prize their ability to be conscious. Suppose we’re offered a billion dollars and the ability to spend it however we want. The catch is that we’re not conscious to actually enjoy any of that wealth. And we can’t give it away to future generations, or anything “noble” like that. It’s purely for purposes of consumption. Most people wouldn’t take the deal, because they’d rather be conscious.
But really, we don't have philosophically rigorous answers to either the nature or the function of consciousness.
Global workspace theory
One of the best accounts of the function of consciousness is the global workspace theory. It claims that consciousness functions similarly to a computer desktop. That is, we have a desktop in the center and a number of files in the periphery. Our minds "activate" files by bringing them onto the desktop as necessary. Once there, all the relevant pieces of information from the files can interact with each other. Once the work is done, this information can be broadcast throughout the machine. In this analogy, all the different files in the background are unconscious processes in our cognition. Our consciousness retrieves these files and brings them onto the desktop, which acts something like our working memory. We then perform some computation on them, and broadcast the results back through the system.
We need some kind of centralised processing workspace for the same reason that we don't open every file on our computer at once: there'd be an information overload, and it'd be chaos.
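The desktop analogy can be sketched as a few lines of code. This is purely an illustration of the select, integrate, broadcast cycle described above, not a cognitive model; the module names and the relevance test are invented for the example.

```python
# A minimal sketch of the global-workspace analogy: select only the
# relevant unconscious "files", let them interact in the workspace,
# then broadcast the result back to every module.

def global_workspace(modules, is_relevant):
    # 1. Filter: only relevant modules are brought onto the "desktop".
    workspace = {name: data for name, data in modules.items() if is_relevant(name)}
    # 2. Integrate: the selected pieces of information interact
    #    (here, trivially combined into one string).
    result = " + ".join(sorted(workspace))
    # 3. Broadcast: the outcome is sent back to all modules.
    return {name: result for name in modules}

# Hypothetical unconscious processes and their current contents.
modules = {"vision": "edges and colours", "memory": "past cups", "smell": "coffee"}
broadcast = global_workspace(modules, is_relevant=lambda name: name != "smell")
```

The point of the sketch is the asymmetry: only a filtered subset enters the workspace, but the broadcast reaches everything.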
This filtering of what's relevant seems to be the function of consciousness. This is important, because there are three areas where such filtering is necessary:
Filtering from all the information available externally to me, to find what’s relevant. The amount of signal that isn’t even captured by our sensory organs is astronomically vast. Even the information hitting our organs is extremely voluminous.
Filtering from all the information available internally to me, to find what’s relevant. That is, all the information and nuance contained in our lifetime of memory.
Putting together the external and internal information in a relevant way to make meaning in the world. For a given set of data, there’s nearly infinite ways that it can be put together.
The core function of consciousness seems to be to help us realise relevance, or relevant information. In his lecture, John Vervaeke offers an overview of various psychological and neuroscientific models that all seem to point in the same direction. They’re all interesting, but I decided to omit them from this blog post for expediency.
John doesn't know if relevance realisation is a complete account of consciousness. But it does explain one thing: when we have an insight, we get a flash, a sudden brightening of consciousness. It also explains why we might want to alter our state of consciousness, because doing so would help us find what's relevant or salient.
Putting this together
The diagram above shows all the different layers of the system we call consciousness. Let’s break it down.
As sensory input comes in, our machinery picks out some relevant features. This is similar to the example from the last post on "THE CAT". We can't pay attention to every piece of information in the room. It's just too overwhelmingly vast. Another layer of filtering and relevance gets applied to these features, to "foreground" the few relevant ones. The stuff that's foregrounded gets "figured" into a gestalt of the situation, which is ultimately "framed" into the problem confronting us. The arrows in the diagram above indicate that there's simultaneous top-down and bottom-up processing, just like the earlier discussion on the structure of human attention. Stuff gets passed up this stack, processed, and fed back down to the lower layers.
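A toy sketch of this stack, with the layer names taken from the text but the filtering rules invented for illustration (a real system would also feed each layer's results back down, which is omitted here):

```python
# Toy bottom-up pass through the stack: features -> foreground ->
# figure (gestalt) -> frame (problem). The rules are invented.

def feature_layer(sensory_input):
    # Pick out candidate features from the raw input.
    return sensory_input.split()

def foreground_layer(features, salient):
    # A further round of filtering: keep only the salient features.
    return [f for f in features if f in salient]

def figure_layer(foregrounded):
    # Configure the foregrounded features into a single gestalt.
    return " ".join(foregrounded)

def frame_layer(gestalt):
    # Frame the gestalt as the problem confronting us.
    return f"problem: reach the {gestalt}"

features = feature_layer("red cup table wall")
gestalt = figure_layer(foreground_layer(features, salient={"cup"}))
print(frame_layer(gestalt))  # problem: reach the cup
```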
All this can seem a bit abstract.
Let’s look at a practical example of picking up a cup. As we look at a cup, we start noticing details about the cup. But notice the feedback happening here. If we get too close to the cup to inspect it (e.g. with our eyes a millimeter away), we start to lose the gestalt of the cup. We can’t even see the handle at this point! If we start to get too far away from the cup, we start to lose the finer details (e.g. is the handle actually a bit broken?). We need to get to the right place to best examine and know the cup. We’ll call this “right place” the “optimal grip” on the cup. This optimal grip will evolve over time. For example, the configuration of us relative to the cup is different when we’re initially picking up the cup, versus when we’re about to gulp some water. The optimal grip isn’t a static thing. It’s a continuous optimization procedure that we’re engaging in as we conform to the cup.
As we're optimising around the cup, we're trying to get an "affordance" on the cup. That is, different parts of the cup start to appear salient as this optimization progresses. The affordances on the cup, the ones that allow us to pick it up, aren't a property of the cup per se. For example, they aren't available to penguins, fleas or anything else without the capacity to grasp the cup. Affordances also aren't properties of our hands, because our hands alone can't grasp things. Our hands always need something to grasp onto when using an affordance. Affordances on the cup are relationships of coordination between the constraints of the cup and the constraints of our hand, enabling an interaction.
Let’s put some of this together:
We’re thirsty and look around the room.
The cup full of water is made salient to us.
We move towards the cup and adjust our distance until we find an optimal grip on the cup.
This optimal grip presents many affordances on the cup. We choose one and start to interact with the cup.
We drink some water and our thirst is quenched.
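The "optimal grip" step in the list above is a continuous adjustment rather than a one-shot calculation. A minimal sketch of that feedback loop, with all the numbers (target distance, step size, tolerance) invented purely for illustration:

```python
# Sketch of continuously adjusting toward an optimal viewing/grasping
# distance. Too far and details are lost; too close and the gestalt
# of the cup is lost. All numeric values here are arbitrary.

def optimal_grip(start, target=0.4, step=0.1, tolerance=0.05):
    """Move in small steps toward the distance giving the best grip."""
    distance = start
    while abs(distance - target) > tolerance:
        # Step closer if too far away, back off if too close.
        distance += step if distance < target else -step
    return distance

# Starting two meters from the cup, we converge on the optimal distance.
grip = optimal_grip(start=2.0)
```

The loop, not the final number, is the point: the "right place" is found by ongoing correction, and a different task (gulping water versus inspecting the handle) would simply mean a different target.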
In some sense, we don’t really see colours and shapes when we interact with the world. What we actually “see” are various affordances. When we look at a floor, what we really see is that it’s walkable, that it’s solid, etc.
Using the machinery described in the figure above, consciousness creates a salience landscape for us. The figure below has details of how this salience landscape is consumed by downstream modules. Within that salience landscaping, we size things up to get an optimal grip on them. That produces a presence landscape, which contains a network of affordances that we can interact with. As we interact with objects via their affordances, our cognition attempts to disambiguate causal patterns of interaction from merely correlational ones. This is our depth landscaping. All these different components are also in continuous feedback with each other, in a top-down and bottom-up fashion.
Most of us have seen toddlers pick up spoons, constantly dropping them to the ground or banging them on other objects. This is likely that entire process at work. They're building up an internal causal model of the world. That is, they're going up the stack until they have a depth landscape, trying to disambiguate real patterns in the world from fake ones. They're getting a deep participatory understanding of the spoon.
Transformation of consciousness - overcoming systematic errors
As mentioned above, this machinery is locked in a constellation of top-down and bottom-up processing. Altering our consciousness involves altering, or updating, all the different layers of this machinery. It's not just a flash of insight like the nine dot problem; it's a systematic transformation, because it affects the entire system. It's not an insight derived from a given "setting" of the pieces that constitute consciousness. It's an insight into the machinery of consciousness itself. That is, it's a radical transformation that takes place simultaneously across all the landscapes, and across a whole class of problems rather than one particular problem.
Previous posts have talked about how wisdom is linked to development. Or that wisdom is akin to the process of “waking up”. Let’s look at an interesting study about childhood development done by Piaget.
Suppose we show each child in a group of very young children two rows of candy. The first row has five candies laid out close together. The second row also contains five candies, but spaced much further apart. We've confirmed that the children understand basic numeracy: that five is less than six, and that five is greater than four. Again, the only difference between the two rows is the spacing between the candies. If we ask the children which row they prefer, most of them will consistently and confidently pick the more spread-out row. This is surprising, since we'd expect a roughly even split in a population that understands that both rows are identical.
Why are the children more likely to choose the row with more spacing between the candies? The variable of spacing is very salient to them; their salience landscape forces them to pick up on it. We don't fall prey to this family of errors because that variable is not salient to us. So the children don't have the same set of affordances on the problem as we do.
Piaget's insight was that the family of errors a child makes during such testing is more informative about their development than their successes. If the errors are systematic (i.e. not random), then there are constraints operating on the child's cognition. The process of childhood development is the process of updating these constraints in their cognition.
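One way to see how a salience landscape can systematically drive the wrong choice is a toy scoring model. Nothing here comes from Piaget's actual study; the weights and the scoring rule are invented to illustrate the idea that the child weights spacing heavily while the adult weights only count.

```python
# Toy model: a chooser scores each row by a weighted mix of candy
# count and spacing, then picks the highest-scoring row.
# All weights are invented for illustration.

def choose_row(rows, weight_count, weight_spread):
    """Return the index of the highest-scoring row (ties go to the first)."""
    def score(row):
        count, spread = row
        return weight_count * count + weight_spread * spread
    return max(range(len(rows)), key=lambda i: score(rows[i]))

rows = [(5, 1.0), (5, 3.0)]  # (candy count, spacing) for each row

# For the child, spacing is highly salient: the spread-out row wins.
child = choose_row(rows, weight_count=0.2, weight_spread=1.0)

# For the adult, only the count matters: both rows score identically,
# so neither is systematically preferred.
adult = choose_row(rows, weight_count=1.0, weight_spread=0.0)
```

The error is systematic precisely because the weighting is a fixed constraint on the scoring, not random noise; updating the weights, not learning one more fact, is what removes the whole class of errors.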
As adults we're wiser than the children who fall prey to these errors, because we've trained our salience landscape to home in on the relevant information in the relevant way. It's not just that our salience landscape is better than a child's for this particular problem. Our entire machinery has been improved so that it's no longer susceptible to this entire class of errors. That's what frees us from this illusion. However, we're still susceptible to some forms of self-deception. The only way to escape those is to make a systematic change in the structure of our consciousness, that is, in the machinery that produces the different landscapes in our cognition. The improvement of this machinery is what it means to be wise.
When our salience landscape has been correctly tuned, we're less susceptible to bullshit. We're more easily able to see through illusion and into reality. It affords us a more comprehensive and flowing contact with reality.
Ontonormativity of higher states of consciousness
Altered states of consciousness have the potential to create this systematic insight. Some engender such tremendous insight into the agent and arena that people feel compelled to transform their entire lives based on them. People with such experiences report that these "higher" states of consciousness are more real than what they normally experience in their everyday lives. This feeling of "really real" is what John Vervaeke calls the problem of "ontonormativity". Ontology is the study of the structure of reality. Normativity is the phenomenon whereby certain things are held to be better than others. In this context it means that these states place a demand on us to "be better".
These higher states seem to be universal across populations and cultures. In Waking from Sleep, Steve Taylor suggests that ~30-40% of the population have these experiences, with variation in intensity. Roland Griffiths's lab has shown that attaining mystical experiences on psychedelics tends to lead to substantial personal transformation and quantum change. I specifically recommend this paper from his lab. It's one of his most cited papers, and it shows a reduction in depression and anxiety in terminally ill patients when dosed with psilocybin.
We need to understand why these higher states are ontonormative, and why other altered states like dreams are not. Like dreams, these states are temporary. Again like dreams, they don't always have coherent content. They're somewhat ineffable. Yet people who undergo such experiences promote these higher states, and reject other altered states like dreams.
On some level, all the various axial traditions are patterned around the idea that some person had a higher state, and shared some beliefs justified by those states. If we want to understand the legacy of the axial revolution in our cognition, we likely need to understand the machinery of these states.
The next few episodes take a deeper dive into these higher states, and attempt to unpack some of these questions.