Episode 109 - Predictive Brain with Andy Clark

Transcript

David: 0:12

Hello, and welcome to Overthink.

Ellie: 0:15

The podcast where two philosophers like to share exciting, cutting-edge research from fellow philosophers now and again.

David: 0:23

I'm Dr. David Peña Guzman.

Ellie: 0:25

And I'm Dr. Ellie Anderson. There's been a real revolution in the cognitive sciences in recent years, which has constituted a turn towards consciousness. In the 1990s, talking about consciousness within cognitive science was somewhat taboo. But in recent decades, a lot of cognitive scientists have become interested in the problem and nature of consciousness. And this is good news for philosophers, because philosophers have long been interested in the problem and nature of consciousness. And so there have been some really exciting dialogues in recent decades between cognitive scientists and philosophers, and people working at the intersection of those two, people who are trained in both philosophy and cognitive science, including our guest for today. And most recently, I don't know what your experience has been of this, David, but in the circles that I run in, in phenomenology, studying selfhood and consciousness, this model of what's known as predictive processing has really come to the fore, where the idea is that the brain's primary purpose is to keep us alive, where keeping us alive means predicting what surrounds us in the world. Simply put, experience is a kind of prediction or set of predictions. And this turns on its head a traditional view of consciousness, according to which consciousness passively receives input from the surrounding world.

David: 1:52

Yeah, in that view that is upended, or what was the term that you used, Ellie?

Ellie: 1:56

Don't ask me to repeat myself, I don't know.

David: 1:59

It upends or turns around this older conception, which is the empiricist's conception of how experience is constituted, and which typically presupposes two things. One, it presupposes that experience is built from the bottom up, beginning with the sense organs receiving information from the external world and then sending that information to various parts of the brain for higher and higher order processing, binding, and integration, until you get, at the end of that process, something like a percept that is much more coherent and organized. And the second dimension of that empiricist model is that the process of integration is always linear. In other words, it only goes inward, from the senses to the brain, but never the other way around. And one of the central tenets of the predictive processing literature is that in fact our central nervous system operates in a much more cyclical way, where there are flows that feed forward from the senses to the brain, but then also backwards, the other way: there are feed-forward and feed-backward loops, making sensation and prediction the two poles in between which experience is constituted. So here we have a loop rather than a line.

Ellie: 3:16

A loop rather than a line, but also a top down approach rather than a bottom up approach, which I think is a little bit different from what you just described, David, which is maybe a more recursive model. Because what you get with predictive processing, of course, you're right that there are feedback loops and feed-forward loops, perhaps we could say, but you get the idea that perception really begins from our brain's model of the environment rather than from some raw environment itself. And this idea is new to neuroscience in its contemporary forms with the predictive processing model, but it's not a new idea altogether. In fact, in the late 19th century, the German physicist and physiologist Hermann von Helmholtz developed the idea that perception is an inference. And this, with various changes, is what comes down to us today as the predictive processing model, which, like we said, is getting a lot of uptake. So the idea here is that if you see a tree with a bird on top of it, you're not having the experience of raw sensory data, of these shapes of tree and bird, or of a single shape that you then distinguish in your ideas into tree and bird, the color field, etc. That would be, like you said, the empiricist picture.

Ellie: 4:38

This is something you see in David Hume, in the idea that we first have impressions and then those impressions develop into ideas. That would be a bottom up approach. Instead, on the predictive processing view, what you first have is the brain's model of the tree with the bird on top. And then that model is constantly being updated by new information within the sensory field. So we go from, let's say, ideas to impressions, to put it super reductively, and then, as you said, this kind of looping back and forth.

David: 5:10

Yeah, that's right. So you begin from that top down expectation or prediction, then you confront the external world and start tweaking or updating the model as you move through time. And to add a little bit more detail to this "reductive" sketch, Ellie, that you've presented, according to Andy Clark, our guest for today, the way in which the brain operates and makes those predictions and updates that internal model of the world happens in four stages. And because this is somewhat technical, I thought we should talk a little bit about those stages as a way of preparing for the interview, so we know exactly how, on his theory, the brain brings about those predictions in the first place. So, according to Clark, the brain first generates an internal model using past experience, and then uses that internal model to make predictions about what is most likely to happen to me, given everything that I know about the way the world works and how my body interacts with it. So you first generate a model and then make predictions. Of course, many of your predictions will come true or be fulfilled, but in many cases, our brains make predictions that simply don't hold up to reality. And in those cases, we have a prediction error. There is a little bit of a red light that goes off in our brain that says, look, this prediction that I made based on the internal model is not matching what my senses are telling me is happening in the external world. And whenever you have this mismatch between prediction and sensation, the brain has to pivot and pay more attention to the world of sensibility, to the information that is actually coming in through the senses, and it has to learn to disregard or re-evaluate its own previous expectation, and then it uses that information to make better predictions moving forward. So you have modeling, prediction, dealing with prediction error, and then the final stage of this process, according to Clark, is what he calls estimations of precision.
In other words, when the brain does have a conflict between prediction and sensibility, the brain has to look at the situation and, as he says at one point, turn up or down the volume, either on the sensation side or on the prediction side. In other words, the brain has to ask itself, "Hm. My prediction is telling me one thing, the senses are telling me another thing, which one am I going to give more weight to as evidence before I decide what to do?" So there is this kind of back and forth where the brain has to consider which way to lean before making that final decision.
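The four stages described above can be caricatured in a few lines of code. This is only an illustrative sketch: the function, the variable names, and the simple gain-style precision weighting are my own assumptions for illustration, not code or equations from Clark's book.

```python
def predictive_update(prior_mean, prior_precision, sensed, sensory_precision):
    """One precision-weighted update: combine the model's prediction with
    sensory evidence, each weighted by its estimated precision (confidence)."""
    # Stage 3: prediction error, the mismatch between prediction and sensation.
    error = sensed - prior_mean
    # Stage 4: estimation of precision decides how loudly that error "speaks".
    gain = sensory_precision / (sensory_precision + prior_precision)
    # The model is nudged toward the evidence in proportion to that gain.
    return prior_mean + gain * error, prior_precision + sensory_precision

# Stages 1-2: an internal model built from past experience issues a prediction.
belief, confidence = 20.0, 1.0        # e.g. "this room is about 20 degrees"
for reading in [22.1, 21.8, 22.3]:    # noisy incoming sensory evidence
    belief, confidence = predictive_update(belief, confidence, reading, 4.0)
print(round(belief, 2), confidence)
```

Each pass through the loop shrinks the prediction error, so the model ends up close to the evidence while retaining some weight on its prior: a toy version of "modeling, prediction, prediction error, estimation of precision."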

Ellie: 7:53

So, in situations of high familiarity, like walking around my own house, my predictions are probably going to be pretty reliable. But in situations that are really novel to me, such as experiencing an immersive artwork that plays on the boundaries of those senses, my predictions aren't going to help me as much. And so I will be existing more in this space of ambiguity, where there's a pretty low precision weighting that's being placed on the predictions that I have there. And I want to talk to Andy Clark in the interview about a lot of really interesting mental health and social implications this has, but I'll just mention a couple of examples now that he brings up in the book. One is the example of implicit bias among the police force. We know that we have an epidemic in this country of police officers killing unarmed Black men, and that this has a lot to do with racial stereotypes. But what's happening here on the level of the brain, according to a predictive processing model, is that you will sometimes, as Clark puts it, perceive what you feel. If you feel like there is a threat to your person, then you're more likely to act as though there is a threat, and not just act but actually perceive a threat, right? And so he suggests that retraining our precision weighting and our predictive models in these cases could actually help mitigate some of the effects of implicit bias.

David: 9:19

Yeah, that example stood out for me because it's a clear illustration of the danger of these predictions that are deeply rooted in our perception of the world: we come to perceive what we expect to perceive. Another example, a little bit less politically charged than that, that I really like that he gives, Ellie, do you remember that controversy that happened a few years ago over that dress? Is it white and gold, or is it black and blue? He talks about this. I don't know if you remember that passage where he says the predictive processing model helps us explain what's going on in this case and why people perceive the dress as one color or another. And basically his answer has to do with whether you're a morning person or a late night owl; that was the pattern that best explained the distribution of how people experience the dress. And he said, for people who wake up typically in the morning, because they're used to being awake and looking at things when there's a lot of light, that amount of light is baked into their expectations of what they see. Whereas people who tend to wake up late and go to bed late at night are more used to working in low light intensity environments. And so they typically predict objects under those ambient light conditions. And depending on which kind of person you are, you will see the dress as either white and gold, if you're a morning person, or black and blue. Which is so far the only explanation, really, that I've seen for this bizarre phenomenon.

Ellie: 10:54

I don't even remember how I originally saw the dress color, to be honest.

David: 10:59

I'm a white gold, I'm a white gold girl all the way through.

Ellie: 11:03

Both are like imprinted in my memory now.

David: 11:09

Andy Clark is Professor of Cognitive Philosophy at the University of Sussex. He specializes in embodied and extended cognition, artificial intelligence, robotics, and computational neuroscience. He is the author of several books, including "Being There: Putting Brain, Body, and World Together Again," "Supersizing the Mind," and the book that he's here to talk to us about today, "The Experience Machine: How Our Minds Predict and Shape Reality."

Ellie: 11:43

Andy, thank you so much for being here today. We're so happy to have you joining us on Overthink.

Andy: 11:48

It's great to be here. Thanks for having me.

Ellie: 11:50

I want to start by asking about one of the core ideas in your theory of the predictive brain. And this is that our brains constantly generate models and representations of the world, which are then used to make predictions about experience. So on this view, perception is a top down process. Our expectations shape what we perceive rather than our perception just being something we receive passively, impressing itself upon our minds from an external world. And some folks in this field of predictive processing think this makes perception continuous with hallucination. I'm thinking in particular of Anil Seth, who's been a popular proponent of this position recently. But you reject this view that the predictive processing model implies that perception is just hallucination. You instead want to preserve a role for the rich sensory information that we receive from the world as well. And so, to put it philosophically, you don't want to go in the direction of idealism. But how do we get to the real world beyond the experience machine, in your view, if we're always just predicting? What is your view on idealism versus realism?

Andy: 13:04

Yeah, there's a lot going on there. I think a good place to start is by backing off a little bit and saying that the difference with Anil Seth there is more a difference of emphasis. So the people that are talking about hallucination, they tend to say, okay, perception is a controlled hallucination. And as long as you put enough emphasis on the controlled, I wouldn't have any real disagreement with that. We both agree that brains are home to a kind of generative model, actually, just like all those generative AIs that surround us now: a generative model that's busy trying to construct the best guess of the current sensory signal, using what it knows about the world. So there is a real sense there in which the brain is trying to hallucinate the current sensory signal. But that's just the first step. Brains are also anchored to the world by all those flurries of prediction error that then ensue. So the brain tries to predict what's coming in, but then where the prediction doesn't match the sensory evidence, you get prediction error signals, which are really carrying the sensory information that's not yet been predicted. And that's used to select a better top down guess until a good match is achieved. And that happens so quickly that you very seldom experience the first guess, if you like. What you experience is the way that things settle down after a big flurry of prediction errors has recruited better and better guesses. So that's anchoring perception to the world. And that's what the whole machinery is there to do. The whole machinery is there in order to get a better grip on what's outside you by using what you know from your own previous experience to sort out what's important, what's signal, what's noise, all of the stuff we'll probably be talking about more later. So that's why I think epistemically, when all goes well, this machinery is all about staying in touch with the real, structured, external world.
And all that's rolled into the 'controlled' bit in the phrase 'perception is controlled hallucination.' But I feel that's not epistemically putting the shoe on quite the right foot. I'd rather say that hallucination is uncontrolled perception. Because the machinery is optimized to keep us in touch with the world. That's what it's for. It can go wrong. But putting hallucination in the driving seat there seems to me to be misleading from a philosophical perspective. I don't think Anil Seth is an idealist either, and I'm certainly not an idealist. I think this machinery keeps us in touch with the world that's actually out there, as it matters to a creature like us, most of the time, when all's going well.

David: 15:53

And it seems that in your view, our experience of the world is guided by expectations that we have built up and accrued over the course of past experience. And then only the things that surprise that model make it through, eventually leading to particular experiences at a personal level. And I here want to ask you a question about what happens when the model that I have of the world, which leads me to make certain predictions, gives me predictions that don't hold up against the sensory information that I'm encountering. So I want to ask about these cases of error when, as you say, sometimes we do make the wrong predictions neurally and then our brain reassesses and changes its wagers. And the reason I want to ask about this is because there are a number of people, psychologists and cognitive scientists, who have argued that our brains are actually notoriously bad at detecting mismatch. And I'm here thinking about the phenomenon of change blindness in particular as an illustration. There have been these psychological experiments that show that if you distract somebody enough, you can change the entire layout of the environment. You can change the person they're talking to. You can change paintings on the wall, the carpets, you name it, without them even batting an eye; they don't notice that the world has changed, even if it's a somewhat familiar environment for them. And for some of these people, this phenomenon of change blindness indicates that our minds are not actually prediction machines, or if they are, they're just really bad ones since they miss major instances of mismatch. So I just want to ask you point blank: what's your take on change blindness?

Andy: 17:41

Okay, the point blank answer is that I think change blindness is actually really good evidence for the predictive processing picture. But in order to see that, we actually need another bit of the predictive processing picture to be on the table. So, the idea isn't just that the brain is making predictions all the time, but that it's doing something else simultaneously, which is predicting its own confidence in its predictions, if you like. So it's estimating how confident it should be of the current sensory evidence and how confident it should be of its current predictions. What that means, in effect, is that if the brain turns out to be overconfident in what's in a particular scene in front of us, then it will not respond to incoming sensory evidence. That evidence will be assigned what, just to get momentarily technical, is called a low precision weighting in the computational models here. And that's just a weighting on the prediction error signal. So it's how seriously the brain is going to take particular chunks of prediction error. And of course, if you don't take them seriously, they don't cause anything to happen upstream. And so if you're confident enough in the scene in front of you, then bits of new evidence in that scene are very likely not to make it through the kind of filter machinery here. And there's one other thing that I'd like to put on the table, at least gently to start with, which is that predictions are also controlling our actions. They're not just controlling our perceptions, because there's a picture of action here where action is, if you like, a way of making certain predictions come true by changing the sensory evidence to fit the prediction. I move my head around to change the sensory evidence to fit what I predict I'm going to be seeing if I move my head over there. So because predictions entrain action too, we saccade, we look around the scene that's in front of us in ways that are highly determined by what we already think is out there. And I think there's a lot of that going on in change blindness and inattentional blindness as well:
we're not harvesting, we're not trying to harvest the evidence that would go against the scene that we expect to encounter, because we're not looking in the right places a lot of the time. There's a lot of road safety work that invokes what they call 'looked but didn't see' errors, where basically people have actually saccaded in the right direction, maybe even moved their head around, but not seen things. That's the sort of standard case there. But also cases where, because, for example, you're approaching an intersection and there's only two places you expect vehicles to come from, you just don't look at the cycle lane that is maybe at 45 degrees to that intersection. So there's a lot of factors, I think, that make us sometimes ignore good evidence.
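The remark that down-weighted prediction errors "don't cause anything to happen upstream" can be put in toy numerical terms. The function and numbers below are my own illustration of the idea, not code from any of the computational models referred to in the conversation.

```python
def revised_estimate(prediction, evidence, error_weight):
    """Precision weighting acts as a gain on the prediction error signal:
    a weight near zero filters the error out; near one lets the senses win."""
    prediction_error = evidence - prediction
    return prediction + error_weight * prediction_error

# An overconfident scene model assigns sensory errors a tiny weight, so even
# a large change in the scene barely moves the estimate (change blindness).
overconfident = revised_estimate(prediction=0.0, evidence=1.0, error_weight=0.05)
receptive = revised_estimate(prediction=0.0, evidence=1.0, error_weight=0.9)
print(overconfident, receptive)
```

The same mismatch between prediction and evidence produces almost no revision under the overconfident weighting, which is the sense in which confidently predicted scenes can swallow new evidence whole.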

Ellie: 20:47

Yeah, one of my cousins works in transportation. I live on this really busy street, and it's such a pain to walk to a crosswalk. You have to walk like three minutes in either direction minimum to get to a crosswalk. And so I was telling him I wish that I had a crosswalk across the street from my house, because then that would make it easier to cross this busy street. And he was like, you actually really do not want a crosswalk there. Because it's such a busy street in Los Angeles that cars are not going to be looking for this crosswalk, and it would be really dangerous; people could get hit. And so I think that is an illustration of precisely what you're describing.

Andy: 21:23

That's a great example. It's also the reason why expertise is always a mixed blessing. In some of the road safety experiments, they had novice drivers approach a complicated intersection with a bicycle coming from a weird direction, and the novices were more likely to spot the bicycle than the expert drivers, who were looking around in line with their predictions and not looking in the right places.

David: 21:48

It reminds me of that research suggesting that philosophers, especially ethicists, are probably the worst people to make ethical decisions because they just overweigh their own expertise and assume that they know more than the average person about moral thinking. But could I ask you a quick follow up about this? Because now that we're thinking about not just the predictions that our brain makes, but also the confidence that it attaches to those predictions, in the case of change blindness, to stay with this phenomenon, or even the case of the crosswalk, does this mean that our brain is overly confident if it is missing a lot of things in its environment that are changing? Is our brain just excessively hubristic in ways that maybe we should be wary of, or is that something that only happens in certain cases, on your view?

Andy: 22:40

It's something that only happens in certain cases on this view, but those cases are really important and interesting. There are the cases of misfiring expertise that we were just talking about. But there are also a huge range of cases in what's currently talked about as computational psychiatry, where the idea is that certain brains have become overconfident in their own predictions, and those brains are likely, for example, to hallucinate things that aren't there, just because they strongly expect them to be there. There's a lot of interesting work on psychosis that is based on that core idea. And then there's the idea that in other forms of unusual or non-neurotypical experience, an example here being autism spectrum condition, there's an overemphasis on the sensory evidence. And so the world seems to be full of all of this important information that you should be taking seriously. And that of course makes it very hard to negotiate a complex world, particularly one that's been structured by neurotypical folk. So computational psychiatry, I think, is one of the really interesting application areas for this picture. And it's nearly all about differences in this precision weighted balancing act: how that balancing act varies between different individuals, the same individual at different times, different situations, and so on.

Ellie: 24:07

Yeah, that was one of the most fascinating things about your book for me, precisely those implications, because I think this was the first time that I'd really read something in the predictive processing space that went so far in the direction of talking about mental health and neurodiversity. In particular, there are a lot of implications of your work for how to better adapt to our environments. You mentioned autism just a moment ago, and you have a place in the book where you talk about how training that improves what's known as interoceptive self-awareness in people with autism spectrum condition could improve their ability to discern subtle emotional information. There are so many interesting examples that you give in the book, but an example that you mention just before you talk about autism spectrum condition in this quote is hostage negotiators. And hostage negotiators tend to have extremely high interoceptive accuracy. That is, if you ask them to measure the rate of their heartbeat, and tell me if I'm getting this right or not, they will do so with exceptional accuracy. And that means that they're deeply attuned to the difference between their own bodily cues and the bodily cues of others, which makes them unusually empathetic people, we might say. You say that they have an ability to pick up empathically on how others are feeling so as to judge when and how best to intervene. One of the suggestions here is that people with autism spectrum condition can train themselves in precisely what these hostage negotiators have trained themselves to do, in order to be able to pick up on social cues more and more.

Andy: 25:45

Yeah, I think this is a very interesting area of work. It's quite early days in this area. A lot of the work is being done by one of my ex-Sussex colleagues, Sarah Garfinkel, who is now at UCL in London. So she's piloting interventions with autism spectrum condition folk, where you literally train interoception in ways that then seem to somehow work against the experience of anxiety that otherwise occurs when you're trying to cope with these waves of unexpected prediction error that are always hitting you. The mechanism of this, I don't think, is well understood. It's not, on the face of it, obvious to me why improving interoception is really going to help there, apart from the fact that on the predictive processing picture, you're never just trying to predict the external world. You're always trying to predict the external world and your own actions and your own internal world all at once. And all of those sources of information have to somehow come together into a coherent whole. And so maybe if we improve our grip on any of those domains, it actually flows over into other domains. That's at least a thought that I'm tempted by. But certainly, practically speaking, Sarah Garfinkel seems to be getting results. So there's something there to investigate.

David: 27:14

Yeah, and I'll want to come back maybe a little later to this issue of some of the things that we can do to reinstate that balancing act that you talked about between sensory receptivity and prediction. One thing that I really loved about your book, and I just have to say it, is that it's filled with concrete illustrations of the ways in which human experience is a top down process where we experience what we expect to experience. You talk about cases of pain, if you expect to feel pain, you're going to feel pain even if there is no organic "cause." You talk about optical symptoms, you talk about action and comportment. You also talk about the example of the placebo effect, which is, I think, a very good illustration of this. You give a patient an inactive substance, you tell them that it's a drug that's going to make them feel better, they report feeling better, they give you a five star review on Yelp, you get more patients as a doctor. There is one example that you use in the book that left me with questions, and that is the case of the honest placebo. A) I had never heard about this, so thanks for introducing me to this concept. But B) it's a really counterintuitive idea. Because with the honest placebo, the idea is that you give a patient a placebo. You give them a sugar pill, and then you are honest about it. And you tell them, look, I'm giving you this pill. It is not real. It is an inactive substance. Take it. There is no science behind this. And what's really bizarre is that patients still report feeling better even when they know that the thing that they've been given has no efficacy associated with it. And so my question to you is, how do you see the honest placebo fitting into this predictive theory of experience that you've developed? Because I definitely see how the placebo effect fits: you expect it to be real, it's kind of real for you. 
But in this case, it seems to work in the opposite direction, because the patient upon hearing that it is not a real medicine, should expect it not to work, and yet it works. So can you talk to me a little bit about this thing that's going on?

Andy: 29:36

Yeah, I think the honest placebo case is very interesting because what it highlights for us is the importance of unconscious predictions. So conscious predictions are clearly just part of the mosaic here. Take, for example, a standard case like the hollow mask illusion. This is an ordinary sort of joke shop mask, with one side convex, the other obviously concave where your face would fit in. If you light that from behind and then slowly turn the mask so that the concave side is facing you, you'll still see a convex face. Even though you know that's a joke shop mask, you know that it's the concave side that's facing you. And that seems to be because our brains are really expert at predicting the shape of faces. It's one of the things that we've seen the most of, we care the most about, and our brains are really confident that when you've got a certain amount of face and eye shape information, you're going to have a convex shape around it. So it's very hard for us to see the concave shape there. And that's a case where your conscious mind knows full well what's in front of you, but all those layers of unconscious prediction are trumping the information that would give you the other experience, if only it were allowed to speak for itself, as it were. And I think that honest placebo is a case like that. What's going on, I think, is that there's a lot of information hitting our senses from stuff like: it's being given to me in a hospital setting. It's being given to me by a doctor. It's in a nice piece of packaging. All of those things are going to recruit predictions against our will, if you like. Our brains will start to fire up the prediction machinery in ways that will have effects that go against what we consciously expect to happen, which I suppose is nothing if we truly believe it's an inert substance.
But I do think it raises a really super important and super unresolved question, which is exactly how the conscious prediction stuff gets together with the unconscious stuff. Why is it that sometimes a conscious seed in that sort of cascade can make a huge difference, and other times it just seems to make no difference at all? I don't think that we've got an account yet of how conscious prediction gets together with the unconscious stuff, probably because we don't really have a good account of what conscious anything is anyway.

David: 32:16

Or what unconscious anything is, anyway.

Andy: 32:19

Yeah, I guess that's right. Somehow the unconscious side seems easier to imagine.

Ellie: 32:24

Yeah,

Andy: 32:25

But yeah, think about reframing. That's a very powerful technique that is often done with a simple little verbal flip. I sometimes experience tingles in my fingers before I'm going to give a talk. And I've learned to reframe those tingles not as anxiety, but as chemical readiness to give a good performance. And I find that really helps. And yet reframing my experience of the hollow mask, telling myself that's a hollow side in front of me, doesn't do anything.

Ellie: 32:58

Well, and I think this leads me to ask a little bit more about the relation of what you're talking about here to what a layperson might call something like positive thinking. I live in Southern California, and positive thinking and manifestation are all over the place here. I grew up here, so I have a little bit of that in me. And I have seen some of the ways that your perception can be wildly changed depending on what you are expecting to find, and I feel like the predictive processing models have given me a really helpful framework for understanding what's going on. Just to give you a really woo-woo example: one of the weirdest experiences I ever had was being 19 or 20 years old on a meditation retreat in Thailand. As David and many of our listeners know, I've been a longtime practitioner of meditation. And I saw a mosquito biting me on this meditation retreat in Thailand, and I didn't feel pain, and I did not experience an itch afterward. And I was like, there's got to be a scientific way to explain this, but I don't know what it is. And it seems like maybe the predictive processing model can help us understand a bit of what's going on there. Having such a different mind state by virtue of being on a multi-day silent meditation retreat was like training me away from expecting pain and itchiness and more towards maybe an experience of just physical well-being. But there's a really toxic or silly version of this, which is just: think your way out of feeling the itch of the mosquito bite, or, much worse than this, just think your way out of suicidal depression. And what you just said in response to David's question, I think, already suggests that it's not as simple as that. But I'm wondering, at the same time, you do really valuably talk about how we can try to predict our own sensory flows, and this will drive learning, and you just mentioned an example where you've been able to do this yourself.
And I wonder a little bit about how you're thinking about the distinction between this model and something like positive thinking or manifestation.

Andy: 35:08

That's a great question, because I think there's a genuine worry here that the vocabulary we're using will seem to buy into stuff that pretty clearly doesn't work, like very simplistic accounts of positive thinking or simplistic accounts of manifestation. And that, again, is because you can't normally entrain the complicated prediction machinery that really helps bring things about just by putting a word in. There are some cases where we seem to be able to do something a bit like that, and the one I gave, about reframing my own anxiety, is a little bit like that. Nonetheless, in nearly every powerful real-world case, there's a lot of expertise that needs to be in place before the reframing will actually do the work. So the sort of phrase that we tend to use here is realistic yet optimistic prediction. We want to take a certain amount of control over our own experience, because this will be good for us. But if you have unrealistic expectations about what that can do, it will actually have the opposite effect, because very soon you'll realise that your attempt to make yourself feel better just by saying to yourself, I'm gonna feel better, is just not working. You're gonna feel worse. So thinking about this in a domain like expertise is useful, I think. If you're gonna steer your car through a narrow gap, then according to these pictures, your brain has to predict the flow of sensation you would be getting if you were performing those actions. But of course, that's a very complicated flow of sensation, and you can't activate that set of predictions just by thinking to yourself, the car's going over there. That only works once you've engaged in complicated training regimes that enable that kind of simple top-level idea to activate all the right lower-level predictions, until your brain is predicting the very subtle flow of sensation you will get if you do perform that action correctly.
So there are hooks here into sports science too, where, in order to learn to perform a sports skill better, you need to learn what it feels like to be performing it better. Then, if you can predict that feeling, that will help bring it about. And that's about as close to manifestation, I think, as these accounts are going to get.

David: 37:45

As you wanna get.

Ellie: 37:47

Which is actually close, but with a robust scientific grounding and an absence of some of the toxic ideologies that often underlie it, I would say.

Andy: 37:56

But also, the example that you gave of the mosquito raises some very interesting questions, because we don't really know what the limits of the prediction machinery are. It seems intuitive that changing your predictions isn't going to cure a cancer or kill a virus. It clearly can alter experiences of anxiety, it can alter experiences of pain, it can alter experiences of chronic pain. But we don't really know quite where the limits are, because predictions probably have effects on the immune system, for example. And as soon as you've got effects on the immune system, you're getting quite close to a lot of the physiology. So there is interesting work just emerging by Karl Friston and colleagues on what they're calling immunoceptive inference. The idea is that there's another layer of prediction here, where the immune system is being brought into the act somehow: it's responding to predictions that the brain is making, and it's altering the predictions that the brain makes. So if that picture ever really got fleshed out, then maybe we'd get a sense of where the limits of altering prediction are going to lie, because that would be about as far as they could go. Maybe that can get you to not actually having an inflammatory reaction to the mosquito bite. I don't know. That would surprise me. Not caring about the mosquito bite? Yeah, that's a lot easier.

David: 39:29

Yeah, it's really interesting, because the incorporation of the immune system response really takes us away from what a lot of people familiar with this work might think are the limits, which is the limit of attention, right? If you redirect attention in various ways through predictions, then of course you can minimize pain. You can maybe change the way in which your senses are operating, or what they're paying attention to, and so on. Because the boundaries or the limits are open, I want you to talk a little bit about some of the strategies that we currently have for improving our well-being, because at the end of the book you talk about, for example, the use of virtual reality for biofeedback. You also talk about things like reframing, which you touched upon just a few seconds ago. So I want to ask you what your favorite hacks are, and what implications you think those hacks might have for how we think about the role that the conscious mind plays in shaping our mental and our physical health.

Andy: 40:40

Yes, it's a great question. I think my favorite hack, although maybe this one isn't well labeled as a hack, my favorite treatment of this kind, is probably the area called Pain Reprocessing Therapy. Pain Reprocessing Therapy is a whole approach that is currently being worked out in quite a lot of different clinical settings. There have now been quite a few randomized controlled trials showing that it really does work as a way of pushing back against chronic pain in a lot of patients, in particular chronic lower back pain, which is a very well studied example. What Pain Reprocessing Therapy really involves is a number of different things, all designed to slowly nudge the prediction machinery away from the prediction that if you try to do more, you will cause bodily damage. It seems as if one of the things going on in chronic pain, or at least in some types of chronic pain, is that the brain has started to make a prediction that continuing to engage in a certain activity will cause bodily damage, therefore you shouldn't. Experiential pain reflects that prediction as much as it reflects action at the nociceptive nerve endings and so on. So you can push back against that, first of all, by telling people about chronic pain: telling them that chronic pain is often something that is entirely curable, because pain itself is a construct, built very much out of predictions, and if we can nudge that prediction machinery in a better direction, that can help with the pain. This, I think, is a very powerful intervention. There's quite a nice documentary about it, called This Might Hurt. Part of the idea here is that experiential pain can at times be a bit like a misfiring warning light in a car. The warning light is on, and the car seems to be telling you, you'd better stop driving me or something very bad is going to happen. But in fact, it's the warning light itself that is malfunctioning.
And not only that: if you manage to continue driving the car, then you will slowly make the warning light get dimmer. That's the surprising twist in the tale there. So that's probably, I think, the most important current hack or procedure. I'm also interested in meditation, which Ellie mentioned a minute ago. Meditation, I think, has the potential to be the most powerful self-administered hack that we can come up with. It looks as if what meditation is doing is giving us a bit more control over that weighting mechanism that I mentioned earlier, the mechanism that puts a certain amount of weight on sensory predictions, on the expectations that we're bringing to bear on the sensory evidence. If we had more control over that, it would be an incredibly powerful thing. We would be able to attend away, as David was just saying, from unpleasant sensations. We'd be able to attend towards more pleasant ones. We'd be able to attend away from intrusive thoughts. And that's, I think, one of the benefits of meditation practice. So those are probably my favorites: Pain Reprocessing Therapy and meditation.

Ellie: 44:14

Yeah, thanks so much for that. Following up on this idea about chronic pain and how we can transform our experience of it, and potentially even eliminate it: one of the things that I really valued about this book is your move away from moralizing language around what are sometimes known as psychosomatic disorders, following a really rigorous scientific approach drawing on predictive processing. For one, you suggest, perhaps unsurprisingly, not using the term psychosomatic at all. Coming from a background in phenomenology, I've always had major issues with that term; it presumes a mind-body dualism that doesn't really make any sense. Instead, you talk about functional versus structural disorders. And one of the things I think is so beautiful about this is how you point out that functional and structural disorders are not different in kind, but rather in degree: they exist on a spectrum. You say that there's no such thing as a raw or correct experience of a medical symptom anyway. So if the question is whether the pain in your stomach is all in your head or actually real, that's just the wrong way to frame it. The better question is: is there some structural reason for it, do you have diverticulitis or food poisoning? Or is it functional, meaning it's happening on the level of experience but doesn't have a structural correspondence to something in the body? That doesn't mean we shouldn't take it seriously, right? Pain in the stomach is pain in the stomach, and it's a spectrum rather than a difference in kind between these two. So I'd love for you to tell us a little bit about that reframing, and why it's important to take seriously what have historically been denigrated as "psychosomatic disorders."

Andy: 46:09

Yeah. I think that what used to be called psychosomatic disorders are a really interesting class of cases, because coming from a predictive processing starting point, they just look like business as usual. They just look like this is how we construct our experience: by attending to our bodies in certain ways, by having certain expectations, and by having certain sensory evidence, certain sorts of stuff going on at the sensory receptors. Nothing that we experience is ever purely what's going on at the sensory receptors; it's always heavily constructed by this other stuff. When you see that this is going on in very ordinary cases all the time, then the more dramatic-looking functional neurological condition cases seem less different somehow. Think about a simple case like phantom phone vibration. Most of us probably feel sometimes as if our phone is going off in our pocket when actually it's not in our pocket, or it's not in the room, or it's turned off. That seems to be a case where the brain has started to chronically expect a kind of buzzing sensation. It's exacerbated by stuff like caffeine, and it's exacerbated by stress, and both of those act on the very same neurotransmitter systems that are doing this balancing act, weighting certain signals over other signals. So under those conditions, a perfectly innocent little ripple of random sensation in your body can be taken as evidence for your phone going off, and then amplified, by the way that you're attending and expecting things, into a strong experience of your phone buzzing away in your pocket. This happens to all of us. This is exactly sensory business as usual. And in functional neurological disorder and in chronic pain, I think this is exactly what's going on. So aberrant attention and aberrant expectation play a big role, and they're the things you can then push back on if you want to change your experience.
So at the end of all of that, I think we start to see a real continuum of cases, as I think you were suggesting there. Even ordinary medical symptoms, where there's a perfectly good structural cause, are experienced very differently by different people; that's obvious. They're also experienced very differently by the same person at different times, which is perhaps a little less obvious. It's as if we've started to make predictions about ourselves based on maybe idiosyncratic expectations about how we're going to feel in a certain context, and those predictions can bring about the feelings that they strongly anticipate. Once that larger picture is in place, then I think we won't see the functional neurological condition cases as extreme outliers; they're just constructing experience in the same way as we are. The big positive outcome of that will be that there won't be as much resistance to the diagnosis of a functional neurological condition as there currently is. Currently there's an awful lot of pushback at that point. It's as if people feel that they're being told that their experience isn't real, that it shouldn't be taken as seriously as other experiences, and there's nothing to support that kind of attitude in the current picture. So I think getting the predictive processing picture of neurotypical response on the table is often a really therapeutically helpful thing to do. And I think it will actually benefit all of us as we try to negotiate a world in which how we feel is so deeply tied up with what we expect.
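[Editor's note: the weighting mechanism described above, balancing prior expectation against sensory evidence according to how reliable each is taken to be, can be sketched in a few lines of code. This is a minimal illustration of precision weighting, not code from the book or the episode; all numbers are invented for the example.]

```python
# Sketch of precision weighting: combine a prior prediction and a sensory
# signal, each weighted by its precision (inverse variance). High sensory
# precision lets the evidence dominate; high prior precision lets the
# expectation dominate.

def precision_weighted_estimate(prior_mean, prior_var, sense_mean, sense_var):
    """Posterior mean as a precision-weighted average of prior and evidence."""
    prior_precision = 1.0 / prior_var
    sense_precision = 1.0 / sense_var
    total = prior_precision + sense_precision
    return (prior_precision * prior_mean + sense_precision * sense_mean) / total

# Illustrative phantom-vibration case: a strong expectation of a buzz
# (prior mean 0.8, low variance) meets a faint random sensation (0.1) that
# is treated as unreliable (high variance). The percept is pulled toward
# the expectation -- the phone seems to buzz even though it barely did.
percept = precision_weighted_estimate(0.8, 0.2, 0.1, 1.0)
print(round(percept, 2))  # -> 0.68, far closer to the expectation than the signal
```

On this toy picture, caffeine or stress would correspond to shrinking the prior variance (or inflating the sensory variance), tipping the balance further toward expectation.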

David: 50:07

Andy, this has been a great conversation. We appreciate your time and we are going to recommend your book to our listeners, The Experience Machine.

Ellie: 50:16

I know, I've been telling so many people about it.

David: 50:19

Yeah. How our minds predict and shape reality. Check it out.

Ellie: 50:23

Thank you so much. This has been excellent.

Andy: 50:26

Thanks so much for having me. It was really fun.

Ellie: 50:32

Enjoying Overthink? Please consider supporting the podcast by joining our Patreon. We are an independent, self-supporting show. As a subscriber, you can help us cover our key production costs, gain access to extended episodes and other bonus content, and join our community of listeners on Discord. For more, check out Overthink on patreon.com. David, that was such a great interview.

David: 50:57

I agree. There are a lot of other questions that I didn't get to ask him, especially about optical illusions, but you know, you can't have everything that you want in this world.

Ellie: 51:07

For what it's worth, I thought his answer to your question about change blindness was totally valid. I actually anticipated it; you told me you wanted to ask a question about that, and I was like, he's gonna have an easy answer to this, David. It's already in there, to be honest. But I thought the answer was still really illustrative. I want to think a little bit more now about how predictive processing helps us understand experience, because I will say, since I've become aware of the predictive processing movement, I've felt like it intuitively tracks with a lot of what I understand about experience coming from phenomenology and coming out of German idealism. But I also know that there are some critiques of it, for sure, and obviously in these last couple minutes of the podcast episode we're not going to really canvass those. This model, ultimately, stemming from the cognitive sciences at the intersection of philosophy, gives a lot of sway to a mathematical principle known as Bayesian inference. Thomas Bayes was an 18th-century philosopher and statistician who gave us a way of updating the probability of a hypothesis in light of new evidence, and who created a formula to model what that updating looks like. We're not going to get into what that formula is here. It's pretty simple, but sharing mathematical formulas on a podcast is not very us, and also I don't think it would be very effective. But I just want to mention that this notion of predictive processing in its current forms depends quite a bit on this mathematical formula. And I wonder what that means for us when we're thinking about how this maps onto experience.
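[Editor's note: for readers of the transcript, the formula Ellie alludes to, Bayes' rule, can be sketched in a few lines. This is an editorial illustration, not part of the episode; the hollow-mask numbers are invented for the example.]

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E),
# where P(E) = P(E | H) * P(H) + P(E | not-H) * P(not-H).

def bayes_update(prior, likelihood_given_h, likelihood_given_not_h):
    """Return the posterior P(H | E) from the prior and the two likelihoods."""
    evidence = likelihood_given_h * prior + likelihood_given_not_h * (1 - prior)
    return likelihood_given_h * prior / evidence

# Illustrative hollow-mask case: the brain's prior that a face-shaped surface
# is convex is very strong (0.99), so even sensory evidence that favors
# "hollow" four-to-one barely dents the posterior belief in "convex".
posterior = bayes_update(prior=0.99, likelihood_given_h=0.2,
                         likelihood_given_not_h=0.8)
print(round(posterior, 3))  # -> 0.961: the "convex" percept wins anyway
```

The point of the toy numbers is the one from the interview: when the prior is strong enough, the evidence alone cannot flip the inference, which is one way of modeling why the hollow-mask illusion resists reframing.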

David: 52:46

Yeah, you're right to foreground the mathematical origins of this concept of Bayesian inference. And I think you see that mathematical orientation in the very title of Clark's book, The Experience Machine. Clark sees the brain as a machine that runs calculations, that runs inferences on a regular basis, as a result of which we get experience. Now, that is different from how some people, especially those associated with phenomenology, sometimes think about experience, because they tend to move away from thinking about the brain as a machine, and from thinking about experience as something that is the result of calculation. That's why I asked that question about change blindness: somebody like Alva Noë, for example, points to the phenomenon of change blindness, the fact that we don't really notice changes in our environments, to argue that the idea that our minds are constantly modeling the world around us is not true, or at least is misleading. Why would we have to model the world when the world is right there? The best model of the world is just the world. So we don't need this kind of representationalist understanding of what's going on in the mind; we're not constantly making representations. That's a point of tension, I think, between more phenomenological and more computational approaches to experience. But I was going to add that two things remain open questions, and I don't have anything intelligent to say about them other than to mark them: the people who work in this space of predictive processing still have a lot of work to do to figure out the relationship, and Clark mentioned this in the interview, between the conscious and the unconscious, and also between the neural and the personal.
In other words, if our brains are constantly running predictions and making these Bayesian inferences about what the world is most likely like, how many of those predictions are actually felt by the subject as expectations? Which ones are just happening behind the veil of consciousness, and what makes the difference between the two? So I'm really excited to see where the field goes in the next few years, because a lot hinges on how one answers those questions.

Ellie: 55:16

We hope you enjoyed today's episode. Please rate and review us on Apple Podcasts, Spotify, or wherever you listen to your podcasts. Consider supporting us on Patreon for exclusive access to bonus content, live Q&As, and more. And thanks to those of you who already do. To reach out to us and find episode info, go to overthinkpodcast.com and connect with us on Twitter and Instagram at overthink_pod. We'd like to thank our audio editor, Aaron Morgan, our production assistant, Emilio Esquivel Marquez, and Samuel P. K. Smith for the original music. And to our listeners, thanks so much for overthinking with us.