Conversation with J. Kevin O'Regan, June 19, 2021

Sensorimotor Contingency Theory of Consciousness with J. Kevin O'Regan
(clubhouse event)

Kevin O'Regan discusses his Sensorimotor Contingency theory of consciousness as outlined in his 2011 book Why Red Doesn't Sound Like a Bell: Understanding the Feel of Consciousness. See also Kevin's website on consciousness, as well as several change blindness demos. His 2001 paper with Alva Noë on this topic has almost 4,000 citations.

Participants: J. Kevin O'Regan, Joscha Bach, Frank Heile, Sanjana Singh, Chris Vallejos, Rowshanak Hashemiyoon, Natesh Ganesh, Sky Isaac-Nelson, Aniket Tekawade, and others.

Listen — Main Conversation with Kevin O'Regan and Joscha Bach

(Animated Transcript with Audio — Main Conversation)

Listen — Selected Q&A and Group Discussion

(Animated Transcript with Audio — Selected Q&A)

Transcript — Main Conversation

(Animated Transcript with Audio — Main Conversation)

Paul King: [00:00:00] I'm here with Kevin O'Regan. It's an honor to have Kevin here. Kevin is the author of one of the, let's say, six main ideas in the theory of consciousness, especially consciousness of perception. And Kevin's joining us from Paris.

I also notice that we have Jaan Aru in the audience. Jaan will be with us next week. He has a mechanistic theory of consciousness, a cellular-mechanism theory based on neuroscience research on cortical neurons. So he'll be here to talk about that.

So let's get started with Kevin. I'll say that I first met Kevin, although he probably doesn't remember it, maybe 20 years ago at a Tucson Towards a Science of Consciousness conference, a conference held there every two years, where he gave a pretty captivating keynote on his sensorimotor theory of consciousness. He presented some pretty radical ideas, and I think they were pretty controversial, but they sparked a lot of great conversation, and I've been pretty interested in his approach. So I reached out and was happy to see that he was happy to join us here.

So I want to start, Kevin, with some of the things that you mention in your book, Why Red Doesn't Sound Like a Bell. You start out by describing what got you headed down this journey in the first place; it might have been a prompt from an advisor or something like that. What originally got you curious about the nature of visual perception, and conscious visual perception specifically?

Kevin O'Regan: [00:01:23] Well, first of all, Paul, many thanks for inviting me. It's an honor to be here for the first time on Clubhouse. I'm very excited to see what it's like, and really, thank you so much for introducing me to this and preparing me and explaining how it all works.

Yeah. So how did I get started on this? I was doing a PhD in Paris on eye movements in reading. And I was sitting in a cafe with my supervisor, who was actually visiting from Cambridge in England, and he was watching the tourists go by; he was more interested in that than in talking to me.

And as I was sitting there, I thought to myself: look, my eyes are moving backwards and forwards, flitting around very quickly, and so the image at the back of my eye is also moving around in a very substantial way. And yet the world seems totally stable to me. Why is this? And he kind of looked at some passing tourists and said, yeah, that's interesting; why don't you figure it out? And essentially the conversation stopped there. But it got me thinking, and it was really thinking about that problem, why the world is stable despite eye movements, that made me realize that people were thinking the wrong way about what vision is.

And when I think I managed to solve that problem, I realized that not only were people thinking the wrong way about what vision is, they were also thinking the wrong way about what consciousness is. And that's really what got me started on consciousness.

Paul King: [00:03:07] Great. And you said people were thinking about vision the wrong way. What is that wrong way?

Kevin O'Regan: [00:03:13] Well, people were thinking that what vision is, is activation of an internal representation of the outside world. Now, people would usually deny that what they mean by this is that we have an internal picture somewhere in the brain, because of course that would need a little homunculus in the brain to be looking at that picture.

And it would just cause an infinite regress of questions. If seeing the outside world consists of making an internal picture, well, then who is seeing this internal picture, and what does that seeing consist of? Okay. So everybody would deny that the representation of the outside world is an internal picture, but despite denying this, people still fall into the trap of surreptitiously thinking something along those lines.

So for example, the idea most people had in those days was that in order for us not to see the world jiggle around when you jiggle your eyes, you have to have a compensatory mechanism that essentially shifts the internal representation back after the eye has moved. And another idea along the same lines concerns the blind spot in the eye.

You know, everybody knows that there's an enormous blind spot in each eye. It's the size of an orange held at arm's length, and yet we don't see it. And if you read the textbooks, they all say, well, the reason for this is that the brain fills in the blind spot. This is the classical explanation.

And yet the idea of filling in surreptitiously brings in the notion that seeing consists of making an internal picture, that there's a hole in this picture where the blind spot is, and that you have to fill it in. But think about the tactile sense modality: imagine yourself holding a bottle with your hand, right?

Once you've moved your hands around the bottle, you feel that there's a whole bottle; you have the tactile impression of a whole bottle. And yet in between your fingers there are blind spots. Now, do people imagine that there's some kind of in-between-finger filling-in mechanism that the brain is using, so that our impression of the bottle doesn't have holes in it where the gaps between my fingers are?

No. Nobody would think that tactile exploration involves filling in the holes between your fingers. So you see that if you think about tactile exploration, there's no need to postulate a filling-in mechanism. In the same way, if you were to think about seeing as more of an active exploratory process, then you would not have to postulate anything like filling in the blind spot.

So that was really what got me started thinking about the danger of invoking representations to explain perception.

Paul King: [00:06:27] I see. Now, your view sort of has a reputation for being anti-representation, and you mentioned that there's no movie screen, let's say, inside the brain. But of course, if you look out on the world, you take it all in, and then you close your eyes, you still have to have some sense of what's out there, even though there isn't sensory input coming in.

What's going on there, would you say?

Kevin O'Regan: [00:06:47] Well, obviously there's some kind of information in your brain that you retain about the outside world, but what I would claim is that the nature of that representation is more symbolic and abstract. It's more in terms of relationships and abstract concepts; it's things like: the light is to the left of the box, and the box is in front of the chair, and the box is very big.

And over on the other side there are some other objects. So it's like a kind of description, a semantic description. But the reason it doesn't seem like a description to us is that if we're interested in knowing, pixel-wise, what things look like, all we have to do is turn our eye and our attention to the thing that we're interested in, and lo and behold, the retina is illuminated with all the pixels, we have the impression of seeing all those pixels, and any information we're interested in about the object we want to look at is immediately enriched by all these pixels.

But in fact, the pixels are not in my memory, because they don't have to be in my memory. All I have to know is where to look, where to get the information. I wrote a paper, I think in 1996, called the real mysteries of visual perception; you can see it on my website. And I really like this paper, because it suggests that the world acts like an outside memory store.

You can access any part of the world by a flick of attention and a flick of the eye and get any information you want about it. And it's this potentiality, this ability to get the information at the slightest request of your mind, that gives you the illusion that you're seeing everything in infinite detail and richness.

Paul King: [00:08:48] I think you make a comparison to the refrigerator light.

Kevin O'Regan: [00:08:50] Yeah. So another issue, for example, is the issue of continuity. We have the impression, okay, that the information from the outside visual world is continuous. Now, if you think about representations in the normal fashion, you think: well, if you have the impression of continuity of the outside visual world, then presumably the internal representation, the activation of the representation of the outside world, must also be continuous.

But that is a terrible error, because how do you know that the outside world is continuously present? The only way you can know that the outside world is continuously present is by checking. So every time you ask, well, is it there now? You turn your eyes wherever you want to look, and you check: yes, it's there now.

But if in fact there was a pause of 100 years between the two moments you checked, and you weren't aware of it, you would still have the impression of continuity. You don't have to have continuity of the internal representation in order to have the impression of real continuity. It's the same idea as the blind spot: you don't have to have a continuous internal picture in the brain, without holes in it, in order to have the impression that there are no holes in the outside world. All you have to do is be able to find the information that you want, at will, at the slightest flick of your eye.

So it's like the light in the refrigerator. You open the refrigerator, the light is on; you close the refrigerator. You say, well, the light seems to be always on, right? You quickly, surreptitiously open the fridge: the light is still on. Every time you open it, it's on. So you get this illusion of continuity because of the immediate availability of the information.
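A loose computational analogy (ours, not Kevin's) for the world-as-outside-memory and refrigerator-light ideas is lazy evaluation: a store that never holds the detail in memory but fetches it on every "glance", so every check finds it present. The class and names here are invented purely for illustration:

```python
# Toy sketch of the "world as an outside memory store": detail is never
# cached internally; it is fetched on demand each time we "glance", so every
# check finds it there, like the refrigerator light that is on every time
# we open the door.

class OutsideWorld:
    def __init__(self, scene_fn):
        self._scene_fn = scene_fn    # how to fetch detail on demand
        self.lookups = 0             # counts "flicks of the eye"

    def glance(self, location):
        self.lookups += 1
        return self._scene_fn(location)  # detail obtained only when asked for

world = OutsideWorld(lambda loc: f"pixels at {loc}")
a = world.glance("left of the box")
b = world.glance("left of the box")  # present again, recomputed, not stored
```

Nothing is ever stored between glances, yet the observer can never catch the detail being absent; availability on demand is indistinguishable, from the inside, from continuous presence.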

Paul King: [00:10:37] So I think we'll probably want to come back to representation, especially when we bring Joscha into the conversation. But before we go there, I wonder if you could say something about what you cover in the book. I mean, the title of the book: Why Red Doesn't Sound Like a Bell. You talk a little bit about what phenomenology is, what the origin of phenomenology is.

You know, maybe it's not special neurons, maybe it's something else. And you have some set of criteria for what makes something feel real. Can you just outline that?

Kevin O'Regan: [00:11:06] Right. So let me be clear: my sensorimotor theory is a theory not of all forms of consciousness. It's a theory about what's been called phenomenal consciousness.

It's a theory about why things have the sensory qualities that they do: why red feels red rather than feeling green, or rather than feeling like the sound of a bell. My theory is a theory about feel, about what things feel like; philosophers call it qualia, right? It's a theory about qualia, which is what a lot of philosophers think is the hard problem of consciousness. But there are other problems of consciousness, for which I really defer to other people's theories.

So for example, questions of the ability to report on something that you've seen, to be able to make use of information that you've gained from the outside world in your planning, your decisions, your rational behavior, your language: this kind of ability is covered by a number of well-established theories, particularly the global workspace theory, which has now been enriched with a lot of neuroscientific data about brain mechanisms.

I have nothing to criticize about these theories, which are making tremendous progress in finding the brain mechanisms underlying our ability to access and make use of information that we get from the outside world. And another type of theory about consciousness concerns the notion of the self.

So the self is something that psychologists and philosophers have been thinking about for centuries, really. And recently we've got some really interesting theories about the self. For example, you all know Daniel Dennett's theory of the self as a story that essentially tells itself, a narrative center of gravity.

And actually, I was looking at Joscha Bach's web page, and he says something very similar to that, using ideas from artificial intelligence. So the idea that the self is a story that tells itself is a really useful and interesting idea, and I think it is close to the truth, right?

I don't think there's any mystery about the self. I think the problem of the self can be solved by AI-type or psychology-type approaches. Michael Graziano, for example, has a theory he calls attention schema theory, where essentially he says the self is the ability to control one's attention.

So these theories all contribute to some aspects of the notion of consciousness, but I'm interested in something completely different, which is the phenomenal aspect: why red feels the way it feels, why pain actually hurts. You know, if you have a global-workspace-type theory, that theory tells you how you can do information processing, make use of the information, and use it in guiding your future actions.

Okay. But that doesn't explain why it feels like something to see a red patch of color. And just having the notion of the self as a self-referring story that's telling itself in some computer network or computer architecture doesn't explain why the self says of itself that it feels like something to see a red patch of color.

So that is the question that my sensorimotor theory is addressed at.

Paul King: [00:14:49] So why does red look red, and why does it not sound like a bell?

Kevin O'Regan: [00:14:54] Well, you know, that is the question. That is what everybody calls the hard problem. Most people have the impression... I mean, the best example really is pain.

Okay, so take pain. Most people would say pain is a feeling that causes you to run away or to avoid the painful stimulation; the pain causes the finger to retract out of the fire. Now, the behaviorist would say, well, no, pain is just the sum total of the things you do when you're in pain.

You know, William James said something like: you don't run because you're afraid; you're afraid because you're running. That's a kind of behavioristic approach, but it isn't very satisfactory, because how could running actually give rise to this feel of fear that we have?

How could the jerking of your finger away from the fire give you the hurt of pain? It doesn't seem reasonable. Okay. So most people think that the feeling of the redness of red is some ineffable, hard-to-describe quality of what happens when you look at red patches of color, and how to generate this feel of redness seems to be beyond scientific explanation.

And philosophers and scientists tend to agree that there seems to be what Levine calls an explanatory gap between the redness of red and neural, physical, and chemical mechanisms in the brain. So the question is, how can we escape from this hard problem?

Paul King: [00:16:49] And how do we... well, you have a set of criteria.

Do you want to describe those criteria of presence and grabbiness and such?

Kevin O'Regan: [00:16:57] Well, presence and grabbiness are addressing a slightly different question. Before I talk about presence and grabbiness and so on, I think it's better for us to talk about what I call sensorimotor contingencies.

So in order to solve the problem of the redness of red, I think you use a kind of divide-and-conquer strategy, just as when you address the problem of consciousness in general. What people have done is say: first, we're going to address the problem of reporting, our ability to report, which is what the global workspace theory addresses.

Okay, so we divided and conquered that. We've addressed the problem of the self using this kind of theory about what the self is, often also, recently, appealing to social-psychology-type notions; Wolfgang Prinz has a really interesting book about how the self can emerge from selves observing other selves. Okay, so we've divided and conquered the notion of cognitive access, and we've divided and conquered the notion of self. Now we have this mysterious notion of qualia, of the redness of red. We have to divide and conquer again and try to understand what we really mean when we say we're having this experience of redness. What do we mean by having the experience of redness?

So what I suggest, in order to understand this, is to take a simpler case. Let's take first the case of softness; this is my classic example, the softness of a sponge. I ask: what do we mean by having the experience of the softness of a sponge? Now, would you think that the softness of a sponge is generated by some brain circuit?

It seems like an odd kind of thing to say when you're talking about softness, right? Because what we really mean by experiencing the softness is the fact that if you press the sponge, it squishes under your pressure. Okay, softness consists in the fact that you know that if you now wanted to squish the sponge, it would cede under your pressure.

It's a potential, a counterfactual ability. It's the fact that you know what would happen were you to squish the sponge. Okay. So it's a possible action you could make, together with the expected reaction in your sensory input that would obtain when you made that action. It's what I call a sensorimotor contingency.

So softness is not something generated by the brain. Obviously the brain participates in your ability to squish the sponge and to sense that it's ceding under your pressure; the brain is doing something, but it's not generating the softness. That's the wrong way to think about softness. Softness is simply the fact that you know that a particular thing will happen when you do a particular thing. It's a sensorimotor contingency.

So thinking that way about softness gets you out of the problem of trying to invent some magical essence inside brains that generates softness.
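The structure of a sensorimotor contingency can be rendered as a toy data structure (our illustration, not Kevin's formalism; the action and sensation labels are invented): a quality like softness is not a stored essence but a known mapping from possible actions to expected sensory consequences, which can be consulted counterfactually, without acting.

```python
# Toy rendering of a sensorimotor contingency: "softness" as a law linking
# possible actions to the sensory changes they would produce.

softness = {  # the contingency: action -> expected change in sensory input
    "press": "surface yields under pressure",
    "release": "surface springs back",
}

def expected_sensation(contingency, action):
    """What the agent knows WOULD happen: a counterfactual lookup,
    consulted without actually performing the action."""
    return contingency.get(action, "no change")

# "Feeling softness" on this picture = currently mastering this law,
# i.e. having the right expectations available, even before acting:
knows_soft = expected_sensation(softness, "press") == "surface yields under pressure"
```

The point of the sketch is only that nothing in the structure "generates" softness; the quality just is the lawful action-to-sensation mapping the agent has mastered.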

Paul King: [00:20:19] So it's an interactive potential of the environment?

Kevin O'Regan: [00:20:22] Yes, with the environment. It's a potential you have in your interaction with the environment, and it's your knowledge that at this moment that potential can be explored following certain laws, namely the laws of softness. I call it your mastery of sensorimotor contingencies.

You are currently in a state of mastery of the sensorimotor contingency, the sensorimotor law of softness. And when you're currently in that state, when you know that you are currently able to explore the sponge following the law of softness, then that constitutes what it is to feel something soft.

Now, the idea is that if you accept that, it would be miraculous and wonderful if you could take that same idea and apply it to the redness of red. Because if you could do that, if there were some way of applying what I said about softness to red, you would have solved the problem of the explanatory gap.

You would have overcome the hard problem of consciousness, because you would be able to explain why red things look red rather than green, or why they don't sound like a bell, in terms of the sensorimotor laws that govern your interaction with red things. And you would no longer have to postulate something in the brain that somehow generates a red fluid, which gives you this impression of redness and which poses the explanatory gap.

Paul King: [00:21:57] So we have a lot of folks interested in asking questions, and I know Joscha is going to have some questions for you. Anything else you want to say to complete the outline before we shift into more of a discussion?

Kevin O'Regan: [00:22:10] So let me just say that that is one aspect. The idea of sensorimotor laws is the aspect that allows you to explain the similarities and differences between different experiences.

Why does red look red rather than green? Why doesn't red sound like a bell? Why is red a visual sensation and not an auditory sensation? So it explains the differences between different sensory experiences. But there's something extra that philosophers often talk about, which is the fact that experiences have something it's like, rather than having nothing it's like. So part of the supposedly greatest mystery of phenomenal consciousness is the question of why there's something it's like at all, rather than nothing it's like. And I think that what I do for that is say: well, let us divide and conquer again.

Let us look at what people really mean when they say there's something it's like at all. There's no use in just saying, well, it's all very mysterious, that's the feeling I have, and that's it. Let's try to look more deeply into what people actually mean when they say there's something it's like at all.

And I think I managed to decompose the notion of there being something it's like at all into several aspects. One aspect is the conscious access aspect, which is dealt with by today's classical theories of consciousness, like global workspace theory. Okay. So obviously, in order for there to be something it's like at all, you have to have cognitive access; you have to be able to report about whatever you are experiencing.

Okay, there has to be this ability to report, maybe not immediately, but it has to influence your rational thought or your planning or your decisions. So you have to have what Ned Block would call conscious access to something. But that is not a mystery; it can be dealt with by current theories. Okay.

You also have to have somebody in the cockpit. There has to be a self, somebody who is accessing this information; that's dealt with by the notion of self. Okay. So let's assume that one part of there being something it's like at all is that you have to be essentially paying attention to, and making use of, the information that you're perceiving. But then, in addition, you have this feeling that there's something it's like. Seeing a red patch of color is not like thinking about a red patch of color. Seeing a red patch of color has a perceptual presence; it does something to you. There's something about sensory experiences, like the prick of a pin or the sound of a bell.

These sensory experiences seem to come from the outside world and impinge upon you and force themselves on you in a way that thoughts do not. And they force themselves on you in a way that visceral signals do not: for example, information coming from your digestive system, or say the glucose level in your blood, or your heart rate. These are things which are being monitored by your nervous system, but they don't impinge upon your consciousness.

Why is that? Okay, so to answer these questions, what I say is that if you think about what it means to have a sensory experience that derives from the outside world and impinges on you, the sensorimotor laws that govern such experiences, the ones that impose themselves on you, have three qualities that I call bodiliness, insubordinateness, and grabbiness.

Bodiliness is the fact that if you voluntarily move your eyes or your ears or your body or whatever, the sensory input coming from the outside world changes dramatically, whereas that's not true of thoughts, and it's not true of visceral information, not true of the glucose level in your blood. So sensory systems that are subject to your voluntary bodily movements have what I call bodiliness, and that is an indication that the information is coming from the outside world and is real. Okay, so that's bodiliness. Insubordinateness is the fact that, whereas you have some degree of control over the sensory input, because moving your body can change it immediately, your control is not complete: the information coming from the outside world is partially insubordinate to you, because it has a life of its own, like a mouse.

Okay, it can flit across the floor and cross your retina without you moving your eyes. A sound can occur and can change by itself, without you moving your head and changing the angle of incidence. So the stuff coming from the outside world is partially insubordinate to you. It's a question of control.

Your control over the information is only partial: you can change it by moving your body, but you can only change it to a certain extent, because it has a life of its own. So that's insubordinateness. And then there's grabbiness. Lower-level sensory systems in the brain are hardwired by evolution to be exquisitely sensitive to sudden changes.

You know, if there's a sudden noise, if there's a sudden flash of light, your attention is immediately and inexorably oriented towards it. I think there are automatic pathways in the brain that capture your attention whenever there are sudden changes in the sensory input, and this is true for all sensory modalities that come from the outside world. They have what I call grabbiness: that is to say, the neural substrate is hardwired in such a way as to cause your cognitive processing to be essentially interrupted. It's like an interrupt in a computer program: it interrupts your processing and causes your attentional resources to be focused on the sudden event, okay?
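The interrupt analogy can be sketched loosely in code (a toy illustration of ours, not part of the theory; the event names and step counts are invented): background cognitive processing proceeds step by step, but a sudden sensory event preempts that step's work and pulls the focus onto the event.

```python
# Toy sketch of "grabbiness" as an interrupt: sudden events preempt the
# ongoing task before any deliberate processing happens on that step.

def run_agent(stimuli, steps=10):
    """stimuli: dict mapping time step -> sudden event label (invented)."""
    task_progress = 0
    focus_log = []
    for t in range(steps):
        if t in stimuli:
            # hardwired "interrupt handler": orienting wins automatically
            focus_log.append((t, f"orienting to {stimuli[t]}"))
            continue  # this step's task work is lost to the interruption
        task_progress += 1  # ordinary, deliberate cognitive processing
        focus_log.append((t, "current task"))
    return task_progress, focus_log

progress, log = run_agent({3: "loud noise", 7: "flash of light"})
```

The key property the sketch captures is that the redirection is not chosen by the "task" code at all; it is imposed by the loop's wiring, which is the sense in which grabby stimuli force themselves on you.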

So the more of these three aspects, bodiliness, insubordinateness, and grabbiness, your interaction with the world has, the more you have the impression that the thing corresponds to a real stimulation coming from the outside world. It imposes itself on you because of its insubordinateness and its grabbiness, and so you have the feeling that it is somehow more real than thoughts, which don't have this grabbiness, this bodiliness, this insubordinateness. Okay.

Visceral phenomena in the body don't have these properties, or don't have all of them. And that explains why, even though your brain presumably has sensory systems governing visceral processes, governing glucose concentration in your blood and so on, just as it has vision and audition, you don't feel them as real and present to you the way you feel the information coming through the five classic sensory modalities. Sorry to have been so long, but that sort of summarizes it.

Paul King: [00:30:02] Yeah, that's great. Well, this is probably a good segue to invite Joscha to ask a question or pose any disagreements.

He's thought a bit about this problem from the perspective of what might be involved in creating some kind of artificial consciousness, what could be operationalized in an algorithm. And Kevin, what's your view: is consciousness possible in a machine?

Kevin O'Regan: [00:30:23] Oh yes, absolutely. Absolutely. And very quickly: within the next decade or two, I'm sure we will have machines that people agree are as conscious as we are. I'm absolutely convinced.

Paul King: [00:30:35] Joscha, any comment or question?

Joscha Bach: [00:30:38] Sure. Hi, hello everyone, nice to see you. Hi Kevin, hi Paul. Very happy to be in this conversation. My own opinions are not that interesting; they are more like the standard opinions of people who have thought about these subjects for a while, read about them, and integrated them into a piece of understanding.

So the things that I'm going to say are probably not that surprising. I don't expect that I can convince Kevin of anything where he disagrees, because otherwise other people, many, many others, would have convinced him decades ago. Right? So the fact that this didn't happen means that I don't expect we will change anyone's opinion on the spot, but let me point out in sequence some of the points of contention with Kevin's perspective.

And also let me preface all this by saying that I value Kevin enormously as a thinker and as somebody who generates ideas, thoughts, and perspectives in cognitive science. There are many, many points where we agree; I think the majority of ideas are things we would agree on. So the things that I'm pointing out are merely the small parts where we have a substantial disagreement in perspective.

So let's note that we both agree that there will at some point probably be conscious machines, and that it might happen earlier than many people think. Now let's start with the blind spot. I think that we obviously do fill in the blind spot, and we can do this experiment quite easily.

We can take a textured surface and mark one area of it with something else, so there's basically a small interruption in the texture: on our textured table, there is an area with maybe a green blob or a letter or something else. It just needs to not be too big.

And then we mark a fixation point, so that when we directly fixate it, the small discontinuity in the pattern falls in the blind spot. And then we show this to a subject, or even do it ourselves, knowing what we put there: the small letter or marking will disappear, and the texture will continue without interruption. And this means, I think, that our visual system is filling in the texture. It is not that we have some kind of direct entanglement with the outside world that would let us know what's going on there; our retina is sampling, and there are gaps in the sampling.

And if we created additional blind spots in the retina early enough in the development of the neural pathways, we would not see those gaps either. That's because we basically fill in between the gaps: the texture continues, and we can learn to attend to the low-level features. But we can only attend to the gaps if the gaps are somehow represented in the brain.

And we have a low-level representation. Basically, the first principal components of the patterns that we get from the retina are going to be a two-dimensional map, because the neighborhood relationships between the different neurons in the retina can be inferred from the co-occurrence of their signals. So basically, if two neurons in the retina fire at the same time most of the time, it means that they're probably neighbors.

And just by doing these statistics, you find out that the input of this particular sensory modality is a two-dimensional map. And the next principal component that you discover is that this map makes more sense when you assume that it is moving, with saccades, with movements of the eye, over some stable environment, right?

So a stabilized image of what we scan with our retina is going to happen in V2. And as we look further along the long tail of principal components that we discover in perception, we get to geometry, we get to objects that move across the retina, we get to the notion of space and of ourselves moving through that space, and then we get to the different object categories that we observe in the space, and so on.

And all these regularities are something that we get by abstracting more and more, finding more and more invariances in the perception. And this is, I think, what Kevin calls the sensorimotor contingencies, this stability that is being introduced. And this is also something that happens in standard machine learning; we can test the theory that this is sufficient. You can just set up a learning system with sufficient complexity and feed in a randomized set of pixels from a camera, and it's going to find out how the pixels are aligned on the sensor of the camera. And then it discovers objects moving over the camera sensor, right?
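
The learning-system test Joscha describes can be sketched in a few lines. This is a toy illustration of mine, not anything from the conversation: the grid size, smoothing, and the use of classical multidimensional scaling are all my own choices. The idea is that sensors that fire together sit together, so a shuffled sensor array's 2-D layout can be recovered from co-occurrence statistics alone.

```python
import numpy as np

# Scramble the "wiring" of a 2-D sensor array, then recover the layout
# purely from co-occurrence statistics.
rng = np.random.default_rng(0)
side = 10
coords = np.array([(x, y) for x in range(side) for y in range(side)], float)

def smooth_image():
    """One smooth random 'world' image, flattened to a sensor vector."""
    raw = rng.normal(size=(side + 4, side + 4))
    out = np.empty((side, side))
    for i in range(side):
        for j in range(side):
            out[i, j] = raw[i:i + 5, j:j + 5].mean()   # 5x5 box smoothing
    return out.ravel()

signals = np.stack([smooth_image() for _ in range(1000)])  # (samples, sensors)
perm = rng.permutation(side * side)
signals = signals[:, perm]                                 # scramble the wiring

# Nearby sensors see similar values, so (1 - correlation) is a distance proxy.
corr = np.corrcoef(signals.T)
d2 = np.clip(1.0 - corr, 0.0, None)                        # squared-distance proxy

# Classical MDS: double-center, embed with the top two eigenvectors.
n = len(d2)
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ d2 @ J
vals, vecs = np.linalg.eigh(B)                             # ascending eigenvalues
embed = vecs[:, -2:] * np.sqrt(np.maximum(vals[-2:], 0.0))

# Compare recovered pairwise distances with the true (shuffled) grid ones.
true = coords[perm]
true_d = np.linalg.norm(true[:, None] - true[None, :], axis=-1)
emb_d = np.linalg.norm(embed[:, None] - embed[None, :], axis=-1)
iu = np.triu_indices(n, 1)
quality = np.corrcoef(true_d[iu], emb_d[iu])[0, 1]
print(round(quality, 2))
```

The printed number is the correlation between true and recovered pairwise distances; if the statistics really determine the layout, it should be high (the map is only recovered up to rotation and reflection, which distances ignore).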

And if you do this longer, then in some sense it's going to discover the three-dimensional structure of the space. So this is something that is no longer mysterious. The next difference in perspective between Kevin and me: if you think about what red means, if it's not a representation in the mind, where would the red be? Red is not in physics, right?

Redness is not a physical property. There are no colors, no sounds, and no geometry in physics. The only thing that the universe seems to be giving us, out of which we have constructed a model of our perceptual input, is electrical impulses traveling through the nerve fibers, and all these electrical impulses basically have the same properties except for the statistical correlations between them.

And so redness must be an aspect of the statistics of the signals. And this aspect of the statistics is, and here is where I would agree with Kevin, relational: red has something to do with the relationship to all the other categories that we discover. Red is, in some sense, a mathematical property, and mathematical properties exist in languages; they are not out there in physics, they are in this sense constructed in our own mind. And it's a stationary dimension: it's something that exists at a point in space, but not across space. It's a surface property of objects that doesn't move; it's a polar-coordinate representation; and it's already an abstraction, corrected against lighting. So redness is what objects appear as, under different types of lighting, when they are red. It's something that is stabilized, an invariant in the perception. And redness makes sense in comparison to all the other colors that you have, and the other surface properties that an object can have, like translucency and degree of reflectance.
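
The claim that redness is "corrected against lighting" has a compact toy demonstration. The sketch below is my own (the reflectance and illuminant numbers are invented for illustration), using a diagonal von Kries sensor model and a grey-world illuminant estimate; under those assumptions, the lighting-corrected descriptor of a surface is invariant across illuminants even though the raw sensor responses change a lot.

```python
import numpy as np

surfaces = np.array([
    [0.9, 0.2, 0.1],   # a "red" surface: reflectance in R, G, B bands
    [0.2, 0.8, 0.2],   # green
    [0.5, 0.5, 0.5],   # grey
])

daylight = np.array([1.0, 1.0, 1.0])
tungsten = np.array([1.3, 1.0, 0.6])   # warm light: more long-wavelength energy

def descriptors(illuminant):
    responses = surfaces * illuminant   # von Kries model: reflectance * light
    estimate = responses.mean(axis=0)   # grey-world illuminant estimate
    return responses / estimate         # lighting-corrected surface descriptor

day = descriptors(daylight)
warm = descriptors(tungsten)

# The corrected descriptor of each surface is the same under both lights.
print(np.allclose(day, warm))
```

The stabilized descriptor, not the raw signal, is what behaves like "redness" in this picture: a dimension that stays put while the illumination varies.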

So it's a particular dimension in the space of percepts, and so on. And this leads us to the next thing, which is the distinction between symbolic descriptions in our own mind and perceptual descriptions. Perceptual descriptions take place in a space, for instance in the color space, or in a coordinate space where you can move objects around.

And this is a regularity that our mind discovers to make sense of the physical features at a certain degree of abstraction. This is also true for highly abstract things, like the self. I think that the self emerges by discovering your own agency as a category, right? You discover that there are agents in the world.

There are things that can be described by position, by intentions, by the goals that they have, and by the changes that they impose on the environment based on these goals. And we discover that there is an agent in the world that creates representations that are accessible to us, and these representations, which are the thoughts and ideas and so on, inform the actions of that agent.

And this is how we discover our own self. I don't buy this idea that you cited, from Prinz, that the self emerges by observing other selves, because where would the first self come from, right? We would not have a self without observing other selves, and this is a chicken-and-egg problem. I don't buy this at all. I think that you can have a self even if you are solipsistic, and there are probably solipsists as well who have selves. The hard problem, I think, is not the distinctness of red; it's that something feels like something to me, that something feels like anything at all. And also, I don't think that we have a deep disagreement here, but I suspect that this question can be answered in the functionalist way.

We can say that something feels like something at all because there is this virtual self in a story, in a dream generated by the brain. The brain generates this dream because it is useful for the brain to know what it would be like to be a person. The brain is not a person; the neurons are not feeling anything.

The brain is not feeling anything, but having an abstraction of a feeling being, with this divide between sensory processing and perceptual processing, is very useful, because it's informative. It's the best possible model that the brain can generate of its interaction with the environment, to make its own behavior intelligible and predictable to itself, to make plans and goals, and so on.

You need to have a model like this, and so this is the model that the brain comes up with, and we are inside of that model. The me that observes the world is not observing physical reality outside; it's observing perceptual imagery that is being generated outside of the self, but within the mind.

And this leads us to the distinction between thoughts and perception. You have agency over your own thoughts, as Kevin observes, but not over your own percepts, right? This is something that we can see; there's a clear distinctness. The explanation, I think, is that the self is in large part the control model of how we generate thoughts, but the self is not the control model of how we generate percepts. The percepts are generated outside of the self, so it looks like the universe is giving us what we perceive. There is a slight caveat in there: we can direct our attention, right? When we move our eyes somewhere else, which we can do with the control of our body, we will notice that the contents of our perception change, and also when we disambiguate objects. When we, for instance, look at a figure-ground illusion, where we can switch between what we choose to be the background and what we choose to be the foreground, we notice that our perception changes, right? So we can draw things, and within these patterns we can choose how to interpret them, and then something will pop out in the perception. We cannot control the popping out, but we can control what pops out, to some degree, right?

So the direction of our attention and the way in which sensory processing works can be parameterized by our selves; we can have control over them. The self is a control model of what our body is doing; the body is the object of that control. And the body is not discovered independently of this interaction with the world.

We have to discover the entire sensor-action loop to notice that we have a body, that we have control, that we have a self, and so on. And I think that this distinction leads us to Kevin's notions of insubordinateness, bodilyness, and grabbiness. They are descriptive notions; they're not explanatory at all, which I think is the shortcoming. The explanation of insubordinateness, bodilyness, and grabbiness requires functionalism. We need to understand control models to understand insubordinateness. We need to understand the self and the interaction of control to understand what the body is. And we need to understand what's inside and outside of the self in this framework to understand what Kevin calls grabbiness. So this would be my short answer to what Kevin said.

Paul King: [00:42:55] Thanks. Maybe we'll give Kevin a chance to respond to some of that, and I also may start bringing folks on stage who've been raising their hands to ask questions. Kevin, is there any particular angle you want to take? I think one topic was the blind spot, and the fact that we can infer things about what's in the blind spot, or what a completed pattern would look like. Does that tell us anything about representation? Or take any of the other comments that Joscha made.

Kevin O'Regan: [00:43:24] Okay. So with respect to the blind spot: he says he believes there is filling in, whereas I think there is no filling in. And to understand that debate, we really have to define what we mean by filling in. Now, suppose you mean filling in pixel-wise, pixel by pixel, some internal picture of the outside world.

I think even Joscha would agree with me that there's no pixel-wise filling in. What he is suggesting is a possible account of what happens in the blind spot, in terms maybe of low-frequency components that spread over the whole area of the blind spot, and that could be taken to be a kind of filling in.

And I have no objection to that. It remains to be seen, you know, by neurophysiological tests, how the brain actually does whatever it does. So I don't have very much debate with what he said about filling in; it's just a question of understanding what you mean by the notion of filling in.

But if you consider my analogy of the light in the refrigerator, you see that it's not necessary to fill in the light in the refrigerator temporally, because all you have to have in order to have the impression of continuity, of temporal continuity, is for the information to be there when you look at it.

So the same can be said about the information in the blind spot. There's no need to fill in the blind spot, just as there's no need in the tactile modality to fill in the space between your fingers. I mean, I don't think that even Joscha would imagine that in the tactile system there is the equivalent of some kind of filling in of tactile information where your fingers are spread apart.

Now, concerning Joscha's other points, I essentially agree 100% with everything he said. In fact, everything he said is completely consistent with my sensorimotor theory. With regard to red, for example, what he says is that red is a relationship to other sensory stimulations.

Red is how red things change the light. That is exactly what the philosopher Justin Broackes said about red, and that is exactly what we say about red in our article with David Philipona. And it's thanks to the mathematical model that we have of the unique hues that we can predict, with extremely high accuracy, which colors will be taken to be focal colors.

So sensorimotor theory really, really supports the idea that Joscha was putting forward, and that we put forward. And notice that it's a sensorimotor theory: it's a theory about what happens when you move red things around under different lights. The redness is just the invariant laws that describe the changes that occur, or could occur, if you were to move a red thing around under different lights.

That is exactly what Philipona and I showed in our paper about focal colors. Where I disagree is with what Joscha says about the self necessarily having to exist before you deduce selves from other people, because I think that it can be a kind of bootstrap process. So, I mean, one could debate about that.

With regard to the hard problem: Joscha raised the question of why there's something it's like at all. As I said in my introductory remarks, there are several aspects to that question. One aspect relates to the self, and the fact that you need there to be a self that is aware of something going on for that thing to have something it's like at all.

What Joscha has said about that, he framed in his terms of a virtual self that knows that it feels perceptual imagery. I have nothing against that; that's fine. But my theory is talking about a different aspect of this something it's like at all, which is the fact that it has this presence, this feeling that it imposes itself on you. And I attribute that to bodilyness, insubordinateness, and grabbiness: the perceptual presence of stimulations that correspond to stimulations coming in through the five classic sensory modalities.

A final point that he made was that he thinks bodilyness, insubordinateness, and grabbiness are descriptions, not explanations, of sensory presence or what it is like. Let me go back to softness. If you ask what causes you to feel the feel of softness, one way of thinking about it is that somehow there's a magical essence exuded by brain cells that gives you this qualia of softness.

I think we can reject that. What I'm saying is no: if you think about what softness is, what it is constituted by, then you realize that it is simply the fact that when you press, the sponge squishes. So it's a description, and that description constitutes the experience. There's no further explanation that needs to be made.

It's similar to what happened with the question of life. When the vitalists at the beginning of the 20th century were looking for the vital spirit that generates life, they were looking for something analogous to what philosophers and neuroscientists today are looking for when they look for qualia, the thing that generates the redness of red. What I'm saying is that, metaphysically, this is the wrong way of thinking about qualia, just like vitalism was the wrong way of thinking about life. I suggest that we must consider explanations of qualia in terms of descriptions, just as nowadays we consider the explanation of life to be a descriptive explanation.

We say that what we mean by life is a system that interacts in a certain way with its environment: it respires and metabolizes, it replicates. We describe the things that life consists of, and that constitutes the explanation of life. We no longer need to appeal to a vital spirit. And in the same way, I consider that we no longer need to appeal to a notion like qualia to explain the softness of softness or the redness of red.

If we realize that what we mean by redness is just what we do when we interact with red things, the hard problem dissolves because we are no longer looking for a magical essence that generates feels.

Paul King: [00:50:45] Joscha, any responses to that?

Joscha Bach: [00:50:50] Yes, please. So let's start with softness. I think that softness describes the perception of the distribution of force in an object while we deform it. And there's probably not a big disagreement between Kevin and me; I reject intrinsic substances as much as he does. It's a category that is formed by the mind over interactions in space and time that we perform with, in this case, environmental objects. It's probably difficult to discover softness alone,

in your own mind; it's a category of how things happen in the space in which we interact with physical things. And the self: I think that I discovered my own self before I discovered other people's selves, so I'm a little bit partial. But I suspect that there might be people who discover the selves of others before they discover their own self.

If I look at my own children, it seems to me that one of my children was in the category of people who discover their own self first, and the other one maybe the other way around. Maybe this is something that doesn't have to happen in a particular kind of order. With respect to the filling in of the blind spot: the pixels in our case are the nerves in the retina. We don't have direct access to these pixels; what we get is already an abstraction that happens within the retinal cells, in pre-processing. And the next level of abstraction, which seems to happen in V1, is something that we call Gabor patches. These are small spatial-frequency patterns, and it's very interesting that the same patterns that make neurons fire in V1 are being discovered in neural networks, quite independently of the type of neural network or training algorithm that we use to make sense of the pixelated images coming in. The first level of abstraction that is being discovered is these patches of spatial frequencies and color contrasts. So there is a regularity, a universality, between vision systems, and the later levels of representation are constructed by superimposing these spatial frequencies into curves and edges,

and so on, and later on into surfaces. There's basically some kind of mathematical universality in how we can make sense of the statistics of certain input types, right? And so the pixels are not necessary; you can use stochastic arrangements of sensors, like in our retina. It doesn't have to be some kind of matrix where everything is rectangular or something like that.

That's a simplification that takes place due to the production process for camera sensors, but you don't need it. You can just use a random arrangement of input sensors and then try to find out the regularities among them. And if the density of the sensors is high enough, then we get to a similar perceptual hierarchy,

I think, as we do in the brain, when we get a technical system to learn it. This hypothesis is something that Chris Olah at OpenAI calls the universality hypothesis: if you have a general enough learning algorithm and sufficient resources, then the representations that are being discovered are going to be functionally equivalent.
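
The Gabor patches mentioned above are easy to write down. The following is a minimal sketch, with parameter values that are my own illustrative choices: an oriented sinusoidal grating under a Gaussian envelope, the classic model of a V1 simple-cell receptive field and the filter shape that first layers of trained vision networks tend to converge on.

```python
import numpy as np

def gabor(size=21, wavelength=6.0, theta=0.0, sigma=4.0, phase=0.0):
    """An oriented sinusoid (carrier) under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along the grating
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength + phase)
    return envelope * carrier

g = gabor()

# Such a filter responds strongly to a grating at its own orientation and
# spatial frequency, and barely at all to the orthogonal one.
xs = np.arange(21) - 10
grating_same = np.tile(np.cos(2 * np.pi * xs / 6.0), (21, 1))  # matches the filter
grating_orth = grating_same.T                                  # rotated 90 degrees
response_same = (g * grating_same).sum()
response_orth = (g * grating_orth).sum()
print(response_same > 10 * abs(response_orth))
```

Sweeping `theta` and `wavelength` generates the whole family of orientation- and frequency-tuned filters that both V1 recordings and trained networks exhibit.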

And the final thing that I want to respond to is this light in the refrigerator. I think this points to there being two processes in the brain. One is the construction process of perception, a construction that is going to generate a stable model of reality around your local perceptual space. And this local perceptual space, for instance, contains a representation of what's behind me.

I can basically represent the window that is behind me without looking, because I've seen it before. I can remember what it looks like, and it's going to be low resolution and so on, but it's there. And there's a scanning process that accesses this, and the scanning is distinct from the construction. The scanning process is the thing that opens the refrigerator and looks for the light, and the construction process is the one that is storing enough representation to make sure that something is there when the scanning comes around and looks at it. And we can test that these are separate processes by looking at defects in them: it's possible to disturb the construction,

in which case, when you scan the world, you have stable optical illusions, for instance, or perceptual illusions. And it's also possible to disrupt the scanning process, so you see weird repetitions of patterns where there are none, even as you watch. If you give people certain drugs and so on, there will be characteristic distortions and weird ways in which the scanning goes wrong, right?

So they might have perceived recursive patterns in perception, or in the memory of time, and so on. And I think it makes sense to look at Kevin's theory from the perspective of these two separate processes, the construction process and the scanning process. But once you accept that there are these two separate processes, a construction and a scanning process,

you will need to admit that there are representations in the mind. A representation is just a function that is generating a model, of the world, for instance, or of a control process, right? It sometimes requires a type of language, not necessarily natural language, which is just a special case, but some kind of mathematical language.

And this mathematical language is about something, about some domain over which you perform a control process, for instance. And this anti-representational view that Kevin tries to support is something that I don't yet understand how he gets to work.
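
The two-process picture, and Kevin's refrigerator light, can be caricatured in code. This is my own hypothetical sketch, not either speaker's formulation: a construction process keeps only a coarse gist, while the scanning process samples full detail at the moment of access, so every look finds the light on.

```python
class PerceptualSpace:
    def __init__(self, world):
        self._world = world    # stands in for the environment being sampled
        self._coarse = {}      # construction: coarse summaries only

    def construct(self, location):
        # Construction stores a low-resolution gist, not the full detail.
        self._coarse[location] = self._world[location][:1]

    def scan(self, location):
        # Scanning "opens the refrigerator": full detail is sampled from
        # the world at access time, guided by the coarse model.
        if location not in self._coarse:
            self.construct(location)
        return self._world[location]

world = {"behind_me": "window with a grey frame"}
ps = PerceptualSpace(world)
ps.construct("behind_me")

print(ps._coarse["behind_me"])   # only the gist is stored
print(ps.scan("behind_me"))      # yet every look finds full detail
```

The point of the caricature: the stored model never contains the detail, yet detail is present whenever it is scanned for, which is compatible both with "there is a representation" (the gist) and with "the world serves as its own memory" (the detail).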

Paul King: [00:56:56] Yeah. Let's talk for a moment about representation and, then let's  open it up to  some of the additional questions.

So I suppose this question of whether there's filling in at the low level, let's call it pixel-level filling in of the blind spot, could be answered by just knowing whether there's representation of the blind spot area in area V1 of the brain. If there are no neurons representing the blind spot area, and there's probably research that can answer this, that would seem to suggest that there isn't filling in at the low level.

Would you agree Joscha?

Joscha Bach: [00:57:31] Yeah.

Paul King: [00:57:32] So maybe there's research out there that can resolve that. On the topic of representations, Kevin, I'm assuming you wouldn't dispute all the research that shows edge detectors in area V1, and contour detectors in, let's say, areas V2 and V4, and, you know, neurons that respond to objects in the environment.

How would you explain those neurons, given your stance on representations?

Kevin O'Regan: [00:58:00] Well, the word representation is very ambiguous, you know, and I have no objection if by representation you mean a code. The brain is obviously coding stuff. And these nerve cells that are found, that respond to Gabor functions or whatever, are obviously constructed by low-level vision on the basis of statistical regularities that are observed. They're tuned early, during early development, and this tuning follows laws which are obviously shared; this is the universality hypothesis: the statistics of the codes that emerge, in order to make them independent from each other, will be similar in a machine and in a human.

So I'm full of admiration for the work that's been done on that; I have no problem with it at all. But where I do have a problem is, for example, with the notion of construction or the notion of a model. These are very dangerous, because imagine you say that vision consists of creating a model of the outside world.

Once you have this model inside your brain, who is looking at the model? And what exactly are the properties of this model? Think about what I said about the refrigerator light: if you want the model to give you the impression of continuity, that suggests the model has to be continuous. But this is obviously not the case, because if you think about what you mean by continuity, continuity means that you can interact with the world in a way that gives you the information whenever you are seeking it.

It does not necessitate having a continuous model. So whenever you start thinking about models, you risk neglecting the fact that there is no little man inside the brain looking at the model.

Paul King: [01:00:05] Okay, well, I guess one can take the view that the model is just information in the brain that's available to the global workspace for use by other brain areas, for example, by the language area to report.

Kevin O'Regan: [01:00:15] I have no quibble with that at all, but that is not how people usually think, because the word leads them to think in homuncular terms, and it leads them down a dangerous path. If all you mean is just information in the brain, I have no problem.

Paul King: [01:00:33] So you're just concerned about habits of thought that might be created by baggage associated with these words.

Kevin O'Regan: [01:00:37] Yes. And that's what happened. For example, I spent the first part of my career studying the extra-retinal signal, that is to say, studying why the world seems stationary despite eye movements. I did many, many experiments on this, and most people still believe that there's a need to shift some internal model of the outside world by the same amount as your eye movements shift the retinal image.

But if you think about vision in the way that you think about touch, for example, it's just not necessary. And gradually, gradually, people are realizing that they have been misled by the notion that vision consists of creating an internal model. They've been misled into looking for this extra-retinal signal, which has proven completely elusive.


Transcript — Selected Q&A and Group Discussion

(Animated Transcript with Audio — Selected Q&A)

Paul King: [00:00:00] Let's go to some of the questions. I wanted to start with Rowshanak.

Rowshanak Hashemiyoon: [00:00:03] A lot of what I was going to ask is what Joscha was talking about, as far as our representation of self and so on. So I'd like to take this in a little bit more of a philosophical direction. Basically, the sensorimotor theory of consciousness has this sensory override of everything: the internalization of the specifics of what impinges on us from our sensorimotor inputs. But the ideas that we develop of ourselves, in our thoughts, in our consciousness, develop over time through more than just the sensorimotor inputs. You can change the way you think about things through something like making a space through meditation, or the default mode network, which also has constructive and evaluative opportunities for the way that we would envision ourselves in a certain situation, in the construction of local perceptual space, as opposed to these sensory processes. So when you think of a description as an explanation that preempts the need for qualia, and you think about descriptions changing through different experiences that are not sensorimotor but that cause transformation, whether you think of psychedelics expanding consciousness, or people who have experiences that are non-sensory and talk about clairsentience and other phenomena that have been related to consciousness and the fluidity of consciousness, then how would you explain these in those terms?

Kevin O'Regan: [00:01:51] So I think I'm out of my depth with this question, because as you say, my theory really is mainly concerned with, you know, everyday primary sensory experiences, like hearing, seeing, touch, smell, etc. And so I have nothing really; I haven't thought about what happens in situations like the ones you're describing.

Paul King: [00:02:18] Although perhaps we could say that the interactions with the outside world are creating in the brain the vocabulary, the building blocks, for constructing sensory experiences. Maybe those experiences are sensorimotor contingencies, or some kind of expectation about what could happen under different hypothetical interactions. So you have that vocabulary; such building blocks often get called basis functions in theoretical neuroscience. Could these other types of experiences, these inner experiences, simply be a rearrangement of that vocabulary within, let's say, the global workspace of the brain? Would you be okay with that, Kevin?

Kevin O'Regan: [00:02:52] Yeah, that sounds very reasonable.

Paul King: [00:02:54] So that would be something in that, I guess, representation space, or maybe, avoiding that word, a coding, a coding as it were, that was not derived from the environment but could still be experienced by the individual.

Kevin O'Regan: [00:03:07] Right. I think that's fine with me. Yeah.

Paul King: [00:03:10] So, Rowshanak, feel free to clarify if you want. Otherwise, Frank, did you have a comment or question?

Frank Heile: [00:03:17] I do have a question. So I think I might understand the sensorimotor theory as applied to situations where we perform actions on something we are paying attention to, but I don't understand how your theory explains diffuse awareness. Diffuse visual awareness, for example, occurs when we keep our visual attention on one object and simultaneously choose to experience the whole visual field: a visual, phenomenal awareness of colors and shapes and objects and things like that.

However, we cannot verbally describe any of these peripheral objects without directing some covert or peripheral attention to those objects. So why do we have diffuse awareness without directing peripheral attention to the visual periphery?

Paul King: [00:04:03] So something about  I guess our experiences  when we're not focusing on anything in particular.

Frank Heile: [00:04:09] Right. Diffuse awareness is where you keep your eyes on one object in the visual field, but you can experience the entire world around you. It seems to me that the sensorimotor theory requires that you be paying attention to the things that you're having phenomenal awareness of. So I'm trying to understand how I can have phenomenal awareness of things that I'm not paying any attention to, and that I'm not going to use in a motor activity, a sensorimotor activity.

Paul King: [00:04:37] And we haven't really talked about attention and how attention might or might not figure into Kevin's theory.  

Frank Heile: [00:04:41] Well, I kind of assumed that attention is what you're exercising when you're doing what the sensorimotor theory describes. You're perceiving something, you're paying attention to it, and now you're manipulating it, by describing it or manipulating an object. So I guess you're right, I did assume that attention was part of the sensorimotor theory. Is it?

Kevin O'Regan: [00:04:59] No, it's not. Essentially, the whole idea is that it's orthogonal. I would say that attention governs the access component of consciousness, so it determines what you're able to talk about, what you're able to report,

what's going to modify your behavior, your planning, etc. But sensorimotor laws are being extracted by your brain all the time. When you move your arms around, proprioception is changing at the same time, and your brain is continually monitoring all the sensorimotor relations linking your actions to the incoming sensory changes.

So all this information is being monitored, and you have information telling the brain whether or not this is a previously encountered sensorimotor law. And then, depending on whether you cast your attention upon it, you have the impression of perceiving it or experiencing it consciously.

Okay. So sensorimotor laws are one thing, but consciousness, the cognitive access to them, is another thing. Only when you have the two together are you conscious of the sensorimotor laws, and thereby conscious of the experience itself. So with regard to diffuse attention, I think that attention can just be diffused.

Like, if I'm trying to figure out whether a circle is actually circular, I can look in the middle of the circle, where there is nothing. The actual circumference of the circle is in my peripheral vision, and I can check whether or not it is circular that way. So your attention can be diffused: you can be looking with your eyes in the middle somewhere and be attending somewhere out in the periphery. So you can control, to a certain degree anyway, how your attention is diffused around. And there's been a lot of work in visual psychology about that.

There's no real problem, since it's completely orthogonal to the sensorimotor theory.

Paul King: [00:07:08] Although wouldn't the movement of focal attention be a form of internal action, something like fixational eye movements?

Kevin O'Regan: [00:07:16] Well, yeah, actually that is an interesting question. I'd have to think more about that, but a priori, when I talk about bodiliness and insubordinateness and grabbiness, I'm really talking about changes that occur through voluntary body motions, like your eye movements. But attention is not a bodily motion, and it does not change the sensory influx coming into your system. Now, I don't know; I'm not sure what to do about this. It'd be interesting to look more closely at that.

Paul King: [00:07:51] And, Joscha, feel free to chime in on anything that strikes you as we're going through questions. Sanjana, did you have a comment or question?

Sanjana Singh: [00:08:00] I actually had two questions and I will keep them short and precise.

So, as far as I understand, the redness of red can be explained by some sort of relational or associative aspect of the electrical impulses in our cortical mapping, which is basically rendering an image. Its constituent parts, the interactions between different electrical impulses, are what produce the color, structure, size, and, to some extent, texture of the perceived image.

So would it be fair to say that the knowledge of imagination, or the descriptive aspects of imagination, is what somewhat constitutes the perceptual, visual sensorimotor contingency? And the second question: Kevin was talking a bit before about the particular behaviorist viewpoint wherein running is not what causes someone to be afraid; rather, fear is what makes you run, if I'm not mistaken.

And so I was wondering whether this is simply a psychosomatic or neuropsychological claim of some sort, that the mind here imposes itself on the very behavior of the subject, which makes this particular behaviorist viewpoint emerge. So yeah, those are my two questions. Thank you.

Paul King: [00:09:26] Yeah. So maybe starting with the first one. It sounds like this is maybe a little bit in relation to the idea that perception is a hallucination: we're sort of hallucinating an idealized version of the world that is useful to us, that has affordances. Anil Seth has talked about this idea, as have others. I don't know, Kevin, what's your view on that? Of course, visual imagination is another area where that's not being externally driven. How would you explain these types of things?

Kevin O'Regan: [00:09:56] About what Sanjana said: I strongly disagree with her summary about the redness of red, because she said the redness of red was a relationship between the impulses caused by red stimulation. That is exactly the opposite of what I'm saying. I mean, obviously there are relationships in the impulses in the brain that are determined by red stimulation, but they are not what constitutes what it is to see red.

And they do not explain why red things seem red rather than green, or rather than the sound of a bell. What makes red things seem red rather than smelling like something, or sounding like a bell, is the fact that when you're looking at a red surface and you close your eyes, the redness goes away. It's the fact that when you're looking at a red surface and you sniff, there's no great sensory change, the way sniffing causes a change in the olfactory input when you smell something.

It's the fact that when you turn the red surface towards different colored lights, the light reflected back into the eye obeys a certain law, and that law constitutes what it is to see redness. So although obviously the brain and those impulses are reacting to and coding this information, the redness is constituted by the laws. Just like with the sponge: it's a better explanation of what softness is to refer to the fact that it squishes when you press it, rather than saying that you have this or that neural impulse occurring in the brain. So I strongly object to what she said about the redness of red.

Paul King: [00:11:51] Well, to push on that for a bit: red looks different from green, and there are red and green photoreceptors, roughly speaking, sending different chromatic signals to the brain for processing. So what's going on there? Why are red and green different, and how does that relate to whatever light frequencies are being picked up?

Kevin O'Regan: [00:12:10] So, my explanation: I have this paper with David Philipona, and there are a number of other papers based on it, that claim that if you look at a red surface and then you move the surface around under different lights, the light coming into your eye will change in a certain particular way, which is different from other colors.

And it's that law, that sensorimotor law, that constitutes the redness of red. Obviously this is constrained by the fact that you have long-, medium-, and short-wavelength cones in your eyes, and by the fact that the ganglion cells in the retina do certain kinds of processing, etc. So it's constrained by the neural architecture and structure of the lower-level visual system. But the ultimate cause of the redness of red is, like the ultimate cause of the softness of soft, a law that describes what it is to interact with red things.

Paul King: [00:13:21] Well, and one of the points I think you make in the book is about whatever neurons, in whatever area of the brain, seem to process color; I think V4 often gets associated with that.

Those neurons don't know that they're color neurons. They're just dealing with signals that are coming in; they don't know that those relate to color. Presumably, initially, or maybe after life experience, there comes to be some coding scheme that relates to what changes when light reflects off different surfaces.

So is that sort of part of what you're getting at: the fact that there's nothing really special about the neurons, other than what changes when you interact with the environment and how that affects the sensory input?

Kevin O'Regan: [00:14:00] Yeah, right, exactly. In fact, you know, exactly. Yes.

Paul King: [00:14:06] Did that address your comments, Sanjana, on the first point?

Sanjana Singh: [00:14:10] Yes, it did. I think what I sort of did here was to take Kevin's and Joscha's theories together, about electrical mapping and the sensorimotor contingency, and produce a knowledge-of-imagination contingency problem in itself. So that's why maybe I conflated the ideas there, but yes, I definitely see the redness of red more clearly now.

Joscha Bach: [00:14:33] I think that the dimensionality of the color space is given by the types of receptors that we have that can pick up independent dimensions. So if you take, as a simplification, all the different color receptors that we have as giving you intensities for their activation, something like a numerical value that is somewhat continuous in a certain range.

So you measure the amounts of activation for the receptors that are more sensitive to certain ranges of wavelengths. If you now do the statistics over these different receptor types, you find that the resulting cloud of data can be decomposed, projected into a space with a certain number of dimensions, via an eigenvector decomposition.

These basis vectors of that space define the dimensionality of your color space, and this color space can now be rotated and stretched and made to fit onto the objects that you perceive. And the semantics of the colors are only given, I think, by the properties of the objects. What makes red characteristic with respect to blue and green and so on is of course its difference, its sensory difference, its saliency, but it's also the properties of these objects. So basically you perceive red as a warmer color than blue because blue objects tend to be colder than red objects, on average, when you look into your environment, and the red hot objects might be the ones that are hot to the touch, and so on.

And all these qualities that objects have that are correlated to the colors give rise to the relationships that these colors have to the rest of your integrated representations of the universe. And the meaning of red is exactly the set of relationships that redness has to all the other objects in your mind; you cannot represent red in the way that humans do independently of these properties.

If you want to get to a similar representation of red: once, we built robots to play soccer with them. Well, we didn't build the robots; we took them from Sony, the Sony AIBO robots. The task that these robots had to fulfill as part of that was to find the ball on the field, and to make that task as simple as possible, they made the ball bright orange and the field uniform green.

And then we hoped that we could detect orange and green just by looking at the pixel values of the camera sensors of these robots. And it didn't work, because red, or orange, or green are not just wavelengths; they have to be interpreted in the context of the objects. That was because the ball reflects the green of the field, and the orange of the ball was also projected by the light onto the green field below the ball, something that we don't consciously perceive unless we focus on it when we look into the environment.

But if you look at the camera pixel values, you find that directly below the ball the field is more orange, and the ball is more green just above it, right? So just by segmenting the colors, you don't get a nice orange circle on a green field. What you get is areas where the colors are even inverted.

And so you have to make a transformation that takes knowledge about the geometry of the space into account, right? You have to interpret color in the context of the scene that you're looking at; you correct it in seeing it. It is an aspect of the surface under certain lighting conditions. You have to look for an invariance; it is, to speak in Kevin's terms, a sensorimotor contingency. You have to identify something in relationship to all the other things.
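Joscha's eigenvector-decomposition point can be sketched numerically. This is a toy illustration, not real photoreceptor data: the Gaussian cone sensitivity curves and the random smooth spectra are assumptions made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.linspace(400, 700, 31)  # nm

def cone(peak, width=40.0):
    """Hypothetical Gaussian sensitivity curve for one receptor type."""
    return np.exp(-0.5 * ((wavelengths - peak) / width) ** 2)

cones = np.stack([cone(560), cone(530), cone(420)])  # "L", "M", "S": (3, 31)

# Random smooth nonnegative spectra (sums of broad positive bumps)
centers = rng.uniform(400, 700, size=(1000, 5))
amps = rng.uniform(0, 1, size=(1000, 5))
spectra = np.sum(
    amps[:, :, None]
    * np.exp(-0.5 * ((wavelengths - centers[:, :, None]) / 30.0) ** 2),
    axis=1,
)                                        # (1000, 31)

responses = spectra @ cones.T            # (1000, 3) receptor activations
centered = responses - responses.mean(axis=0)
eigvals = np.linalg.eigvalsh(centered.T @ centered / len(centered))

# Three receptor types picking up independent dimensions give a data
# cloud spanning a 3-D space: three nonzero eigenvalues.
n_dims = int(np.sum(eigvals > 1e-9 * eigvals.max()))
print(n_dims)  # 3
```

The eigenvalue count is the "certain number of dimensions" Joscha mentions; with a fourth independent receptor type, the same statistics would yield four.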

Paul King: [00:18:28] Yeah. And that point you make about the orange soccer ball creating a little bit of an orange halo on the grass below it, that we don't notice, would almost seem to be in support of the sensorimotor contingency model, which says that what's significant is that there's a ball there that you can interact with.

And so what you're experiencing has to do with that potential for interaction, and the orange reflected onto the grass is not really relevant to that interaction. And so we don't see it, because the brain is showing us our interaction options, not the pixel values from the retina.

One thing I did want to possibly challenge Joscha on, although maybe the soccer ball example addressed it: the color constancy experiments would seem to create a complication for the view that the three photoreceptor types create color space. Because if you're in a room, they have these monochromatic rooms that are lit by single-frequency light, say a single-frequency orange light.

And what you experience is no color at all. The whole room looks gray, even though the red photoreceptors are being activated. So it seems that, as I guess cognitive scientists would say, what you perceive is not light; what you perceive is paint, paint on surfaces, and the perceived paint color is inferred regardless of the lighting circumstances. So the brain takes the lighting circumstances and makes whatever adjustments, let's say.

Joscha Bach: [00:20:04] I tried to explain that. What you would see, if you have a room that is constantly orange, is that after a certain while, once you are accommodated to the colors in that room, after a few tens of seconds, the neurons in your brain, or in your retina, will have basically forgotten the context of the previous room, and you will no longer notice the orange. What you will notice is that everything in the room now varies only in a single intensity dimension.

So your color perception will be reduced from three or four dimensions to just one dimension. And this is what you just described: the statistical structure of the data has changed, and suddenly the space of colors that you perceive has only a single dimension.
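The collapse Joscha describes can be sketched in the same toy setting: under a single-frequency illuminant, every receptor's response becomes proportional to the surface reflectance at that one wavelength, so the cloud of responses loses all but one dimension. The cone curves and surfaces here are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
wavelengths = np.linspace(400, 700, 31)  # nm

def cone(peak, width=40.0):
    """Hypothetical Gaussian sensitivity curve for one receptor type."""
    return np.exp(-0.5 * ((wavelengths - peak) / width) ** 2)

cones = np.stack([cone(560), cone(530), cone(420)])  # (3, 31)
reflectances = rng.uniform(0, 1, size=(500, 31))     # random surfaces

# Broadband (flat) illuminant: the responses span three dimensions
broad = reflectances @ cones.T
rank_broad = np.linalg.matrix_rank(broad - broad.mean(axis=0))

# Single-frequency "orange" illuminant near 600 nm: only one bin is lit,
# so every receptor response is proportional to reflectance at that bin
mono = np.zeros_like(wavelengths)
mono[np.argmin(np.abs(wavelengths - 600))] = 1.0
narrow = (reflectances * mono) @ cones.T
rank_narrow = np.linalg.matrix_rank(narrow - narrow.mean(axis=0))

print(rank_broad, rank_narrow)  # 3 1
```

The rank of the response cloud drops from three to one: exactly the "only a single intensity dimension" left after adaptation in the orange room.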

Paul King: [00:20:50] Yeah. That makes sense. I think that's right.

So Sanjana's other question was about the relationship between fear and running: is the running the cause of the fear, or is the fear the cause of the running? And Antonio Damasio's view comes to mind there, The Feeling of What Happens: that a lot of our feelings have to do with our body's reactions to what's going on.

Sanjana, I'm not sure I quite got your question there on fear and running. Are you asking what is causing what, in Kevin's view?

Sanjana Singh: [00:21:19] Yes, yes. Within the behaviorist viewpoint, yes.

Kevin O'Regan: [00:21:22] Yeah, so I brought up that example because, you know, the behaviorist says it's the running that causes the fear. And essentially, if you think about what I say about softness: I'm not saying that the softness is causing the squishing, or that the squishing is causing the softness.

I'm saying the softness is the squishing. And in the same way, I'm saying the redness is the sensorimotor laws of redness. So I'm saying that nothing causes anything. If you abandon the notion of qualia, the notion that there is some underlying essence that's causing the pain, for example, then there's no hurt that causes you to jerk your finger out of the fire; it's the jerking of your finger out of the fire, and the attentional capture by the stimulation of the heat, etc., all of which constitutes what it is to have hurt.

Paul King: [00:22:37] So perhaps the impulse to escape is the fear. You know, people talk about facing your fears, literally facing the thing that's causing your fear, as a way of managing fear. So you're suppressing the impulse to run away. Is that why you then have less fear?

Kevin O'Regan: [00:22:55] Yeah, I would agree with what you said.

Fear is a word that combines together a whole host of different types of change in your body. There are motivational changes, there's a cognitive change, and there are perhaps even visceral changes where, you know, you have butterflies in your stomach, etc. All this constitutes what fear is.

And you can act on any one or other of these different aspects and perhaps reduce the global fear experience.

Joscha Bach: [00:23:35] I think that fear is best understood as a configuration that your cognitive system is in: one in which you are focusing on a particular dangerous thing, and the control of your actions flows from this.

There are two different types of fear, you could say. One is basically the blocking fear that you cannot act on. This can be paralyzing and make it very hard to come to a decision, because you are beholden to directing your attention at a particular kind of thing in your environment, but you cannot do anything about it, right?

So you are immobilized. But there is also the other, free-flowing, productive fear, for instance when you drive a car. The thing that makes you not crash the car is fear: that you will always pay enough attention to the road to make sure that you're not crashing. If there is some kind of obstacle coming up in the road, the conversation that you might have with the people driving with you in the car will falter, because you will direct the necessary cognitive resources to the obstacle in the road until it has passed. And it's not that this fear is going to paralyze you in any way, because it's a situation that you feel you can handle, but you will make sure that you do. Even if you have attention deficit like me, you will never focus your attention away from the road for long enough to crash. And that's crucial, right?

It's a very particular thing. I don't have the same fear in a conversation: it might happen in a conversation that I trail off and think about something else, because I don't have that configuration there. So it's a particular way to think about these emotions: as configurations that our mind can be in.

Paul King: [00:25:20] Thanks. Chris, did you have a comment or question?

Chris Vallejos: [00:25:23] Yes, hi, thank you. I have a question about qualia, and I'm trying to understand the theory properly, because I have a question about the feeling of awareness, or why it feels like something when I'm aware that I'm aware. What I'm understanding about qualia and the hard problem is that the sensorimotor theory is giving an answer to the hard problem of consciousness, which is: why does red, or soft, feel like something?

And it sounds like it's these sensorimotor-mediated changes in the sensory inputs that give rise to the feelings. So something is soft because it's squishy; that change, the squishiness, is the feeling. Which implies, if I'm understanding this right, that the sensory input changes explain feeling, or are the feeling.

So how does this theory explain just the feeling of awareness, or of being aware that I'm aware? Because that, to me, and maybe I'm misunderstanding it, is the mystifying hard problem of consciousness. Does it explain that? And also, why did we evolve to have feeling? It seems like we could just label all these colors a certain way and then behave in a certain way, without having the feeling itself.

So those are my two questions. Kevin, thoughts on that?

Kevin O'Regan: [00:27:10] Yes. So yeah, I think you more or less got the main idea, except you used a word which I didn't like when you were describing what the sensorimotor theory says. You said it's the sensory input changes that give rise to the feeling. That would be a mistake.

In my theory, nothing gives rise to the feeling. The sensory changes, and the knowledge that the laws are being obeyed, constitute what we mean when we say we are having the experience. So having the experience of softness is constituted by currently paying attention to the fact that currently, when you press the sponge, it squishes.

Nothing is giving rise to anything; it's just what you mean by softness. So we don't want to use the notion of generation. The brain is not generating anything; nothing is being generated. It's a definition, essentially. Now, you're asking a different question, which is: you say that when you are aware of something, it has a special quality, and you want to know where this quality comes from, what generates it.

And I would just say that you, as a self, are telling yourself a story about what you are currently doing: "I am currently looking at a red patch of color." What you mean by having an experience, by having this awareness, is precisely that: namely, that you as a self exist, and this self is telling itself that it is currently occupied by interacting with a red surface in a way that corresponds to the experience of redness.

So it's a definition. I don't need to explain anything, officially. I'm just saying: if you think about what you mean when you say you have an experience of awareness, well, what you mean by that is that you, as a self, are currently, etc., etc.

Chris Vallejos: [00:29:24] Is it a bit of an illusion or a misrepresentation when I say it that way? Because you said something like "I'm aware of the red surface," but what about when I'm aware of awareness? It feels like that to me, but maybe that's not the correct way to put it. This comes from mindfulness practice. So, am I mischaracterizing it when I say it that way?

Kevin O'Regan: [00:29:48] Right. I think for most of us, when we are aware of things, part of being aware of the bottle as you look at it is also being aware of the fact that you are currently aware of the bottle.

So this kind of meta aspect of awareness is usually present; it kind of disappears and fluctuates. Like when you're in the middle of playing a rugby match, for example, you lose part of this meta level of awareness. I think there's this recursive nature of the self, one's notion of the self being a recursive kind of self, a story that's telling itself, this infinite loop. I think Douglas Hofstadter talked about this as well, but I don't think there's any mystery involved.

You know, it sounds a bit mysterious and mystical, but there are lots of recursive things in nature: self-sustaining phenomena, dynamical phenomena that are self-sustaining like hurricanes, or even life itself, which is a self-sustaining phenomenon. Living things create an environment around them which allows them to continue to live.

So this recursive aspect of dynamical systems is not a mystery, and the fact that the notion of self can be recursive in this way need not be considered a mystery.

Paul King: People talk about the flow state, this moment when one is so engaged in a concentrated activity that some people would describe it as a type of hyperconsciousness. But it's also characterized by kind of forgetting where you are; you're sort of lost in the moment. So how would you make sense of that? Is it hyper-conscious, or is it less conscious, because there isn't the meta-awareness of existence?

Kevin O'Regan: Well, really my theory, the sensorimotor theory, is about phenomenal content. So I don't have a theory about the self. I'm going to leave that to the people who work on the self; I haven't thought about it all that much.

Rowshanak Hashemiyoon: [00:31:58] Isn't that just a matter of going back to the idea of attention and focus? Isn't that just a sensorimotor override of that meta-awareness? Depending on how much time you have, if you bring time into the equation, a little bit of physics, there is this urgency that the brain recognizes, and there's a sensorimotor override.

And so attention, which leads the mechanisms of consciousness, sort of overlaps here, which is a little bit of what I was asking about earlier. Is that a possibility?

Paul King: [00:32:29] It seems potentially reasonable to me. Chris, you wanted to elaborate?

Chris Vallejos: Well, the second part of my question was about the evolutionary purpose of this feeling, or qualia. And now I'm wondering, based on Kevin's clarification that it doesn't give rise to feeling, it is feeling, it is what we mean by feeling: could I take that to mean that we didn't evolve to have feeling, we just evolved to be able to sense these differences, and that sensing of differences is what we evolved?

Chris Vallejos: [00:33:08] And it happens to be that we call that feeling, but it's not that we evolved feeling for some evolutionary advantage in and of itself.

Kevin O'Regan: [00:33:16] Exactly. That's exactly what I'd say.

Chris Vallejos: [00:33:21] Okay. And then I could extend it to the feeling of awareness of awareness, perhaps. There's no purpose; it just is there, and it's not as mystical as we make it out to seem.

Joscha Bach: [00:33:33] I think that's possibly best understood as a simulation, or as a simulacrum. The reason why we perceive ourselves in this particular way, and perceive the state of our attention in this particular way, is because it's the best model that can be made by the brain at this moment.

It's a model that is describing what's going on, so that it can be controlled. And when we find a better model, this one might disappear and be replaced by something else. What is a little bit confusing is what is forming the model and what is observing it. I think the notion of the homunculus that Kevin brought up might be a little bit confusing.

Yes, it might be true that many people have the wrong notion about what is perceiving there, and put a full human mind in there on the receiving side. But what most people think is not that important. The question is: can we understand what the mind is doing without introducing the notion of models?

And I think that the notion of models is essential. There are several ideas we have in order to characterize models. Based on simple formal logic, a model is basically a mapping of values to variables. Or you could say that a model is a set of invariances, constraints between parameters in the model, where the parameters are sets of possible values that can be taken at certain locations in the model.

You can also define a model with respect to its control purpose. There's the good regulator theorem by Conant and Ashby, from 1970, which says that every system that controls another system needs to implement a model of the dynamics of the system it controls; otherwise the control cannot work.
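The good regulator idea Joscha cites can be illustrated with a toy example; the linear system, the deadbeat control law, and the numbers here are all made up for the sketch. The point is that the controller regulates the system only when its internal parameters, its model, match the true dynamics.

```python
def simulate(a, b, a_model, b_model, x0=1.0, steps=20):
    """Run a deadbeat controller on the scalar system x' = a*x + b*u.

    The control law u = -(a_model / b_model) * x embeds a model
    (a_model, b_model) of the true dynamics (a, b)."""
    x = x0
    for _ in range(steps):
        u = -(a_model / b_model) * x   # control computed from the model
        x = a * x + b * u              # true system update
    return x

# Correct model: the unstable state (a = 1.5 > 1) is driven to zero.
print(simulate(a=1.5, b=0.5, a_model=1.5, b_model=0.5))  # 0.0
# Wrong model: the control only cancels part of the growth,
# and the state is never regulated to zero.
print(simulate(a=1.5, b=0.5, a_model=0.5, b_model=0.5))  # 1.0
```

With the correct model the closed loop is x' = (a - a_model)·x = 0; with the wrong one the residual dynamics x' = 1.0·x keeps the state at its initial value forever, so regulation fails.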

So I don't think that we can get around the notion of models, and in some contexts we can replace it by encoding or coding, as Kevin said, but not in all of them. And the thing that interprets the model is some kind of mechanism; it is not necessarily a human being that is interpreting it. And I suspect that introducing the homunculus in the first place is not resolving anything, because we don't know what that thing is in the first place, right?

It is a meaningless term unless you are able to construct it from the ground up, and that is, I think, what we have to do to understand it. It's also the reason why I am so dissatisfied with Kevin's notions of grabbiness and insubordinateness and so on: they presuppose too much. They're not wrong.

Once you have constructed your own self, and you observe what the self is doing and how it relates to its attention and its perception, in this sense it's correct. But I think that these notions can only become intelligible once we deconstruct the self and once we deconstruct attention. And we can do that.

Paul King: [00:36:41] Let's keep moving to additional questions, Natesh?

Natesh Ganesh: [00:36:43] Yeah, just to clarify something from when Kevin was answering Chris's last question: was the answer that the experience, the feeling itself, did not serve any evolutionary purpose? Maybe I misheard that. And the actual question I wanted to follow up on relates to the thing I asked right before: is it right to think that the redness of red, or the softness of something, adds extra information to the information gained through actual perception? I'll use "information" in quotes here. On top of viewing something, say a teddy bear, you know it looks like a teddy bear; then you touch it, and when you squish it, the softness adds extra information. Kevin phrases it in terms of knowledge of the laws, but in general it can be viewed as information overall.

And the second thing is maybe just clarifying the title of the book, Why Red Doesn't Sound Like a Bell. Perhaps Kevin was trying to say that each of these different phenomena, the redness of red and the sound of a bell, has a different, quote unquote, law, in my terms providing different information to the brain. They're not the same type of information, and they serve different kinds of purposes.

And that's why they're different: because clearly, for some people, colors could sound like a bell; there's nothing wrong with that. And I'm wondering if Kevin is saying that, under his framework of consciousness, colors cannot sound like a bell, period.

Maybe he can clarify that. Or maybe it was purely to say that because these two phenomena have different laws, or provide different information, they're just distinct in that sense. That's it. Thank you.

Paul King: [00:39:03] Kevin, do you want to start with synesthesia, and then say something about whether there is an evolutionary purpose or functional role of phenomenology?

Kevin O'Regan: [00:39:11] All right. So yeah, you're right in saying that I'm claiming that feel has no evolutionary purpose.

It's just a word that we use to describe a certain way of interacting with the world. There's no purpose; it's just the word we use. And with regard to redness, remind me what the question was? Sorry.

Paul King: [00:39:34] Something about synesthesia, Natesh? 

Natesh Ganesh: [00:39:36] Oh, yes. I was referring to the name of Kevin's book, Why Red Doesn't Sound Like a Bell.

And I was trying to ask whether Kevin's point with the name of the book was that these two essentially obey different, quote unquote, laws, and hence are not the same thing, and not that his framework of consciousness would not allow someone to see red and also hear the sound of a bell, because for synesthetes that can happen.

Kevin O'Regan: [00:40:09] So I think I understood your question. Let's look at it for softness. If what we mean by softness is the fact that when you press it, it squishes, then it's just inconceivable that when you press and it doesn't squish, you should feel softness, because by definition what you mean by softness is that when you press, it squishes.

So you couldn't have a kind of spectrum in version with softness it's inconceivable, and the same is true for red. If what you mean by red is the fact that when you move the red surface around under different lights, it obeys a certain law, which is what we studied. upon our   we've shown in our papers.

Okay. So redness really is a particular law it's actually described by a three by three matrix. So this three by three matrix describes the way the light coming into your eye changes when you move red surfaces around. Okay. That's what we mean by red. So it is not possible for anything to look red, unless it obeys that law right now, let's take synesthesia.

Synesthesia is a feeling of seeing redness when you look at, say, the letter A. Now, in fact, it turns out that for synesthetes, the feeling they have of redness when they look at the letter A does not have the same degree of perceptual presence that really seeing red has; it's more like imagining red than actually seeing red.

Anil Seth actually has an article about synesthesia and its relation to sensorimotor theory. It's a very nice article that really combines sensorimotor theory and predictive processing theories together, and gives a very convincing account of how one could get synesthesia, and of what the actual phenomenal experience is of these concurrent colors that you get when you look at letters.
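Kevin's claim that redness is a law described by a three-by-three matrix can be sketched in code. The matrix and cone-excitation numbers below are invented for illustration and are not taken from O'Regan's papers; the point is only the shape of the idea: a surface's color is identified with the linear transformation its reflected light undergoes when the illuminant changes.

```python
# A minimal sketch, with invented numbers: "red" names the class of
# surfaces whose reflected light transforms in a characteristic way
# (a 3x3 matrix over L, M, S cone excitations) across illuminants.

def matvec(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

# Cone excitations from a red surface under daylight (invented values).
red_under_daylight = [0.8, 0.3, 0.1]

# The surface's "law": how its cone excitations change between two
# illuminants (again, invented values for illustration).
M_red = [
    [0.9, 0.1, 0.0],
    [0.1, 0.7, 0.1],
    [0.0, 0.1, 0.6],
]

# Predicted excitations when the same surface is viewed under the other light.
red_under_tungsten = matvec(M_red, red_under_daylight)
print([round(x, 2) for x in red_under_tungsten])  # [0.75, 0.3, 0.09]
```

Under this reading, two surfaces count as "the same red" when their matrices are approximately the same, regardless of the raw excitations under any single light.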

Joscha Bach: [00:42:20] I think that feelings are basically continuous representations, mostly geometric representations, at the interface between the perceptual mind and the symbolic analytic mind. So we basically have this mind that is constructing low-dimensional, mostly discrete representations using mental languages or conceptual abstractions, and this perceptual mind that is more or less continuous and somewhat inscrutable.

And that perceptual mind is analyzing the world in similar ways as a neural network would, which means usually in parallel and with a lot of recurrence. It's very hard to generate abstract knowledge about how this works; we can get there, but it's slow and tedious, and sometimes it doesn't work. So this interface between the symbolic way of making sense of the world and the other one needs to translate. How do you talk to the symbolic part of your mind if you don't have a symbolic language? You use multi-dimensional continuous features. And I think that colors are, mathematically speaking, polar coordinate representations, which means they're characterized by angles and distances from an origin in certain spaces; these basically amount to color spaces, and you can map them to the same concepts.

And when you are training your own mind to perform mathematics using perception, you can use these color spaces, perceive them as an input, as a feature of certain calculations. And I know some people who are very good at intuitive mathematics; they basically look at numbers and see colors in them, and these colors are indicative of the way in which they should solve certain equations.

Right? So it seems to be a mapping that the mind establishes between the geometry of visual perception and the geometrical calculations that you are performing on real numbers.
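Joscha's description of colors as polar coordinate representations ("angles and distances from an origin") matches how the standard HSV color space works: hue is an angle around a color circle, and saturation a radial distance from the gray axis. A minimal illustration with Python's built-in colorsys module:

```python
import colorsys

# Hue comes back as a fraction of a full turn; multiplying by 360 gives
# the color's angle on the hue circle. Saturation is the radial distance
# from the achromatic (gray) axis.
for name, rgb in [("red", (1.0, 0.0, 0.0)),
                  ("green", (0.0, 1.0, 0.0)),
                  ("blue", (0.0, 0.0, 1.0))]:
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    print(name, round(h * 360.0, 1), s)  # red 0.0, green 120.0, blue 240.0
```

So a fully saturated color sits at distance 1.0 from the gray axis, and its identity is carried by the angle: the polar coordinates Joscha is pointing at.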

Paul King: [00:44:29] Now, Kevin, it sounds like, regarding the evolutionary question, you think... David Chalmers talks about philosophical zombies that act and do all the things, but don't actually have phenomenology, and it's an open question whether such a thing could actually exist.

But the addition of phenomenology, the addition of subjective experience, of red feeling like something: you would say that is not conferring any evolutionary survival value? Or do you have a view on that?

Kevin O'Regan: [00:45:12] Yeah, I agree. There's no... well, there's evolutionary survival value in all the capacities that humans have, as compared to animals, for example, in their environment. And the fact is that we call certain conglomerates of those capacities consciousness, or we call them perception, or fear, or whatever. But taken together, there's evolutionary value in the things we can do, otherwise we wouldn't be here. I just don't think consciousness is one particular thing that emerges, or is created or generated, and suddenly provides additional value.

Paul King: [00:45:44] Great. And we have a few more questions.

Natesh Ganesh: [00:45:47] Paul, since both Kevin and Joscha believe in artificial consciousness, I just thought I'd throw out a quick question on those areas, really short. It seems to me that Kevin would take more of an embodied view. So with respect to artificial consciousness, I wonder whether he thinks that a system that can achieve artificial consciousness is required to be embodied, from a sensorimotor framework.

And I'm also curious whether Joscha assumes the same, along the same idea, or whether he thinks that an artificial conscious system does not need to be embodied. Thank you.

Paul King: [00:46:32] So on embodiment quickly, and then we'll go to the other questions.

Kevin O'Regan: [00:46:37] So, as I say, consciousness has different aspects, right?

There's the access aspect, conscious access; there's the self aspect; and then there's the phenomenal aspect. Now, a machine that does not have a body could clearly pay attention to the things it's doing. Imagine a machine that interacts with other machines on the web. It has no body, but it can perhaps modify web pages at distant locations.

So the actions it can perform are not embodied, but they're nevertheless actions, and it can have perceptual processes in the sense that it can take account of what's going on on these websites. So one could no doubt talk about the notion of self in such a machine. If there were a society of such machines, you could imagine that they were interacting with each other, discussing and talking to each other about things, so they could evolve a notion of self that would be useful for them to communicate with.

So I can perfectly well imagine that they would have both the access consciousness aspect and the self aspect. Now, with regard to the feel aspect, the phenomenal aspect: in order to have feels like humans, they would have to have bodies at least quite similar to humans'. Otherwise they wouldn't have the same kinds of feels. But if you counted actions on the web as actions, just like my action when I move my arm, then you could also perhaps talk about sensorimotor contingencies in terms of interactions with websites, actions on websites, and maybe it could have different kinds of feels. So its consciousness would be slightly different from that of humans, but you could probably use the word "conscious" equally well for such a machine.

And if you made a machine that had a body, not like humans', but perhaps made out of metal and what have you, which is just different, well, such machines could perhaps be perfectly conscious in the sense of access consciousness and in the sense of the self, but perhaps the feels they would have would be somewhat different from the feels that humans have.

Just like, presumably, dogs have feels which are very different, because maybe their olfactory sense is a thousand times better than humans'. You know, we will never know the world of the bat and what it is like to be a bat, and the same can be said about dogs, what it would be like to be a dog. No doubt dogs have feels, but they'll be very different from those of humans.

Paul King: [00:49:18] And one could argue that the idea of a body is simply an abstraction of the interface between an agent and its environment. The agent is able to emit certain actions, and those have consequences that come back from the environment.

And the agent's understanding of its own repertoire of potential for action and potential for sensory input is what the body is. The brain doesn't know what's attached to it; it figures that out through, I guess we could argue, sensorimotor interaction. Joscha?

Kevin O'Regan: [00:49:49] Yeah, I'd say exactly that. In fact, with my colleague Alexander Terekhov, who was here earlier on (I think he's gone now), we've written a paper showing how an agent, an artificial agent, by interacting with the outside world, could deduce the existence of outside physical space, show its group-theoretic properties and its three-dimensionality, and also show the existence of its own body.
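The flavor of this kind of result can be suggested with a toy sketch. This is not Terekhov and O'Regan's algorithm, only an illustration of the idea: an agent that knows nothing about its world can still discover group structure, here which motor commands invert each other, purely by probing and watching its own sensors. The four-position cyclic world and one-hot sensor are invented for the example.

```python
# Hidden world state: the agent never reads this directly.
position = 0

def act(cmd):
    """Apply an opaque motor command; the agent only sees sensor readings."""
    global position
    position = (position + cmd) % 4  # hidden 1-D cyclic world
    return sense()

def sense():
    # Sensor: a one-hot reading of the hidden position.
    return tuple(1 if i == position else 0 for i in range(4))

# The agent probes pairs of commands and records which pairs restore the
# initial sensory state: candidate (action, inverse) pairs.
commands = [1, 2, 3]
inverses = []
for a in commands:
    for b in commands:
        start = sense()
        act(a)
        after = act(b)
        if after == start:
            inverses.append((a, b))
        else:
            act((-a - b) % 4)  # reset the hidden state (demo harness only)

print(inverses)  # [(1, 3), (2, 2), (3, 1)]
```

The agent never sees `position`; it recovers the inverse pairs, the beginnings of a group structure, from sensorimotor regularities alone.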

Paul King: [00:50:18] Joscha, embodiment, and artificial consciousness,

Joscha Bach: [00:50:22] I would like to chime in on the notion that dogs have very different feelings from us. I don't find that to be plausible. It seems to me that dogs are basically stupid people: they have a slightly smaller brain and a much, much shorter childhood, so they don't get to the same degrees of abstraction as we do.

But our feelings are mostly not characterized by our ability to abstract. Dogs might have different feelings about some things; maybe dogs don't have feelings about the development of the stock market or about the American government, because they cannot create the necessary representations of these abstract circumstances.

But they have similar social emotions as people have, and they have similar reactions to pain. And to some degree they have a similar self model, I think. Based on observing dogs and interacting with dogs, it seems to be the simplest explanation of what they're doing: that the dimensions in which they conceptualize their own control are very similar to our own.

Paul King: [00:51:27] And is embodiment required for artificial consciousness, do you think?

Joscha Bach: [00:51:33] It depends on how you define body. I think that in order to have a way in which you discover your own agency, you will need to be able to perform operations where you can observe the outcome. But I don't see why a mind could not be conscious if it is, for instance, in a virtual world.

So imagine that you are spending your entire life in Minecraft, right from birth, directly coupled to a VR that puts Minecraft data into your brain, and all the constraints that you observe are those that are observable in Minecraft. I suspect that this might be sufficient to become conscious of something, even with a brain like a human brain, which is particularly tuned to the way in which we interact with a very particular environment and is mostly evolved to control the movement of our skeletal muscles. And you could take the algorithm that runs Minecraft and encode it into a small part of the neural tissue, basically hardcoding it in there and then implanting it into your brain, so that you have a brain area encoding the constraints of Minecraft, just as a thought experiment, and you interact with that one in a perpetual dream of Minecraft. And if you say that this means you have a body in this thing that exists entirely in your brain, if you're fine with calling this a body, then I would say yes; in this sense I agree with the hypothesis

that embodiment is the crucial part. What I don't buy is the notion that you need a physical blob of matter that is situated in a very particular way in a very particular physical universe. I don't buy that at all. That may be true for human beings, because we are attuned to this by evolution, but not for minds in general.

Paul King: [00:53:23] And I think Kevin would agree with that.

Kevin O'Regan: [00:53:26] Yes, I would.

Paul King: [00:53:28] Well, one thing that seems interesting is how to think about automatic action. When driving a car on the highway, people talk about road hypnosis: you're sort of lost in thought and not necessarily that aware of your environment, but you're having sensorimotor interactions with your environment; you're keeping the car on the road. Kevin, any thoughts on what your view would be on the perceptual consciousness of these situations where actions are automatic, like picking up a coffee mug?

Kevin O'Regan: [00:53:53] It's really true that most of the time you have very few qualia, really, because you're doing a task. You're driving your car, and maybe you're paying attention to the truck you're passing, but you're not paying attention to the feeling of the pedal that you're pressing with your foot to do that very driving.

And you're not paying attention to the scratchiness of the clothes you're wearing, or what have you. Really, the focus of current experience is determined by the focus of your current attentional span, and it's incredibly limited. In fact, attention is incredibly limited, and that's what's shown by all these change blindness demonstrations and the inattentional blindness experiments.

Right? So yes, I think that links into what you were saying, Paul, namely that one's experience is very strongly determined by what one is attending to, and one's attention is very, very limited.

Questioner: [00:54:57] Hello everyone. I actually have two short questions. One of them is about other theories of consciousness, such as the integrated information theory of Tononi and global workspace. I was wondering about the relationship of Kevin's theory of consciousness to those.

And finally, I also wanted to know how this sensorimotor theory of consciousness treats the economy of attention. How would one be able to increase one's consciousness in this theory?

Paul King: [00:55:36] So relationship to global workspace or IIT?

Kevin O'Regan: [00:55:40] So, as I said, global workspace I think is a good theory about the functional properties of cognitive access. It tells you under what circumstances one is going to be able to report a stimulation, or under what circumstances it's going to influence your future behavior, etc. But IIT, I think, is a theory that confuses different notions of consciousness. It starts from the hypothesis that there is such a thing as an essence of consciousness, and it equates that essence of consciousness with integrated information. And that's just an arbitrary attribution.

And recently I read an interesting article by Merker, Williford and Rudrauf in BBS, where they did an analysis of what kinds of systems would have sufficient integrated information to count as conscious, and it said that power grids, gene expression networks, and social networks, among others, would also be considered conscious.

So for me, integrated information theory is not a good theory. It just presupposes that there is such a thing as consciousness, and it doesn't even tell us what consciousness is, what it's trying to explain. It just arbitrarily says: what I mean by consciousness is something that has integrated information.

And if that's what they mean, well then power grids and gene expression networks are also conscious.

Paul King: [00:57:28] Well, IIT, I think, would probably say those have such a low level of consciousness that it's not very interesting. But it certainly has nothing to say about phenomenology that I can tell.

Joscha Bach: [00:57:39] Oh, that does not really cut the world at its joints. I think if somebody says that everything in the world is conscious, you still need to make a distinction between conscious and unconscious people, right? If you say that everything in the universe is alive, it's a meaningless statement, because you also want to have a difference between living and dead people, right?

So if you say that everything in the universe is conscious and alive, except for dead and unconscious people, well, I think it just means that you have redefined the terms into something meaningless. You want to distinguish, for instance, the person that is a sleepwalker, which means they're probably not a person in that state.

And that, I think, would be equivalent to the philosophical zombie, as opposed to somebody who creates a self report, the ability to report to the outside a model of what they are doing at any given moment; that is what is required for sentience. And I don't think that for consciousness you need to go beyond the ability to create such a consistent multimedia story of a person in the universe, and to report and act on that story.

I don't think that it goes much beyond that. I think it's virtual; it's inside of that dream. And so this notion of philosophical zombies doesn't make sense to me. I agree with Kevin that the unsatisfying part of it is that it takes the phenomenology of consciousness almost as given, but it does not provide, at least from my understanding of what I've read, a good understanding of what consciousness actually is and how it works.

Paul King: [00:59:16] Yeah. To me, IIT might be necessary for consciousness. If you want a distributed system like the brain to be able to have all the different brain areas work together to create a conscious experience, you need to have some kind of integrated information, or communication between these brain areas, that's constructive.

So it does seem like it might be a necessary foundation for consciousness, but it's not going to tell you anything about consciousness itself; it just tells you about a functioning brain. It doesn't tell you about phenomenology, or being present, or experiencing the world, or having subjective experience.

It doesn't seem to address any of that, from what I can see.

Joscha Bach: [00:59:48] When IIT says it's an information-theoretic theory that is anti-functionalist, this is preposterous.

Natesh Ganesh: [00:59:56] I mean, we should get Christof Koch or Tononi on here and they should be able to defend it. I actually agree with Joscha; I don't think IIT is going anywhere, even with respect to some of the easy problems. I think there are a lot of issues there. I still don't understand why Scott Aaronson's critique of it in 2015 (it's almost seven years now) wasn't the end of it. I really don't get it. So I don't know how it keeps coming back. But, you know, I don't expect Paul to defend it.

Paul King: [01:00:26] Well, Christof Koch is a pretty respectable neuroscientist and he's climbed on board, so I think that's given it quite a bit of life.

Rowshanak Hashemiyoon: [01:00:32] Also, as I said in last week's episode, there is right now this competition between global workspace theory and integrated information theory. So probably in the next three or four years, with this $5 million grant behind the competition between them, we'll know the answer.

Natesh Ganesh: [01:00:51] But will we really know the answer? Because my understanding was that there were a lot of people who welcomed the adversarial collaboration, and I'm one of them, but it seems like the experiments are going to be such that even if the experiment lands one way or the other, neither side is going to give up working on their theories.

They're just going to, you know, change them to account for the new piece of information, and we're going to be kind of right where we started, aren't we?

Paul King: [01:01:17] Well, if Thomas Kuhn is right, everyone who's got some skin in the game with their theory will keep adapting for as long as they can keep it alive.

And eventually, you know, the new generations pick the theories that work and abandon the ones that don't. There's this documentary that came out last month on Henry Markram and the Blue Brain Project, a somewhat critical documentary that describes it as somewhat of a boondoggle. Christof Koch is interviewed in there, and the way he describes Markram is, I think, similar, Joscha, to how you describe Koch: someone who is well intentioned but sometimes just gets a little bit off base on some things.

So anyway, it's interesting; for those who haven't seen it, that documentary is worth checking out.

Chris Vallejos: [01:02:04] As someone who would like to see, shall I call it, more spirited debates between Paul and Joscha, I would plus-one the idea of a facilitator in the future, so that we could unleash Paul and yourself and see the debates. I always enjoy those.

Rowshanak Hashemiyoon: [01:02:23] Well Joscha has been unleashed, we just need to unleash Paul.

Paul King: [01:02:28] I'll set up a Joscha time, and we'll either have a take-down of Joscha's theory or he can take down mine.

Joscha Bach: [01:02:34] For instance, I think it would be very interesting to facilitate a conversation between Kevin and me about the things where we disagree, so we can put the things we most disagree on in the highest possible contrast, because I think that would be most elucidating to the audience: not about who's right, but about the disagreement between different perspectives and the potential to learn something from it and build your own theories.

I was at a conference where there was a session that deeply impressed me. The topic of that panel was: what would convince me that my theory was wrong? It was a conversation between John Anderson and a number of other important luminaries in cognitive science, who all had deep paradigmatic thoughts about how the mind works. And the question they discussed was what kind of experiment would clinch it, so that I would give up my paradigm instead of just repairing it, or shifting a little bit to accommodate the new insights and extending a little. And what we discovered is that it's very rare that people ever give up their paradigms. What you'll find is that you can, from the outside, to some degree measure how much insight is being generated by an ongoing paradigm.

So: how productive or fruitful is it to think in these terms? But once you're in it, it's very hard to understand, or see, or find out whether your paradigm is unproductive because you've learned everything there is to learn about the world, or because there are better paradigms out there that you should be adopting.

And I think that's the reason why we sometimes say that progress in science happens one funeral at a time.

Sky Nelson-Isaacs: [01:04:15] I have a question which might distinguish between Kevin's and Joscha's views.

Paul King: [01:04:20] Sky, yeah, go ahead with your comment.

Sky Nelson-Isaacs: [01:04:22] Thank you so much. Yeah, this is a question from my work studying time and space from a quantum foundations perspective. There's a perspective called the relational theory of measurement, which is that all measurements are relative to other observers, to other elements; they are correlations between things, but there's no objective nature to the measurement.

And what this supports, I think (and this is work that I go into in my own writing, in a book that I wrote this year), is the virtual-like reality that we could live in: the notion that the histories of objects in the world are actually not fixed,

and that outside of the perception of objects there is not a definite state for an object. So this is somewhat related, metaphorically, to object permanence in babies. When an object is not visible to a baby, they forget that it exists. So the question is: what is the state of something when it's not consciously represented in the brain, or when it's not being observed?

And I think what this points to is the possibility of something like a virtual game where you're playing with people around the world, but there's no single central server running the game. Each of us has our own server on our own computer that's rendering the image of the game for us, and these are connected to maintain consistency. But it removes the sense of objectivity about what we experience and makes it subjective and mutually consistent.

So I think, from what I've heard of Joscha's perspective, he might agree with this, and Kevin might disagree. Kevin might say that perception is objective and it defines reality, but I would say maybe my perception can be relational and can define reality in that way.

Is this question well posed for the two of you?

Rowshanak Hashemiyoon: [01:06:17] Wait, can I add something to that, please?

Paul King: [01:06:20] Go ahead.

Rowshanak Hashemiyoon: [01:06:21] It's the idea of neglect. When you think about it, like the experience of the human body and its position in space: for people who have some neurological damage, the way they classify their own bodily perception changes, right?

If you look at painters with neglect and the way they represent themselves, you know, parts of them aren't there anymore; they're not clearly defined. So I think that's sort of an extension of what Sky is saying: the idea of neglect in the body and the brain.

Paul King: [01:06:50] Yeah. So, Kevin, do you have any comments on that?

Kevin O'Regan: [01:06:55] Yeah, I was very interested in what Sky had to say, because in fact I think there is no reality, no real reality. We can't know what's out there. Our brains sit inside the bony cavity there in the dark, getting millions of inputs and sending out millions of outputs.

We don't know what's really out there. All we can do is deduce, from the sensorimotor invariants in the interactions we have, which we get along these nerve fibers, millions of inputs and outputs, the existence of some external space, which we suppose is reality.

I mean, I used to be a physicist, right? I started my career as a physicist, and I'd be very interested in a physicist having a look at my papers with Alexander Terekhov, where we show how an agent, merely by looking at the sensorimotor structure of its inputs and outputs, can deduce the notion of outside three-dimensional space, with the group-theoretic structure of that three-dimensional space.

And this kind of approach suggests that actually, outside in the world, there may not be any three-dimensional space; three-dimensional space and time may just be a construction of the human mind, with its particular limitations. So I disagree with his characterization of the sensorimotor theory.

On the contrary, I don't think there is necessarily any reality which looks like the way we think it looks. The fact that human beings are probably constructed similarly (we have similar anatomy and similar brains) means that we construct similar realities. But whether that is the true reality, we can't tell. Although perhaps, as physicists, we can deduce that we actually live in eleven dimensions and not four.

Joscha Bach: [01:08:52] I think there can be no continuous space outside of us. As far as I can tell, the languages that we would need to describe such a continuous space would run into contradictions. It means that if you talk about continuity, you can only do so in a self-contradictory language, which means that you cannot actually talk about it.

Right? You can talk about it in the limit. And in the limit, geometry is the dynamics of too many parts to count. And it turns out that the space of the electromagnetic interactions, where we have light and touch and so on, is a space that is, in the limit, three-dimensional at the level at which we are entangled with it. But this doesn't mean that there is stuff in space; it just means it's a certain model that we can construct, at a certain level of resolution, over our sensory data.

And I suspect that this is objectively the case: that we can construct such a model at this level of abstraction. This doesn't mean that we know what physics really is like, but we can discover the space of languages in which we can describe physics, and we can discover whether our models are predictive of the sensory data that we get.

And we need to put in a few little assumptions, like the fact that our mind is somewhat reliable, that we can access our own memories, and that the representations we construct over the world are indeed representations of the world in which we are. But what we can probably do is discover the entire space in which models can be formulated in the first place.

And that limits our ability, and any system's ability, to make models of reality, and that gives us an opportunity to construct models of reality from first principles. So we don't just have empiricism; we also have a rationalist foundation for our empiricism, and I think this gives hope to the epistemological project.

Sky Nelson-Isaacs: [01:11:01] So a picture that comes out of my work is the familiar image of a hologram, where you have a piece of film. If you look at a hologram in daylight, all you see is pretty much a dark sheet of exposed film with a lot of interference patterns that are pretty microscopic, so you can't really see anything there that's meaningful.

It's only when you shine a laser through it that an image appears out of that film. And that's both a metaphor for and an exact correlation with the mathematics that falls out of, I think, quantum mechanics and possibly quantum field theory, where every time you shine the laser, there's an event that you can observe.

That's the image of the thing that you see in the hologram, which is like the real thing in life that you experience; that's your model of reality, Joscha. But then when you turn off the laser, in between those observations, there's just this blank film. That is the background. So that's one way to picture it.

Paul King: [01:11:56] Sky, so one of the things you also mentioned is object permanence. In the infant, it really is out of sight, out of mind, and that feels a lot like the idea of the world as an outside memory. If it's not there, it's not there.

You can't have sensorimotor interactions with it; there's nothing to see. Now, once object permanence emerges, when the object is passed behind the back of the parent, the child knows that it's still there and still has a relationship to it, even though it's not in its sensory experience. Kevin, what would you say is going on there?

What's the change that happens when object permanence starts to become a thing? Is there a representation in the brain? Is this a new type of contingency?

 Kevin O'Regan: [01:12:36] Yeah. What happens with the baby when it gets object permanence?  Well, I guess it just realizes that, that  it can better account for its universe by assuming that his brain learns that, that, that it can expect the object to reappear if it looks behind the back, you know, so it's just a form of learning  which it didn't, which it didn't have initially, which is, a form of learning of knowledge that it didn't have before to know about it.

And now it has learnt it. So I don't think anything magical is happening. It's just another aspect of its learning about the world.

Paul King: [01:13:22] So a new type of higher-level contingency, perhaps: that the action of looking behind, or pulling the hand out, is going to produce the object.

Kevin O'Regan: [01:13:32] Exactly.

Joscha Bach: [01:13:34] There's an invariance that you discover in the variance of the data. It basically means that you can predict that when you look away and look again in this direction, the object is going to still be there. Right. So in terms of an energy function that would describe these models that you're making, this model requires less energy, not in the physical sense that your brain will consume more or less energy, but in the sense that when you describe the entropy of the data, you will have a model that compresses better, that is more predictive with fewer representational units.

And when you introduce object permanence... and I fully agree with Kevin, I think.
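[Editor's note: Joscha's point that a model with object permanence "compresses better" can be sketched in code. The toy observation sequence, probabilities, and code-length function below are illustrative assumptions added in editing, not from the conversation; the idea is just that a model predicting the object's reappearance pays fewer bits to encode the same data.]

```python
import math

# Editor's toy sketch (illustrative assumptions, not from the conversation):
# an object is seen, occluded, then seen again. Each entry is the
# probability a model assigned to what actually happened at that moment.
# Model A lacks object permanence: reappearance after occlusion is a
# coin flip (p = 0.5). Model B predicts persistence (p = 0.99 throughout).
model_a = [0.99, 0.99, 0.99, 0.99, 0.99, 0.99, 0.5, 0.99, 0.99]
model_b = [0.99] * 9

def code_length_bits(probs):
    """Bits needed to encode the observations under the model (Shannon)."""
    return sum(-math.log2(p) for p in probs)

bits_a = code_length_bits(model_a)
bits_b = code_length_bits(model_b)

# The permanence model is more predictive, so it compresses better:
assert bits_b < bits_a
```

Longer occlusions or more objects only widen the gap: every surprise costs bits, and object permanence removes the surprise of reappearance.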

Kevin O'Regan: [01:14:15] I'd say it a little bit differently. I'd say essentially, before the child learns that the object reappears behind the back, the child doesn't actually have a notion of object at all. Because what do we mean, after all, by an object?

What we mean by an object is a thing that is invariant under various transformations, among them going behind the back. So I think the child is gradually abstracting the notion of object from all the sensorimotor laws that it has observed about what happens when you push things around; it has then actually extracted the notion of a thing.

Joscha Bach: [01:14:58] The object is what remains constant when the perspective changes in all possible ways.

Kevin O'Regan: [01:15:04] Right. Right. Something like that.

Paul King: [01:15:06] Well, if you look at coin magic, making coins appear and disappear, which magicians do with a lot of sleight of hand: the coin goes into one hand, and then it closes around the coin. You can't see it, but you know it's in a hand, and then the magician might pass it to the other hand. So you're tracking where that coin is even though you're not seeing it, and you really feel like you know where it is. So that object has an identity in your projection of the environment, could we say? Or is that relying too much on representation? Or is maybe this new type of higher-level contingency no different than what we could call a model of the environment, just two ways of talking about the same thing?

Kevin O'Regan: [01:15:45] Yeah, I would say I don't want to completely reject the notion. I mean, as you say, if you want to call the implicit knowledge that the coin is in this hand or that hand a representation, fine. I think it doesn't mean that you have a little picture in your mind of the coin inside the hand. It just means you know that it's there; knowledge doesn't have to actually have the pixels corresponding to the position of the coin in the hand.

Paul King: [01:16:17] I see  Aniket, you've been waiting a long time.  Thank you for waiting.  Did you have a comment or question?

Aniket Tekawade: [01:16:25] Thank you so much for allowing me to ask a question. My questions have evolved through this discussion, and some of them have already been answered, but here's what I want to ask, specifically Joscha, and maybe the rest of you.

My background is in computational imaging, and so I often find myself asking the question: what is the right representation of image data to feed into a neural net? I've noticed that a lot of work has been done on image-based perception using conv nets and these things.

But I see that, as you pointed out, Joscha, the next step would be to look at images as 3D models of the world. So it seems like we have to get into the world of computational geometry. I was just wondering if you could shed some more light on what more research needs to be done to make perception, or computer vision, possible through computational geometry models.

One thought I just had, triggered by our discussion, was in terms of object permanence. If you were to think about it in terms of computational geometry, you are creating a 3D model of something in your brain, and then when you close your eyes, you are probably compressing it into a latent representation to store it, so that you can imagine it with your eyes closed. But when you open your eyes again, you may not actually see the same object the way it was, because there might be some compression loss, right? So I was just wondering if my ideas make sense, and if they do, what more research needs to be done in computational geometry.

Paul King: [01:18:33] Well, one argument that I would make, and then I'll turn it over to Kevin and Joscha, is: consider how bad most people are at three-dimensional geometry.

I mean, people really struggle with organic chemistry and making sense of three-dimensional shapes. I would wonder whether there really are three-dimensional representations in the brain. David Marr talked about the two-and-a-half-D sketch, I think he called it, this idea of some kind of spatial layout plus depth. But beyond that, I'd be more skeptical. I know Kevin and Joscha have both thought quite a bit about the geometry of space, probably in different ways. Comments? Maybe Kevin first.

Kevin O'Regan: [01:19:10] Yeah. So I sort of disagree with what you said about an internal model. I don't think at all that when we look at the outside world, we make a little internal model of the geometry of the outside world, in the way David Marr would have thought of it, where David Marr said vision is an inverse problem, vision is inverse optics, or something.

I disagree entirely with that. I think it's more like Koenderink's view, I don't know if you know it, where he says that what vision is, is accumulating knowledge about what would happen if you made actions upon the world. Looking at a cube is knowing all the things that would happen if you turned the cube around, and things like that. But it's not knowing in terms of pixels; it's knowing in terms of abstract descriptions, which do not have the metric of the outside physical world, nor even the metric on the retina.

It's just an abstract description, which explains why there can be tremendous distortions in the visual field without people really being bothered about it.

Aniket Tekawade: [01:20:24] I just  want to clarify that pixels don't really have a place in computational geometry. In computational geometry, you're representing the space in terms of surfaces or objects.

So it's sort of a different way of looking at it, and it makes more sense to me. Going back to Natesh's earlier example, if you are looking at a teddy bear, you have already encoded some information about the teddy bear's surface, that it is squishable, rather than a coffee cup's surface, which is not squishable.

So if we were to really develop conscious robots, then we have to agree on some mathematical representation of the 3D universe. Maybe I'm wrong, but I'm really looking to know why I'm wrong, because at the end of the day, it has to be a mathematical formulation if it were to be coded as a conscious AI.

Paul King: [01:21:25] Well, the conscious robot needs to be able to interact effectively with its environment. Maybe the claim is that that would require 3D models.

Joscha Bach: [01:21:34] Well, yes, or 4D models, depending on the environment that the robot is in. It seems that we can learn to navigate 4D environments to some degree, but it just happens to be the case that the environment which you're born into is not very four-dimensional. That is, the level at which we interact with the world and perceive the world only gives rise to three spatial dimensions and one temporal dimension. And so we cannot rotate between space and time in the domain where we are. So it makes sense to see them as separate.

And this is the relationship that our mind discovers, and the degrees of freedom in which it is discoverable. And I think that some of the misunderstanding between Kevin and Aniket comes from the fact that Kevin has not looked very much into computational rendering. The idea is not that vision is inverse optics; the idea is that vision is inverse rendering. So you have a game engine, and the game engine takes the minimum amount of data to produce a 2D projection on the screen, which then leads to a 2D projection on your retina. The mind has to do the same thing in the inverse direction.

And the problem is that inverse rendering is an ill-posed problem, because you cannot deduce the data just by looking at the 2D image. You need to make subsequent assumptions to collapse the space of possible interpretations until you get it right. So basically, you can make a simple feed-forward model from the data to the projection in a computer graphics engine: once you know the data and the laws of objects and motion, you can produce the animation on the screen without trial and error.

So you don't have to backtrack in the interpretation. But your own mind, in order to come to an interpretation of the image, needs a lot of backtracking. It needs to look at alternative interpretations until it collapses on a possible interpretation that makes sense. This basically requires top-down predictions, but it's still something like inverse rendering.
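[Editor's note: the forward-versus-inverse asymmetry Joscha describes can be sketched with a one-line pinhole "renderer". The numbers and the size prior below are made-up assumptions for illustration: many scenes render to the same image, and only an extra assumption collapses the interpretation.]

```python
# Editor's sketch of why inverse rendering is ill-posed: the forward
# direction is a simple function, but many scenes produce the same
# image, so interpretation needs extra assumptions.

def render(size, depth):
    """Forward model: pinhole projection of an object's size onto the image."""
    return size / depth

image = 2.0  # observed projected size

# Every (size, depth) pair on this line renders to the same image...
candidates = [(image * z, z) for z in (0.5, 1.0, 2.0, 4.0)]
assert all(abs(render(s, z) - image) < 1e-9 for s, z in candidates)

# ...so we collapse the space with a prior, e.g. "objects tend to have
# size about 1" (an assumed constraint, standing in for learned knowledge).
best = min(candidates, key=lambda sz: abs(sz[0] - 1.0))
assert best == (1.0, 0.5)
```

Real inverse rendering searches a vastly larger parameter space, but the structure is the same: a cheap feed-forward render step inside a loop over candidate interpretations.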

Now, more specifically to Aniket's question, about the things that we need to change in terms of the paradigms that we currently have: the neural networks, what they're performing is linear algebra. They perform weighted sums over real numbers that are thrown against an activation function that provides us with nonlinearity, basically events. And it's a little bit awkward, because the space in which we describe the world using these convolutional neural networks for the most part converges in the right way, but not always. So we get representations that are susceptible to adversarial examples, or it's sometimes difficult to do, for instance, counting of objects with these neural networks, and things like that.

And that's because of the limitations of the representational architecture that we're using. And it's not an accident that the biggest breakthrough in the last few years didn't come from vision, even though there's a lot of research in vision, but from natural language processing: the transformer algorithm.

And it turns out that the transformer algorithm is general enough to discover convolutional neural networks for low-level vision. But it's more general than this, because convolutions don't work that well when you try to process a stream of language. Convolutions are basically filters that look at the adjacent elements, and in vision, it turns out that adjacent pixels typically belong to the same object.

So it's good to look at adjacency as an indicator of relatedness. But in language, this doesn't work like that, because parts of a text that relate to each other can be very far apart, and the things in between can refer to something different. And that makes it very hard to use convolutional neural networks to interpret language, right?
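[Editor's note: the "convolutions are filters over adjacent elements" point can be sketched in a few lines. The kernel and signal below are illustrative assumptions; the point is that the output at each position depends only on a small neighborhood, which suits images better than long-range language dependencies.]

```python
# Editor's sketch: a 1D convolution is a weighted sum over a sliding
# window of adjacent elements, which is why it suits signals where
# neighbors are related (like adjacent pixels in an image).

def conv1d(signal, kernel):
    """Valid-mode 1D convolution (cross-correlation, as in conv nets)."""
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(len(signal) - k + 1)
    ]

# An edge-detector kernel responds only where adjacent values differ:
signal = [0, 0, 1, 1, 1, 0]
edges = conv1d(signal, [-1, 1])
assert edges == [0, 1, 0, 0, -1]
```

A dependency between `signal[0]` and `signal[50]` is invisible to this kernel; that built-in locality is what the transformer's attention mechanism relaxes.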

So we need to look for more general learning paradigms. And if you want to learn a graphics engine, you could just take an existing graphics engine; imagine you take the Unreal Engine as your vision paradigm, and what you learn is the parametrization of the Unreal Engine to interpret the current visual scene, which I think would be a super interesting project to do.

And to my knowledge, nobody has done this extensively. There is some initial work by Nvidia and others in this direction, but it's limited.

Aniket Tekawade: [01:25:50] Are you referring to neural geometries, neural implicit geometries? 

Joscha Bach: [01:25:54] Yes. So there is some difference between the artifacts that you get in graphics engines and the artifacts that we get in our own vision. For instance, there is almost never surface clipping in our own vision, right?

Surface clipping happens a lot in graphics engines, where objects are moving through each other; for instance, the skin of a character clips through the clothing of the character, and this never happens in our own imagination or vision. Right? So we have additional constraints on the represented computational geometry that we don't have in the low-level representation of the graphics engine, and that the graphics engine needs to correct for.

And I think that this is largely about the representation in time. That is, our own mind represents the relationship between adjacent frames, and the conservation of information between adjacent frames in time, more efficiently than our graphics engines do. And I suspect that's because of the way in which programmers reason: it's much harder for us to reason properly over time than to reason over space. And this is a constraint that doesn't happen to apply to the more general learning algorithm in our neurons. So basically, our mental models of reality encode certain conservations, of volume, for instance, and of the relationship between locations.

In solid objects, the relationship between locations in the object is conserved; in liquid objects, the volume of the object is conserved; and in an elastic object, something in between is conserved, right? And that is something that our graphics engines don't represent naturally. We need to use a secondary representation to correct the vision for the underlying simple physical constraints.

And we probably want to have a representation that encompasses both of them at the same time.
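[Editor's note: the conservation constraint for solids, that the relationship between locations is preserved, can be checked directly. The points and rotation angle below are arbitrary illustrative values: under any rigid motion, all pairwise distances survive unchanged.]

```python
import math

# Editor's sketch of the "solid objects conserve distances" constraint:
# under a rigid motion (here, a 2D rotation), all pairwise distances
# are unchanged, which a mental model of a solid can exploit.

def rotate(points, theta):
    """Rotate 2D points about the origin by angle theta (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

def pairwise_distances(points):
    return [
        math.dist(points[i], points[j])
        for i in range(len(points))
        for j in range(i + 1, len(points))
    ]

solid = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
moved = rotate(solid, 0.7)

before = pairwise_distances(solid)
after = pairwise_distances(moved)
assert all(abs(a - b) < 1e-9 for a, b in zip(before, after))
```

A liquid would instead conserve only total volume, and an elastic object something in between, as the discussion notes.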

Paul King: [01:27:50] Thanks, Aniket, for raising the interesting topic.  I think we should also sort of return to perception and consciousness.   

Kevin O'Regan: [01:27:56] One final point, though, which is that I completely disagree with what Natesh (was it Natesh?) and what Joscha said about computational geometry.

 Aniket Tekawade: [01:28:08] It was Aniket

Kevin O'Regan: [01:28:09] Aniket, sorry. Because my change blindness studies and my work on changes during eye saccades all suggest that the representation we have of three-dimensional objects is nothing like geometry; it's not geometrical at all. It's semantic. It's almost just a list of words, you might say: a red box on the left, or behind a blue box. It's not at all metric. You can make enormous changes without people noticing them, unless they happen to be attending to precisely that particular metric aspect of the scene.

 Joscha Bach: [01:28:49] As a final thing, I would like to say it's beautiful that we get to some big disagreement that we want to highlight in the end; that is maybe a takeaway of the discussion.

I don't think that we just have a high-level representation of things that are stable. I think that this high-level representation exists at the level of the stream of consciousness. So our memories of the partial binding states that constitute these different situations are very low-dimensional, and the parametrizations that we store in our memory, as the things that we have seen, are insufficient to reconstruct the scene in a definite way that matches our perceptual data, for the most part.

So we only get an approximate representation, which we use to try to reconstruct the world as it looked a moment ago. And that's simply because our working memory probably just has something like a few kilobytes of data, and maybe even less. So basically, the number of parameters that we use to characterize the present situation, based on everything that we've learned about the regularities in the world, is very small. But what's large is our set of skills to represent the geometry of the world.

And I think that we are very sensitive...

Kevin O'Regan: [01:30:03] We don't even need to represent it. The world is outside. We don't need to re-present it inside us.

Joscha Bach: [01:30:10] It's just the stimuli. We only get patterns from the outside; the interpretations of the patterns are what you spent months as a baby learning.

Kevin O'Regan: [01:30:17] But all you need is interpretations that are semantic. All you have to do is say it's a letter, or it's a box, or it's red. We don't have to...

Joscha Bach: [01:30:25] This is something that you have to learn. It's not there in the world.

Paul King: [01:30:28] Yes, yes, you have to learn it, but Joscha...

Joscha Bach: [01:30:31] What an A looks like. You cannot say that the letter A is there. The semantics are acquired; they're constructed by your own mind.

Paul King: [01:30:37] For sure. So Joscha, those interpretations, yes, they're learned, but could those be transforms on the sensory input? Therefore they don't have to be stored in the brain; the brain is able to do those transforms.

Joscha Bach: [01:30:51] The regularities have to be stored in the brain.

So the laws of optical transformations that you learn have to be represented in the brain, have to be stored in the brain, so you can use them to interpret the sensory data. Without them you cannot make sense of the sensory data. And it's quite distinct. For instance, if I give you a two-and-a-half-D projection, and you see basically an object rotating in two and a half dimensions only, you will notice the difference from a proper three-dimensional representation.

You will notice that it looks like a cartoon. You will still see that it's meant to represent three-dimensional rotation, but you will see the difference, because it will look off, which means that your mind is representing enough to determine what this should look like. This doesn't mean that you are able to regenerate it.

You need to have a lot of training in addition to that. Basically, the ability to produce a representation without sensory stimulation, and the ability to recognize it in the world and match it against an imperfect pattern, are separate skills, I think.

Paul King: [01:31:56] Let's get Kevin's response.

Kevin O'Regan: [01:31:58] Yes, I agree that you need a way of recognizing the sensorimotor law.

If you take a cup and you hold it in front of you and you turn the cup, the handle appears and disappears, and the two-dimensional image of the cup deforms in certain ways that your brain is able to confirm and recognize. But that does not mean, as you just said, Joscha, that it must be able to recreate it.

It doesn't mean that it has to have the appropriate inverse model. All it has to do is be able to confirm that that's a cup, and then it just stores the fact that it is a cup in schematic form. It doesn't have to store the surfaces. The recognition that it is a cup, and that it is obeying the laws of cupness, is done implicitly and does not require any storing of the 3D surfaces that correspond to the cup in the brain.

Joscha Bach: [01:33:06] I slightly disagree with this, because it's possible to make that step, and it's not that hard. When you focus a little bit, you can generate a nice 3D cup in your mind and interact with it and manipulate it.

Well, visual imagination is geometric imagination for the most part, which may or may not encompass the texture layer. You represent the structure and colors of surfaces only on the texture layer; you don't need it, and you can also take the geometry away, and then you only have a conceptual, structural representation.

 And it seems to me that Kevin does not properly distinguish them. But they're all integrated, and the identity over the geometry and textures and the surface structures and the high-level semantics of the object is only constructed in your own mind, using the implementation of the brain.

It's not out in reality; there is no identity in reality. If you really constructed the actual surfaces in your brain, it would be very easy for you to draw things. You would be able to draw pictures of faces and objects easily. The fact is, it's very difficult to learn to draw.

Actually, it's not that hard to learn to draw; it's not harder than learning to recognize. Many of us just don't do it, but not because it is difficult. It requires about as much time and attention as learning how to see, right? You need to do this in the opposite direction again.

But still, people have to learn. They can't just figure out how to get the paper to look like what they see.

Basically, painting is not harder than learning how to recognize. You just need to do it in addition, and it's about the same amount of time, I suspect, that you need to invest before you can draw properly, because you basically need to create a secondary feedback loop.

So the fact that you have the ability to construct representations based on some stimulation doesn't mean that you are able to do this in the absence of the stimulation and then translate it into motor sequences. That's a separate skill, but it's not that hard to learn.

It just means you have to do it in addition, and you're not usually that motivated, because you can survive without doing it.

Rowshanak Hashemiyoon: [01:35:27] But that's exactly the stuff that I was talking about with neglect. Olaf Blanke has shown, and it supports, exactly what Joscha is saying: those people who have the neurological damage and the neglect don't represent.

It's exactly what you're saying. Blanke showed this years ago.

Paul King: [01:35:42] I mean, that would seem to almost agree with Kevin's view, because people who have neglect don't know they have it. They don't see a gap; they just respond to what's there.

Rowshanak Hashemiyoon: [01:35:52] That's exactly right.

Paul King: [01:35:54] We should probably let Kevin go if he wants. I'm happy to continue the conversation with the folks here; I know there are more folks that wanted to make comments.

Joscha Bach: [01:36:00] I will say goodbye, and thank you very, very much, Paul, for organizing this, and Kevin, for a wonderful and spirited discussion. I think that it got close to the point of disagreement, and it's about the level at which you draw the boundary between mind and environment, I think, or between the observer and the environment. I'm not sure if this cuts our disagreement at the joints, but I suspect it's mostly in this direction.

Kevin O'Regan: [01:36:29] Well, I think maybe we can have another session to try and clear this up, Joscha. It was a great pleasure for me to be in this Clubhouse meeting, and I really thank you all, and in particular Paul, for his amazing ability to run the discussion smoothly. It's been a great pleasure. Thank you very much.

Paul King: [01:36:52] Well, thank you, Kevin, for joining us. I think it's probably midnight Paris time, so thank you for staying up late to be with us, sharing your views, and engaging in such a vigorous debate.

Kevin O'Regan: [01:37:02] Thank you.