So consciousness is a complete mystery to neuroscience. Why we subjectively experience anything, like the feeling of pain or the redness of red, whenever certain kinds of activation occur in the brain, is a problem we don't even know how to think about yet. Current large-scale brain initiatives tell us a lot about the brain, but so far seem unable to make progress on this matter. It is as if knowledge of the brain does not include knowledge of the mind. So even if we knew every cell in every thinkable brain process, we would still not suddenly discover consciousness as a physical object inside the brain. Neuroscience today typically offers one of two kinds of explanation for almost all kinds of questions. Some prefer to explain mental phenomena based on physical localization in the brain, while others prefer connectionist models inspired by the way computers work, but neither seems really able to bridge this kind of mind-brain gap. Surprisingly, neither of those two models is very good at understanding brain injury either. It's still a paradox to connectionist models why the localization of an injury seems so important, yet it's a paradox to localizationist models how functions can be lost and recovered without regrowth of the same lost tissue. Now, of course, any theory of the brain able to support an understanding of complex matters such as subjective experience or neuro-rehabilitation would be of great value to many different disciplines and might revolutionize artificial intelligence and robotics, and possibly also ethics, as mentioned a moment ago. So, if our current models of the brain find consciousness to be this big a mystery, and if they can't really guide clinical neuro-rehabilitation either, then we probably need completely new models. And those should probably not be extensions of the traditional ones, but should rather try something completely new. So let us begin with some simple observations. 
Introspectively, it appears that everything we experience is information that we have access to. It is somehow available for flexible action. You might not act on it, but it is available to you. Just look for yourself: everything that you experience is something that you can act on, report, or talk about. Now, the brain, however, doesn't seem to perceive the world directly as it is, but seems to come up with the best reason or cause for why it is in its own current state. I believe that this kind of process makes information available to us, and that the outcome of that process, say an action outcome, in turn modifies this analysis of causation through a kind of backpropagation. And I believe this reciprocal relationship is fundamental to understanding the mind. I would say that consciousness depends on it. I wouldn't say that consciousness is identical to it or reducible to it; those would be different kinds of claims. Now, this kind of reciprocal model or function can be described at several different levels of description. My model begins with the surface manifestation level, if you want, or the level of the mind rather than the level of its components, simply because it is possible that the mind can be realized in a number of different ways, that is, through different strategies that the brain is able to employ. And those strategies should be thought of as a kind of orchestration of very simple, elementary functions in the brain that are truly localized. They are orchestrated in global or large-scale networks, perhaps as strategies, in order to realize this kind of manifestation level. And every time we succeed in doing something, or even perceive something, these networks of elementary functions will be strengthened. 
Now, should an injury occur to an elementary function, a different strategy, which may be markedly different, and which may be as effective or less effective, may be able to take over this realization, and this is only possible due to a constant reorganization at the level of elementary functions. So please note that this idea involves a very important change of perspective: the upper level here seems to reorganize the lower level of description. That is, cognition alters brain architecture. You might say that you need to understand the mind in order to understand the brain. In this way, I would argue that the brain constantly optimizes its own functions given resource limitations, not by analyzing loads of data, but by backpropagation from action and cognition. So, in this way, a function can be both the same and different before and after a brain injury, depending on the level of description. And in my model, the upper level, or the surface level, has two aspects: a functional one and a subjectively conscious one, and neither can be reduced to the other, as said before. And if that is right, then neuroscience can never completely explain consciousness, though the relation between the two can still be understood and described. So it is probably time that we soon discuss what we can do and what we should not do once we have this knowledge. Therefore, my question to you is also how we might apply insights into consciousness for various clinical and practical purposes.
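The strategy-reorganization idea above can be sketched as a toy simulation. Everything here, the strategy names, the weights, and the strengthening rule, is a hypothetical illustration of the two-level picture, not the speaker's actual model: a surface-level function is realized by whichever intact strategy of elementary functions is currently strongest, success strengthens that strategy, and injury to an elementary function forces a reorganization onto an alternative strategy, so the function stays "the same" at the surface level while its realization changes underneath.

```python
# Toy sketch (illustrative assumptions only): a surface-level function
# realized by alternative strategies, each an orchestration of
# localized elementary functions.

class TwoLevelModel:
    def __init__(self):
        # Each strategy is a set of elementary functions plus a weight
        # that grows every time the strategy is used successfully.
        self.strategies = {
            "visual_route": {"elements": {"V1", "parietal"}, "weight": 1.0},
            "verbal_route": {"elements": {"auditory", "frontal"}, "weight": 0.5},
        }
        self.injured = set()  # elementary functions lost to injury

    def available(self):
        # A strategy survives only if none of its elements are injured.
        return {name: s for name, s in self.strategies.items()
                if not s["elements"] & self.injured}

    def perform(self):
        # The surface-level function is realized by the strongest intact
        # strategy; performing it strengthens it (the feedback from
        # action that the talk loosely calls backpropagation).
        intact = self.available()
        if not intact:
            return None  # the function is lost at the surface level too
        name = max(intact, key=lambda n: intact[n]["weight"])
        self.strategies[name]["weight"] += 0.1
        return name

    def injure(self, element):
        self.injured.add(element)


model = TwoLevelModel()
print(model.perform())  # strongest strategy before injury: "visual_route"
model.injure("V1")      # lesion one elementary function
print(model.perform())  # same surface function, new realization: "verbal_route"
```

The point of the sketch is the change of perspective the talk describes: the "reorganization" happens because the upper level (choosing a strategy that works) redistributes weight over the lower level (the elementary functions), not because lost tissue regrows.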