Hello, everybody. Welcome to this analysis seminar. It is a pleasure to have here with us our associate, Job Bonyo, who is coming for his first associate visit from Kenya; we hope there will be many more. Job will talk to us today about composition semigroups on some analytic spaces of the upper half-plane. Job, the floor is yours.

Thank you very much. I will give a talk on composition semigroups on some analytic spaces of the upper half-plane. By way of introduction: I am Job Bonyo, from Kenya; my university is the Multimedia University of Kenya in Nairobi, where I am in the department of mathematics. I will begin by giving some introduction, setting up the notation for my talk and the definitions that are necessary for us to be able to analyze our semigroup. First of all, we have the disk, denoted D. The unit disk is the set of complex numbers of modulus less than one, and it is open, so the unit circle, its boundary, is not part of the disk. Then we have the upper half-plane, denoted U: the set of complex numbers whose imaginary part is strictly positive. If I were to sketch it, it would be the upper half of the complex plane; remember, it is also open, so the boundary of the upper half-plane, the real axis, is not included. Now these two domains, these subsets of the complex plane, have an interesting relation between them: there exists an invertible map between U and D, so that we can always pass from one to the other. For instance, the point 0 of the disk is mapped to the point i of the upper half-plane.
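The domains and the invertible map mentioned here are presumably the standard ones; a reconstruction of what the slide most likely showed (the map being the Cayley transform):

```latex
\[
\mathbb{D} = \{ z \in \mathbb{C} : |z| < 1 \}, \qquad
\mathbb{U} = \{ w \in \mathbb{C} : \operatorname{Im} w > 0 \},
\]
\[
C : \mathbb{U} \to \mathbb{D}, \quad C(w) = \frac{w - i}{w + i}, \qquad
C^{-1}(z) = \frac{i(1 + z)}{1 - z},
\]
\[
\text{so that } C^{-1}(0) = i:\ \text{the point } 0 \text{ of the disk corresponds to } i \text{ in the upper half-plane.}
\]
```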
I will also define the measures, with different notations: the area measure on the upper half-plane, which is μ, and the area measure on the unit disk, which is m. I use different symbols so that we do not get confused, because I will always be moving back and forth between the two domains. So now I want to define spaces of analytic functions. I will use Ω to denote an open subset of the complex plane; Ω can be D, or it can be the upper half-plane U. Because I do not want to define everything separately for each domain, I will just use Ω in general. Then H(Ω) will denote the space of analytic functions on Ω; these functions map our open subset of C into C itself. Then we have Bergman spaces. These are very famous spaces of analytic functions, and a Bergman space is just the usual L^p space, except that it consists only of analytic functions. That is how we define the Bergman space of the upper half-plane: it is the usual Lebesgue L^p space with respect to μ, from which we select only the analytic functions. The Bergman space of the disk is likewise just the usual L^p space with respect to m, restricted to analytic functions. Of course, they have their respective norms: the L^p norm carries over. Since we are dealing with a standard L^p space of integrable functions, we know the norm, but allow me just to state it. For a function f in the Bergman space of the upper half-plane, the norm is the L^p(μ) norm of f; and when I change to D it is the same, except that the domain and the measure change.
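The Bergman-space definitions and norms referred to on the slides are presumably the standard ones (μ and m denote area measure on U and D respectively, and 1 ≤ p < ∞):

```latex
\[
A^p(\mathbb{U}) = L^p(\mathbb{U}, \mu) \cap H(\mathbb{U}), \qquad
\|f\|_{A^p(\mathbb{U})} = \left( \int_{\mathbb{U}} |f(z)|^p \, d\mu(z) \right)^{1/p},
\]
\[
\text{and similarly } A^p(\mathbb{D}) = L^p(\mathbb{D}, m) \cap H(\mathbb{D}) \text{ with the analogous norm over } \mathbb{D}.
\]
```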
Then we have another space, called the Hardy space. Remember, we are not dealing with the infinity case, the space of bounded analytic functions; we are considering 1 ≤ p < ∞. Now, the Hardy spaces are defined by a norm of supremum type: the norm on the Hardy space of the upper half-plane is the supremum over y > 0 of the L^p means of f along the horizontal line at height y, while the norm on the Hardy space of the unit disk is the supremum over 0 < r < 1 of the L^p means of f over the circle of radius r. All of these are spaces of analytic functions. Interestingly, for the Hardy space we have a characterization of the norm in terms of boundary values: we can identify the functions in the Hardy space with their boundary values, so that for the upper half-plane the norm can equivalently be computed on the real line, the x-axis, while for the unit disk it can be computed on the boundary of the disk, the unit circle, parametrized from 0 to 2π. Then another analytic space that we are going to be interested in is the Dirichlet space, of the unit disk and, of course, of the upper half-plane. For the Dirichlet space we only consider the case p = 2, so you can think of the Dirichlet space of the disk as an L^2-type space. The defining condition is that a certain quantity, the L^2 norm of the derivative f′, be finite: the Dirichlet space is the set of all analytic functions for which that quantity is finite. What we define this way is only a seminorm on the Dirichlet space of the disk, and we can make it a norm by adding the constant |f(0)|. Similarly for the Dirichlet space of the upper half-plane, the same happens, except that now we add |f(i)|; remember, 0 corresponds to i in the upper half-plane.
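The Hardy and Dirichlet norms described here are presumably the standard ones; a reconstruction:

```latex
\[
\|f\|_{H^p(\mathbb{U})}^p = \sup_{y > 0} \int_{-\infty}^{\infty} |f(x + iy)|^p \, dx, \qquad
\|f\|_{H^p(\mathbb{D})}^p = \sup_{0 < r < 1} \frac{1}{2\pi} \int_0^{2\pi} |f(re^{i\theta})|^p \, d\theta,
\]
with both norms equivalently computable from the boundary values on \(\mathbb{R}\) and on the unit circle. For \(p = 2\), the Dirichlet norms are
\[
\|f\|_{\mathcal{D}(\mathbb{D})}^2 = |f(0)|^2 + \int_{\mathbb{D}} |f'(z)|^2 \, dm(z), \qquad
\|f\|_{\mathcal{D}(\mathbb{U})}^2 = |f(i)|^2 + \int_{\mathbb{U}} |f'(z)|^2 \, d\mu(z).
\]
```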
You can see that there is a relation between the Dirichlet space and the Bergman space, in the sense that the Dirichlet integral involves the derivative f′, while on the other side, for the Bergman space, we just integrate f itself. So those are the three spaces that are of interest to us. There are many more analytic function spaces, which will probably be the subject of another day. Next we define the composition operator. First of all, we take an analytic self-map of the domain that we consider, a self-map φ of Ω. Such a self-map induces a composition operator on a space of analytic functions, given by composing with φ. Now we can have a family of analytic self-maps constituting a one-parameter semigroup, where our parameter is t ≥ 0. This family is a semigroup if the maps are analytic self-maps, the map at t = 0 is the identity, and the composition of the maps at times t and s is the map at time t + s. Just as a single self-map induces a composition operator, a semigroup of self-maps induces a semigroup of composition operators; that is the content of the definition. These are one-parameter semigroups. Sometimes, when dealing with composition operators, we add some kind of weight, so that we speak of weighted composition operators and of semigroups of weighted composition operators; this is how we define a semigroup of weighted composition operators. We shall see later that such a semigroup can in fact be a group, because all we need is the inverse: if {T_t : t ≥ 0} is a semigroup and we can also define T_{−t}, then we have a group. So I will be using the terms group and semigroup interchangeably, but what we know is that the semigroups in this talk are in fact groups. In general, one works with semigroups of bounded linear operators on a given Banach space; remember that all the spaces above are Banach spaces, and in the case p = 2 they are actually Hilbert spaces, which is important here.
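In symbols, the definitions sketched here read as follows (with {φ_t} the self-maps and w_t the weights):

```latex
\[
C_{\varphi} f = f \circ \varphi, \qquad
\varphi_0 = \mathrm{id}, \qquad
\varphi_{t+s} = \varphi_t \circ \varphi_s \quad (t, s \ge 0),
\]
so that \(T_t := C_{\varphi_t}\) satisfies \(T_0 = I\) and \(T_{t+s} = T_s T_t\); a weighted composition semigroup has the form
\[
S_t f = w_t \cdot (f \circ \varphi_t).
\]
```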
Another remark: on each of these spaces the point evaluations are bounded, so the functions satisfy a growth condition specific to each space. Now, we say that a semigroup {T_t} is strongly continuous if it satisfies the property that the norm of T_t f − f tends to zero as t tends to zero, for every f. If it is strongly continuous, we can talk about its infinitesimal generator, which is given by the limit of (T_t f − f)/t as t tends to zero; this is the same as working out the derivative of T_t f at t = 0. And of course, the domain of the generator will be those f for which the limit exists. Now, why the upper half-plane? I did not mention this, but I think it is important to note the fact that most of the studies of composition operators and composition semigroups are based on analytic spaces of the unit disk. One would expect that, because we have an invertible map between the two domains, whatever we know on the analytic spaces of the disk would carry over from the disk to the upper half-plane. But that is not what we find. The composition operators exhibit totally different behaviors in the two settings; there is no correspondence. That is why you see instances such as the question of whether every composition operator is bounded. On the disk this goes back to a classical principle, Littlewood's subordination principle: on the Hardy space of the unit disk, as well as on the Bergman space of the unit disk, all composition operators are bounded. But on the corresponding spaces of the upper half-plane, boundedness is a big question, because not every composition operator is bounded on the Hardy and Bergman spaces of the upper half-plane. Again, another contrast: we know that there exist compact composition operators on these spaces of the unit disk.
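The strong-continuity condition and the generator limit are the standard ones from semigroup theory; presumably the slide showed:

```latex
\[
\lim_{t \to 0^+} \|T_t f - f\| = 0 \quad \text{for every } f,
\]
\[
\Gamma f = \lim_{t \to 0^+} \frac{T_t f - f}{t}
= \left. \frac{\partial}{\partial t} \, T_t f \right|_{t = 0},
\qquad
D(\Gamma) = \{ f : \text{the limit exists in norm} \}.
\]
```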
But for the case of the upper half-plane, it turns out there are no compact composition operators at all: there does not exist a compact composition operator on the Hardy space of the upper half-plane, nor on the Bergman space. The case of the Dirichlet space is different, even complicated, because for the Dirichlet space, already on the unit disk, it is not true that every composition operator is bounded; there are composition operators which are not bounded. And recently it has been proved that there does exist a compact composition operator on the Dirichlet space of the upper half-plane. Because of this, I think it is worthwhile to look in detail at these operators in the new setting of the upper half-plane. And this has been the focus recently, starting from around 2013 or so: there has been a lot of work going on trying to study composition operators, as well as composition semigroups, in the setting of the upper half-plane. Now for the specific setting I will consider. We were able to classify all the groups of analytic self-maps, which we can also call groups of automorphisms of the upper half-plane. In my earlier work I was able to characterize, to classify, all those groups of self-maps into three distinct classes. This is one of the classes, and I do not want to go through all of them; I will just take the simple case of the scaling group, where the map at time t is multiplication by e^{−t}. Of course, this is a group of automorphisms of the upper half-plane. It induces a composition operator, but I want to consider a weighted one: looking back at my definition of weighted composition semigroups, my weight here is e^{−t}. I have a reason for picking that weight, which will become clear later on. And I also want to consider it just on L^2, that is, on the Bergman space of the upper half-plane with p = 2. There is no problem considering it on L^p, but to simplify the presentation I will restrict to this case.
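The scaling group and the weighted operators, reconstructed so as to be consistent with the change-of-variables computation in the isometry argument of the talk (the weight e^{−t} is exactly what the Jacobian e^{−2t} requires on the Bergman space with p = 2):

```latex
\[
\varphi_t(z) = e^{-t} z \quad (t \in \mathbb{R},\ z \in \mathbb{U}), \qquad
T_t f(z) = e^{-t} f(e^{-t} z) \quad \text{on } A^2(\mathbb{U}),
\]
a one-parameter group, since \(\varphi_t \circ \varphi_s = \varphi_{t+s}\) and \(\varphi_t^{-1} = \varphi_{-t}\).
```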
This is now a group of weighted composition operators defined on L^2, and I want to study its properties. What do we mean when we talk about studying the properties of the group? It will involve checking whether this group is strongly continuous, because remember, we are talking about C_0-semigroups, strongly continuous semigroups; strong continuity is a very important concept in the semigroup theory of operators. Towards that, I want to present this theorem, which gives us the fact that this group is strongly continuous, and in fact a group of isometries. And this opens up many questions. When I know I have an isometry, then I know something about its spectrum. If I have a strongly continuous group, then I can be interested in its generator, the infinitesimal generator. The infinitesimal generator, in general, is an unbounded operator, and I will be interested in its spectral properties: in this case the point spectrum, which is the set of eigenvalues, as well as the spectrum itself. If I am able to get the spectrum, then I am able to analyze the resolvents at the points of the resolvent set, and these resolvents come out as integral operators in the literature. Then we can go ahead and study the spectra of the resulting resolvent operators, and their norms as well as their spectral radii. That completes the analysis of this specific group. Now I will demonstrate the proofs, just as a sketch, so that we can follow. The first thing is the isometry. For the isometry, we simply compute the norm of T_t f using the definition of the norm on the Bergman space of the upper half-plane; remember our group, which is up here. Then I do a change of variables, and the change of variables I do is the following.
If I just take, say, w = e^{−t} z, then the measure transforms by the Jacobian, which is the modulus of the derivative squared; that is e^{−2t}, so dμ(w) = e^{−2t} dμ(z). So when we change variables there, and we change the measure accordingly, the factor e^{−2t} coming from the weight cancels against the Jacobian, and the substitution gives us exactly the norm of f. Therefore T_t is an isometry. (By the way, the norm there should be squared; there is a square missing on the slide, sorry about that.) Then what about strong continuity? For strong continuity, in the long run we need to prove that the norm of T_t f − f tends to zero as t tends to zero from the right: we are assuming that t is greater than or equal to zero, so t can only approach zero from that direction. How do we do that here? We take our function f in L^2 and suppose we have a sequence t_n tending to zero in R. Then we let f_n be T_{t_n} f. We know that f_n converges to f pointwise, and, because of the isometry, the norms are all equal. Then we use a standard argument from measure theory; you will realize it is basically making use of a very important inequality, essentially |a + b|^2 ≤ 2|a|^2 + 2|b|^2, which comes up every time you study convergence. That is what we use here to form g_n, in the form shown, where you can see what our a is and what our b is. This is what makes g_n nonnegative, and at the same time g_n converges pointwise: as n goes to infinity, one part goes to zero and the other goes to a multiple of |f|^2, so we have the pointwise limit. Then Fatou's lemma, applied to the g_n, gives us the fact that the limsup of the norm of T_{t_n} f − f is zero, and that means the limit is zero. I have left a lot of details out, but that is, technically, the proof of strong continuity. Then we compute the infinitesimal generator.
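The change-of-variables isometry can also be checked numerically. Here is a small sketch (not part of the talk), using the assumed group T_t f(z) = e^{−t} f(e^{−t} z) on the Bergman space of the upper half-plane and the hypothetical test function f(z) = 1/(z + i)^2, whose squared Bergman norm works out to π/4:

```python
# Numerical sanity check that T_t f(z) = e^{-t} f(e^{-t} z) preserves
# the A^2(U) norm, where U is the upper half-plane with area measure.
import numpy as np
from scipy.integrate import dblquad

def norm2(F):
    # Squared Bergman norm: integrate |F(x + iy)|^2 over U (y > 0)
    # with respect to area measure dx dy.
    val, _ = dblquad(lambda y, x: abs(F(x + 1j * y)) ** 2,
                     -np.inf, np.inf,   # x over the whole real line
                     0.0, np.inf)       # y over (0, infinity)
    return val

# Test function in A^2(U); its squared norm is pi/4 by direct integration.
f = lambda z: 1.0 / (z + 1j) ** 2

t = 0.7
Ttf = lambda z: np.exp(-t) * f(np.exp(-t) * z)  # weighted composition

print(norm2(f))    # approximately pi/4
print(norm2(Ttf))  # same value: T_t acts isometrically
```

The factor e^{−t} in the weight is what makes the Jacobian e^{−2t} cancel exactly after substituting w = e^{−t} z, matching the computation sketched above.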
We just take the derivative of T_t f with respect to t and substitute t = 0. That automatically tells us that the domain is contained in the set of f in the space for which z f′(z) is also in the space, because the other term appearing in the generator causes no problem: f is already in the space. For the reverse inclusion for the domain, we apply the fundamental theorem of calculus, and, using the fact that the group is strongly continuous, we are able to show the reverse inclusion, so that indeed the domain is equal to that set. Concerning the spectrum, we first look at the eigenvalues. The eigenvalue–eigenvector equation leads us to a first-order differential equation, and if you work it out, you get that the solutions are constant multiples of a power of z. Now, it is known that this kind of power function does not belong to the space at all; this is well known in the literature: whatever power of z you take, it will not be a member of this space. With that, we end up with this function not being in the space unless the constant is zero; but if the constant is zero, then f is identically zero, and the zero function cannot be an eigenvector. That is why we conclude that the generator does not have an eigenvalue: the point spectrum is empty. What about the spectrum? Now, we have proved that our group is a group of isometries. The fact that it is a group means it is also invertible, where the inverse of T_t is just T_{−t}; that is why it is a group. Since each T_t is an invertible isometry, we know that its spectrum is contained in the unit circle. On the other hand, the spectral mapping theorem for strongly continuous groups tells us that e raised to t times the spectrum of the generator is a subset of the spectrum of the group.
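The generator and eigenvalue computations sketched here, written out for the group T_t f(z) = e^{−t} f(e^{−t} z):

```latex
\[
\Gamma f(z) = \left. \frac{\partial}{\partial t} \, e^{-t} f(e^{-t} z) \right|_{t=0}
= -f(z) - z f'(z),
\qquad
D(\Gamma) = \{ f \in A^2(\mathbb{U}) : z f'(z) \in A^2(\mathbb{U}) \}.
\]
The eigenvalue equation \(\Gamma f = \lambda f\) becomes \(z f'(z) = -(\lambda + 1) f(z)\), whose analytic solutions are
\[
f(z) = C z^{-(\lambda + 1)},
\]
and no power \(z^{\gamma}\) belongs to \(A^2(\mathbb{U})\), so \(C = 0\) and the point spectrum \(\sigma_p(\Gamma)\) is empty.
```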
That is the statement that comes from the spectral mapping theorem for strongly continuous groups. Of course, if that is the case, since the spectrum of the group lies inside the unit circle, it means that if I pick any λ in the spectrum of the generator, then e^{tλ} must have modulus one for every t. And that is only possible when the real part of λ is zero. If the real part is zero, then the spectrum of the generator, if you picture it, is contained in the imaginary axis. So that is how we get it.
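Putting the two inclusions together:

```latex
\[
\sigma(T_t) \subseteq \partial\mathbb{D} \quad (T_t \text{ an invertible isometry}), \qquad
e^{t \, \sigma(\Gamma)} \subseteq \sigma(T_t),
\]
so \(\lambda \in \sigma(\Gamma)\) forces \(|e^{t\lambda}| = 1\) for all \(t\), that is, \(\operatorname{Re}\lambda = 0\); hence
\[
\sigma(\Gamma) \subseteq i\mathbb{R}.
\]
```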