Okay, so welcome to this course on advanced complex analysis. What we intend to do in this course is to present a selection of topics from advanced complex analysis. Of course, we assume that you have already done a first course in complex analysis, covering the notion of an analytic function, Cauchy's theorem, Taylor series, Laurent series, singularities, and the residue theorem. For the topics to be presented we have chosen certain important, landmark theorems which are usually not stressed in a first course and whose proofs are not all that easy, but which are very interesting and of a very geometric nature; that is what we will try to cover.

So let me start with what we will be doing in the first few lectures, and that is looking at zeros of analytic functions. This is the broad topic for the first few lectures, and I am going to state some important theorems connected with this theme. First of all, let me remind you: when I say analytic function, I mean a function defined on a domain in the complex plane, where a domain is an open connected set. The set being open means that around every point of the set there is a small disc contained in the set; equivalently, the set is a union of discs. We always work with open sets because, if you want to study the properties of a function at a point, especially if you want to take a limit at that point, you should be able to approach the point from all directions, so you need a small disc surrounding the point on which the function is defined, so that you can actually take the limit.
So we always study functions only at points in a neighbourhood of which the function is defined, and that is the reason we study functions defined on open sets. We also study functions defined on connected sets: if a set is not connected, it falls into pieces, and a function on such a set is essentially a different function on each piece, so the study reduces to studying functions on a single piece. That is why we work with open connected sets, that is, with domains.

Now take a function defined on a domain in the complex plane, with complex values, so the codomain of the function is the complex numbers. If you remember from the first course, there are several ways of defining when the function is analytic at a point of the domain. The simplest and most common definition is that the function should be differentiable not only at that point but in a small disc surrounding it. We also use the word holomorphic, which is common in the literature, as a synonym for analytic, and we say a function is holomorphic or analytic on the whole domain if it is analytic at every point. The function being analytic at a point can be described in several other ways; one is the way I just told you, that the first derivative exists in a neighbourhood of the point.
The other way of defining analyticity at a point is to say that the function can be expressed as a convergent power series centred at that point, in a small disc surrounding it; if this happens at every point, we call the function analytic. So there is one definition saying the function is differentiable once in a small neighbourhood of each point, and another saying it is represented by a convergent power series centred at each point, and the relationship between the two is that they are equivalent. That is the great thing about complex-valued functions. A power series, as you will have learnt in the first course, is infinitely differentiable within its region of convergence; the region of convergence is always a disc centred at the point, with some boundary points possibly included or not, and within that disc the power series represents an analytic function which is differentiable not just once but infinitely many times. This is one of the striking features that differentiates real differentiable functions from complex differentiable functions.
If a real-valued function on an open interval of the real line is differentiable throughout the interval, there is no reason that the higher derivatives exist; there is not even a reason that the first derivative is continuous. Whereas if a complex-valued function of a complex variable is differentiable in a neighbourhood of a point, the amazing thing is that it becomes infinitely differentiable: the derivatives of all orders exist and they are all continuous. This is the power that one-time differentiability gives you: infinite differentiability in the neighbourhood. That is the characteristic feature of analytic functions. And of course the coefficients of the power series are the Taylor coefficients, which can be obtained using the Cauchy integral formula.

So you have this notion of an analytic function: either you define it as something locally given by a convergent power series, or as a function that is differentiable, once, everywhere. The usual way of checking that a function is analytic is via the so-called Cauchy-Riemann equations: you take the real and imaginary parts of the function, write down the Cauchy-Riemann equations, check that they are satisfied, check also that the first partial derivatives are continuous, and then conclude that the function is analytic.
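As a quick numerical illustration (my own sketch, not part of the lecture), one can check the Cauchy-Riemann equations u_x = v_y and u_y = -v_x by finite differences, here for f(z) = z², whose real and imaginary parts are u = x² - y² and v = 2xy:

```python
# Numerically check the Cauchy-Riemann equations u_x = v_y, u_y = -v_x
# for f(z) = z^2, with u = x^2 - y^2 and v = 2xy.
def f(x, y):
    z = complex(x, y) ** 2
    return z.real, z.imag

def partial(g, x, y, wrt, h=1e-6):
    # central finite difference of g with respect to x or y
    if wrt == 'x':
        return (g(x + h, y) - g(x - h, y)) / (2 * h)
    return (g(x, y + h) - g(x, y - h)) / (2 * h)

u = lambda x, y: f(x, y)[0]
v = lambda x, y: f(x, y)[1]

x0, y0 = 1.3, -0.7
ux = partial(u, x0, y0, 'x')   # should equal v_y
uy = partial(u, x0, y0, 'y')   # should equal -v_x
vx = partial(v, x0, y0, 'x')
vy = partial(v, x0, y0, 'y')
print(abs(ux - vy) < 1e-6, abs(uy + vx) < 1e-6)  # True True
```

The point and the function are arbitrary choices; any analytic f would satisfy the same check at any point of its domain.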
So there is also a way of checking analyticity using the Cauchy-Riemann equations. But the point is that we are interested in zeros of analytic functions, and the first important fact, which all of you should have seen in a first course, is that the zeros of an analytic function (not identically zero) are isolated. That means: given a zero of the function, there is a small disc surrounding that zero containing no other zeros. So distinct zeros can be separated from each other by small open discs centred at those zeros, and that is what we mean when we say the zeros are isolated.

Now, what is the point of looking at a zero of an analytic function? Take a function analytic at a point, with a zero at that point. There is a small disc around the point with no other zero in it, because zeros are isolated. If you invert the function on that disc, the reciprocal is defined everywhere except at the zero, and this gives rise to a pole at that point, which is one example of what is called a singularity. Analytic functions can have singular points, typically on the boundary of the region where the function is analytic, and again you would have studied singular points in the first course. One always concentrates on so-called isolated singularities, because the case of non-isolated singularities is far more complicated to analyse.
For example, take the function log z. It has several branches, and to define a branch of the logarithm you have to make a slit in the plane, for example along the negative real axis; then the whole negative real axis consists of points of singularity for that branch. This tells you that these singularities are not isolated, since they lie continuously along the negative real axis, but these are not the kind of singularities we are interested in. One always studies isolated singularities, and isolated singularities, if you recall, are of three types. The first is the removable singularity, which is essentially a non-singularity. For example, take the function sin z / z. If you try to substitute z = 0 directly, you get 0/0, which is not defined, but of course the limit of sin z / z as z tends to 0 is 1.
So if you define the function to take the value 1 at z = 0, this gives rise to an analytic function, and therefore the point z = 0 is what is called a removable singularity of f(z) = sin z / z. How this is reflected is in the power series expansion: if you take the power series for sin z and divide it by z, you see that you get no negative powers of z, which tells you that this is really a Taylor series and not a genuine Laurent series, and therefore not really a singularity. You will also see that if you take the power series for sin z at the origin, divide by z, and put z = 0, you get 1, which tells you that 1 is the value you should assign for the function to become analytic at the origin. So that is an isolated removable singularity.

Then come the so-called poles of a function, and poles are to be thought of as zeros of the denominator. The simplest example: given a point z0, look at the function 1/(z - z0)^n, where n is a positive integer. Then z0 is a zero of the denominator (z - z0)^n of this function, a zero of order n.
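The removable singularity of sin z / z can be seen numerically (my own sketch, not from the lecture): approaching 0 from any direction, the values tend to 1, the value that removes the singularity.

```python
import cmath

def f(z):
    # sin(z)/z away from 0; the limiting value 1 fills in the
    # removable singularity at z = 0
    return cmath.sin(z) / z if z != 0 else 1.0

# approach 0 from four different directions: the values all tend to 1
for d in [1, -1, 1j, -1j]:
    z = 1e-6 * d
    print(abs(f(z) - 1) < 1e-9)  # True each time
```

Since sin z / z = 1 - z²/6 + ..., the error at |z| = 10⁻⁶ is of order 10⁻¹³, well inside the tolerance.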
So 1/(z - z0)^n has a pole of order n at z0. The pole is basically a zero of the denominator, that is how you should think of it, and a pole is a genuine singularity: it is something you cannot tinker with to make the function analytic at that point. A worse kind of singularity is the essential singularity; for example, exp(1/z) has an essential singularity at z = 0. Both poles and essential singularities are bona fide singularities, whereas removable singularities are really non-singularities: you can always get rid of a removable singularity by redefining the function at the point, but you cannot get rid of a pole or an essential singularity.

Of course, you would also have learnt how to distinguish between a pole and an essential singularity, via Laurent's theorem, which is an analogue of, or you may even call it an extension of, Taylor's theorem. Taylor's theorem says that if a function is analytic at a point, then you can express it as a convergent power series centred at that point, converging pointwise to the given function in a good neighbourhood of the point. This is the theorem that tells you that once-differentiability implies infinite differentiability; it gives the equivalence of the seemingly weaker definition of analyticity, being once differentiable, with the stronger conclusion of being infinitely many times differentiable.
Laurent's theorem is a kind of extension of Taylor's theorem: it tells you that if you also allow negative powers, then you get a series representation valid in a deleted neighbourhood of the point, called the Laurent series, and for all you know the Laurent series may have negative powers of arbitrarily large order. So if z = z0 is an isolated singularity of f(z), where f is analytic and z0 is an isolated singular point, then we have a Laurent expansion

f(z) = a_0 + a_1 (z - z0) + a_2 (z - z0)^2 + ... + a_{-1}/(z - z0) + a_{-2}/(z - z0)^2 + ...

The part with non-negative powers is called the analytic part of the Laurent expansion, and then you get the negative powers with coefficients a_{-1}, a_{-2}, and so on (these are subscripts -1, -2, not "a minus 1"). This is called a Laurent series centred at z0, and the equality means that if you plug in a value of z in a small deleted disc surrounding z0, the series converges to the value of the function at that point. The point itself cannot be z0, because substituting z = z0 would mean dividing by 0 in all the negative-power terms. And the fact is that if z = z0 is a removable singularity, then all the negative coefficients are 0.
So in that case your Laurent expansion is actually a Taylor expansion, which is exactly what happens for sin z / z at z = 0. And among the important singularities, namely the poles and the essential singularities, you can distinguish the type of singularity by looking at the Laurent expansion: if you get infinitely many negative powers, the singularity is an isolated essential singularity, for example exp(1/z) at z = 0; if you get only finitely many negative terms, it is a pole, and the order of the pole equals the largest n for which a_{-n} is nonzero.

That is one way of distinguishing between a pole and an essential singularity. There is one more way, and that is by taking limits: take the limit of the function as the point tends to the singularity and ask whether the limit is infinity from all directions. Of course, this means you have to make sense of a limit being equal to infinity: the limit of a complex quantity is said to be infinity if the modulus of the quantity becomes arbitrarily large, no matter how you approach the limiting point.
So if the singularity z0 is such that |f(z)| goes to infinity as z tends to z0, no matter in which direction, then you say the limit of f(z) as z tends to z0 is infinity, and this is exactly the situation when z0 is a pole. What happens in the case of an essential singularity? The limit does not exist, in the sense that you get different limiting behaviour as you approach along different directions. For example, take exp(1/z) and try to calculate the limit approaching z = 0 along the positive real axis and along the negative real axis: you will get different values, and the fact that the directional limits disagree tells you the limit does not exist, which is precisely the condition for an essential singularity.

Now, the most important thing about an isolated singularity is the residue of the function there, which is by definition the coefficient a_{-1} of 1/(z - z0) in the Laurent expansion. So the residue of f(z) at z = z0 is a_{-1}. This is a very important value for the function because of the residue theorem, which tells you that 2πi times this quantity is what you get if you integrate the function over a simple closed curve, for example a circle, that surrounds this point and no other singularity.
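These two limiting behaviours can be sketched numerically (my own illustration, not from the lecture): 1/z has a pole at 0, so its modulus blows up from every direction, while exp(1/z) behaves completely differently along the two halves of the real axis.

```python
import cmath

# pole: |1/z| blows up the same way from every direction
for d in [1, -1, 1j, -1j]:
    z = 1e-8 * d
    assert abs(1 / z) > 1e7

# essential singularity: exp(1/z) along the positive real axis
# explodes, along the negative real axis it collapses to 0
t = 1e-2
from_right = cmath.exp(1 / t)     # roughly e^100, astronomically large
from_left = cmath.exp(1 / (-t))   # roughly e^-100, essentially zero
print(abs(from_right) > 1e40, abs(from_left) < 1e-40)  # True True
```

The disagreement of the two directional limits is exactly the failure of the limit that characterizes the essential singularity.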
For example, take a very simple case: g(z) = λ/(z - z0), where λ is any complex number; this is the simplest thing you can think of. Mark z0 on the complex plane and draw the circle |z - z0| = ρ around it. Obviously this function is analytic everywhere except at z = z0: the denominator vanishes only there, and z0 is a zero of order 1 of the denominator, so z0 is a simple pole (a pole of order 1 is called a simple pole; if the order is greater than 1, a multiple pole). If you calculate (1/2πi) times the integral of g(z) over this circle, which I will call γ, you end up with λ.

If you want to integrate over a contour, the method is to first parameterize the contour and then make a change of variable. Mind you, whenever you integrate a function over a contour, the variable lies on the contour, so you must write an equation for the contour, and that is called a parameterization of the contour. The parameterization of this circle is z = z0 + ρ e^(iθ), where θ varies from 0 to 2π, and if you write the integral out, as you certainly have done before, you will just get λ; let me recall it quickly.
So you get

(1/2πi) ∫₀^{2π} g(z0 + ρ e^(iθ)) dz,

where dz is obtained from the parameterization: z - z0 = ρ e^(iθ), and differentiating, dz = iρ e^(iθ) dθ. So the integral becomes

(1/2πi) ∫₀^{2π} (λ / (ρ e^(iθ))) · iρ e^(iθ) dθ = (1/2π) ∫₀^{2π} λ dθ = λ.

The ρ e^(iθ) cancels, the i cancels against the 2πi, and you are left with λ. So the moral of the story is: if I integrate this function over the small circle surrounding this point, which is a simple pole, I pick up λ. Indeed, if you write the Laurent expansion of λ/(z - z0) about z0, it is just λ/(z - z0) itself; it is already the Laurent expansion, a_{-1} is the coefficient of 1/(z - z0), namely λ, and that is exactly what shows up when you calculate (1/2πi) times the integral: the residue of the function.

This is the simplest illustration of the philosophy behind the residue theorem, which says that if you integrate around an isolated singularity, and there are no other singularities, then (1/2πi) times the integral gives you the residue. So, to be more precise, let me write down the residue theorem, which is the starting point, at least for our discussion.
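This computation can be checked numerically (my own sketch, not from the lecture): approximate the contour integral of λ/(z - z0) over the circle by a Riemann sum in θ, using exactly the parameterization above, and recover λ.

```python
import cmath

def contour_residue(g, z0, rho=0.5, n=20000):
    """Approximate (1/2*pi*i) * integral of g over |z - z0| = rho
    by a Riemann sum over the parameterization z = z0 + rho*e^(i theta)."""
    total = 0
    for k in range(n):
        theta = 2 * cmath.pi * k / n
        z = z0 + rho * cmath.exp(1j * theta)
        dz = 1j * rho * cmath.exp(1j * theta) * (2 * cmath.pi / n)
        total += g(z) * dz
    return total / (2j * cmath.pi)

lam, z0 = 3 - 2j, 1 + 1j   # arbitrary choices of lambda and z0
approx = contour_residue(lambda z: lam / (z - z0), z0)
print(abs(approx - lam) < 1e-6)  # True: the integral recovers lambda
```

For a smooth periodic integrand the equal-spacing Riemann sum converges very fast, so n = 20000 is far more than enough here.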
So suppose you have a nice contour, and a function f defined on a domain containing the contour and its interior, with several isolated singularities z1, z2, ..., zn inside. If you integrate the function over the contour, then

(1/2πi) ∮ f(z) dz = Σ_{i=1}^{n} Res_{z=zi} f(z).

This is the residue theorem. What I did above was simply the case of a single isolated singularity, where the integration yields the residue at that point; here you have several. And of course you should assume that there are no singularities on the contour over which you are integrating. The assumption is that the function is analytic in the interior and also on the boundary, and to say the function is analytic on the boundary means it is analytic in a small disc surrounding every point of the boundary, which means it is actually analytic in a bigger open set, a bigger domain, containing both the boundary and the interior.
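The several-singularities statement can also be sketched numerically (my own illustration, with an arbitrarily chosen rational function): f(z) = (2z + 1)/(z(z - 1)) has simple poles at 0 and 1 with residues -1 and 3, and a circle enclosing both picks up their sum, 2.

```python
import cmath

def contour_integral(f, center, rho, n=40000):
    # Riemann-sum approximation of (1/2*pi*i) * integral of f
    # over the circle |z - center| = rho
    total = 0
    for k in range(n):
        theta = 2 * cmath.pi * k / n
        z = center + rho * cmath.exp(1j * theta)
        dz = 1j * rho * cmath.exp(1j * theta) * (2 * cmath.pi / n)
        total += f(z) * dz
    return total / (2j * cmath.pi)

# f has simple poles at 0 (residue -1) and 1 (residue 3); the circle
# |z| = 3 encloses both, so the integral gives the sum -1 + 3 = 2
f = lambda z: (2 * z + 1) / (z * (z - 1))
print(abs(contour_integral(f, 0, 3) - 2) < 1e-6)  # True
```

The residues here are read off by hand: Res at 0 is (2·0 + 1)/(0 - 1) = -1, and Res at 1 is (2·1 + 1)/1 = 3.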
So this is the so-called residue theorem, and in the simplest case it reduces to the computation above. You can also see more: instead of taking just λ/(z - z0), suppose I take the full Laurent series and integrate it around such a contour. The first point is that integrating the whole series is the same as integrating term by term and then summing the results. This is correct because you can interchange integration and summation provided the series of functions converges uniformly, and it is a theorem that a Laurent series converges uniformly on any closed set, in particular on the contour, inside its region of convergence. Of course, whenever I say Laurent series, it lives on a deleted neighbourhood: you cannot substitute the centre itself, because the negative-power terms would involve division by 0. There is a similar theorem for power series: whenever a power series converges in a disc, it converges absolutely and uniformly on any closed disc inside that disc.
Because of this uniform convergence, I can compute the integral of the Laurent series term by term. Each term with a non-negative power contributes 0, by Cauchy's theorem: Cauchy's theorem says that if you integrate an analytic function over a simple closed curve, the integral vanishes, you get nothing. So the integrals of all those terms go away. The integral of the term a_{-1}/(z - z0) gives a_{-1}, of course. And the integrals of the remaining negative-power terms also go away, because 1/(z - z0)^n for n ≥ 2 has an antiderivative; for example, 1/(z - z0)² has the antiderivative -1/(z - z0). All the negative powers from the second onwards have antiderivatives, and it is a version of the fundamental theorem of calculus that whenever a function has an antiderivative, the integral is just the antiderivative evaluated at the final point minus the antiderivative evaluated at the initial point; but this is a closed curve, the final point is the same as the initial point, therefore you get 0.
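This term-by-term bookkeeping can be verified numerically (my own sketch, not from the lecture): every power (z - z0)^m with m ≠ -1 integrates to 0 over a circle around z0, and only the m = -1 term survives, contributing 2πi.

```python
import cmath

def circle_integral(f, z0, rho=1.0, n=20000):
    # Riemann-sum approximation of the integral of f over |z - z0| = rho
    total = 0
    for k in range(n):
        theta = 2 * cmath.pi * k / n
        z = z0 + rho * cmath.exp(1j * theta)
        total += f(z) * 1j * rho * cmath.exp(1j * theta) * (2 * cmath.pi / n)
    return total

z0 = 0.5j  # arbitrary centre
# powers m != -1, positive or negative, all integrate to 0
for m in [-3, -2, 0, 1, 2]:
    assert abs(circle_integral(lambda z: (z - z0) ** m, z0)) < 1e-8
# only the 1/(z - z0) term contributes, giving 2*pi*i
assert abs(circle_integral(lambda z: 1 / (z - z0), z0) - 2j * cmath.pi) < 1e-8
print("only the 1/(z - z0) term contributes")
```

This is exactly why integrating a Laurent series term by term picks out a_{-1} alone.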
So if you integrate term by term, the only thing you pick up is a_{-1}, and that is the proof of the residue theorem in the case of a single singularity. If you have several singularities, the result follows from Cauchy's theorem. The statement that the integral of an analytic function over a simple closed curve is 0 is the so-called simply connected version of Cauchy's theorem, where the region inside the curve has no holes. But there is a different version for a domain with an outer curve and several inner curves. In all these matters the orientation of the curves is very important: we always take curves oriented in the anticlockwise sense, the positive orientation, and if you reverse the orientation, the sign of the integral changes. If you apply Cauchy's theorem to the region that is the interior of the outer curve and the exterior of all the little inner curves, where the function is analytic, you get that the integral over that region's boundary is 0, which amounts to saying that the integral over the outer curve is the sum of the integrals over the inner curves. But each inner integral gives 2πi times the residue at the singularity it encloses, as I explained, and therefore you get the residue theorem; it comes from an application of Cauchy's theorem by literally this kind of argument.

Fine, so you have the residue theorem. Now, having told you this much, let me also tell you what kind of theorems we are going to prove. I do not know how many lectures it may take, but probably a few. So you see, the kind
of theorems we want to prove are theorems about zeros of analytic functions, so here is a glimpse of what we would like to prove. Probably some of you who have done a little further reading beyond the first course might have seen two of these theorems, but this is where I would like to start the course. The first theorem is the so-called argument principle, which is in some sense just a corollary of the residue theorem, via the logarithmic integral, if you remember it from the first course; but anyway, I will recall it. The second is Rouché's theorem, which you can prove using the argument principle or even otherwise. Then we would like to prove Hurwitz's theorem, and thereafter the open mapping theorem, and also the inverse function theorem. These are the first set of theorems we would like to prove. Probably you would have seen the first and the second, but they are the starting point, so I will make it a point to recall them.

So let me explain what these theorems are, starting with the argument principle. I will briefly describe the statements, and you will see that they are exactly the right theorems to come out of this topic of studying zeros of analytic functions. The argument principle concerns (1/2πi) times an integral over a simple closed curve, so let me first explain what that means. First of all, a curve is said to be closed if its initial point is the same as its terminal point. By a
curve we generally mean a continuous image in the complex plane of a closed interval, say the closed interval [0, 1] on the real line. For example, a circle is a curve because it is the image of the interval [0, 2π] under the map θ → z0 + ρ e^(iθ). A curve is closed if the initial point is the same as the final point, and the very fact that there is an initial point and a final point tells you that the curve is already oriented: there is a direction for the curve, given by the direction of increase of the parameter, the variable used to write the equation of the curve. A simple curve is one that does not cross itself, does not intersect itself: it is not something like a figure 8, or a more complicated curve where one segment twists and turns and comes back and hits the curve again; there are no such self-crossings. And since we are going to do integration, the curves we deal with will always be piecewise smooth: if you write down a parameterization for the curve, it is defined over some interval, and that interval can be divided into subintervals on each of which the parameterizing function is differentiable with continuous derivative. That is what is called a piecewise smooth curve.
For example, the function θ → z0 + ρ e^(iθ), with θ from 0 to 2π, is a continuously differentiable function of the parameter θ. But more generally, a curve need not be given by a single parameterization: it may break up into several pieces, each with a different parameterization; one piece may be part of a circle, another part of a parabola, a third part of a line. It does not matter; the point is that piecewise it has to be smooth. So whenever I say contour, or simple closed contour, it is always something piecewise smooth.

The argument principle, then, says the following. Suppose f is analytic on a domain containing a simple closed contour γ together with its whole interior region, except at finitely many points in the interior which are poles, and suppose f has no zeros and no poles on the contour itself. Then

(1/2πi) ∮_γ d log f(z),

which is called the logarithmic integral, gives the number of zeros minus the number of poles of f inside the region. That is basically the argument principle. Here d log f(z) means (f′(z)/f(z)) dz, and what you must understand is that wherever f is analytic, f′ is also analytic, because, as I told you, an analytic function is infinitely differentiable; so the quotient f′/f is analytic wherever f is, and the only problem is that the denominator might vanish.
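The logarithmic integral can be sketched numerically (my own illustration, with an arbitrarily chosen rational function): f(z) = (z - 0.5)²(z + 0.3)/(z - 0.2) has three zeros counted with multiplicity and one pole inside the unit circle, so the argument principle predicts Z - P = 3 - 1 = 2.

```python
import cmath

# f(z) = (z - 0.5)^2 (z + 0.3) / (z - 0.2); its logarithmic
# derivative f'/f, computed by hand from the factorization:
def dlog_f(z):
    return 2 / (z - 0.5) + 1 / (z + 0.3) - 1 / (z - 0.2)

# (1/2*pi*i) * integral of f'/f over the unit circle, by Riemann sum
n = 40000
total = 0
for k in range(n):
    theta = 2 * cmath.pi * k / n
    z = cmath.exp(1j * theta)
    total += dlog_f(z) * 1j * z * (2 * cmath.pi / n)
result = total / (2j * cmath.pi)
print(abs(result - 2) < 1e-6)  # True: zeros minus poles inside
```

Note how the factorization makes the claim about f′/f concrete: each zero of order m contributes a term m/(z - z0), and each simple pole contributes -1/(z - z0).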
So wherever f has a zero, the so-called logarithmic derivative f′/f will have a pole, and of course wherever f has a pole, f′/f will also have a pole. We assume that the only poles of this function come from the zeros of f inside the contour and the poles of f inside the contour, that there are only finitely many of each, and that none of them lie on the boundary: the boundary should be free of both zeros and poles. The argument principle then says that if you integrate the logarithmic derivative, you get the difference between the number of zeros and the number of poles. So that is the so-called argument principle; it is a kind of counting principle.

Now let me quickly tell you what these other theorems have to say; right now I am only giving an overview of the results, and we will go into them more deeply later. So what is Rouché's theorem? As I told you, this whole exercise is to study zeros of analytic functions; for example, you want to count the number of zeros inside a region bounded by, let us say, a simple closed curve. In general a zero of f will be a pole of f′/f, so you cannot avoid considering poles as well; that is why the argument principle counts both zeros and poles. In particular, if there are no poles, then you are counting just the number of zeros. And mind you, when you count zeros, every zero has to be counted with multiplicity. For example, if you take the function λ(z − z0), it has a zero of order one at z0, so the number of zeros is one if you take a simple closed curve enclosing z0, and zero if the curve does not enclose z0. But if you replace this by λ(z − z0)^m, then the number of zeros will be m: physically there is only one zero, at z0, but its order is m. The order is the order of vanishing, the number of times the factor (z − z0) appears. So whenever you count zeros or poles, you have to count them with multiplicities, and then the formula holds.

Then Rouché's theorem is something more. The philosophy of Rouché's theorem is this: you take an analytic function on a domain, and suppose you are interested in its zeros inside the region bounded by a simple closed curve. Rouché's theorem says that if you perturb the analytic function a little, the number of zeros does not change. That is, you add to the analytic function another analytic function which is small enough. Of course, there is nothing called smaller or bigger among complex numbers, because the complex numbers are not ordered; whenever we say smaller or bigger we always refer to the modulus. So what Rouché's theorem says is: take a function f(z) analytic in, say, a bounded region surrounded by a simple closed curve; then f(z) and f(z) + g(z) have the same number of zeros inside, provided g is smaller than f on the boundary, that is, |g(z)| < |f(z)| on the boundary contour. You think of adding g to f as a small perturbation. So let me write it in words: the number of zeros inside a simple closed contour is invariant under small perturbations, where invariant means it does not change. You take the analytic function and add to it an analytic function that is smaller on the boundary contour; even after adding it, the number of zeros is not going to change. The addition of another analytic function which is dominated by the given analytic function on the boundary is called a perturbation, if you want, and it is a small perturbation because what you are adding is, in modulus, strictly less than the modulus of the given function on the boundary. So this is Rouché's theorem.

One version of Rouché's theorem tells you the following: suppose you have two analytic functions f and g; how can you conclude that they have the same number of zeros? The answer is this: the triangle inequality always gives |f + g| ≤ |f| + |g|; now if on the boundary you get strict inequality, that is |f + g| < |f| + |g| on the boundary, then f and g have the same number of zeros inside. That is another avatar of Rouché's theorem: two analytic functions have the same number of zeros inside if the sum of their moduli strictly dominates the modulus of their sum on the boundary. So it also helps you to compare numbers of zeros.
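To make this concrete, here is a small numeric check (my own illustration, assuming Python with NumPy, not part of the lecture) of a classic use of Rouché's theorem: on |z| = 2 the term z^5 dominates 3z + 1, so the perturbed function z^5 + 3z + 1 has the same number of zeros inside |z| < 2 as z^5, namely five.

```python
import numpy as np

# On the boundary |z| = 2: |z^5| = 32, while |3z + 1| <= 3*2 + 1 = 7 < 32.
# So g(z) = 3z + 1 is a "small perturbation" of f(z) = z^5 there, and by
# Rouché f + g = z^5 + 3z + 1 has as many zeros in |z| < 2 as z^5 does:
# five (z^5 has a zero of order 5 at 0, counted with multiplicity).

# Check the domination |g| < |f| at many points of the boundary circle:
theta = np.linspace(0.0, 2 * np.pi, 1000, endpoint=False)
z = 2 * np.exp(1j * theta)
assert np.all(np.abs(3 * z + 1) < np.abs(z ** 5))

# Count the roots of z^5 + 3z + 1 inside |z| < 2 directly:
roots = np.roots([1, 0, 0, 0, 3, 1])  # coefficients of z^5 + 3z + 1
assert np.sum(np.abs(roots) < 2) == 5
```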
Then the third one is Hurwitz's theorem, which is again a very beautiful theorem. What it says is this: suppose you have a sequence of analytic functions converging to a given function on a domain, and assume that the convergence is uniform on every closed disc in that domain. This is called uniform convergence on compact subsets; the other word used in the literature is normal convergence. So suppose you have normal convergence, and suppose the limit function f has a zero of order N at a point z0; picture z0 as the point where the limit function f(z) vanishes. Now, some fundamental complex analysis tells you that because the convergence is uniform on compact subsets and each function in the sequence is analytic, f will also turn out to be analytic; that is an exercise you can easily try. In fact the derivatives will also converge to the derivatives of f, and the integrals will converge as well, all just because of normal convergence: the moment you have uniform convergence, integrals and derivatives behave well with respect to limits.
So the limit function is analytic, and suppose it has a zero of order N at the point z0. What happens is this: you can find a disc of small enough radius around z0 such that, beyond a certain stage, each fn has exactly N zeros inside that disc, counted with multiplicity, which means some of them may be multiple zeros. And the beautiful thing is that if you plot those N zeros and let n become larger and larger, these various zeros will slowly coalesce: they all tend to the point z0. So what it tells you is that when you take a nice limit of analytic functions, the zero of the limit is a limit of the same number of zeros of the functions in the sequence. Let me write it out: if f has a zero of order N at z0, then there exists ρ > 0 such that, for large n, fn has N zeros (with multiplicity) in |z − z0| < ρ, and these zeros converge to z0 as n tends to infinity. So what Hurwitz's theorem says is that the zero of the limit comes from the zeros of the functions you are taking the limit of: beyond a certain stage, it is those zeros that slowly coalesce together and give you the zero of the limit function.
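Here is a small numeric sketch of Hurwitz's theorem (my own illustration, assuming Python with NumPy): the sequence fn(z) = z^2 − 1/n^2 converges normally to f(z) = z^2, which has a zero of order 2 at 0, and the logarithmic integral confirms that fn has exactly 2 zeros, namely ±1/n, in a small disc around 0.

```python
import numpy as np

# Limit function f(z) = z^2: a zero of order N = 2 at z0 = 0.
# The sequence f_n(z) = z^2 - 1/n^2 converges to f normally
# (uniformly on every closed disc, since |f_n - f| = 1/n^2),
# and each f_n has the two simple zeros +1/n and -1/n.
rho = 0.1   # a small disc |z| < rho around the zero of the limit
n = 20      # a "large" index: the zeros +-1/n = +-0.05 lie in the disc

# Count the zeros of f_n in |z| < rho by the logarithmic integral
# (1/(2*pi*i)) * integral of f_n'(z)/f_n(z) dz, with f_n'(z) = 2z:
m_pts = 4000
theta = np.linspace(0.0, 2 * np.pi, m_pts, endpoint=False)
z = rho * np.exp(1j * theta)
dz = 1j * rho * np.exp(1j * theta) * (2 * np.pi / m_pts)
count = np.sum((2 * z) / (z ** 2 - 1.0 / n ** 2) * dz) / (2j * np.pi)

# Hurwitz: exactly N = 2 zeros in the disc, and they tend to 0 as n grows.
assert abs(count - 2) < 1e-6
assert abs(1.0 / n) < rho
```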
So this is Hurwitz's theorem. Then let me quickly tell you what the open mapping and inverse function theorems are. The open mapping theorem is a very beautiful theorem. It tells you that if you take an analytic function and a point where the derivative of the function is non-zero, then there is a neighbourhood of that point on which the function is an open mapping, which means it maps open sets to open sets. This is a very deep result, because it is rare in topology to get open maps. For example, a bijective continuous map need not be a homeomorphism; but if it is bijective, continuous, and open, then it is a homeomorphism, because openness of the map together with injectivity tells you that the inverse map is continuous. So an open map is as good as a homeomorphism, except that you need to know that it is injective; if you know it is injective, then it is a homeomorphism and you can invert the map. And all of this is true complex analytically as well: the only condition is that the derivative should not vanish at the point, and then in a neighbourhood of that point everything is beautiful, you have a local isomorphism. That is essentially the open mapping theorem. And let me quickly tell you about the inverse function theorem, which in a sense you can think of as another variant of the open mapping theorem. It says that whenever the derivative is non-zero at a point, there is a small neighbourhood on which you can invert the map, and the inverse function can be written using a Cauchy integral: there is an explicit integral formula for the inverse. I am not writing more details; we will go into them in the succeeding lectures.

There is in fact a more general version of the open mapping theorem which tells you that you do not even need the derivative to be non-vanishing: you only need a non-constant analytic function, and a non-constant analytic function always maps open sets to open sets. In particular, if the derivative is non-zero at a point, then in a neighbourhood of that point the function is actually a holomorphic isomorphism, an analytic isomorphism: it is injective onto its image, which is open, and the inverse function is also holomorphic, given by an explicit integral formula. That is what the inverse function theorem is. Now, all of these theorems are connected with zeros of analytic functions, and they can all be derived starting from the argument principle, which is essentially, I should say, the residue theorem applied not to f but to the logarithmic derivative of f. So the root of everything is the residue theorem. We will do this in the forthcoming lectures, so let me stop here.
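For what it is worth, that integral formula for the inverse can be sketched numerically as follows (my own illustration, assuming Python with NumPy; the formula used is the standard Cauchy-type inversion integral, and the function name is mine):

```python
import numpy as np

# If f is analytic and injective on the closed disc |z - c| <= r and
# w = f(zw) for some zw inside, the inverse is given by the integral
#   f^{-1}(w) = (1/(2*pi*i)) * integral over |z - c| = r of
#               z * f'(z) / (f(z) - w) dz.
def inverse_by_integral(f, fprime, w, c, r, n=4000):
    theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    z = c + r * np.exp(1j * theta)
    dz = 1j * r * np.exp(1j * theta) * (2 * np.pi / n)
    return np.sum(z * fprime(z) / (f(z) - w) * dz) / (2j * np.pi)

# f(z) = z^2 has f'(1) = 2 != 0 and is injective on |z - 1| <= 0.5
# (that disc lies in the right half plane). Since f(1.1) = 1.21,
# the formula should recover the point 1.1.
zw = inverse_by_integral(lambda z: z ** 2, lambda z: 2 * z,
                         w=1.21, c=1.0, r=0.5)
assert abs(zw - 1.1) < 1e-8
```

The integrand has a single simple pole inside the contour, at the preimage zw, and its residue there is exactly zw; that is why the integral reproduces the inverse.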