Good afternoon, everybody. Welcome to our series that we call the Celebration of Faculty Careers, which we instituted back in 2013 as part of the strategic planning for the college. Full professors, after seven years or every seven years, come and talk about their experiences and their research and teaching interests. This allows faculty, colleagues, students, and so on to learn about their journey, and once they do this they get an opportunity to meet with the dean and the department head to talk about their plans for the next seven years. We're very privileged today to have Professor Okan Ersoy give our colloquium. Very briefly: Professor Ersoy completed his PhD at UCLA. Before coming to Purdue he was an associate professor in the electrical engineering department of Boğaziçi University in Istanbul, Turkey (my alma mater, by the way), a researcher at the Center for Industrial Research in Oslo, Norway, and a visiting scientist at the University of California, San Diego. Since the fall of '85 he has been a faculty member at Purdue, and he's currently a professor in electrical and computer engineering. His research interests include diffractive optics, machine learning and pattern recognition, decision analytics, digital signal and image processing, and transform and time-frequency methods and their applications in various technologies. We're looking forward to hearing more from Professor Ersoy.

Thank you very much. Thanks for coming. As you see from the title, I have a broad scope of things to discuss. I'll start with diffractive optics and try to show how it connects to other areas, and I need to do a quick job as well, because there are 125 pages to cover. Diffractive optics is currently quite a popular direction of research, and there are very good theories behind what we try to do here.
We try to design diffractive optical elements to do a particular task, for example a lens or multiple lenses, and then we implement them with devices such as the scanning electron microscope. So the implementation issues are heavily tied to solid-state labs. And there are many topics, many applications here: imaging, superresolution, various types of tomography, confocal microscopy, and so on. So it's really a very rich area. Here we will talk about diffraction. Diffraction used to be a nuisance, but these days it became an integral part of technology, actually, through such devices. And there are newer techniques such as optical cryptography; I'll say a few words about that later on. I'll try to concentrate on topics that are really feeding from each other: diffractive optics, optimization, signal and image processing, machine learning, and transforms. Actually, transforms happen to be the core which relates all these areas together.

Well, we start with wave equations. I'll try to avoid showing you many equations; I'll just breeze through them. You end up with solutions which involve plane waves or spherical waves. These have direct counterparts in signal processing. For example, the plane wave solution: if you teach a circuits course, as I'm doing now, you would have exactly something like this. A plane wave involves k dot r, where k is the wave number vector and r is the spatial vector (x, y, z). You get this for a plane wave; for a spherical wave it's somewhat different, but in signal processing it would remind you of a chirp signal. And diffraction is simply what happens when, for example, light or any other wave passes through obstacles: it scatters and forms special patterns. There's a scalar diffraction theory that explains all this. I'll show you only the part which relates to the Fourier transform. At the bottom you're seeing the equation, and it can simplify to connect directly to the Fourier transform.
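As an aside for the reader, the spherical-wave-as-chirp analogy can be checked numerically. In the paraxial approximation a spherical wave has a quadratic phase, so its local spatial frequency grows linearly with position, exactly like a chirp. This is a minimal sketch; the wavelength and distance values are illustrative, not from the talk.

```python
import numpy as np

# Paraxial spherical-wave phase: phi(x) = k * x^2 / (2 z), a quadratic "chirp" phase
lam = 633e-9          # wavelength (HeNe laser, illustrative)
z = 0.1               # propagation distance in meters (illustrative)
k = 2 * np.pi / lam
x = np.linspace(-1e-3, 1e-3, 201)
dx = x[1] - x[0]
phase = k * x**2 / (2 * z)

# Local spatial frequency = (1/2pi) d(phase)/dx; for a chirp it is linear in x
nu = np.gradient(phase, dx) / (2 * np.pi)
expected = x / (lam * z)   # analytic local frequency of the paraxial spherical wave
```

The numerical derivative matches the linear law away from the endpoints, which is the signature of a chirp.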
In doing such studies there are typically three regions: the Fresnel region, the Fraunhofer region, and the very near field, which is very close to the source. The near field is of very heavy current interest now, because people are trying to develop new devices, and there the theory I'm discussing breaks down; you can no longer simply use the Fourier transform. These wave phenomena can be explained through the angular spectrum of plane waves. This is simply the Fourier transform representation of how a wave can be written, with the scalar part e to the j omega t factoring out, which is again what we do in, for example, circuits. We just look at the spatial variation by using the Helmholtz equation, leading to this result. And this can now be computed with the fast Fourier transform. This also generalizes into what is called the beam propagation method, which is very heavily used in areas where the material is changing, where the index of refraction is no longer constant. You do this in small steps; in each step you assume that the medium is constant. So that's also a workhorse in nonhomogeneous media.

The theory here corresponds really directly to experiments. If you do both experiments and simulations, you can be sure that the results would be almost the same. For example, you can get behavior like a sinc pattern. A Gaussian beam or laser beam would be like this, and then its propagation gives you a result like this, and so on. So this area is very rich in simulations which are very realistic. And there is a counterpart here, a ray theory: plane waves and spherical waves correspond to rays as well. There is a borderline at which wave theory and ray theory actually come together, and that's heavily used, for example, with waves in oil exploration. Well, typically we have lenses. Lenses are very important.
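The angular spectrum method just described can be sketched in a few lines: Fourier transform the field, multiply each plane-wave component by its propagation phase, and transform back. This is a toy 1-D version with illustrative parameters, not a production propagator; evanescent components are simply clamped, and the Gaussian beam is chosen so that its spectrum stays entirely in the propagating region.

```python
import numpy as np

def angular_spectrum_propagate(u0, dx, lam, z):
    """Propagate a 1-D scalar field u0 a distance z via the angular spectrum of plane waves."""
    n = u0.size
    kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)       # transverse wavenumbers
    k = 2 * np.pi / lam
    # longitudinal wavenumber of each plane-wave component (evanescent part clamped to 0)
    kz = np.sqrt(np.maximum(k**2 - kx**2, 0.0).astype(complex))
    return np.fft.ifft(np.fft.fft(u0) * np.exp(1j * kz * z))

# Illustrative Gaussian beam, finely sampled so its spectrum is purely propagating
lam, dx = 633e-9, 2e-6
x = (np.arange(1024) - 512) * dx
u0 = np.exp(-(x / 100e-6) ** 2)
u1 = angular_spectrum_propagate(u0, dx, lam, 1e-3)
u_back = angular_spectrum_propagate(u1, dx, lam, -1e-3)
```

Free-space propagation of purely propagating components is unitary, so the energy is conserved and propagating backwards recovers the input, which is a useful sanity check on any such simulator.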
Now, you see, you know them as imaging elements, but from a signal point of view they are transformers from a plane wave to a spherical wave and vice versa. So here is my favorite topic, in which I spent many years of research and development. We want to design diffractive optical elements to do certain tasks. Here's a Fourier transform arrangement; a lens is also a Fourier transformer in real time. Suppose we have a desired output plane: what should the DOE be? Now, we are going to manufacture the DOE, therefore it has to satisfy the manufacturing constraints. A typical constraint would be that it should be binary, for example binary phase. That would be similar to what happens when you read the newspaper: it looks continuous, but what it actually is is binary patterns. If you look under the microscope you will see it. And guess what? Our brains are Fourier transformers, so they fuse those patterns and we see a very nice image. So this and our brains are somewhat similar.

These are the things we use: a scanning electron microscope system, and a reactive ion etching system for generating phase variations and for measuring depth. Depth corresponds to phase control, which is very important; in light, controlling phase is almost everything. These are the solid-state lab devices used here, and this is the way you control depth. Then you can do multiple layers like this. So these things are pretty much part of technology now; they are done in various companies. And you're going to do this, for example, in an area like 2 by 2 millimeters, and you have to design and organize how you're going to do it. Those are the experimental conditions. Diffraction gratings can be made, for example, and we typically quantize them, though it can also be done continuously. Okay, that brings me to this topic. This is the first method I developed; it was very simple, but then it evolved into other things.
I'm talking about 1985 or so. In those years, these things were not studied yet, and the scanning electron microscope was never used for diffractive optical element implementation, because of considerable errors in the localization of bits and so on. So I developed this method. Again, I'll just breeze through the equations. I targeted zeros of phase, zero crossings, as people call them in signal processing. With zero crossings, you can make mistakes, but since you are targeting the very zero of the phase, your errors are quite often tolerable. Actually, I'll even claim that if you have big errors but on average you are close to the zero crossings, it will probably still work. So that was really awfully interesting, because by using this technique I was able to make a hologram. Diffractive optical elements are also known as computer-generated holograms. So this was the very first result. It's not showing very well, but on the left you're seeing 11 points on a line that were generated after implementing this hologram with a scanning electron microscope. So this was really neat: it showed that you can do this with a scanning electron microscope, and I believe this is the very first result doing that. Today it's routine to talk about scanning electron microscopes making DOEs.

Okay, then I need to show 3D. How do you show 3D? Well, I created the word LOVE with four letters. You see that the letters are approaching each other, because you're taking the image from a distance and you're seeing the perspective image. So that was a proof that it is indeed 3D. This was a slow algorithm, so we developed fast methods and so on. Here is a hologram generator; this is your binary pattern, you see. And here's another example, actually generated here: concentric circles. And it was working just fine. You see, these are sampled points, and whenever you sample, you get higher harmonics.
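Going back to the zero-crossing idea for a moment: the robustness being claimed has a simple signal-processing analogue. Zero crossings are invariant under any positive amplitude error, so even large gain fluctuations do not move them. A toy illustration (the phase function is arbitrary, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 2000)
phase = 2 * np.pi * (5 * x + 3 * x**2)    # an arbitrary smooth phase (illustrative)

def zero_crossings(s):
    # indices where the signal changes sign between consecutive samples
    return np.nonzero(np.diff(np.signbit(s).astype(int)))[0]

clean = np.sin(phase)
noisy_gain = 1.0 + 0.4 * rng.uniform(-1, 1, x.size)  # up to 40% amplitude error, but positive
noisy = noisy_gain * clean                           # sign pattern is unchanged
```

The crossing locations of the clean and the amplitude-corrupted signal are identical, which is the sense in which encoding information at zero crossings tolerates fabrication errors.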
In this type of work you get harmonic images due to sampling, but you also get harmonic images due to the nonlinear coding of the information. And that is a nuisance, because it means you are restricted to part of the space and the rest you can't use. I came up with the idea that if I use a reference wave, it's like a modulated carrier wave in communications. If you use something like a spherical wave very close to the hologram, which means the chirp signal is changing very fast, the symmetry properties are destroyed, so that you can get one image only. And indeed, that turned out to be the case. And here it is: this was the first experiment showing that when I did this, there was only the one letter, and that was it. No other image. So you can wonder what happened to all the energy going into the other harmonics. It turns into noise, which is not effective, because this is in 3D in space, and where the image forms by focusing, the noise is hardly important. And that actually goes against the sampling theorem. The sampling theorem is based on regular sampling; this essentially corresponds to irregular sampling destroying the harmonics.

Then I developed one more method, and this is kind of interesting: virtual holography. You see, we were using an expensive scanning electron microscope, the turnaround time was slow, and I wanted to be able to do fast experiments. So I asked the question: how about making a hologram that is actually large and imaging it down to something very small, a virtual hologram, which satisfies all the size properties so that you get the desired diffraction images? The question was, in the virtual hologram space you're not recording anything, so does light behave just as it does when it scatters from a real hologram through diffraction? For that I used a telescopic system: one large lens, and a second lens, typically a microscope objective with very short focal length. And then you get a demagnification through this, let's say M.
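Back to the point about irregular sampling destroying harmonics: a small numerical experiment makes it concrete. Regular sampling replicates a tone's spectral line at multiples of the sampling rate; jittering the sample instants scrambles the phases of those replicas so they collapse toward noise, while the fundamental survives. This is a toy sketch with illustrative parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
f0, fs, n = 5.0, 64.0, 64
t_reg = np.arange(n) / fs                        # regular sampling grid
t_jit = t_reg + rng.uniform(-0.4, 0.4, n) / fs   # jittered (irregular) sampling

def spectrum_at(t, s, f):
    # direct evaluation of the sampled signal's spectrum at one frequency f
    return abs(np.sum(s * np.exp(-2j * np.pi * f * t)))

s_reg = np.cos(2 * np.pi * f0 * t_reg)
s_jit = np.cos(2 * np.pi * f0 * t_jit)

fundamental_reg = spectrum_at(t_reg, s_reg, f0)
replica_reg = spectrum_at(t_reg, s_reg, f0 + fs)   # first harmonic image: full strength
fundamental_jit = spectrum_at(t_jit, s_jit, f0)    # fundamental: nearly untouched
replica_jit = spectrum_at(t_jit, s_jit, f0 + fs)   # harmonic image: heavily suppressed
```

With regular sampling the replica is exactly as strong as the fundamental; with jitter it drops sharply, which is the one-image-only effect in miniature.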
The very interesting thing here is that you're demagnifying by M laterally, but in the z direction the magnification is M squared. When you do that, you get this kind of geometry: everything is designed for the virtual hologram, but you're just imaging from the real hologram, which can be large. I figured that if this was going to work, I could take a really large version of a hologram and see what happens. I luckily found a large collimator like this in the lab, which was from World War II, interestingly, and I used it as a collimator, illuminating the whole page, and then a lens and a microscope objective for the telescopic system. And this was from a book, copied directly onto a transparency. A kinoform means a DOE implemented with phase control only; the amplitude is assumed to be constant. And here is the result: you're supposed to see a B, and that's what you are seeing here. And indeed, this was exciting, because it meant I could do all sorts of things.

I went back to the scanning electron microscope. Here is my first such result. Here's another one, part of a cube; then I generated the hologram side by side, showing different parts, and here's the whole cube. So it was working just fine. And this really meant quite a lot to me at the time. Because, you see, in this arrangement, if you focus on the virtual hologram, the dimensions there can be anything you like. For example, one millimeter on a side in the lateral direction, and much, much smaller in the z direction; you can do anything you like. And you can do that within a material. Today we are living in the nano space and nanotechnology, and what happens to material properties and material effects at very small dimensions is a huge concern in research today. Here we could do anything we like size-wise: nano, micro, whatever.
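The lateral-M, axial-M-squared relation is standard imaging optics and is easy to verify from the thin-lens equation: if the lateral magnification is m, the longitudinal (z) magnification has magnitude m squared. A minimal check, with illustrative focal length and object distance:

```python
# Thin-lens check that longitudinal (z) magnification is the square of the lateral one
def image_distance(u, f):
    # thin-lens equation 1/v = 1/f - 1/u (object distance u > f gives a real image)
    return 1.0 / (1.0 / f - 1.0 / u)

f = 0.05                  # focal length in meters (illustrative)
u = 0.30                  # object distance (illustrative)
du = 1e-6                 # small axial displacement of the object
v = image_distance(u, f)
m_lat = v / u                                        # lateral magnification
m_long = (image_distance(u + du, f) - v) / du        # longitudinal magnification, numerically
```

The sign of the longitudinal magnification is negative in this convention (the image moves opposite to the object), but its magnitude equals the lateral magnification squared, which is why a demagnification M in x and y becomes M squared along z in the virtual hologram.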
And we could have apertures; because you're imaging, they would overlap like this. Normally, diffraction theory just breaks down then. Here it doesn't, because everything is virtual: they don't affect each other. It just works. So this was really exciting. It also meant you could have very simple equipment, do fancy experiments, and publish papers. Did I do that? No, I stopped here, but it's certainly possible.

Then, you see, the design has a lot to do with optimization. For example, you are allowing the DOE to take only binary values, one or zero, or one or minus one, and when the light hits this and propagates, it is supposed to do its job. So it really is an optimization problem. So we came up with this technique, the iterative interlacing approach. Instead of trying to design the whole thing in one shot, we divided it into two interlaced sets, crosses and circles. We optimize with half of it and get a result; there's error, and then we switch roles: we design with the other half to reduce the error, and we keep switching back and forth. This is somewhat similar to coordinate descent in optimization theory, but typically those things are discussed with just one variable changing while the others are kept constant, or a small number of variables changing. Here I'm talking about a 1000 by 1000 matrix, for example, with half of it changing while the other half is waiting. So we go back and forth like this, and something else also becomes very important: this is one pattern, this is another pattern, and one can think of all sorts of possibilities, and they do affect the results. And this also worked very well. Here is a binary DOE. We were working at the time with a cat brain image, and these were actually experimental results. One problem with laser light is that there's a lot of speckle, so you get noisy results.
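The interlacing idea can be sketched on a toy problem. This is not the published algorithm, just the flavor: a binary-phase element is split into two interlaced subsets (a checkerboard here, standing in for the crosses and circles), and each subset is optimized in turn by greedy flips while the other is held fixed. The target pattern and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 16
target = np.zeros((n, n))
target[6:10, 6:10] = 1.0                                # desired far-field amplitude (toy)
target *= np.sqrt(n * n / np.sum(target**2))            # match the energy of a unit-modulus DOE

def err(h):
    # squared error between achieved and desired far-field amplitudes
    return np.sum((np.abs(np.fft.fft2(h, norm="ortho")) - target) ** 2)

h = rng.choice([-1.0, 1.0], size=(n, n))                # binary-phase DOE (+1 / -1)
ii, jj = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
masks = [(ii + jj) % 2 == 0, (ii + jj) % 2 == 1]        # two interlaced subsets

e_cur = err(h)
e_init = e_cur
for sweep in range(4):
    for mask in masks:                                  # optimize one subset, hold the other
        for i, j in zip(*np.nonzero(mask)):
            h[i, j] = -h[i, j]                          # try flipping one binary pixel
            e_new = err(h)
            if e_new < e_cur:
                e_cur = e_new                           # keep the flip
            else:
                h[i, j] = -h[i, j]                      # undo it
final_err = e_cur
```

Alternating between the two interlaced halves steadily reduces the error, which is the coordinate-descent-like behavior described above, just with half the matrix updated per pass instead of one variable.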
With this approach, we were achieving considerably better reconstruction accuracy than otherwise. And it actually touches on local optimization problems, which I worked on later. Then we developed this further into a much faster method; we called it ODIFIT. I'll show you some results. Here is an image reconstruction. You see, you always get a second image, due to the symmetry properties of the space. Okay. Then we used the same thing in dense wavelength division multiplexing in optical communications. In dense wavelength division multiplexing, you have different wavelengths essentially being focused to different locations. But the harmonic issue is again there: you have one focused beam here, and it repeats; they all repeat. As a result, you are restricted in terms of space, in how many channels you can have, in other words how many wavelengths you can have. So here we used the one-image-only concept. Now we are in integrated optics, and you have to design these properly. So we used what we called irregularly sampled zero crossings, and here are the computer-aided design schematics. This was a serious effort, and indeed we generated 180 channels without any problem; normally you're restricted to maybe 60. And this can be done also in 3D. Interestingly, it is reminiscent of virtual holography, which means you can do this in materials with any kind of constraints by using the virtual holography concept.

And that brings us to another technology: volume Fresnel zone plates. Zone plates are lenses made with circles like this. We used to have them; we might still have them here. They are large like this, and they are essentially diffractive optical lenses. So if you could do this in a volume, it would be very interesting. At that time an NSF proposal came through, so that I had access to femtosecond laser beams in the lab, and that's what we did. This was a PhD effort, so a lot of experimental work here.
The idea was to do this within glass, in 3D. Again, the uncertainties were serious: can you really do it in the volume? So we did a lot of studies of how the index of refraction changes when the focused beam hits a point in the glass. And here you're seeing different layers generating zone plates in the glass. We worked out the theory, and here are some zone plates. Again, we were targeting the zeros of phase so that errors don't kill us. And indeed, we did efficiency measurements, real experiments, and it worked very well. As you make these zone plates one after another in a volume, the efficiency shoots up, even though we have a restricted number of focus points; it goes up to something like 78%. In volume holography it's known that you can achieve 100% efficiency if everything is done well, and here we observed a similar phenomenon within glass. So there are these layers. You see, one thing I'm trying to tell you in this seminar is that you can pick up these ideas and apply them in other places. For me, in the past 18 years or so, that area has been machine learning, and I'm going to show some structures. The kind of things I was thinking, probably motivated by these volume effects, namely having many layers and that kind of thing, actually became quite popular through what is called deep learning today. I'm going to come back to that later. Okay, so that worked very well; we actually got a patent on this and some publications. It was, again, the first time something like this was done within glass.

Okay, that brings me to my studies, and my colleagues' studies, on Fourier-related transforms. Throughout this optical work I constantly used the Fourier transform, so I was getting more and more curious, and I started looking into it more seriously. Typically, the Fourier transform we deal with is the complex formulation, and that's it. But actually, you can talk about a real formulation versus the complex formulation.
And then you can generalize it. That's really what I started looking into. But my motivation here was kind of different. I'll talk about a two-stage representation of the discrete Fourier transform matrix, from which I was trying to develop some fast algorithms. That was interesting; I can come back to it later. But then it turned into simpler things, almost at the level of the sophomore year of studies that we do here. Here is the real Fourier transform. You can actually write the real Fourier transform as a real orthonormal transform like this. Here I'm showing you the continuous version; the discrete version is similar. At this point you can say, so what? This is perhaps not a new thing. But writing this as a transform with orthonormal properties is somehow not stressed in the literature. And it turns out to be interesting, because this evolves into all sorts of real transforms, such as the discrete cosine transform, which is heavily used in industry today. From that point of view it's also interesting. Here you're seeing an example of how, for quasi-monochromatic waves, you can write the Fourier representation of the wave field. You use the analytic signal (that should be a j here); the analytic signal is complex, and the Hilbert transform appears, which can itself be written as another real Fourier transform. It's very concise. If you do the same thing with the complex version of the Fourier transform, it is considerably more complicated, but they both do the same thing. But my selling point to you will be the fast algorithms. Fast versions, the FFTs which are routinely used, can be developed with the real discrete Fourier transform. That has no redundant operations, whereas the complex FFT has twice as many in 1-D, four times as many in 2-D, and so on.
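The real-DFT-as-orthonormal-transform point can be made concrete. One common real basis for even length N consists of the constant vector, cosine and sine pairs, and the alternating vector; the exact normalization conventions vary by author, so take this as one illustrative construction rather than the specific one from the talk.

```python
import numpy as np

def rdft_matrix(n):
    """One orthonormal real Fourier basis: constant, cosine/sine pairs, alternating row."""
    rows = [np.full(n, 1 / np.sqrt(n))]                    # DC row
    m = np.arange(n)
    for k in range(1, (n + 1) // 2):
        rows.append(np.sqrt(2 / n) * np.cos(2 * np.pi * k * m / n))
        rows.append(np.sqrt(2 / n) * np.sin(2 * np.pi * k * m / n))
    if n % 2 == 0:
        rows.append((-1.0) ** m / np.sqrt(n))              # Nyquist (alternating) row
    return np.array(rows)

B = rdft_matrix(8)
x = np.arange(8.0)
```

Because the matrix is orthonormal, the inverse is just the transpose, and everything stays in real arithmetic, which is the "no redundant operations" point.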
You can get around that, but you have to use special algorithms, and most engineers don't bother. Okay. And here is the complex Fourier transform as we typically use it today. If you're teaching a circuits course, as I'm doing now, you really appreciate how powerful it is. You have phasors: you factor out the exponential e to the j omega t term, it leaves you with the phasor, and then you're in algebra. Everything is algebra. What we do with impedance and so on is all algebra. You define power, reactive power, average power; it takes care of all the power considerations in resistors, capacitors, inductors, and so on. Extremely powerful. My concern here is not to destroy this or revolt against it; that would be crazy. But from a computational point of view, I'll show you some things, and some additional things, to be able to talk about the real version.

So there's a discrete version. I published, in 1985, a very simple paper on the real version of the DFT, and it is like this. It turns out there are four generalized types; if you analyze this, everything is done with integers now, with these additional one-halves. And this actually leads to some other real transforms which are typically used in today's technology. You can start with the discrete Hartley transform. Ronald Bracewell, a very famous man, did a lot of work in Fourier theory, and he's very well known for the Fourier slice theorem, which is typically used in tomography, astronomy, and so on. He came up with this at Stanford: the discrete Hartley transform. They said, hey, this is real, versus the discrete Fourier transform, which is complex. They got a patent on it, and Stanford was saying, the DFT is dead, long live Hartley, and so on. I met him at a conference and we were discussing this, and I said, well, there is also the real discrete Fourier transform. He was silent. And that was it.
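For the reader, the discrete Hartley transform mentioned here is the real transform built from the cas kernel, cas = cos + sin, and it is its own inverse up to a factor of N. A brute-force sketch (matrix multiplication rather than a fast algorithm):

```python
import numpy as np

def dht(x):
    """Discrete Hartley transform: real input, real output, via cas = cos + sin."""
    n = x.size
    k = np.arange(n)
    arg = 2 * np.pi * np.outer(k, k) / n
    cas = np.cos(arg) + np.sin(arg)
    return cas @ x

rng = np.random.default_rng(3)
x = rng.standard_normal(16)
X = dht(x)
x_back = dht(X) / 16      # the DHT is its own inverse up to a factor of 1/N
```

Everything stays real throughout, which was the selling point of the Hartley transform over the complex DFT.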
But Ron Bracewell was, I'm quite sure, helpful when I was going through my promotion procedure; I think he gave me a good recommendation. But anyway, the discrete Hartley transform. You can say, so what, because there is the real discrete Fourier transform, and in the continuous domain the same is true: the continuous Hartley transform versus the continuous Fourier transform. So that's the story behind this. But then there is the discrete cosine transform. This is a very heavily utilized transform in image compression; JPEG images, for example, are typically based on it. They make VLSI devices with these, and in order to make VLSI devices you need to simplify this as much as you can. Actually, these days I'm told that the cosines are being approximated by some integer version, and that's a very interesting thing, integers getting in here. Then there is the scrambled real discrete Fourier transform, which we developed here. It is simpler than the DCT in terms of operations, but has about the same performance in image compression. And the discrete cosine-III transform actually worked better in image coding than the DCT; we published a paper about it, and that's about it. And then we have the complex discrete Fourier transform. You see, by moving like this, you can incorporate all of these things in one common framework.

What makes these transforms important, for convolution and so on, is that you are essentially diagonalizing matrices. And there are important matrices: correlation matrices, matrices coming from solving partial differential equations, and so on. Here are some of the most important ones. A transform is important if it is the eigenvector solution, and for these important matrices you're seeing the corresponding transforms which diagonalize them, or give the eigenvalue solution. F1, for example, would stand for the complex DFT, which does circular convolution, and circular convolution is what you do when you want to compute linear convolution fast. Then you use F1.
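To illustrate why the DCT earns its place in image compression: the orthonormal DCT-II concentrates the energy of a smooth signal into a few low-frequency coefficients, so most coefficients can be discarded with little error. A minimal sketch; the test signal and truncation point are illustrative.

```python
import numpy as np

def dct2_matrix(n):
    """Orthonormal DCT-II matrix, the transform family behind JPEG-style compression."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] *= 1 / np.sqrt(2)                  # special scaling of the DC row
    return np.sqrt(2 / n) * c

n = 16
C = dct2_matrix(n)
x = np.linspace(0, 1, n) ** 2                  # a smooth signal compresses well
coeffs = C @ x
kept = coeffs.copy()
kept[4:] = 0                                   # keep only the 4 lowest-frequency coefficients
x_rec = C.T @ kept                             # inverse transform is the transpose
rel_err = np.linalg.norm(x - x_rec) / np.linalg.norm(x)
```

Keeping a quarter of the coefficients reconstructs the smooth signal with only a few percent error, which is the energy-compaction property that compression standards exploit.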
But then, is it necessary? That's a question I'll come back to. FFTs are very important. A building block is a complex multiplication, which can be written like this, called a butterfly operation, or a Givens plane rotation. So you factorize, and at any stage there are N over 2 plane rotations, and that's it. But in the literature you would typically see the complex FFT algorithms heavily described. There is a corresponding real FFT for all cases. Therefore, we can do it with the real FFT without any redundant operations, rather than with the complex FFT with double the number of operations.

I'll come up with two more arguments. Actually, I did this over the weekend; I had not thought of it before. Suppose we have a linear system with real signals, so we transform x into y. If I write x in terms of its Fourier representation, as x of t equal to that integral, then I can compute the amplitude and phase by the RDFT, the real discrete Fourier transform, and then get into the phasors. So this is one component. The system is linear, and I write the component as a phasor. We know that it leads to another phasor, with amplitude and phase. Then we move from the phasors back to the real world by doing this, and we integrate to get the output. You see, there is no complex Fourier transform here; thereby you don't need the complex DFT. I hope it's correct; I did this last night. Okay. And similarly, this is true with linear convolution. For linear convolution we typically use circular convolution, because if you have linear convolution, you can double the size with zero padding and then do circular convolution, and that is done by the complex DFT. Circular correlation, on the other hand, is slightly different: in that case the diagonalizing transform becomes the RDFT with a special angle that you're seeing here. But we can always take a circular convolution and convert it into a circular correlation by inverting the signal x.
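The convolution and correlation identities in this part of the talk are easy to check numerically (here with the complex FFT for brevity, the standard way these are computed):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_normal(8)
h = rng.standard_normal(8)

# Linear convolution via circular convolution: zero-pad both to length >= 8 + 8 - 1
n = 16
lin_fft = np.real(np.fft.ifft(np.fft.fft(x, n) * np.fft.fft(h, n)))[: 8 + 8 - 1]
lin_direct = np.convolve(x, h)

# Circular correlation via the DFT: conjugate one spectrum instead of inverting the signal
corr_fft = np.real(np.fft.ifft(np.conj(np.fft.fft(x)) * np.fft.fft(h)))
corr_direct = np.array([np.sum(x * np.roll(h, -m)) for m in range(8)])
```

Zero padding turns the circular product into a linear one, and conjugating one spectrum turns convolution into correlation, the same interchange described above via inverting the signal x.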
And then we are back to circular correlation, which can be done by the RDFT. So if this is correct (I did it last night), again, you can do the whole thing with the real fast Fourier transform. Okay. So I'm out of it now; that was the controversial part. Fourier transforms are unbelievable. They get into integers, they get into polynomials. For a while, polynomials were popular for fast computation with images, for example, when you're doing Fourier transforms and the like. It turns out that you can do this with the generalized RDFT. We had a paper on this, but I won't dwell on it. I'm just trying to show you that polynomials are part of Fourier theory, provided that you compute things modulo another polynomial. With integers, same thing: you do things with integers modulo a prime integer. Then integers have trigonometric-function-like properties, like sines and cosines. For a while those were popular, and I just wanted to mention them to you.

Okay. Then I was trying to get into machine learning. How do you jump into something without knowing anything about it? Well, I knew something about fast transforms. I showed you that FFTs are based on stages with butterfly or rotation operations, and there are permutations (P stands for permutations). So how can we generalize this so that we can generate fast transforms, plus maybe learn new transforms? For a while that was an effort, and here's an example. This is supposed to do the RDFT of size eight with rotation operations. It looks just like an FFT. But guess what? It was found by the machine, and it's like none of the well-known FFT methods. Machines finding things: of course, that's popular right now, robots, language learning, and so on. So here's my first such observation. Okay. All the things I mentioned to you were not my major motivation. This was my major motivation.
While I was working with holography, I hit upon this: representing sines and cosines in terms of a decomposition like this. These are very simple functions with cosine and sine properties; I'll show you some examples. And they are such that they constitute circular convolutions. But to get there, I had to invert an infinite series, and I couldn't do it. One night I couldn't sleep at all; towards morning, I found it. And it led to this. I was so excited. Sometime later I discovered that it was a version of Möbius inversion from number theory, you know, several hundred years old. But anyway, it served the purpose. And here is the representation. The preprocessing stage, which corresponds to those Z's, can be extremely simple. This is the complex version; we also have the real version. So you do this, and it's a transform by itself, and it does qualify as a transform. It can be orthogonal; some versions are not orthogonal, but they turned out to be very useful in vision applications. And the following stage is made up of circular convolutions, so it can be computed very fast. So DFTs are represented in terms of a preprocessing transform times these convolutional block matrices. For example, we tried this in computer vision; we were trying to recognize which object it is, comparing the regular DFT versus what we call the discrete rectangular wave transform, which is that matrix I showed you with very simple elements. It really worked better; it worked better in vision as a feature extractor. Here's another version. In this version, for example, the preprocessing stage is orthonormal, so you can generate orthonormal transforms out of it. That would remind you perhaps of the Haar or Walsh-Hadamard transforms. It's none of those, but it's directly related to the discrete Fourier transform. Okay, I'm really rushing; I think I'm doing well in that sense, because I've reached machine learning.
I started noticing that the kinds of things I was doing, or we were doing, in diffractive optical elements as well as signal processing had to do with neural networks. I'm talking about the early days, 1988 or so. And so in the past 18 years I got heavily into machine learning. I developed two courses: an introduction to neural networks, and another one at the undergraduate level. It's mainly a master's course, actually, in a study abroad program, as one course from us, called Introduction to Machine Learning and Pattern Recognition. We were able to do this for 10 years, and this coming May it will hopefully be repeated in Rome, if all goes well. It's been doing well, getting more popular every year. And so that's my sophomore-level course.

So we started developing algorithms. And I believe some of the algorithms we developed, which work really well, were really motivated by what I knew from diffractive optics. For example, this first case is called parallel, self-organizing, hierarchical neural networks. What is happening here is that you generate one stage, the very top stage; you do some error checking, and then the second stage tries to correct that error, and so on. Now I'd like to remind you of the IIT method for diffractive optics: same idea. Well, you can very well say, aren't these common knowledge? In some areas they are, but for me they were kind of new, and I was getting my concepts from there. In contrast to this, this is a multistage network, for example learned by the backpropagation algorithm. So you can contrast the two. But in the top one, you could actually make each layer also like this; there's nothing against that. So you can grow both in the horizontal direction and the vertical direction.
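The staged error-correction idea (one stage learns, a later stage is trained on what the first stage got wrong) can be sketched in a least-squares setting. This is not the published network, just the flavor: stage 2 fits only the residual left behind by stage 1, and the combined output improves on stage 1 alone. All data and bases here are illustrative.

```python
import numpy as np

t = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * t) + 0.3 * t            # target the stages must learn jointly

# Stage 1: least-squares fit with a deliberately weak basis (constant + line)
B1 = np.stack([np.ones_like(t), t], axis=1)
w1, *_ = np.linalg.lstsq(B1, y, rcond=None)
resid = y - B1 @ w1                            # stage-1 error, handed to stage 2

# Stage 2: a second model trained only on the stage-1 residual
B2 = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)], axis=1)
w2, *_ = np.linalg.lstsq(B2, resid, rcond=None)

err1 = np.linalg.norm(resid)                   # error after stage 1 alone
err2 = np.linalg.norm(y - B1 @ w1 - B2 @ w2)   # error of the combined two-stage output
```

Each stage only has to clean up what the previous stage left behind, which is the same division of labor as in the interlacing method for DOE design.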
And I can tell you that that's really what's being done now in deep learning studies: growing both this way and that way is what Google research is advocating. And deep learning, I mean moving in this direction, reminds me of the volume effects I saw in holography. We also developed a consensual version of this. Suppose you go to several doctors, and then they reach a consensus about your condition; it's the same concept here. We generate a number of modules, and then to reach the final result we take a consensus: consensual networks. These machine learning algorithms, even in their early stages back then, are well known today. Okay. And then, you know, he got a prize here seven years ago, you might remember his name: Benediktsson. He came to me and we worked together, and this was the paper we published in July 1990. Here we did the same thing, but we applied it to remote sensing. In 1990 this was a no-no; no machine learning in remote sensing. But we did it, and we published. Today it's very mainstream to do machine learning in remote sensing research. So this worked well, and Benediktsson himself is the proof: he really skyrocketed. He went back to his home in Iceland, and now he's the president of the university there. He was, and probably still is, chief editor of the IEEE Transactions on Geoscience and Remote Sensing. And this was the work done in relation to that. Okay. So I talked about 18 years. I never left one thing behind; this is my strength as well as my weakness, I can't just leave things behind. My work with diffractive optics actually continued all the way. Every now and then I have been teaching a course on that topic, thanks to Purdue, and machine learning, signal processing and transforms I kept going along with as well. But in machine learning, in the last 18 years, we did quite a lot of work. For example, with support vector machines we developed some new algorithms.
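The consensus idea can be demonstrated with a classic back-of-the-envelope simulation (the numbers here are illustrative choices, not from the talk): several independent modules, each only moderately accurate, become much more reliable when a majority vote is taken over their outputs.

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples, n_experts, p_error = 5000, 5, 0.3

truth = rng.integers(0, 2, size=n_samples)            # true binary labels
# each "expert" module independently flips the true label with prob p_error
flips = rng.random((n_experts, n_samples)) < p_error
votes = np.where(flips, 1 - truth, truth)

individual_acc = (votes == truth).mean(axis=1)        # around 0.70 each
consensus = (votes.sum(axis=0) > n_experts // 2).astype(int)  # majority vote
consensus_acc = (consensus == truth).mean()           # around 0.84 expected
```

With five independent 70%-accurate voters, the majority is wrong only when three or more err at once, which works out to roughly 16% of the time; the gain comes entirely from the independence of the modules' errors.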
I worked with the School of Management, with Professor Moskowitz especially, and with Professor Altinkemer, and the next three papers you're seeing are a result of that. Then you're going to see papers quite heavily related to remote sensing. These are some of the methods and applications that I chose to show you. So today, machine learning has become a common phrase with almost anybody. Companies like Google and Microsoft are relying heavily on it to sell their next-generation products; Google especially does that through its Nexus phones, and so does Apple with the iPhone. Is it because these methods are so powerful? I think that's not necessarily the reason. What is happening is that these companies have huge resources in the cloud, and the power of machine learning is that you can do it nonstop; machines don't sleep. All the information is fed into the cloud, there is tremendous power there in terms of servers, and they constantly learn and feed the information back to your phone. So your phones are getting smarter and smarter. It's really the combination of the cloud plus learning algorithms, which keep on improving all the time, nonstop. We can't do that, right? Machines can. That's really the major reason why we are seeing all these new intelligent products these days, under names like artificial intelligence, remote sensing, machine learning and so on. That's really what is happening. It's not only interesting, it's also frightening, of course. Okay, there are some more things here; I think I'm coming close to finishing, and I moved faster than I thought. We also did some work on evolutionary algorithms. Evolutionary algorithms are optimization algorithms with nature in the background, for example particle swarm optimization. Some researchers at IUPUI in Indianapolis, one from engineering and one from psychology, got together and asked: why do birds flock together?
They analyzed that and came up with an optimization algorithm which is extremely simple but very powerful. Evolutionary algorithms are like that. The starting point was probably the genetic algorithm, developed at the University of Michigan, which looks at chromosomes and genetics: how chromosomes change as a function of mutations, crossovers and so on. It's all based on competition, and we are a result of that. In genetic algorithms it's typically like this: there's a mother and a father, they get together and they make two children. Then there's a competition with respect to some function to be optimized. The population can't grow, so you need to get rid of some of them; you choose the very best and repeat the cycle. This goes on, and it works very well; it does global optimization. When it first came up people were laughing. Nobody is laughing now; they solve very fancy problems with this type of approach. Then I had a postdoc, not a student but a professor who came to us for a year as a visitor, and he had an interesting idea. Why make only two children? Nature is not like that; typically you make more. But if you make more children, you can't let the population grow, so you have to kill more individuals too. That's exactly what he did, and guess what, the algorithm worked considerably better. He published a number of papers on that, based on making more children and removing more of the population. That sounds very good, right? For real life. And that contrasts with my previous work in 1975, called a physical model of population growth. In those years there was a lot of concern about how the population was going up, and there still is. So I wanted to study that and come up with a model. Of course, it's not my area at all; what I did know at the time was lasers. So I used some laser theory to develop this model.
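The two-children-versus-more-children scheme described above maps onto what the evolution strategies literature calls (mu + lambda) selection: generate many offspring, then keep only the best individuals from parents plus children. Here is a small sketch minimizing a simple test function; the function, population sizes, and mutation scale are illustrative choices of mine, not details from the talk.

```python
import numpy as np

def sphere(x):
    # simple benchmark to minimize; global optimum is f(0) = 0
    return np.sum(x * x, axis=-1)

def genetic_minimize(f, dim=3, pop_size=30, n_children=60,
                     sigma=0.1, generations=150, seed=1):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))
    for _ in range(generations):
        # pair random parents, crossover by averaging, then mutate
        mothers = pop[rng.integers(pop_size, size=n_children)]
        fathers = pop[rng.integers(pop_size, size=n_children)]
        children = ((mothers + fathers) / 2.0
                    + rng.normal(0.0, sigma, (n_children, dim)))
        # (mu + lambda) selection: parents and children compete,
        # only the best pop_size survive ("make more, kill more")
        merged = np.vstack([pop, children])
        pop = merged[np.argsort(f(merged))[:pop_size]]
    return pop[0], f(pop[0])

best, best_fit = genetic_minimize(sphere)
```

Because parents compete alongside their children, the best solution found can never get worse from one generation to the next, and raising `n_children` relative to `pop_size` is exactly the "more children, more deaths" variant the visitor explored.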
It grows exponentially first, and then it becomes linear. Then I applied it to a number of countries, and it is interesting that those predictions still hold today. But I wasn't generating more children; I was advocating the opposite. So I want to tell you that that was also motivated by laser work. That's pretty much it, and here is my conclusion: the transform is what I liked, and it showed me the way whenever I was short of understanding things. Okay, thank you very much. Pardon me? I'm sure you'll answer some questions. Certainly, certainly. I would love to know: can you do that with the human brain? Sure, as long as you have the image. Trump's brain? Definitely. What would you get out of it? Probably not much; I don't want to insult the president. No, no, I wasn't thinking about that. What could you get from a human brain? You would get the image. These are cross sections; you can easily do it with a human, with anything. Here we were only trying to show that we can make diffractive optical elements that will generate that image. It can be any image, and it can be 3D; this is truly 3D. What is sold as 3D television and so on, those are fakes: they give you two images, and your brain synthesizes them to look like 3D. It's not 3D. Holography, or diffractive optical elements, generate truly 3D images. But to do that in technology is a tough business. I'm hearing that people are moving on, though; ten years from now you'll probably have truly holographic systems doing that. And this does show up, for example, in planes: when the captain is flying the plane, he sees what is called a head-up display. There a diffractive optical element generates the image directly into your eyes, for example. So it's coming up in all sorts of ways. And I was intrigued by a word you used: quantized. What does that mean? Well, you see, that is the reason why we have revolutionized information technology.
Think of a waveform, which can vary with infinitely many possibilities, right? We do two things. First, we sample in time, or in space. Then we take each value and quantize it into a number of quantized levels, for example 256 values, and that's it. So infinitely many possibilities become only 256 possibilities; that is huge data compression. Then we represent only those in terms of integers, numbers from 0 to 255, convert them into bits, zeros and ones, and send them some other place. That's the way modern technology works. Always quantized. Quantized, not quantified. Quantized means reducing the number of possibilities into a number of brackets. Think of fast food: how many possibilities are there? Same thing. They don't make thousands of different dishes, maybe five or six, right? Hamburger, cheeseburger. So in modern life, quantization is essential, for good and bad. Any other questions? Okay, thank you very much.
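The sampling-plus-quantization pipeline described in the answer above is easy to make concrete. A minimal sketch of 8-bit uniform quantization of a waveform follows; the signal, sample rate, and amplitude range are arbitrary choices for illustration.

```python
import numpy as np

fs, levels = 1000, 256                  # sample rate (Hz) and 8-bit quantizer
t = np.arange(0, 1, 1 / fs)             # step 1: sample in time
x = np.sin(2 * np.pi * 5 * t)           # a continuous-valued waveform in [-1, 1]

# step 2: map [-1, 1] onto the 256 integer codes 0..255
step = 2.0 / (levels - 1)
codes = np.round((x + 1.0) / step).astype(np.uint8)

# the receiver reconstructs from the integer codes alone
x_hat = codes * step - 1.0
max_error = np.abs(x - x_hat).max()     # at most half a quantization step
```

The infinitely many possible amplitudes have become 256 brackets; the price paid is a reconstruction error bounded by half the step size, which is the trade-off behind all digital transmission.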