And it's a delight to be here this year and to see how enthusiastic and engaged the new folks participating in this school are. I'm really happy to be here with you all and to present some very recent work from our lab. It connects back to an interest of mine that started developing more than 35 years ago, when I first learned about chaos and began studying the papers of that time on determinism, deterministic chaos, and randomness, which was always an interest of mine. I did my thesis research on coherence in light, and coherence is a question of randomness versus order. Where are you on that scale? Trying to figure out how coherent light is has been something very close to my heart. So when I learned about chaos, I was really surprised to find lasers that were deterministic in their dynamics, with deterministic models. But I always suspected that somehow everything in nature is a mixture of determinism and randomness together, and it's that kind of mixture, and the transition from one extreme to the other, that I want to talk to you about today. So thanks so much for being here and listening to me. Make notes about all the things that I do wrong in presenting lectures and talks, and hand me all the criticisms after my talk, right? Very good. And then you won't make those same mistakes in your own talks. Okay, let's get started and see if I can tell you about randomness. What is randomness? The colloquial understanding of randomness is that it's something arbitrary and unpredictable. If I look at a string of bits, zeros and ones, I cannot predict very well at all when the next zero or the next one will come. That I would think of as fairly random. So the chance of getting a one or a zero is a half, okay?
And if that chance on a particular trial does not depend on any of the previous trials, then the trials are uncorrelated. This is how we normally think about randomness for a string of bits. Now, one of the questions we always ask is whether noise is random or not, okay? In theory, noise can have infinite bandwidth, but in reality all systems that produce noise have some bandwidth restriction or constraint on them. And here's a picture of a noisy voltage fluctuating in time. It could have been generated by noisy light falling on a photodetector, with the voltage fluctuating up and down, okay? Now, the idea here is that I really do not know whether this is biased or not, whether equal numbers of values lie above and below the mean; I have to measure, observe, and quantify that somehow. As for the speed with which this is changing: note that I haven't specified an x-axis, violating one of the cardinal rules of presenting figures, which is that a figure should have labeled axes, right? So how fast do you think this noise is fluctuating? This was captured from a light source that was fluctuating on the nanosecond time scale, okay? So we can have fluctuations of light sources on nanosecond or picosecond time scales, and we try to determine the statistical characteristics of the noise source; that's part of the work people do to generate random numbers. Where are random numbers used? You see, people started to worry about randomness and random numbers when they wanted to play games of chance, okay? And so lotteries, slot machines, video poker, roulette, all of these are examples where people need to generate random numbers to make the games function.
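To make "unbiased, uncorrelated trials" concrete, here is a minimal sketch (my own illustration, using Python's built-in PRNG as a stand-in for a physical bit source) that estimates the two quantities just described: the fraction of ones (bias) and the lag-1 autocorrelation between successive trials.

```python
import random

def bias_and_lag1(bits):
    """Estimate the fraction of ones and the lag-1 autocorrelation of a
    0/1 sequence. For an ideal random stream the bias tends to 0.5 and
    the lag-1 autocorrelation tends to 0."""
    n = len(bits)
    p1 = sum(bits) / n
    var = p1 * (1.0 - p1)
    if var == 0.0:
        return p1, 0.0
    lag1 = sum((bits[i] - p1) * (bits[i + 1] - p1)
               for i in range(n - 1)) / ((n - 1) * var)
    return p1, lag1

rng = random.Random(0)
bits = [rng.randint(0, 1) for _ in range(100_000)]
p1, lag1 = bias_and_lag1(bits)
print(f"fraction of ones: {p1:.3f}, lag-1 autocorrelation: {lag1:.4f}")
```

Of course, passing these two checks is necessary but nowhere near sufficient for randomness, which is exactly why full test batteries exist.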
And Monte Carlo simulations came into being in the late 40s and early 50s, when computers first became able to generate random numbers and simulate systems with an inherently unpredictable or random character. But surprisingly, people also applied them to things like evaluating multiple integrals in high-dimensional spaces, sampling the volume over which they were integrating by taking random points in that volume. Random numbers can be used for things like that. They can also be used for establishing random initial conditions for deterministic systems, if you like, okay? And one of the very important questions is: how precisely can you set the initial conditions of a deterministic chaotic system? These are very interesting things we learn about in Monte Carlo simulations and in the use of computers with random numbers. Encryption and security are big business for random numbers. So if one were to plot the number of random bits used in the world, starting maybe 100 or 200 years ago, you would see a curve that goes up like that, okay? It's increasing exponentially, or sometimes even super-exponentially, because of all these applications. Every encrypted message that you send involves random number generation and key generation that keeps your message secure, okay? Financial transactions are all done with encrypted, secure communication techniques. Now, not that long ago, this was how one generated random numbers: one looked them up in a book, okay? In this book, A Million Random Digits, you had a list of random numbers. And on Amazon you can see this review of the book: it is a promising reference concept, but the execution is somewhat sloppy. Whatever algorithm they used was not fully tested, the reviewer says. The bulk of each page seems random enough; however, at the lower left and lower right of alternate pages, the numbers are found to increment directly.
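The high-dimensional-integration idea just mentioned can be sketched in a few lines. This is my own illustration, not code from the talk: estimating the volume of the unit ball in five dimensions by throwing random points into the enclosing cube and counting the fraction that land inside.

```python
import math
import random

def monte_carlo_ball_volume(dim, n_samples, seed=1):
    """Monte Carlo estimate of the volume of the unit ball in `dim`
    dimensions: sample points uniformly in the cube [-1, 1]^dim and
    scale the hit fraction by the cube's volume."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        point = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
        if sum(x * x for x in point) <= 1.0:
            inside += 1
    cube_volume = 2.0 ** dim
    return cube_volume * inside / n_samples

estimate = monte_carlo_ball_volume(5, 200_000)
exact = 8.0 * math.pi ** 2 / 15.0  # closed form for the 5-ball, ~5.264
print(f"Monte Carlo: {estimate:.3f}, exact: {exact:.3f}")
```

The statistical error shrinks as 1/sqrt(N) regardless of dimension, which is why this beats grid-based quadrature in high-dimensional spaces.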
That means the pages are numbered properly, right? And you can see that 1,284 people really liked this evaluation of the book, which will very rapidly put anyone to sleep once you start reading the numbers, okay? That's its major application. But in those days it was a valuable source of random bits. So people generated random numbers by hand and put them in a book, so that you could sample random numbers by arbitrarily opening different pages and using those digits in your Monte Carlo simulation. This was actually funded by the RAND Corporation, and many different applications came about for this book. But today, random number generators are everywhere, okay? Every laptop, every phone, has random number generators in it. And these are typically pseudorandom number generators, called PRNGs, okay? The funny thing is that they actually use deterministic algorithms, with parameters or initial conditions that you can change; these are called the seeds of the random number generator. You can generate random numbers on your laptop very easily with a simple call like rand() or something like that, right? But are these pseudorandom number generators, which use deterministic algorithms, really random? One of the very interesting things is that they generate a sequence of bits starting from a finite-length seed, and they rely on one-way functions. The problem is that if the seed and the algorithm are known, or somebody finds them out, you can actually reproduce the sequence into the future or the past by hacking the random number generator, okay? Yes, yes, absolutely. So it's not always a bad thing at all. It's a wonderful thing if you want to regenerate data using the same string of random numbers that you used once. Use it a second time and you'll get the same result.
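The seed-determinism just described is easy to demonstrate; here is a tiny sketch (again my own illustration) showing that two generators started from the same seed emit identical "random" streams:

```python
import random

# Two PRNG instances seeded identically produce identical streams: the
# algorithm is deterministic, so anyone who knows seed + algorithm
# knows every past and future output.
a = random.Random(12345)
b = random.Random(12345)
stream_a = [a.randint(0, 1) for _ in range(20)]
stream_b = [b.randint(0, 1) for _ in range(20)]
print(stream_a == stream_b)  # the two sequences match bit for bit
```

This is exactly the double-edged property mentioned above: wonderful for reproducible simulations, dangerous for cryptography.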
Of course, that doesn't tell you how good those random numbers are, but it is a reproducible process. And you can look up the period of repetition for the well-known algorithm, the Mersenne Twister, for example. But the person who first started to worry about pseudorandom number generators was von Neumann, and he made this rather interesting statement in 1951: "Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin." That was his view of pseudorandom numbers; he didn't really like them. But you can see that Microsoft has had a lot of trouble. This is from 2007, when people hacked into Microsoft's random number generator. And I'll give you a more recent example, from 2012, about random keys and the use of RSA. The RSA algorithm is one of the most popular algorithms for public-key cryptography, okay? And people were able to hack into all sorts of things through weak random keys. But let's look back a little at the history of physical random number generation. This is the gentleman who started it all, with a paper in Nature, "Dice for statistical experiments." He was really engaged in making dice, okay? And throwing dice, the traditional sort of thing. He says: "As an instrument for selecting at random, I have found nothing superior to dice. It is most tedious to shuffle cards thoroughly between each successive draw, and the method of mixing and stirring up marked balls in a bag is more tedious still." And so on; he prefers throwing dice, okay? You can imagine that generating random numbers that way is not very fast. But this was Sir Francis Galton, one of the founders of the science of statistics, okay? He was related to Charles Darwin; they were half-cousins. And he also had very interesting projects, such as trying to prove that prayer doesn't actually work, okay?
And so what he asked was: which is the family most prayed for by people? The royal family in Britain, okay? People pray for them to have long lives, so let's look at their lifetimes. And he found that, in fact, they didn't live especially long. So he concluded that those prayers must be totally ineffective in influencing the longevity of the royal family of Great Britain, okay? A very interesting idea; think carefully about how he organized that research and how he went about it, right? There are problems with it too, okay? But now, in more modern times, people generate photons and send them at a beam splitter. So you have a light source sending photons toward a beam splitter, and there are two detectors, a zero detector and a one detector. If the beam splitter sends the photon one way, you count a zero; if it goes straight through, you count a one, okay? And these strings of zeros and ones are supposed to be the most certifiably random strings of bits. There's a company called ID Quantique that makes random number generators based on this method and sells them all around the world, okay? Photon-counting generators of random numbers. Another method of doing things randomly is to count, on a Geiger counter, the decay of radioactive atoms. You detect a charged particle released when an atom decays, and you count zeros and ones in time intervals, okay? Which are short enough. And my colleague Tom Murphy advertises an Arduino project: a Geiger counter run with an Arduino. Now, this is a silly little cartoon, but it's a wonderful illustration. Over here we have our random number generator, and it's generating 9 9 9 9 9. Are you sure that's random? And the point is, with any finite string of numbers you can never be sure whether it's actually random or not, okay?
You have to analyze greater and greater numbers of these random numbers in order to determine how random things are. And so this really illustrates a very profound point, I think, about randomness and its generation. Now, the National Institute of Standards and Technology, NIST, in the USA publishes a comprehensive battery of statistical tests that look for patterns in bit sequences and say whether the numbers you produced, by whatever means, look random or not. So you supply, say, 1,000 sequences of one million bits each, and they are passed through the battery of tests. You can do this online, okay? And test the random bits that you have. Now, one of the really important things to note about all of this is that these tests do not tell you what the source of those numbers was. And the source of the numbers, the production of entropy in a physical fluctuating source, is very important. So the entropy considerations are really important. How much entropy is generated by a dynamical system, okay? That is a very important question that one should ask, and the NIST tests really don't look at it. NIST is actually in the habit of modifying these tests as people try them on different techniques and discover flaws and problems with them, and currently NIST is focusing on entropy considerations in random number generation and its testing. Let me give you a glimpse of how fast you can generate random numbers. One of the very interesting things is that photon counting cannot be done terribly fast. You can count photons, but each counting event leaves the photon counter with a dead time, a refractory period during which it won't register again. So if two photons come too close together, you can't count them both: if you've counted one, you miss the other, okay? So one of the very important things is that photon counters typically work up to around 100 megabits per second.
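The first and simplest test in the NIST SP 800-22 battery, the frequency (monobit) test, gives the flavor of how these tests work. Here is a sketch of it (my own implementation of the published formula, not NIST's reference code):

```python
import math
import random

def monobit_pvalue(bits):
    """NIST SP 800-22 frequency (monobit) test: map bits to ±1, sum,
    normalize by sqrt(n), and compute a p-value with the complementary
    error function. Streams with p < 0.01 are flagged as non-random."""
    n = len(bits)
    s_obs = abs(sum(2 * b - 1 for b in bits)) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2.0))

rng = random.Random(7)
good = [rng.randint(0, 1) for _ in range(1_000_000)]
biased = [1] * 600_000 + [0] * 400_000  # a 60/40 stream fails at once
print(f"good p-value: {monobit_pvalue(good):.3f}")
print(f"biased p-value: {monobit_pvalue(biased):.3g}")
```

Note that this test checks only the balance of ones and zeros; a perfectly alternating 0101... stream passes it, which is precisely why NIST runs a whole battery of complementary tests rather than just one.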
And with electrical noise, you can measure the noise up to gigabits per second, okay? With analog-to-digital converters, you can sample very fast nowadays. And we'll take a look at one of these simple methods of generating random numbers that Intel uses. So I have to tell you one of my stories about hands-on schools: some of the graduate students who worked with me now work for Intel, and their expertise on different aspects of determinism and stochasticity, microscopy, and optics was developed in these hands-on schools, okay? So the hands-on schools have had a very beneficial effect on some of my students, who are now doing some of the highest-resolution microscopy and defect detection at Intel, in their nanolithography groups. Now, this kind of system is very simple: just two inverters connected like this, with a clock over there. And all it looks at is the voltage at the two points you measure, node A and node B. You start it from random initial conditions and it flips one way or the other. It's like a seesaw balanced right in the middle, right? And I have to predict which way it's going to fall, okay? Even if I balance it very precisely, it will fall one way or the other, okay? Using this system, in what are called Ivy Bridge processors, Intel can generate bits at about three gigabits per second and continuously reseed a pseudorandom number generator. This is one of the ways Intel does random number generation nowadays. But one of the things that tied random number generation to chaos and dynamical systems came from this paper by Atsushi Uchida, who worked with me. He was a scientist from Japan; I met him first when he was a graduate student. He came and worked with us for three years at Maryland, and then went back to Japan and had his own laboratory over there.
And in 2008, he published this very seminal paper on fast physical random bit generation with chaotic semiconductor lasers, which came out in one of the early volumes of Nature Photonics, a young journal at that time. And what he did was take two chaotic lasers and two photodetectors. So what does he mean by a chaotic laser? These are all tabletop experiments, right? You take a semiconductor laser and couple the light out through this branch, okay? This is a variable reflector, so the light can actually reflect straight back into the laser, and you split off a little bit of light this way. And this is an optical isolator, which does not allow light to go backwards, okay? So the only feedback is the reflection from here. And when you reflect light back into a semiconductor laser with a time delay, you get one of the beautiful and elegant examples of a time-delayed, nonlinear, chaotic dynamical system. So he built two of these things. And using the photodetectors, one-bit analog-to-digital converters (ADCs), and an exclusive-OR gate over here, he was able to generate random bits at close to two gigabits per second. And you can see these are the time traces from the two lasers. They're sampled at these points, okay? And you can see the exclusive OR being taken at those points, with the clock timing things so that you get the bit sequences: here a whole string of ones, then a zero, then a one, zero, and so on, okay? So this is a physical random number generator that uses chaotic lasers for this purpose. And Atsushi's paper created a sort of avalanche: there have been hundreds of papers using chaotic lasers since then, and other techniques, which I'll tell you a little more about, that produce random bits very fast. And so here's a plot of random bits.
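The sample-threshold-XOR pipeline just described can be sketched in a few lines. This is an illustration only: the Gaussian noise streams below are hypothetical stand-ins for the two chaotic laser intensities, and the function names are my own.

```python
import random

def one_bit_adc(samples, threshold=0.0):
    """1-bit digitization: emit 1 when a sample exceeds the threshold."""
    return [1 if s > threshold else 0 for s in samples]

# Stand-ins for the two fluctuating photodetector voltages (in the
# experiment these come from two independent chaotic lasers).
rng1, rng2 = random.Random(1), random.Random(2)
laser_a = [rng1.gauss(0.0, 1.0) for _ in range(50_000)]
laser_b = [rng2.gauss(0.0, 1.0) for _ in range(50_000)]

bits_a = one_bit_adc(laser_a)
bits_b = one_bit_adc(laser_b)
# XOR of two independent streams suppresses residual bias in either one.
bits = [a ^ b for a, b in zip(bits_a, bits_b)]
fraction_ones = sum(bits) / len(bits)
print(f"fraction of ones after XOR: {fraction_ones:.3f}")
```

The XOR stage is the key trick: even if each laser's threshold is slightly miscalibrated, the combined stream ends up much closer to a 50/50 balance.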
This is one string of zeros and ones, and then it wraps around and starts the second string of zeros and ones. So this is a raster plot showing you zeros and ones in this pattern. When this paper came out, Tom Murphy, my colleague in electrical engineering at Maryland, and I wrote a short News and Views piece, and we called it "The world's fastest dice," okay? Because at that time this was by far the fastest method for generating random bits. And we made a remark, which I will show you on the next page. We said that the present experiment can thus be regarded as an example of partnership between quantum fluctuations, the spontaneous emission that is always present in a laser, and chaotic dynamics at the macroscopic level. It illustrates how large, easily detected fluctuations can emerge from truly random quantum fluctuations, connected by the bridge of a nonlinear dynamical system, okay? And at that time, it was absolutely the fastest generator of random numbers. Since then, things have really changed. Now, spontaneous emission is one of the processes that was fundamental to the understanding of optics. So who postulated spontaneous emission, stimulated emission, and absorption, the three basic radiative processes of optics, right? Who postulated this? Anyone know? Einstein, in 1916 or 1917, a long time back, right? Not that long ago, actually, when you think about it. He postulated these processes; he didn't say they were quantum mechanical in origin. But when quantum mechanics came into being in the 20s, people started to say such things. And it occurs in all lasers and optical amplifiers, okay? And so I'll tell you a little bit about some work with Caitlin Williams, Adam Cohen, and Julia Salevan, who was an undergraduate in our group at that time; she's now doing a fluid dynamics PhD at Yale. And Xiaowen Li was a visitor in our lab from Beijing Normal University in China.
I know there are people here in the audience from there, right? And Tom Murphy is my colleague at UMD. So, since nature produces spontaneous emission, and we try to understand it using quantum mechanics, here's a source of amplified spontaneous emission: an erbium-ytterbium co-doped optical fiber amplifier, followed by an erbium-doped fiber amplifier to amplify that light again and produce strong amplified spontaneous emission, okay? We didn't operate this as a laser, just as an amplifier, and used the second amplifier to amplify it again. And then we split the light into its two polarizations, TE and TM, transverse electric and transverse magnetic, and detected them using two photodiodes, with a clock applied over here. A bit error rate tester does the thresholding and produces a zero or a one depending on the size of the signal. So, using standard modular telecommunications equipment bought off the shelf, we could produce, at the time we did this, 12.5 gigabits per second, okay? From amplified spontaneous emission. So I'm not using a laser; I'm just using the spontaneous emission and doing an experiment. And this is Caitlin Williams with a different system. Joe actually recognizes this system, because he's working on this particular optoelectronic oscillator system that I'll tell you about in a short while. But this was our experiment at that time, and we were able to generate very fast random bits using these optical techniques. With Xiaowen Li, we then decided to try a different thing. We took a superluminescent diode, which is a semiconductor device with broadband emission, okay? Its emission is very broad, tens of nanometers. And we took a narrow slice from that, just 14.5 gigahertz in width, and from that slice we generated random numbers, okay?
What we also did was take two slices simultaneously from two different parts of the spectrum. And we showed that you can actually generate independent streams of random bits from two different spectral slices simultaneously. And if you look at how many of these thin slices you can fit in over here, you could probably generate 40 or 50 random bit streams at very high speed. So this was our foray into random bit generation using those methods. And the electrical signals that came out, which we actually measured, had very nice characteristics. We could look at their probability density functions by sampling them in different ways. And these have macroscopic voltage levels, so we can do very good statistical observations. And to come back to the NIST tests I mentioned earlier: you apply the whole battery of tests from the National Institute of Standards and Technology, and this physical random number generator passes all the NIST tests. You do have to do a little bit of post-processing, because the ratio of zeros to ones is not exactly one half, okay? You have to do a little post-processing, as with almost every random number generator I know of, to suppress bias and correlations in your sources. And oftentimes this is done by taking two sources and applying that exclusive-OR operation at the end. So here it is: in 1949, one generated random numbers at roughly a digit per second. And by 2015, easily 20 billion bits per second; and if you multiplex these together, you can go to terabits per second, okay? So this is now routine in the random number generation world. Now, I've been alluding to optoelectronic oscillators, and I want to show you how we built them, okay? So here's the system; I work with light and optics. Here's a laser diode emitting light that goes into an optical modulator, through a fiber-optic cable, to a photodetector.
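Besides the two-source XOR just mentioned, another classic post-processing step, shown here purely as an illustration and not necessarily the one used in these experiments, is von Neumann's debiasing scheme:

```python
import random

def von_neumann_extract(bits):
    """Von Neumann debiasing: read the stream in non-overlapping pairs,
    emit 0 for (0,1) and 1 for (1,0), discard (0,0) and (1,1). If the
    input bits are independent, the output is exactly unbiased, at the
    cost of discarding most of the raw bits."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out

rng = random.Random(3)
raw = [1 if rng.random() < 0.7 else 0 for _ in range(200_000)]  # 70% ones
clean = von_neumann_extract(raw)
print(f"bias before: {sum(raw)/len(raw):.3f}, "
      f"after: {sum(clean)/len(clean):.3f}")
```

Note the trade-off: for a 70/30 source, only about 42% of the pairs survive, so throughput drops sharply; that is one reason high-speed generators often prefer XOR whitening instead.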
And if I want to send information to my friend who's here, all I have to do is put an electronic amplifier on this set of electrodes. And this is called an integrated optical modulator. It's just a tiny little thing; I'll show you the size in the next picture. These are now standard, mass-manufactured integrated-optic components, and we bought dozens of them on eBay for a few hundred dollars each, okay? So these are called Mach-Zehnder optical modulators. And what they do is split the light into two paths. As the light goes down these two different paths, the voltage applied between ground and this electrode over here changes the refractive index of the lithium niobate crystal that makes up that path. And by changing the refractive index ever so slightly, you can create a phase shift between these two light paths. And when the light recombines over here, it has constructive or destructive interference. So you can actually modulate the intensity of the light going through, from zero to full transmission, okay? At gigahertz speeds. A very inexpensive and wonderfully effective modulator, okay? In only about 35 years, these integrated optical versions have come into being, and they're everywhere now. So the fiber-optic cable takes the light to a photodetector. If the data that you put in consists of bits, zeros and ones, right? All messages are made up of that. You can detect that over here and get your message. How does this modulator work? This is what it looks like, roughly 10 centimeters. And this device has this beautiful characteristic of transmitted power versus applied input voltage. It goes up and down, as you can see, with some phase offset, phi-naught, over here. And as you apply different voltages over there, you modulate according to this cosine-squared rule between zero and maximum transmission, okay? So it's a very effective, beautiful cosine-squared nonlinearity that you've created in this system.
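The cosine-squared transfer curve can be written down directly. A minimal sketch, assuming the standard form T = cos²(πV/(2·V_π) + φ₀) and the roughly 3-volt V_π quoted in the talk (the function name and values here are my own illustration):

```python
import math

def mzm_transmission(v, v_pi, phi0=0.0):
    """Transmitted fraction of a Mach-Zehnder modulator:
    T = cos^2(pi * V / (2 * V_pi) + phi0).
    A voltage swing of V_pi takes the device from one transmission
    extreme to the other."""
    return math.cos(math.pi * v / (2.0 * v_pi) + phi0) ** 2

V_PI = 3.0  # volts, roughly the value quoted for this device
print(mzm_transmission(0.0, V_PI))       # maximum transmission
print(mzm_transmission(V_PI, V_PI))      # minimum transmission
print(mzm_transmission(V_PI / 2, V_PI))  # halfway point
```

The bias phase φ₀ sets where on the curve the loop operates, which matters a great deal once this nonlinearity is placed inside a feedback loop.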
This voltage that you need to go from zero transmission to maximum transmission is often called the V-pi (half-wave) voltage of the modulator. In our system, you can measure over here and see that it's of the order of three volts, okay? So you don't need a huge voltage to modulate from minimum to maximum transmission. Now, these folks played a trick, a very special trick. This is Laurent Larger, Pere Colet, and Yanne Chembo, who was at one of the earlier hands-on schools we did in Cameroon, and who was here earlier at ICTP. He works in France at one of their labs over there, the FEMTO-ST lab, and he works with optics. He's a theorist who became an experimentalist, okay? And Pere Colet is a scientist from Spain; he's a leading theoretician in statistical mechanics now. He was a postdoc with me at Georgia Tech when I used to be there, in '93-'94, and worked on many interesting problems. Yanne was Pere Colet's graduate student, and Laurent ran the lab. So Pere Colet is a theorist, Laurent is an experimentalist, and Laurent was Yanne's mentor for learning hands-on optics, okay? And what they did was take this output and, instead of just reading out the messages, fold it back in a feedback loop. The feedback loop has a time delay tau, which depends on how much propagation time you have, plus the delays in each of those devices. When you do that, you find that you can express the feedback strength of the loop by a simple expression, beta = pi R G P0 / (2 V-pi). R is the responsivity of the photodiode. G is the gain of the transimpedance amplifier in the loop. P0 is the amount of light that you're injecting into the Mach-Zehnder modulator. And V-pi is that half-wave voltage that I showed you earlier. This measures the feedback strength.
And as I change the feedback strength and look at what this system does, according to the dynamical equations of motion, I can actually calculate the time series. I won't go into the details, but this was presented in their paper; a very elegant derivation, okay? By Yanne and colleagues. And what we found in our system was a beautiful correspondence, with periodic oscillations, quasiperiodicity, chaos, all sorts of things happening like that. Of course, we plotted the normalized voltage going into the modulator; this x(t) doesn't have any units. And these were the graduate students who first came with me to the hands-on school. We took the prototype of this system with us to India, to the Institute for Plasma Research, for the first hands-on school that we did over there in 2008. And this paper, on looking at the chaos here, and in particular trying to synchronize a model with an experiment, was part of Adam's and Bhargava's theses. Yes. So you can do both positive and negative feedback. And we have looked at both positive and negative feedback for the beta in some systems, but not fully explored the implications of all of that, okay? So it's a good question to ask about positive and negative feedback in these systems. And one of the interesting things was that after we went in 2008, Bhoomika Thakur, who is at IPR, the Institute for Plasma Research in Ahmedabad, now, started to build these systems at IPR. And this time, we did a hands-on school that I will tell you more about in my talk tomorrow. Bhoomika turned from a theorist into an experimentalist, and Joe and I witnessed how she took groups of students and told them everything about this optoelectronic feedback loop. And she is now doing research with those systems, okay? Looking at dynamics on nanosecond time scales with coupled systems there. So this is our first hands-on school, as you can see, in Ahmedabad at the IPR. This is Adam Cohen. And this is me in, where? São Paulo.
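The role of the feedback strength β can be seen even in a stripped-down sketch. This is my own toy version, assuming a low-pass-only delay equation dx/dt = -x + β·cos²(x(t-τ) + φ₀) in normalized time; the actual model in their paper includes band-pass filtering, so this is illustrative only:

```python
import math

def simulate_oeo(beta, phi0=math.pi / 4, delay_steps=200,
                 n_steps=20_000, dt=0.05):
    """Euler integration of a minimal delayed-feedback loop,
        dx/dt = -x + beta * cos^2(x(t - tau) + phi0),
    a low-pass-only caricature of the optoelectronic oscillator.
    `history` holds the delayed values; tau = delay_steps * dt."""
    history = [0.1] * delay_steps  # constant initial function on [-tau, 0]
    x = history[-1]
    for n in range(n_steps):
        x_delayed = history[n]
        x = x + dt * (-x + beta * math.cos(x_delayed + phi0) ** 2)
        history.append(x)
    return history[delay_steps:]

def spread(xs):
    """Peak-to-peak excursion over the last part of the run."""
    tail = xs[-2000:]
    return max(tail) - min(tail)

quiet = simulate_oeo(beta=0.5)  # weak feedback: settles to a fixed point
wild = simulate_oeo(beta=5.0)   # strong feedback: sustained oscillations
print(f"spread at beta=0.5: {spread(quiet):.5f}")
print(f"spread at beta=5.0: {spread(wild):.3f}")
```

Sweeping β in such a loop is exactly what produces the bifurcation diagrams shown later: fixed point, then periodic oscillation, then quasiperiodicity and chaos as the feedback strength grows.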
And what's really interesting is that one of the people who was there at that conference in São Paulo came up to me at a Dynamics Days conference in Germany last fall and said, do you remember me? I said, of course I do. I don't forget any of you, right? So anyway, he's a wonderful mathematical physicist, working at Imperial College now: Tiago Pereira is his name. And he talked to me about a problem, and we started collaborating. When I came back, Joe did the experiment that we collaborated on with Tiago and with Jan Philipp Pade in Berlin, and we have just submitted a paper a short while back on that. So the hands-on schools really come back to you, with a time-delayed feedback. And this is at Buea in Cameroon, with Bhargava Ravoori telling people about our hands-on optoelectronic systems over there, okay? And then you can see that we had a school in Shanghai at Jiao Tong University in 2012. And since then, we've been doing one here every year; it's really thanks to the support of ICTP that we've been able to do any of these schools. So what is this system like? What do the equations look like? This is an electrical engineer's version, with A, B, and C being filter matrices defining the kind of band-pass filtering that you are doing. The time delay comes in here, and there's a phase shift because of the DC bias in there. This system of equations, with time delay, nonlinearity, and filtering together, describes the dynamics of these systems very nicely. And in the schematic shown earlier, that dynamics occurs on nanosecond time scales. Now, Tom Murphy had a brilliant idea, which was to slow the dynamics down, thousands of times, tens of thousands of times, by putting a digital signal processing board here to implement the filtering and the time delay. This is like a sound card that you use on your computers. I can't do much with it myself, but Joe is an expert at using these systems, and Tom introduced us to them.
So there is a very comprehensive review of these techniques. But this system allows us to create a feedback loop and convert the fluctuations into sound at audible frequencies. So one of the things that I often try to do, and sometimes it works and sometimes it doesn't, depending on the throw of the dice, is to see if I can find my pointer, bring it here, and play this. There's a periodic window there. Oh my gosh. OK, that's enough. So this is a bifurcation diagram. The shading, the light colors, tells you how much time the system spends in different places. These are periodic windows; this is robustly chaotic dynamics; this is a periodic window bifurcating into period four, and so on. This is really experimental data that Adam Cohen and Bhargava Ravoori produced, and I've been playing it ever since. I'm so fascinated to hear these sounds of chaos. Now, let me see if I can actually go to the next slide, with a little work. Look at that; I've lost the pointer over here. Let me see if I can get it back. Yes. OK. What good are these optoelectronic feedback loops? One of the things they've been used for, published in 2005 in a Nature paper by a European consortium, was to actually encode messages in the chaos, transmit them over a 120-kilometer optical fiber communication loop in Athens, Greece, and then decode them and get the message out at the other end. And Atsushi's book, Optical Communication with Chaotic Lasers, is over 600 pages and really a very thorough review and beautiful exposition of all the relevant techniques, theory, and experiment on these systems. OK. Now we come to something very recent that is not yet published, and so I will tell you about something that we're really interested in doing, in the last 10 minutes or so that I have.
And here what I will tell you is that we converted our system that you saw earlier with one important change: by putting a photon counting detector over there, these are now available quite easily, and using a field-programmable gate array, an FPGA unit, to control the time delay and filtering instead of the digital signal processing sound card that we were using previously. And we will count photons individually from the light source now and look at the dynamics of this system. So this system is the work of Aaron Hagerstrom, who is a PhD student who's just finishing. He has done three of these hands-on schools in Shanghai and two of them here before. And he couldn't come this time. Of course, he's writing his thesis right now. And hopefully, we'll have a finished version of it ready for me as soon as I get back. But Aaron's doing these experiments. And here is some data. Here are the number of counts received in a certain counting time interval. This is the time in units of round trips. And this is the number of photons counted in a particular time interval. And you can see that when there are thousands of photons incident on the photodetector during a round trip, you can actually count lots of photons during each of these intervals. And you can join these photon counts with lines and see the semblance of a deterministic dynamical system appear at high photon rates. The blue curve shows the distribution of photons in the time bins for the system we just described; the green curve shows the Poisson distribution that corresponds to the same average number of photons. Now we are going to turn down the light. And as we turn down the light, you see fewer and fewer counts in each of those time intervals. We're counting single photons now. And you're seeing more discreteness and graininess appear in the dynamics that we are presenting. And here's a comparison of the distribution functions now.
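The effect being described, a deterministic intensity waveform observed through photon shot noise, can be mimicked in a few lines. The waveform below is a made-up stand-in for the chaotic intensity, not the experimental data; the photon numbers per bin echo the values quoted in the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(2000)
# Hypothetical deterministic intensity trace (stand-in for the chaotic loop output)
intensity = 0.5 * (1 + np.sin(0.05 * t) * np.sin(0.013 * t))

def photon_counts(mean_per_bin):
    """Poisson-sample the intensity at a given average photon number per time bin."""
    rate = intensity / intensity.mean() * mean_per_bin
    return rng.poisson(rate)

bright = photon_counts(3200.0)  # many photons: counts track the waveform
dim = photon_counts(12.5)       # few photons: shot noise dominates
```

Since the relative shot-noise fluctuation scales as one over the square root of the mean count, the dim record is far grainier than the bright one, exactly the discreteness that appears as the light is turned down.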
So this is the probability density function, the blue one, for the actual light source. This is the Poisson distribution for the same average number of photons per round trip. We do that now with 125 photons, and it's getting grainier and grainier. And you can see that the blue curve, which is the experimentally measured distribution, is starting to converge onto the Poisson distribution now. So the light that's falling on the system is now really resembling a Poisson distribution. And if I cut this down further, you would see it even more. But let's go now and look at how we can do time delay embedding with these photon counts. So we look at the number of photon counts, time over here. And we do a time delay embedding on these two axes. This is a two-dimensional time delay embedding. And you can see that when you have fairly large numbers of photons arriving, you get a time delay embedding plot which looks pretty continuous, like a chaotic system. And then it starts to grow grainy as the number of photons per round trip decreases. And by the time you arrive at about 12 photons per round trip, it's really gotten very grainy. And I'm going to show you, hopefully, the Poincaré sections for this progression of photon numbers, going from 12.5 photons per round trip, to 200, to 3200. And this is the deterministic model that I showed you, with the nonlinearity, the filter, and the time delay. And this really starts to resemble that when you go up in photon number. So this is the emergence of deterministic chaos from very stochastic dynamics. And I want to end my talk by showing you, hopefully, a movie that Aaron Hagerstrom made at 7 photons per delay time. It started from 4. And now we are doing time delay embeddings and rotating the attractor, which is a fairly low-dimensional attractor. And hopefully, you'll start to see the attractor emerge as the number of photons goes up. You can see the number of photons per delay time growing.
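A two-dimensional time delay embedding of a count record is just the scatter of pairs of a count and its delayed copy. A minimal sketch, with the delay `tau` and the toy signal chosen purely for illustration:

```python
import numpy as np

def delay_embed(series, tau, dim=2):
    """Stack delayed copies of a series: row n is (series[n], series[n+tau], ...)."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[k * tau : k * tau + n] for k in range(dim)])

# Usage on a toy periodic signal; a photon-count record embeds the same way.
t = np.arange(1000)
emb = delay_embed(np.sin(0.1 * t), tau=16, dim=2)
print(emb.shape)  # (984, 2)
```

Plotting the rows of `emb` as points gives the embedding plot: continuous-looking structure at high photon rates, a grainy cloud at low rates.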
This is at beta equal to about 6. This is the robustly chaotic regime. And now you can start to see the attractor forming. Can you really see the whirls and shapes of the attractor? I can play this over and over and just watch it and try to understand what's happening in this transition; it keeps going backwards now. Now, how much data did Aaron take to actually produce this? It's about 15 gigabytes of data that he processed to make this movie. But he can take lots of data. He can make simulations of this system with lots of data. And we have been looking at this now for the past year in some detail. Here's the electronics that's used in this system. This is a photon counting unit. This is the FPGA board. This is the modulator. This is the laser at 852 nanometers. DFB means distributed feedback laser. FPGA is field-programmable gate array. Si APD means silicon avalanche photodiode. And so this simple system that Aaron put together can be operated in the lab on our tabletop; take a look. Here he is. This is the entire system, wrapped up in black cloth, that Aaron is doing the experiments with. And that's it. It's not a very complicated system. It's very robust. He can reproduce this kind of chaos pretty much at will. And we've been learning a lot about this. So I want to just mention some of the papers that have resulted from the hands-on schools very directly in one way or the other. All of the things in bold are ones that I referred to in this talk. And this paper has yet to appear, but there's a preprint version on the arXiv that you're welcome to read. And it's just going to come out, on this transition from noise to chaos that we've been able to quantify. I didn't go into the details here deliberately, but we used a very interesting algorithm by Cohen and Procaccia to look at distinguishing noise from chaos by making two very important observations. One is that we have a time resolution that we can choose in our system.
The measurements: how closely in time do we resolve them? We can make that large or small. We also have the resolution of the voltage, and that can be expressed in terms of the number of photons counted per time interval. And we can make that large or small as well. So it's called the epsilon-tau method for resolving measurements. And depending on how we adjust epsilon and tau, we can actually see the graininess of the stochastic photons coming in, or we can see the collective dynamics in the deterministic sense. And we have been looking at the entropy production by these systems. That's all outlined in that paper. So feel free to look at that and ask questions about it. But thank you all so much for your attention, and we'll go on to the hands-on session soon. Thank you.
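For the actual estimator, see the paper itself; purely as a sketch of the epsilon-tau idea, one can coarse-grain a signal with amplitude resolution epsilon and sampling interval tau and compare block entropies. This simplified pattern-counting version is an assumption of mine for illustration, not the Cohen-Procaccia algorithm used in the work.

```python
import numpy as np

def block_entropy(x, eps, tau, m):
    """Entropy (nats) of length-m symbol blocks after coarse-graining x
    with amplitude resolution eps and sampling interval tau."""
    symbols = np.floor(np.asarray(x)[::tau] / eps).astype(int)
    blocks = np.array([symbols[i:i + m] for i in range(len(symbols) - m + 1)])
    _, counts = np.unique(blocks, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def entropy_rate(x, eps, tau, m=2):
    """h(eps, tau) ~ H_{m+1} - H_m: how much new information each
    coarse-grained sample carries at this resolution."""
    return block_entropy(x, eps, tau, m + 1) - block_entropy(x, eps, tau, m)
```

The diagnostic is the behavior as epsilon and tau are refined: for a stochastic signal the entropy rate keeps growing at finer resolution, while for deterministic chaos it saturates at a finite value, which is what lets the two regimes be distinguished.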