It's a Hilbert space that has a dimension of more than one, or probably more than two if we like. That doesn't necessarily tell us a lot, because we could have many photons. We could have a high-dimensional encoding on our photons. We could have many spatial modes. So just the mention of "I'm working with a high-dimensional system" doesn't tell me a lot about what system I'm actually working with. However, whatever sort of high-dimensional system I want to build, there might be some buzzwords coming to mind that I should at least consider before starting to build that system. The gray ones are not really interesting for what we're doing in the lab, but let's think 10 years ahead. We've all founded our quantum technology startup companies, and we want to saturate the market with high-dimensional systems for anything you want to do with quantum. In that case, you will have to think about how costly your system is. Can you build it in a comparably cheap way? How simple is it? Can a person without a PhD and four years of experience in the lab operate your system or not? What kind of encoding alphabet would you be using? Is it something that's compatible with existing technologies? Is it something completely different? Is it something that you can sell to a customer or not? But as I said, these are questions that don't interest us so much at the moment, because we're still working in research laboratories, and these are not our main concerns. What we're a bit more concerned with is how many resources I need to build my system. If I do need ten Ti:Sapphs to run my system, it's probably not something that my PI will actually say I can do. Compatibility: is my approach compatible with the infrastructure in the laboratory, or do I really have to start from scratch? If it's not compatible with anything I'm doing, then again, I probably should think about whether I can find a cleverer solution that is compatible with what I'm doing.
Two biggest points, at least for me, are the questions of scalability and reconfigurability. Scalability meaning I want to go multidimensional, so of course I'm starting with three. Three is more than two, everybody's doing two. If I do three, I'm multidimensional, I'm golden, and I can write my papers from that. Eventually I might want to do four. Maybe I want to do five. At some point, I'm incredibly adventurous, and I'm dreaming of doing 10 or 15. Is my approach scalable? Can I go from three to four to 10, maybe to 20, without having to rebuild my experiment every single time? If I cannot do that, then I should really think about whether that's the best possible approach I can take. The other one is reconfigurability, and that's sort of going in the same direction. If my experiment is set up for three dimensions, can I quickly, actively reconfigure it to operate on a five-dimensional space, on a 10-dimensional space, or is there something where I do have to change hardware components? It could be something very simple. It could be that I just have to take out one of the components and put in another component, no realignment whatsoever, but that still sort of requires somebody to be in the lab. If I can do it all remotely, just via software control, my experiment can do its thing, it can run over the weekend, and everything would be fine. Now, coming back to this question of multidimensionality, depending on what you want to do, you will have different notions of multidimensionality. Say you want to build a full-blown linear-optics quantum computer, working with photons, so discrete variables. Typically, you will be happy with a two-dimensional encoding, but you will need many paths and many photons in your system. Say you want to build a quantum simulator based on quantum walks, something that's not quite a quantum computer, but still can do interesting tasks.
In that case, you probably want something like a few-dimensional coin to control the evolution of your walk. You still need many positions, but you can probably live with only a few photons. Say you want to do quantum communications in high dimensions. You need a high-dimensional alphabet to encode your information, but for most proof-of-concept demonstrations, you only need one single channel, and you only need a few photons. And you don't even need them at the same time; you can live with one photon, one after the other. Well, that already tells us that our encoding basis in multidimensional systems ranges from two-dimensional through few-dimensional to really high-dimensional. And the number of modes that we need to have in our system, the number of paths, reaches from one channel up to many, many paths for a quantum computer. So really, depending on what application you're targeting, multidimensional has a very different meaning for you. And for the remainder of the talk, I will be focusing on the case down here. So I will be looking at one spatial mode. I will be looking at a high-dimensional encoding alphabet, and I will be working in the few-photon regime, because, to be very frank, at the moment I can't be bothered to try and generate five photons at once. I'm happy if I get a single one. The outline is the following. I will start talking about optical modes, because we're using a very specific kind of optical mode as encoding alphabet, and I just want to make sure that you get an idea of what I'm actually talking about there. We'll then go into detail about parametric downconversion and what that has to do with the optical modes we're using. We'll then talk about the quantum pulse gate, which is a device that allows us to read out these optical modes. If I do communication, I need an encoding and a decoding stage. Encoding is my source. Decoding is my pulse gate.
And then I will show a few applications, just flash them towards the end. Optical modes, by definition, are eigenfunctions of wave equations, including boundary conditions, for instance being in a resonator or inside a waveguide. Optical modes are orthogonal, meaning that two dissimilar modes do not talk to each other. They do not interfere with each other. And inside one mode, let's just say that light is coherent and does interfere. We have two general classes of modes. We have temporal modes. These are modes along the propagation direction of our light beam. They live in time and frequency. And we have spatial modes. These are transverse modes, perpendicular to the direction of the light beam, and they live in position-momentum space. Examples are here. These are longitudinal modes inside a resonator, so these are just standing waves. These would be our temporal modes. These are spatial modes, and these are just transverse fields, as you can have in resonators, which you will know from your classical optics or laser physics or applied optics lectures. If we want to do high-dimensional information encoding, we can't use polarization. Polarization is inherently two-dimensional. It's good for many things, but it's totally useless for doing something that's more than two-dimensional. So polarization is out of the game if we want to use dimension three, four, five, and so on for our encoding. Why do we want that in the first place? Well, if we use high-dimensional encoding, we can encode more information per photon. Photons are valuable resources, so we want them to pack as much information as possible. We can build protocols that are more resilient to noise and loss. And we can study high-dimensional entanglement, which I'm not doing, but which Mehul is doing, and we will talk about that later. Now, one way to do that is spatial encoding. This is an example of orbital angular momentum states.
You can essentially associate a letter in your alphabet with each orbital angular momentum state. That's something that's done very successfully in many, many groups. Here's an example of other spatial modes that you can use. These are Laguerre-Gauss modes. These are Hermite-Gauss modes. Essentially, Laguerre-Gauss modes are Hermite-Gauss modes, just in cylindrical coordinates. And I'm pretty sure that you will all agree that these are different modes. So that's a mode in a resonator. That is a mode in a resonator. However, there's something in there implicitly that's very important for the next few slides of my talk. And that's the following. Assume a plane wave. A plane wave is described and defined by one single k-vector. A plane wave has an infinite extent, so it's not something that you will actually encounter in real life, but it's a nice tool to use in lectures and in mathematics and to describe systems. Now, one of these Gaussian modes, a Gaussian beam, actually has a finite extent. Well, mathematically it still has an infinite extent, but for all we care about, it has a finite extent. That means it's described by a coherent superposition of k-vectors. And we're totally happy with calling that a mode. So we're happy calling a coherent superposition of k-vectors a mode. We don't have any problems with that. Right. Now, what about this other degree of freedom, the time-frequency degree of freedom? Well, we can look at two extremes, at least in theory land. We can look at time bins and frequency bins, and we can say, let's reduce the width of that bin to just be one single instant in time. It would be something like a plane wave, right? One single k-vector; now we have one single instant in time. We call that one mode. Do exactly the same in frequency. Let's say we have one single frequency. That's a monochromatic mode. We all know that from quantum optics lectures; we have all suffered through the quantization of fields and finding these monochromatic modes.
Now, I can do the same thing as I do in the spatial domain, where I'm looking at a coherent superposition of k-vectors. I can look at a coherent superposition of times, of frequencies. That would just be a light pulse. And just as in the spatial domain, these guys are also modes. They are modes that now comprise many frequencies and many arrival times, but since it's a coherent superposition, they are still valid modes. And in time-frequency space, they would now basically be areas, and the width would grow. Temporal modes are actually quite nice things because, well, they have envelopes that overlap in frequency and time. So I can just start packing them one above the other. That's something that the GIF down there is doing, which you can't take your eyes off because something is moving. They have overlapping intensities, but they are still field-orthogonal. So I can't use a spectrometer to distinguish them, but if I can use field measurements, amplitude measurements, something that's very sensitive, I can still perfectly separate them. They're naturally compatible with waveguides and fibers. They live in one spatial mode. So I can send them down single-mode fibers, and telecommunications is using light pulses and broad wavelength spectra to encode information. So they are suited for integrated networks. And the width scales like the square root of the mode order, which isn't too bad a scaling. In fact, you can show that if you have a limited area in time-frequency space, these Hermite-Gauss modes will allow you the densest possible packing of information in that space. So if you want to squeeze as much information as possible into a given area in time-frequency space, you can use these modes and they will do the trick for you. Now comes a different question. We've talked about modes, which is good. Now we have to ask ourselves: what then is a photon? And that's something that's sometimes confused in literature and publications, also in presentations.
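To make the mode picture concrete, here is a small numerical sketch of the Hermite-Gauss temporal modes just described, in arbitrary units with an assumed unit-width Gaussian envelope: their intensities overlap (a spectrometer cannot tell them apart), their fields are orthogonal (a field-sensitive measurement can), and their width grows with the mode order.

```python
import numpy as np
from numpy.polynomial.hermite import hermval

t = np.linspace(-10, 10, 4001)   # time axis, arbitrary units
dt = t[1] - t[0]

def hg_mode(n, t):
    """n-th order Hermite-Gauss temporal mode, normalised to unit energy."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    psi = hermval(t, c) * np.exp(-t**2 / 2)
    return psi / np.sqrt(np.sum(psi**2) * dt)

hg0 = hg_mode(0, t)
hg1 = hg_mode(1, t)

# intensities overlap, so intensity measurements cannot separate the modes ...
intensity_overlap = np.sum(hg0**2 * hg1**2) * dt   # clearly non-zero

# ... but the fields are orthogonal, so field-sensitive measurements can
field_overlap = np.sum(hg0 * hg1) * dt             # essentially zero

# the RMS width grows with mode order (sigma_n^2 = n + 1/2 for this envelope)
width = lambda psi: np.sqrt(np.sum(t**2 * psi**2) * dt)
```

This is only an illustration of the orthogonality argument in the talk, not a model of any particular experiment.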
And I want to make a very clear distinction between modes and photons. Now, you could say a photon is a quantum; you can say a photon is something that has a photon number distribution that looks like that. You can be a bit more formal and say, well, let's take our electric field mode. Let's do the quantization procedure. Let's identify a plane wave with harmonic oscillators. We find discrete energy levels. We define annihilation and creation operators. And then we can say that one photon is exactly one excitation inside that field mode. And if I add another excitation, then I have two photons inside my field mode. I get my photon number states or Fock states. And I can use these to describe every possible quantum state. So photons are excitations in field modes. What does that mean for these pulsed modes I had been talking about? Well, let's go back to something we know from textbooks: monochromatic modes. Monochromatic modes are modes with a single frequency and an infinite extent in time. The operators obey bosonic commutation relations, meaning if you're looking at different frequencies, they commute; if you're looking at the same frequency, they don't. You can write down a state by just operating a creation operator at a given frequency on the vacuum. And exactly the same is true for these pulsed modes. The slight difference is that you have to redefine your operators. You're now not acting on a single frequency, but you're acting on a broad spectrum of frequencies given by these f functions. And these could be just Hermite-Gaussian functions. Hermite-Gaussian functions are nice because they are eigenfunctions of the Fourier transform. So a Gaussian spectrum will give a Gaussian temporal envelope, a first order a first order, and so on. Again, these operators, you will find, obey bosonic commutation relations. And you can just write down a single-photon state in the same way. Just operating one of these creation operators onto the vacuum will give you a single photon in that mode.
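The Fourier eigenfunction property mentioned above, that a Gaussian spectrum means a Gaussian envelope and a first-order mode stays first order, can be checked numerically. This is a sketch in dimensionless units: under the unitary Fourier transform with kernel e^(-iωt), the n-th Hermite-Gauss function is an eigenfunction with eigenvalue (-i)^n.

```python
import numpy as np
from numpy.polynomial.hermite import hermval

x = np.linspace(-8, 8, 1001)   # shared grid for time and frequency
dx = x[1] - x[0]

def hg(n, x, dx):
    """Normalised n-th order Hermite-Gauss function on the grid."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    psi = hermval(x, c) * np.exp(-x**2 / 2)
    return psi / np.sqrt(np.sum(psi**2) * dx)

def fourier(psi, t, w):
    """Unitary continuous Fourier transform, evaluated by direct summation."""
    dt = t[1] - t[0]
    return np.array([np.sum(psi * np.exp(-1j * wi * t)) * dt
                     for wi in w]) / np.sqrt(2 * np.pi)

errs = []
for n in range(3):
    psi = hg(n, x, dx)
    spec = fourier(psi, x, x)
    # compare against the eigenvalue relation F[HG_n] = (-i)^n HG_n
    errs.append(np.max(np.abs(spec - (-1j) ** n * hg(n, x, dx))))
```

The residuals `errs` stay at the level of the discretisation error, confirming the shape of the spectrum matches the shape of the envelope order by order.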
Essentially what we've created now is an optical pulse that contains exactly one single photon. These are the objects we are looking for. These are the objects we want to work with. And now we want to think about how we can change the pulse shape of single photons. This brings us to parametric down-conversion. And I've already told you it's a second-order nonlinear process. Pump photons decay into pairs of photons. You can investigate your source by looking at coincidence clicks and single clicks. You can get all sorts of efficiencies, source performance benchmarks. You can look at theory. You can write down a simplified interaction Hamiltonian, which comprises a coherent amplitude for your pump field. We assume the pump is strong and it doesn't matter if one photon gets lost; if you have 10 to the 12 photons, we wouldn't see if one is converted. And we're generating photon pairs for signal and idler. And the parametric down-conversion state in the photon-number basis is basically given by perfect photon-number correlations. If I have n photons in my signal mode, I will have n photons in my idler mode. And I don't always generate only one pair. Most of the time I generate nothing, sometimes one pair. In a few instances I might generate two pairs or even three pairs and so on. Again, we have to look at energy conservation and momentum conservation. However, if we use a waveguide (this is a four-millimeter KTP chip which we used back in the days in Erlangen), the direction of the spatial wave vectors is defined by the waveguide. So they will all point in the same direction, because they have to propagate along the waveguide no matter if they want to or not. Coming from this idea, we can write down an ideal two-photon state, where we know we can detect one of these photons and the other one will just be a nice pure photon.
We can generate that by creating one photon in the signal mode and one photon in the idler mode, and we expand these in pulse-mode operators. We identify the spectra of our photons and we get our state. However, things are not always so simple, because typically photon pairs from parametric down-conversion are correlated in energy, meaning in frequency, and in transverse wave vector. Again, we don't have to worry about the wave vector when using waveguides, but there are still frequency correlations. Why would they hurt us? Well, the heuristic picture for that is the following. You want to detect one photon to herald the presence of the other one, because all you can do with PDC is make use of the fact that if there's one photon in signal, there's one in idler. If I detect one here, I know there must be one here, which I can use in an application. If I have frequency correlations between signal and idler and I have non-frequency-resolving detection in my heralding arm (typically photon detectors are not frequency resolving, they just give you a click whenever a photon hits them), that means that my heralded photon collapses into a classical mixture of all possible spectra that are allowed by this amplitude. These are these gray shaded spectra down here. That means that generally, my heralded photon will be in a mixed state. However, if I have a parametric down-conversion source without any spectral correlations, then no matter what frequency my heralding photon has, the heralded photon will always be in the same spectrum, or pulse mode if you like. Meaning that I actually do herald a pure single-photon state. Why is this important? It's because pure photons show Hong-Ou-Mandel interference. Mixed photons do not. If you have mixed photons and you send them onto a beam splitter, you will not see the bunching that you expect from Hong-Ou-Mandel. So you need pure photons in photonic networks.
For instance, for quantum computation, but also for everything else you could think of where photons need to interfere at a beam splitter. This can be in metrology, in phase sensing. This can be in temporal measurements. The original Hong-Ou-Mandel paper was on the very precise measurement of sub-picosecond timings. So whenever you need something like that, you want to have pure photons. Right. Let's have a look at where these spectral correlations come from, because if we understand where they come from, we might come up with a way to actually get rid of them again. In guided-wave parametric down-conversion, we use pump pulses. They propagate through our waveguide. It's poled. We generate signal and idler. With energy conservation and phase matching, you've seen that before; it's just the same plots all over again. Energy conservation and phase matching lead to these photons being generated with this joint spectrum. They are not separable anymore. How can we understand that? How can we visualize that? Well, we typically use a two-dimensional representation. We use a space of signal and idler frequencies, where energy conservation is oriented along minus 45 degrees. The reason for that is that signal and idler frequency have to add up to the pump frequency. And the transverse profile of this function is given by the spectrum of the pump pulse that we use. So this direction is the spectrum of our pump pulse. Phase matching can be a function with an arbitrary orientation in that space. However, the transverse profile, if you don't work really hard, will be a sinc. Why is that? It's because in a typical waveguide, where you don't work hard, light enters the waveguide, the nonlinearity is on; you propagate through your waveguide, you leave it, the nonlinearity is off. This is a rectangular function. If you Fourier transform that, you get a sinc profile. That's why you have this transverse sinc profile. The product of both is what we call the joint spectral amplitude.
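As a sketch, the construction just described can be written down directly: a Gaussian pump envelope that depends only on the sum frequency (the minus-45-degree energy-conservation direction) multiplied by a sinc phase-matching function. The bandwidth, the length parameter, and the 60-degree orientation are assumed illustrative values, not fitted to any real source.

```python
import numpy as np

w_s = np.linspace(-3, 3, 201)   # signal detuning, arbitrary units
w_i = np.linspace(-3, 3, 201)   # idler detuning
WS, WI = np.meshgrid(w_s, w_i)

sigma_p = 1.0                   # pump bandwidth (assumed)
theta = np.deg2rad(60)          # phase-matching orientation (as quoted in the talk)
L = 2.0                         # plays the role of the interaction length

# pump function: depends only on w_s + w_i because of energy conservation
pump = np.exp(-((WS + WI) ** 2) / (2 * sigma_p ** 2))

# sinc phase matching: the rectangular nonlinearity profile of the waveguide
# Fourier-transforms into a sinc across the phase-matching ridge
across = -np.sin(theta) * WS + np.cos(theta) * WI
phasematching = np.sinc(L * across)    # np.sinc(x) = sin(pi x)/(pi x)

jsa = pump * phasematching             # joint spectral amplitude
```

Plotting `abs(jsa)**2` reproduces the familiar two-dimensional joint-spectrum pictures: a stripe along minus 45 degrees from the pump, cut by the tilted sinc ridge of the phase matching.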
And this is the spectral correlations in your photon pair. You can use the joint spectral amplitude function to write your parametric down-conversion state, which is done with frequency integrals over these functions and the generation of signal and idler photons at the corresponding frequencies. Note that in general, this is not an ideal separable biphoton state. Now, can we somehow quantify the amount of frequency correlations in that state? We can. There is a tool that's called the Schmidt decomposition. What the Schmidt decomposition does, or rather what you would numerically do, is a singular value decomposition. It takes a function of two variables and expands it into a sum of products of separable functions with weighting coefficients. What does that mean graphically? Well, let me show you the following picture. On the left here will be the resulting sum, so that will be the joint spectrum. On the right, you will see the terms of the Schmidt decomposition. These black bars are the weighting coefficients. So as we add the second term of the Schmidt decomposition, our joint spectrum gets a bit more correlated. We add the third term, it gets even more correlated. We keep adding terms to our spectrum, and at some point we recover the joint spectral amplitude that we measured or calculated. We can then write the parametric down-conversion state in the following form. We just replace the joint spectrum by that expression. We then collapse the integrals to our broadband operators. And what that means physically is that we are now interpreting our parametric down-conversion in the following way. We're not generating photons in this weird correlated joint spectrum. Rather, we're generating photons in pairs of pulse modes, with a probability that's given by the squared expansion coefficient. In a sense, it's nice because it tells us that if my signal photon has a Gaussian mode, my idler will as well. If my signal is a second-order Hermite-Gauss, my idler will be as well.
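Numerically, the Schmidt decomposition of a discretised joint spectral amplitude is exactly a singular value decomposition. Here is a minimal sketch with two toy Gaussian joint spectra (widths are assumed values), which also computes the effective mode number and the heralded-photon purity connected to the mixed-state argument above; purity equals one over the effective mode number.

```python
import numpy as np

w = np.linspace(-3, 3, 201)     # detuning grid, arbitrary units
WS, WI = np.meshgrid(w, w)

def schmidt(jsa):
    """SVD of the discretised JSA = Schmidt decomposition."""
    s = np.linalg.svd(jsa, compute_uv=False)
    lam = s**2 / np.sum(s**2)   # normalised Schmidt probabilities
    K = 1.0 / np.sum(lam**2)    # Schmidt number: effective mode count
    purity = np.sum(lam**2)     # heralded-photon purity = 1/K
    return lam, K, purity

# separable (round) joint spectrum -> single mode, pure heralded photon
lam_r, K_round, p_round = schmidt(np.exp(-(WS**2 + WI**2) / 2))

# frequency-correlated joint spectrum -> multimode, mixed heralded photon
corr = np.exp(-((WS + WI)**2) / 0.1 - ((WS - WI)**2) / 8.0)
lam_c, K_corr, p_corr = schmidt(corr)
```

For the round spectrum the decomposition collapses to one dominant singular value; for the correlated one the weight spreads over several mode pairs and the heralded purity drops accordingly.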
So I have a direct one-to-one correspondence between the pulse mode of the signal and the pulse mode of the idler. I can then... that should say "correlation" on the slide; I didn't change that, I will. We can then quantify the amount, the effective number of modes in that state, with a quantity called the Schmidt number. That will be one if the state is single mode and goes to infinity for perfect correlations. So essentially, the more correlation you have, the bigger the number gets. It's quite a nice property. And all of these properties are encoded in the joint spectral amplitude. And the question we have to ask now: if I do a typical PDC with a correlated spectrum, these coefficients will follow a thermal distribution. So I will have an infinite number of coefficients, some of them stronger, some of them weaker. That's not a very well-controlled quantum state. It is incredibly high-dimensional. In fact, it has a dimensionality of infinity, which is about as useful as a dimensionality of one if you want to do something in an experiment, because infinity is way too much to ever hope to control. So you have to ask the question: how can you actually engineer the distribution to build a state with a controlled dimensionality? How do you do that? The answer to that is source engineering. Source engineering basically answers the question: which knobs do I have to turn to make my experiment do what I want it to do, instead of doing what nature tells it to do? To see that, we have to go back to the definition of the joint spectral amplitude, which is the product of the pump function and the phase-matching function. We also have to look at it not only in the frequency domain, but in the time domain. Parametric down-conversion has a short-wavelength pump and long-wavelength photons. Typically, dispersion in a material tells me that long wavelengths propagate at a higher velocity than short wavelengths.
So in a typical parametric down-conversion, I would expect my signal and idler photons to precede the pump pulse at the end of the interaction. I find that the angle of this phase-matching function is given, or somehow defined, by the inverse group velocities of signal and pump and idler and pump, or rather the differences. Let's look at that for KTP, potassium titanyl phosphate. These are the group velocities for potassium titanyl phosphate for Y and Z polarization if we are propagating along the X axis of the crystal. Standard PDC would be 400 to 800 nanometers. A lot of groups are using KTP at 400 to 800 nanometers to do experiments in the lab. And you see that the pump is slower than signal and idler. We have exactly the same situation that I had been talking about. Let's change wavelengths. Let's go to 775 and 1550. Photons at 1550 are much nicer than photons at 800. All of a sudden we see that the pump velocity lies between the signal and idler velocities. That leads to the fact that our phase matching now has a positive slope. It's exactly 45 degrees if the pump is exactly between signal and idler. It's around 60 degrees in the sources we are using. Now I can play. I can change the width and the shape of my pump spectrum. And I can change the angle of my phase matching, and I can also change the shape or pattern of my phase-matching function, which is not something that we are doing. There are other groups that are doing that. For instance, Alessandro Fedrizzi's group at Heriot-Watt University is doing that very successfully. They have a beautiful source where they shape the phase matching to be a first-order Hermite-Gaussian. However, we are not doing that. We are focusing on the pump. We've built such a source. We've been developing such a source, to be honest, for roughly 15 years by now. We're still working on making it better. The setup is pretty straightforward, as goes for quantum optics experiments. We have a Ti:Sapph oscillator pumping an OPO.
Part of the OPO output is frequency doubled in a second-harmonic-generation stage. We use pulse picking to go down to a repetition rate of one megahertz. We use a spectrometer to shape the width of the pump. We couple into a waveguide. We use a long-pass filter to get rid of the pump, take the photons, separate them at a polarizing beam splitter, and detect them. The rest of the OPO output also goes to the detection as a reference beam. We get coupling efficiencies from 70 to 80% into single-mode fibers. And back in the days, because we were using crappy InGaAs APDs, this corresponded to 15 to 20% Klyshko efficiencies. Today, we're using some of these sweet superconducting nanowires with efficiencies exceeding 90%, and then these efficiencies are roughly these numbers. It's just source performance benchmarks. This is what it looked like in the lab back in the days. And here's a measurement of the joint spectrum. Now, these are joint spectral intensities, so the amplitude squared of the joint spectral amplitude; we can't do phase-sensitive measurements with a spectrometer. Anyway, if you have a narrow pump spectrum, something that's narrower than your phase matching, and the phase matching is oriented in that direction, you have something that's frequency-correlated. We've seen that. Increase the width of your pump spectrum to match your phase matching and you get this nice round blob. It's decorrelated. That's a perfect single-mode source. That's quite nice. That's something that's controlled. That's something we can work with. If your pump is too broad, you go multimode again, you have correlations. However, these states are still interesting for some timing measurements. There are timing measurements where you want to use exactly these states, because they give you the best possible precision. And note that we can switch between these regimes by just changing the width of our pump pulse. We don't have to change components. We don't change the measurements.
We just change the width of the pump pulse, and we can do that continuously. For the moment, let's consider the single-mode case. One thing you always do when you do parametric down-conversion is Hong-Ou-Mandel interference between signal and idler, which tells you something about how indistinguishable the two photons are. So can I only use signal photons, or could I use both in a network? Turns out with that source, we have an indistinguishability of about 95%. You can then do Hong-Ou-Mandel interference between one photon and a weak coherent reference state. From the width of these traces you can learn something about the spectral purity of your photons, which in our case is something like 85%. Which is not the best you could possibly do, but it's a good start. You can then look at the brightness of your source: how many photons do you generate as a function of your pump power? You would expect the blue line for a single-mode source and the dashed black line for a very multimode source. You can see that the points nicely fall along those lines. So we have clean single-mode behavior. You also have a high brightness. This is for the experts in the audience. We can generate up to 80 photons per pump pulse. If we calculate back to the generated two-mode squeezing in the language of continuous variables, this corresponds to 25 dB of generated squeezing. We have no way of measuring that, but it's a very bright source, given that there's no resonator around, that there's just one pump pulse going through that source to get 25 dB of squeezing generated. Right. What else can we do? I mean, we can do one-dimensional states. One-dimensional states are not very interesting for high-dimensional applications. We can add additional pump pulse shaping. With a Gaussian pump spectrum, we get this blob, which in that case is not fully round, so the measurements here do not fully correspond to the plots down here; it's more of a schematic. But we're in the single-mode configuration.
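The 80-photons-per-pulse and 25 dB figures are consistent with the standard two-mode-squeezed-vacuum relation, where the mean photon number per pulse in each arm is sinh²(r) and the squeezing in decibels is 10·log10(e^(2r)). A quick sanity check:

```python
import numpy as np

def mean_photons(squeezing_db):
    """Mean photon number per mode of a two-mode squeezed vacuum.

    squeezing_db = 10*log10(exp(2r))  =>  r = ln(10)*squeezing_db/20
    mean photon number = sinh(r)**2
    """
    r = np.log(10.0) * squeezing_db / 20.0
    return np.sinh(r) ** 2

n25 = mean_photons(25.0)   # for the 25 dB quoted above, roughly 80 photons
```

So 25 dB of generated squeezing does indeed correspond to on the order of 80 photons per pump pulse, matching the numbers in the talk.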
Shaping our pump pulse to a first-order Hermite-Gauss, we generate a state that looks like that. This is two-mode. This has a dimensionality of two, where a Hermite-Gauss in the signal is linked to a Gauss in the idler and vice versa. That is a state something like a zero-one minus one-zero. That is a Bell state. It's a maximally entangled state, but now in the basis of temporal pulse modes. Let's go to even higher-order pump modes. We get a state with a dimensionality of three, however with slightly unbalanced coefficients. If we slightly change the shape of our pulse and use so-called cosine kernel modes, we get a state with a dimensionality of three with equal weights. We can generate user-defined temporal-mode states with that system, where we can tune the dimensionality and the relative weight of the components. We get rid of this tail, so we have exactly a dimensionality of three and not higher. With the current setups we have in the lab, we can go roughly to a dimensionality of 20, currently limited by the resolution of the pulse shapers, which we hope to replace within the next half year, which should get us to something like a dimensionality of 50. That's not massive, as in you could do 10,000, but 50 is much more than three already, so we're pretty happy with that. We can switch between these states on the fly, without changing elements, without changing hardware. We think this approach is scalable and reconfigurable, and that's the thing we're interested in. Let me say a few words on the quantum pulse gate. What would we like to have if we want to measure these temporal modes? We want a device that accepts many temporal modes at the input, selects one of them, converts it to a different frequency, different shape, different polarization, whatever, and transmits all the rest. We want to be able to choose which mode we want to select, and we want to be able to measure in superpositions of modes.
That would be something like a state tomography, which Mehul will also be talking about, which I'm not. You could think that sum frequency generation maybe could work for you. Why is that? Well, if you're looking at that picture, it looks a bit like a beam splitter, like it takes one mode and reflects it and transmits all the others. The Hamiltonian of a beam splitter is something like an a and a c-dagger. Sum frequency generation does sort of the same: it annihilates a photon at an input frequency and creates a photon at a different frequency. So, from a very abstract point of view, sum frequency generation should be something like a beam splitter that operates on frequencies instead of spatial modes. Then you can run through all the theory done for parametric down-conversion. You can define your, let's call it, joint spectral amplitude. We typically speak of the transfer function here. Then we can ask ourselves: does this operate on one single pulse mode? How do we define pulse modes in the first place? Then we do exactly the same as we did for PDC. We start with the operation of the sum frequency generation. We do some math, and we end up with a result that's analogous to the Schmidt mode decomposition in parametric down-conversion. We get these broadband operators. We destroy a photon in one broadband pulse mode. We generate a photon in another broadband pulse mode with some coefficient. Now, the interpretation of that is the following. We can say our sum frequency generation is a concatenation of beam splitters that operate on pulse modes with different reflectivities. Now we have this multimode situation where we have this huge range of reflectivities, and we want to turn that into a single-mode operation. But we know how we can do that from parametric down-conversion. In parametric down-conversion we used this idea of group velocity matching. Exactly the same works in sum frequency generation.
Only now we have to make sure that our input and pump pulses propagate through the waveguide at the same velocity. If they do that, we get the situation that's depicted here. We have our pump envelope function at plus 45 degrees, because it's sum frequency. We have a very narrow horizontal phase matching function, horizontal because pump and input travel at the same velocity. That gives us a two-dimensional transfer function that has no frequency correlations. And we've learned from parametric down conversion that no frequency correlations mean single mode in this pulse mode basis. So we can build that in the lab. We can note that the sum frequency conversion efficiency is proportional to the overlap integral between the spectrum of the pump and the temporal modes of the signal. It's essentially because they don't walk off from each other: the pump will only convert that part of the signal that has a field overlap with it. All other parts are transmitted. If you look at the waveguide, we're doing that in lithium niobate; we need a four and a half micrometer poling period. It's incidentally the waveguide that I showed you in the first lecture. Our input signal is at 1550 nanometers. That's where we want it to be, because telecom is nice and our parametric down conversion source works there. And then luckily, luckily, luckily, thank you, nature, you've been kind for once: the pump wavelength is in the Ti:sapph range, which is something that you can easily have available in a laboratory. If you use potassium titanyl phosphate, your pump is at 630 nanometers, your jaw drops, and you just scrap the project. With lithium niobate it's Ti:sapph, and everything works with Ti:sapph. Right. So, a bit of a closer look at that operation, just to again draw the parallels to PDC. We have a transfer function that's a product of the pump and the phase matching. We can use the Schmidt decomposition formula.
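The statement that "the pump will only convert that part of the signal that has a field overlap" can be checked in a couple of lines: project a few temporal modes onto the pump shape and look at the overlap integral, which the conversion efficiency is proportional to. Pulse shapes and widths below are toy values, not the lab settings.

```python
import numpy as np

# In the group-velocity-matched regime, conversion efficiency is
# proportional to the field overlap between pump and signal mode.
t = np.linspace(-5, 5, 1000)
dt = t[1] - t[0]

def normalize(f):
    return f / np.sqrt(np.sum(np.abs(f) ** 2) * dt)

gauss = normalize(np.exp(-t**2 / 2))          # fundamental Gaussian mode
hg1 = normalize(t * np.exp(-t**2 / 2))        # first-order Hermite-Gauss

pump = gauss                                  # pump shaped as the Gaussian
for name, mode in [("Gauss", gauss), ("HG1", hg1)]:
    overlap = np.abs(np.sum(np.conj(pump) * mode) * dt) ** 2
    print(name, round(overlap, 3))            # conversion ∝ this overlap
```

The matched mode gives unit overlap (it gets converted); the orthogonal Hermite-Gauss mode gives zero overlap (it is transmitted untouched), which is exactly the mode-selective behavior wanted from the pulse gate.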
Now we find that the selected mode at the input is given by the spectrum of the pump, and the converted mode is given by the phase matching function. That means if our pump is a Gaussian, we'll select a Gaussian mode from the signal, converted to some mode that's given by the phase matching. If our pump is a Hermite-Gauss, we will select a Hermite-Gauss, convert it to the same output mode, and so on. We can write our pulse gate as a special beam splitter that now acts on one single pulse mode, with a reflectivity that we can tune by changing the energy of the pump pulse. More pump pulse energy means more conversion, means more reflectivity. Good. That was the first demonstration in 2014, where we were using weak coherent light. We were looking at a Gaussian spectrum for the signal at telecom wavelengths, and we were using a matched pump pulse and an orthogonal pump pulse, and we saw a drop in the converted signal by about 80%. These days, we routinely get a contrast of about 500 to 1, which is sufficient to actually do quantum optics with that. The pulse gate has led to a lot of additional work. After that demonstration, we've looked at timing measurements, bandwidth compression, photon tomography, and everything you can think of along those lines. Now, very briefly, let me just flash a few applications. This is really just to show you that we can do things with that. First is time-frequency manipulation. We're running our parametric down conversion in a nicely single-mode regime. We take one of the photons and either measure its spectrum directly, or we send it to the pulse gate, convert it, and measure the output spectrum. From that picture, you can already guess that a broad spectrum gets mapped to a narrow spectrum. That's what we see. Our input spectrum has a width of about a terahertz; the output spectrum is about 130 gigahertz. With the newest sample that we have, that's a long pulse gate, almost 80 millimeters long.
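"More pump pulse energy means more conversion" can be made quantitative: for an ideal single-mode gate driven like a beam splitter, the reflectivity follows a sine-squared law in the pump amplitude, i.e. in the square root of the pump energy. A minimal sketch, where the coupling constant `g` is a made-up placeholder, not a measured lab value:

```python
import numpy as np

# Idealized pulse-gate "reflectivity" vs pump pulse energy:
# eta = sin^2(g * sqrt(E)), with g a hypothetical coupling constant.
g = 1.0
pump_energy = np.linspace(0.0, (np.pi / g) ** 2, 200)
eta = np.sin(g * np.sqrt(pump_energy)) ** 2

# Conversion rises to unit efficiency at g*sqrt(E) = pi/2, then
# over-rotates back down, just like over-coupling a beam splitter.
print(eta[0], eta.max())
```

The practical point is simply that the reflectivity is a continuous software knob (pump energy), from fully transmissive to fully converting.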
We expect an output bandwidth on the order of somewhere around 10 to 20 gigahertz. Now we are getting to a regime which becomes interesting, because these are bandwidths that come close to what broadband quantum memories can do and what quantum dots can do. We also checked that we don't introduce any noise in the process; the pulse gate process is noiseless, we don't get any noise counts. Probably the interesting thing here was the comparison to filtering. Of course, you can just get a narrow spectrum by placing a spectral filter: you filter the rest out, and you also have a narrow spectrum. If you do that for that compression factor, you get a transmission of 13 percent. With the pulse gate, we have an internal efficiency of 60 percent; including every single component in the whole beam path, we're still at somewhere around 17 percent. So we're doing better than just spectral filters. The next application is tomography of photons. We're looking at the following. We're measuring the joint spectral intensity of our PDC source. We're then looking at the density matrix of one of these photons in the basis of Hermite-Gauss modes. What's important here is that we expect that PDC to be single mode, and we see that in our photon there's essentially only one single mode. You can extract a purity from the density matrix. We can measure it with different methods, and they match pretty well. We can apply a phase chirp to our pump pulse. Phases are nasty because you don't see them in intensity measurements. However, phases also introduce correlations. So although that spectrum is still round, there are correlations between signal and idler, which show up in the tomography with the pulse gate, and your purity decreases. We can go to this two-dimensional case, where we expect exactly two modes populated, and we see with the pulse gate that, yes, in fact we have a two-dimensional state with two modes populated. Finally, here's a slightly different take on what you can do with a pulse gate.
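The purity extracted from the reconstructed density matrix is just Tr(ρ²); a minimal sketch with illustrative diagonal matrices in the Hermite-Gauss mode basis (the numbers are toy values, not the measured data):

```python
import numpy as np

# Purity of a reconstructed density matrix: Tr(rho^2).
rho_single = np.diag([1.0, 0.0, 0.0])   # one mode populated: pure state
rho_two = np.diag([0.5, 0.5, 0.0])      # two equal modes: reduced purity

for rho in (rho_single, rho_two):
    print(np.trace(rho @ rho).real)
```

A single-mode photon gives purity 1; the two-dimensional case with two equally populated modes gives 0.5, which is why the purity drop is a direct signature of the phase-induced correlations mentioned above.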
Assume you have two light pulses, they have a timing difference, and you want to measure that timing difference. You can use an autocorrelator; that works perfectly well if your pulses are well separated. If your pulses come closer and closer together, the error in your estimation becomes larger and larger. This is this blue line: the estimation error as a function of the separation in units of the pulse bandwidth. If you're below the bandwidth of your pulses, the error in your estimation basically diverges. Why is that? It's because you can't resolve them anymore. It's called the resolution limit. It turns out quantum mechanics allows you, in principle, to do measurements whose precision does not diverge at small separations. And then it turns out that all you need to do is take a quantum pulse gate, project onto the first few temporal pulse modes, and you're done. This has been done in the spatial domain by different groups. All of those techniques rely on interferometry, so every single one of them is phase-sensitive. We've done it in the time domain, and we've looked at a slightly extended scenario where the two pulses not only have a timing offset, but also unequal intensities. The idea is you want to do distance measurements with light. You generate a light pulse, you tap off a bit and detect it directly; that's your start signal. You send the rest to a distant object, light is scattered back, and you detect the back-scattered light. That back-scattered light will have a much lower intensity than your reference pulse, and you still need to measure that separation. Ideally, you also want to measure the total timing offset of your pulses, so three parameters in one shot. You can see here the following. Looking at the separation, the orange line is what we set. We set the separation to be 0.1, we expect to measure 0.1, and this is for pulses having equal intensities and then getting more and more imbalanced. And back here, this is the case that's relevant for applications.
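The core of the mode-projection trick can be shown in a toy calculation: take two slightly separated, equal-intensity Gaussian pulses and project them onto the first Hermite-Gauss mode of the reference pulse. The HG1 detection probability grows quadratically with the separation, so the separation stays estimable even far below the pulse width. Units, widths, and the lowest-order inversion formula below are assumptions for illustration, not the experiment's calibration.

```python
import numpy as np

# Two equal-intensity Gaussian pulses at +-s/2, projected onto the
# first Hermite-Gauss mode of the reference pulse (sigma = 1, toy units).
t = np.linspace(-10, 10, 4000)
dt = t[1] - t[0]

def norm(f):
    return f / np.sqrt(np.sum(np.abs(f) ** 2) * dt)

psi0 = lambda x: norm(np.exp(-x**2 / 2))      # reference Gaussian pulse
phi1 = norm(t * np.exp(-t**2 / 2))            # its first Hermite-Gauss mode

s = 0.1                                       # separation << pulse width
p1 = 0.5 * sum(np.abs(np.sum(phi1 * psi0(t - d)) * dt) ** 2
               for d in (+s / 2, -s / 2))     # incoherent mixture
s_hat = np.sqrt(8 * p1)                       # lowest-order inversion
print(round(s_hat, 3))                        # recovers ~0.1
```

No attempt is made here at the full three-parameter (separation, total offset, intensity imbalance) estimation from the talk; this only illustrates why projecting onto a few temporal modes beats direct intensity detection at small separations.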
These gray-shaded areas are the errors you would expect from a direct measurement, just using an autocorrelator. These blue points are what we measure with the quantum pulse gate. That alone doesn't tell us a lot, because we need to look at error bars. These are the error bars. They are much smaller than the classical errors; in fact, they are smaller by about a factor of 10,000. Even more, we can show that these measurements, also for the other parameters, saturate the quantum limit, meaning that we're doing the best possible measurement that quantum mechanics allows for estimating these three parameters. You can't do better than that measurement; you can only do as well as that measurement does. That was pretty exciting when we got that data. So, with that, let me come to the summary, just in time. I've talked about optical modes. I've tried to convince you that light pulses are valid optical modes and that we want to use these pulsed modes as a basis for high-dimensional information encoding. And then I said that, well, you need sources that can generate these states and you need measurements that can detect these states. Those were the two devices. We looked at parametric down conversion, with which we can generate high-dimensional states with a user-chosen dimensionality without changing the hardware of the setup. I've shown you the pulse gate, which can measure these high-dimensional states, again without changing the hardware of the setup. All of this is done by pulse shaping, that is, setting phase masks on a spatial light modulator, so it's all software-based. Finally, I've touched upon some applications, notably among those the tomography of single photons, where we could verify that yes, if we dial up a two-dimensional state, we actually have a two-dimensional state. With that, I want to thank you again for your attention. If you have more questions, feel free to come and find me in the coffee break. I have to leave after lunch, unfortunately.
Thank you a lot.