OK. So, well, thank you very much for the invitation, for the introduction, and for the opportunity to talk about quantum optics and quantum information with superconducting circuits. In this lecture today, I want to give you an introduction to what these quantum circuits are, how we can think of them, how we can quantize them, and how we can slowly build up systems that we can then use for quantum information processing and quantum optics. Maybe just as a short introduction, what I'm working on: in Innsbruck, we are working on experiments towards using these superconducting circuits for quantum simulation, to build large structures which we can use as amplifiers, and also to try and couple them to micromechanical systems in the end, which can be used for sensing applications or also to test quantum physics. You'll get these slides, so you can download them from the home page. I already have a version up there. This one is a slightly modified one; I fixed some typos and so on. I'll send the modified one, absolutely. So just as a starter, here are some very nice references. And this is by no means a complete list. Up top you have lecture notes by Rudolf Gross and Achim Marx from the Walther-Meissner-Institut, which are really great if you want to know more about low-temperature physics and superconductivity in general. Then there are, essentially, two sets of lecture notes from the Les Houches lectures, one by Steven Girvin and the other one by Michel Devoret, which has been recently redone and put on the arXiv, on superconducting circuits, how to quantize them, how to build qubits, and so on. And then there are these three references down there, which tell you more about the state of the art of superconducting qubits and circuits, how to do quantum information with them, how to build up interesting systems. One thing I really want to also mention is this last one, the IBM Quantum Experience. Who knows about it already? OK, quite some. So who doesn't know it? Check it out.
It's your chance to run your algorithms on a real superconducting quantum computer. And I think at the moment they're up to 15 qubits that you can control, or something like that. So this is really nice, and it's really well implemented. So check it out. OK, now as we are going to talk about superconducting circuits, I just very, very briefly want to talk about superconductivity, and just a little bit I also have to talk about the Josephson effect, because that's the main ingredient we need to build up these circuits. So we want to have superconducting materials, superconducting elements, because in the end we want to build qubits which have very long coherence times, which don't lose any energy. Superconductors are great for that, because if I cool the material below a certain temperature, it will become superconducting. It will lose all electrical resistance, among other properties. And there's essentially no dissipation there. Now, superconductivity comes about because my electrons essentially form Cooper pairs. These electrons can interact with the lattice of my material, they can exchange phonons, and they can form these pairs. Now, these phonon-mediated interactions typically have a range of about 100 nanometers. So if you think about your whole lattice, your whole material, you'll have a lot of Cooper pairs there. So the volume of a Cooper pair is about (100 nanometers) cubed; it depends a little bit on the superconductor. And what is important about these electron pairs is that their momenta are anti-correlated. That's what makes them a Cooper pair. Just to give you some numbers: typically, per Cooper-pair volume, we have about 10 to the 6 electrons around. So in this (100 nanometers) cubed, there are quite a couple of electrons. And because these Cooper pairs are effectively bosons, they can condense into a coherent quantum state. And I can describe them with a macroscopic wave function.
So you can have this macroscopic wave function, where this psi 0, or actually psi 0 squared, just tells you about the number density, so how many Cooper pairs are around. And then you have this additional phase factor there, which can vary across the material. But it's a really quantum description of this whole system, and it's a really macroscopic wave function. So this is sort of the physics behind it: this really allows us to have these Cooper pairs and have dissipationless current flowing. Maybe this doesn't really explain yet why there is no dissipation anymore. So I think a very nice picture is the following. Think about a Fermi surface. On the left, this is just a two-dimensional cut through the Fermi sphere, and I apply a current. That means that my whole Fermi surface is displaced towards one direction. Now, single electrons can still scatter off my lattice, off the atoms that are around. And what will happen is they will slowly relax back to the left, and that will make my current die out over time. But what happens now for Cooper pairs, because they are correlated (they have this strong momentum anti-correlation, I should say), is that whenever one of those electrons scatters, for example to the left, the other one has to be scattered to the right. That means that this whole displaced Fermi surface cannot relax back to the ground state. So really what makes this supercurrent live forever is this very strong momentum anti-correlation. This is really a quantum effect we are seeing here that keeps superconductivity working. OK, so we have these Cooper pairs. They can flow completely dissipationlessly through our material. Another very important thing we will need in the following and in later lectures is the so-called Josephson junction.
Now this junction is essentially a sandwich of a superconductor, a very thin insulating barrier, typically something like a nanometer, and another superconductor on the right. The superconductor on the left and the superconductor on the right I again have to describe with this macroscopic wave function. So I will have Cooper pairs in both of those superconductors, and it turns out they can actually tunnel through this barrier. This barrier is typically an oxide or something, so normally it would act like a resistance. But it's so thin that my Cooper pairs can actually tunnel completely dissipationlessly across this barrier. And Brian Josephson then came up with those Josephson equations, essentially describing the current flow through this junction. And what he found is that the current through the junction, so I of phi, is given by some parameter Ic, which is just a constant that depends on the material I use, the actual size of the junction, how thick that barrier is, times the sine of phi. And phi is the phase difference between the macroscopic wave functions of the Cooper-pair condensates on the left and on the right. So it's again a quantum interference effect between those two wave functions I have there. The second Josephson equation says that if I apply a voltage, my phase will start to continuously evolve, which in the end means I get an oscillating current. So those are the two Josephson equations. From them, I can actually write down a voltage-current relation. And what you find is that the voltage across this device depends on the time derivative of the current, with some strange prefactor I'll talk about in a second. Who knows an electrical element which also does the same, where the voltage is proportional to the time derivative of the current? It should remind you of an inductor.
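As a quick numerical sketch of the two Josephson equations just described (the 30 nA critical current and the 1 microvolt bias are purely illustrative numbers I picked, not values from the slides):

```python
import math

# Flux quantum h/2e; a DC voltage V makes the phase wind at 2eV/hbar,
# so the supercurrent oscillates at f_J = V / PHI0.
PHI0 = 6.62607015e-34 / (2 * 1.602176634e-19)  # ~2.07e-15 Wb

def junction_current(Ic, phi):
    """First Josephson equation: I = Ic * sin(phi)."""
    return Ic * math.sin(phi)

def josephson_frequency(V):
    """Oscillation frequency of the current under a DC voltage V."""
    return V / PHI0

# At phase difference pi/2 the junction carries its full critical current,
# and one microvolt across it gives oscillations near 483.6 MHz.
print(junction_current(30e-9, math.pi / 2))  # 3e-08 A
print(josephson_frequency(1e-6) / 1e6)       # ~483.6 MHz
```

This is just the textbook relation, not a model of any particular device; the point is that a DC voltage produces an AC current, which is exactly the "continuously evolving phase" from the second equation.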
And so the main difference here, though, is that it's not just a constant I have as a prefactor, but this somewhat complicated expression with this cosine of phi in there, again this phase difference. What this means is that the value of this inductance actually depends on the current flowing through the junction. So what I have here is a so-called nonlinear inductor. But to first order, it looks very similar to an inductance, so very often people also call this the Josephson inductance. Another thing that will be useful in the later lectures is the energy stored in such a device. And it's actually quite easy to calculate: voltage times current gives us the power, and if we integrate that over time, we get the total energy. So what I do is I just take my two Josephson equations, so the Ic times sine phi, and put it in there. And I know the voltage is the time derivative of the phase, so I also put that in. I've pulled all of the constants out in front of the integral. And then all I have to do is evaluate that. Instead of integrating over t, because I have this time derivative of the phase, I integrate over phi. And I arrive at this final equation. What we see is that the energy stored in such a Josephson junction is proportional to the cosine of the phase difference. OK, so this is just something to keep in mind; we'll need it later on again: this Josephson junction is essentially a nonlinear inductance, and the energy stored in it is proportional to the cosine of the phase difference. Another thing I wanted to talk about is another phenomenon of superconductivity, which is fluxoid quantization. Now, don't worry so much about that actual equation up there. It might look a little scary, but we'll go through it.
And you just have to understand what it means, not really what is written there. This comes from the London equations; I just wanted to do it right, that's why I wrote it there. So say I have a superconducting ring and I apply a magnetic field to it, and I want to know what happens. What I see then is that a superconducting ring current will be induced, but in a very specific way. And this very specific way we can see if we look at these equations. So this first term here, this first integral, is actually the supercurrent in the loop. The second term comes from the applied magnetic field. And then this part here on the left is the gradient of the phase of the wave function. The London equations tell me that there is a relation between those. What is important is that, if you think about it, this wave function here, if I go around this ring once and come back to the same point, the phase has to come back to the same value, because my wave function has to be single-valued. There's just no other way. If I have one wave function describing this whole ring and I go around once, the phase has to be the same. So this integral here on the left, sorry, on the right, can only be 2 pi n. There's no other way. What it actually means is that if I apply an external magnetic field, the total flux through that loop will always be an integer multiple of this constant h over 2e, which is the so-called flux quantum. So in essence, if I apply a magnetic field, it will induce a supercurrent such that I get a quantized flux through that open area right here. So the phase and the flux in such a device are linked, and there is an integer multiple of this fundamental flux quantum going through that loop. So why is it interesting?
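Just to put numbers on this flux quantum and the quantization condition, here is a minimal sketch (the "3.4 flux quanta" applied field is a made-up illustrative value):

```python
h = 6.62607015e-34    # Planck constant, J s
e = 1.602176634e-19   # elementary charge, C
PHI0 = h / (2 * e)    # flux quantum h/2e, ~2.07e-15 Wb

def trapped_flux(applied_flux):
    """The ring's induced supercurrent adjusts the total flux to the
    nearest integer multiple of the flux quantum."""
    n = round(applied_flux / PHI0)
    return n * PHI0

# Apply 3.4 flux quanta worth of field; the ring screens it to exactly 3.
print(PHI0)                             # ~2.068e-15 Wb
print(trapped_flux(3.4 * PHI0) / PHI0)  # 3.0
```

The rounding to the nearest integer is of course a cartoon of the real energetics, but it captures the statement that the total flux through the loop is locked to n times h over 2e.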
Well, it turns out we can make this ring a little more complicated and actually break it up with Josephson junctions. So now it's not a ring anymore, it's a square, but you get the idea. Now I have a Josephson junction here, so this should indicate the barrier; here's another Josephson junction, which should also indicate a barrier. I can send current through this loop, and I can apply an external magnetic field. Now, if you think about the total current going through this device, I can just sum it up: I send current in, it will split up, and it will go through the junction on the left and the junction on the right. For simplification, we assume both junctions are identical, so I have just one common Ic here. And then I use some trigonometric identities to rewrite these equations. What I get is the cosine of the difference of the phases across the junctions and the sine of their sum. And now, from this fluxoid quantization I've just discussed, we know that the total phase going around here actually has to be 2 pi n. And it will be comprised of the phase drop across junction 1, the phase drop across junction 2, and it will also be given by this external flux, by this external magnetic field I apply. This minus sign just comes from my integration convention, because I have to go around the loop for my integral: that guy, for example, is positive, then I go over and that guy will be negative. So here's the difference of the phases. Now, if I take this equation and put it up here into this term, what you will see is that all of a sudden I have a device where the critical current in some sense depends on the external flux I apply. Or, in other words, the energy I store in that device will depend on the external magnetic field I have there. So it means that with this external field I can tune my critical current and I can tune this Josephson energy.
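The flux-tunable critical current of this symmetric two-junction loop (a DC SQUID) can be sketched as follows; the 20 nA per-junction critical current is an assumed illustrative number, not from the slides:

```python
import math

PHI0 = 2.067833848e-15  # flux quantum h/2e, Wb

def squid_critical_current(Ic, flux_ext):
    """Effective critical current of a symmetric two-junction loop:
    Ic_eff = 2 * Ic * |cos(pi * Phi_ext / Phi0)|."""
    return 2 * Ic * abs(math.cos(math.pi * flux_ext / PHI0))

Ic = 20e-9  # assumed per-junction critical current, 20 nA
print(squid_critical_current(Ic, 0))         # maximum, 4e-08 A
print(squid_critical_current(Ic, PHI0 / 2))  # fully suppressed, ~0 A
```

So sweeping the external flux between zero and half a flux quantum tunes the effective Josephson energy from its maximum all the way to zero, which is exactly the knob used later to tune qubit frequencies.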
So this, later on, will be very useful for building qubits which we can tune in frequency using a magnetic field. OK, so this is all I'm going to say about superconductivity and things related to it. This was really just a very, very short overview. As I said, if you want to know more, look at these lectures, or I'm happy to talk about it. OK, so now let's actually start talking about the topic of this lecture, which is superconducting circuits. Superconducting circuits, or quantum circuits, are really a combination of many things. We need this macroscopic quantum phenomenon, superconductivity, to get coherence, to build our devices. We need the Josephson effect, it turns out, to get nonlinearity, which will allow us to build qubits. And we need this fluxoid quantization because we want tunable qubits, and it turns out I can also use it to make variants of different qubits. So there's a whole, how should I say, zoo of different superconducting qubits I can realize using Josephson junctions, capacitors, and inductances. OK, so why are superconducting circuits interesting? What's nice about them? I think you've already heard a little bit about ion-trap quantum computation from Christof Wunderlich, if I have seen that correctly. So one would wonder: ion traps are a great tool, so why would I want another technology? One advantage is that the qubits are man-made devices. They are built in a clean room. And so the advantage really is that we can completely engineer them. We can engineer the spectrum; we have full design flexibility; we can change whatever parameters we want. This is done with microfabrication, the same technology used to create microprocessors. So in principle, this is scalable, which is nice. Another advantage is that, because I can build them to my liking, I can essentially engineer the dipole moment, if you want to say so. So I can make very, very strong interactions.
I would say much stronger than in most other systems. And another nice thing is that the typical energy scales for qubits and other devices are in the microwave regime, so I can do complete microwave control. And this is very nice because there's a whole range of commercially available components, essentially thanks to the mobile-phone industry. Of course, no advantage comes without a disadvantage. Well, they are man-made objects, so all of the advantages we just discussed also mean there will be a spread in parameters. Essentially, it will be impossible for me to create two perfectly identical superconducting qubits. That means that unlike atoms or ions, which I can perfectly use for atomic clocks, with a superconducting qubit, I claim, this will never be possible, not to the same degree. Wherever I take, I don't know, a calcium ion around the world, it will have the same transition frequency if I do the experiment right. But for a superconducting qubit, that will not be the case. I said we have very strong coupling because we can engineer the dipole moment. This is actually great, but it's also a disadvantage, because I have to be careful that my qubit only couples to the things I want it to couple to. So I need to protect it against thermal microwave fields and other radiation. I have to avoid two-level fluctuators, which are essentially dirt on surfaces; we'll talk about that a little bit. So I also need a very good qubit design to make sure all of this works. And what it also means is, I told you the typical energy scales are somewhere in the gigahertz range. If you think about that in terms of energy, we have to go pretty cold to get our system into the ground state. Typically that means cryogenic environments and something like 10 millikelvin temperatures. So in the last couple of years, this whole field of circuit QED has taken off, and there's an ever-growing number of groups.
So the list I put up there is by no means complete; these are just a couple of names, so those dots you should really take literally. There are now superconducting qubits or circuits working on quantum optics, quantum simulation, quantum information; you can look at quantum measurement nowadays. So there's a whole host of applications where these devices work really great. OK, so if we want to talk about quantum information processing, the really minimal requirements, and I really say minimal, are the DiVincenzo criteria. So maybe let's go through them step by step and see how superconducting qubits actually fulfill them. This will essentially also give us a list of what we are trying to understand in this lecture and in the following lectures as well. We need a scalable qubit. Well, it turns out, and I'll show you in a little bit, that we'll have that, because we have quantized energy levels in these electrical circuits, and we can make them very scalable because we have microfabrication. We need to be able to initialize the state. This, in our case, means we just have to cool them down, such that the energy I need for a transition, the transition frequency, typically a few gigahertz, is actually much larger than the temperature I'm sitting at. A very useful number to remember in this context is that 20 gigahertz is about a kelvin. Typical qubit frequencies are around, say, 5 to 10 gigahertz. That means that to really ensure that these qubits are in the ground state, we have to go well below 100 millikelvin. But all of that is technology that is totally available. We also, of course, need single- and two-qubit operations, so we have to be able to run gates. This is actually almost straightforward: we have this very nice microwave control, we have this very strong coupling, so actually doing gate operations, and very fast gate operations, is not too difficult.
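The initialization argument above is easy to quantify with the thermal occupation of a mode; the frequencies and temperatures below are the illustrative values quoted in the lecture:

```python
import math

h = 6.62607015e-34   # Planck constant, J s
kB = 1.380649e-23    # Boltzmann constant, J/K

def thermal_occupation(f, T):
    """Mean thermal photon number of a mode at frequency f (Hz) in a
    bath at temperature T (K): n = 1 / (exp(hf/kT) - 1)."""
    return 1.0 / math.expm1(h * f / (kB * T))

# Rule of thumb from the lecture: 20 GHz corresponds to about 1 kelvin.
print(h * 20e9 / kB)                   # ~0.96 K
# A 5 GHz qubit at 10 mK is essentially in its ground state ...
print(thermal_occupation(5e9, 0.010))  # ~4e-11
# ... while at 1 K it would be nowhere near it.
print(thermal_occupation(5e9, 1.0))    # ~3.7
```

This is just the Bose-Einstein occupation applied to a single mode, but it makes the point: at dilution-refrigerator temperatures the residual excitation probability of a gigahertz qubit is utterly negligible.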
We have to be able to measure the state. We want to do this most likely in a quantum non-demolition fashion, so I want to be able to do repeated measurements. That means I have to be able to couple my qubit to some other circuit element that allows me to measure its state, and we will see we can actually use microwave cavities to do that. The last thing, which I think was for quite a while the point people put the most effort into, was trying to increase coherence times to get them up to numbers we can actually work with. People had to find the right superconducting materials, had to figure out how to decouple them from the environment, and learned a couple of tricks and clever engineering to really make all of this work. For all of these points, I'll try and talk a little bit about how this is realized in detail with superconducting qubits. You already might see that there are some, how should I say, discrepancies here. I want to have a strong coupling to the outside world for the state measurement and the two-qubit operations, but then I want to really decouple the qubit from the environment to get long coherence times. So this really sounds like there's a contradiction here, but it turns out you just need clever engineering, clever physics, to really do it right, and then it works. So, I like this graph very much. This is out of a Science review article by Michel Devoret and Rob Schoelkopf, which asks: how can we progress towards a quantum machine, a quantum computer that is actually useful? In 2013, we were just taking the next step: we had single-qubit operations, we could run algorithms on multiple qubits, we could do QND measurements. And what has been done in the last years is actually implementing logical memory, so really going towards error correction and trying to keep the qubit alive.
There are now, I would say, two or three proof-of-principle experiments that really show that this idea works. Now, in the lecture, I'll actually mostly be here, or I should say a little here, and mostly even down there, which really talks about how we build circuits, create qubits, and essentially get the right Hamiltonian. This is what it's about. OK, good. Any questions so far? So this was all more or less motivation. Now let's see how we can actually start building up quantum circuits, how we can really quantize them, and I'll show that on the simplest example, which is a simple LC oscillator. The elements we have at hand to build these circuits are actually three: we have capacitors, we have inductors, and we have this Josephson element, the Josephson junction. For a moment, let's forget about the junction and stay with the simple capacitors and inductors. What I'll also do is consider them in the lumped-element limit, which means that their actual physical size is much smaller than the wavelength. So I don't have to take any retardation effects and so on into account, and this works out quite well in our case, because the wavelength of the microwaves is a few centimeters, and the typical size of these devices is tens to hundreds of microns. OK, so now this should be a reminder of physics, of electrostatics and electrodynamics. In such a capacitor, I can put charges, so I'll get a voltage drop across it; I'll have an electric field. And the voltage and the charge are related via the capacitance of this device, the capacitor. And I can again calculate the energy stored in this device. So you see, this is maybe a little bit the scheme: we always want to know about the energies, because in the end we are interested in the Hamiltonians. For the energy stored in such a capacitor, all I have to do is integrate over the voltage.
And I know from this relation that I'm essentially adding charges one by one, and I calculate the energy I get out in the end. If I do this, I end up with this equation you should be familiar with: the energy stored in such a capacitance is the charge squared divided by 2 times the capacitance. Now we can do the very same thing for an inductance. In this case, I'm interested in a current flowing through the device, which will create a magnetic field and an effective flux through this coil. The relations are that the voltage drop is proportional to the time derivative of the current, but here it is really just the inductance up front. And of course, the voltage is minus the time derivative of the flux; this is really just Lenz's law. So I can combine these equations and end up at a relation between the flux through this coil and the current, and it's again just the inductance that is the prefactor here. We can again quite simply calculate the energy stored in this device: I just take the power and integrate it over time, I use these equations, put them in here, and I end up with a very similar expression where I essentially just have to replace charge with flux and capacitance with inductance, and I get the total energy stored in the inductor. So this is something which is very nice: there is this duality between capacitances and inductors, where I just take charges and replace them with fluxes, and take a capacitance and replace it with an inductance. OK, so we have those two elements, and we know the energy in them. So now let's combine them and build an LC oscillator. We just put them in parallel; we hook them up with a wire. What we get then is a resonant circuit. And it turns out, and I'll sort of justify it in the following, that we can actually view the energy stored in the inductor as a kind of potential energy.
And I can view the energy in the capacitance effectively as a kinetic energy. It turns out I could also swap them if I wanted to; the physics I get out would be the same. But it turns out that later on this will be the slightly more intuitive picture. So in some sense, oops, this should actually point to the flux here. So in some sense, the flux will be like my coordinate, and the charge will be like my momentum. What I can do now is write down a Lagrangian of the system, where I just take the kinetic energy minus the potential energy, so I subtract one from the other. Then, to arrive at the right side, I just put in that my charge is capacitance times voltage, and I know that my voltage is minus the time derivative of the flux. I can put all of that in here and arrive at this equation. Now, from the Euler-Lagrange equation I can get my differential equation, and you should be very familiar with it: it's just like for a harmonic oscillator. The second time derivative of the flux is minus some constant times the flux, so just a harmonic oscillator equation. You know the solutions to that, and we also know what the resonance frequency of such a device will be: it's just given by the square root of this prefactor right here. So this LC oscillator is really identical to your regular harmonic oscillator. If you know the physics of a harmonic oscillator, you pretty much know the physics of this LC oscillator; it's really the same. One more thing is actually quite nice, which makes it all work in the end: charge and flux are conjugate variables. This will really allow us to transfer all of this into a quantum picture. And because we have the Lagrangian, we can easily calculate our Hamiltonian. So if you just walk through here, you can have a look at these equations yourself.
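To make that resonance frequency concrete: with illustrative lumped-element values of 1 nH and 1 pF (my choice, not numbers from the slides), the resonance lands right in the few-gigahertz range quoted earlier for these devices:

```python
import math

def lc_frequency(L, C):
    """Resonance frequency of the LC oscillator: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2 * math.pi * math.sqrt(L * C))

f = lc_frequency(1e-9, 1e-12)  # 1 nH, 1 pF
print(f / 1e9)  # ~5.03 GHz
```

So nanohenry inductances and picofarad capacitances, which are easy to fabricate, naturally give microwave-frequency oscillators.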
It's actually quite easy to follow; just do what's written here. What you'll find is that what we get out is pretty much exactly what we would expect, what else: the total Hamiltonian of the system is given by the energy stored in the capacitance plus the energy stored in the inductor. So no surprise there for this very, very simple system. And now, yes, essentially, exactly, I'll make the comparison again in a minute. Exactly: in principle, the position of your regular harmonic oscillator is equivalent to the flux through the inductor, and the momentum of your harmonic oscillator translates into the charge here. So if you do this replacement, all the physics stays the same, and all the intuition you have for your harmonic oscillator, or your quantum harmonic oscillator, you can translate over. So now we can quite easily do what we always do in quantum physics: we replace our charge with the charge operator, and we replace the flux with the flux operator. And because charge and flux are conjugate variables, the commutation relation will turn out to be exactly minus i h-bar, or, if I flip them, i h-bar. So what I get is a Hamiltonian where I have the charge operator squared divided by 2C plus the flux operator squared divided by 2L. So yes, I skipped over that. Yes, absolutely, very good remark, absolutely right. This is again where superconductivity comes in, because now this will be a collective effect of all of our Cooper pairs, but it is one macroscopic wave function. So this will be totally fine, it turns out. But a very, very fair point. Now, I already said that I treat charge and flux as position and momentum, so I can just do the same thing I do for quantum harmonic oscillators.
I can introduce raising and lowering operators, such that my flux is a plus a-dagger, my charge is a minus a-dagger with a minus i up front, and I have this now slightly strange-looking prefactor. There is no mass and resonance frequency anymore, but there is the so-called characteristic impedance, which is just given by the square root of the inductance divided by the capacitance. Oh yes, I forgot to mention: this prefactor out front actually tells you what the zero-point fluctuations of the charge or the flux are. So this is essentially related to the ground-state size of our LC oscillator. I can then take these equations and put them into the previous Hamiltonian, and what I get out is, no surprise, the regular Hamiltonian for a quantum harmonic oscillator, where essentially I have my parabolic potential, but this time not in the position but in the flux, and I get my discrete energy levels drawn up there. Just as an add-on, I can also calculate the ground-state sizes, and they are really related to these prefactors right out here, if you calculate this from the above equations. So what I want you to take away is really that we have this duality: if you know the physics of the thing on the right, which I guess most of you have done in quantum physics one, then you know how the LC oscillator on the left works. If you look at those two Hamiltonians, it's really very much the same. Momentum goes over into charge; it turns out mass is transferred into capacitance; position becomes flux. I get the very same commutation relations; I can write down the same raising and lowering operators doing the same replacements; I can calculate a resonance frequency. Here it's given by the square root of k over m, so spring constant and mass; here it's 1 over the square root of inductance times capacitance. In addition, what people use here is this characteristic impedance.
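Using the same illustrative 1 nH and 1 pF values as before (my numbers, not the lecture's), the characteristic impedance and the zero-point fluctuations of flux and charge come out as:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J s

def characteristic_impedance(L, C):
    """Z = sqrt(L / C), the quantity replacing m*omega of the mechanical case."""
    return math.sqrt(L / C)

def zero_point_fluctuations(L, C):
    """Ground-state spreads: Phi_zpf = sqrt(hbar*Z/2), Q_zpf = sqrt(hbar/(2*Z))."""
    Z = characteristic_impedance(L, C)
    return math.sqrt(hbar * Z / 2), math.sqrt(hbar / (2 * Z))

Z = characteristic_impedance(1e-9, 1e-12)
phi_zpf, q_zpf = zero_point_fluctuations(1e-9, 1e-12)
print(Z)        # ~31.6 ohms
print(phi_zpf)  # ~4.1e-17 Wb, about 2% of a flux quantum
print(q_zpf)    # ~1.3e-18 C, roughly 8 elementary charges
```

These expressions follow directly from the mechanical x_zpf and p_zpf by the replacements mass to capacitance and m*omega to Z; the absolute values are only meaningful for the assumed L and C.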
I could write down something similar for the mechanics. I'm not so sure it actually makes a lot of sense to do that, but in principle, you could. And the take-away message is that all of that, in the end, is described by this Hamiltonian. OK. Now, how am I doing? OK, I'm actually not too good on time. So, OK, we have this device. Now the question is, how can we actually learn about its properties? It turns out all I have to do is hook it up to the outside world. How do I do that? Well, I could attach wires. It turns out that's not ideal; that's too strong a connection to the outside world. I want some weak coupling. So what I introduce are these capacitances right here. I can send signals in from the left, or microwave signals in from the left, and I can, for example, look at what comes out on the right side. Now, these capacitances actually tell me how well my system is coupled to the outside world. And so they will tell me about the losses in the system, because if I have some excitation living in here, it can go out in this direction or that direction. And how easy that is depends on the size of the capacitance. Typically, just for simplification, this input and output is combined into just one so-called coupling rate to the outside world. Very often, though, I'm also interested in how much energy I actually lose internally here. So maybe this picture is not quite correct: I don't have a perfect capacitance, I don't have a perfect inductance, but I have some additional lossy elements there because of some imperfections. And I'm interested in the total quality factor, maybe. So how can I learn about all of these parameters? Well, it turns out all I have to do is measure the spectrum, meaning I take my input signal, I vary its frequency, and I check what I get out here on the right. Yeah. No, I don't need a junction here. No, no, no junctions here yet. So this is a simple wound coil.
Make two parallel plates. They can be big. Make it macroscopic, make it out of a superconductor, and you'll get something that you can cool into the quantum regime. Now, what do we, if you want to learn. Yes. Say the loss rates, gamma in and gamma out. Oh, so ideally, gamma in and gamma out are only losses, and all I get is the dephasing associated with those losses, but no excess dephasing. The gamma internal, I would say, typically is a loss rate, but could also be dephasing. So there you can have an additional dephasing term on top of just the regular dephasing. But if we measure gamma, what we get out in the end is not T1; we will measure T2. So we'll measure the dephasing rate of the device, it turns out, because the lineshape and the coherence decay are Fourier transforms of each other. So if you want to measure such a device, you can send things in and record what comes out. And this is essentially the input-output formalism from quantum optics. Now, instead of light as an input field, I take microwave signals, but all the rest, the whole description, will be completely identical. So what I in the end really want to do is measure: what do I send in? What gets reflected back? What comes out the other end? What if I send it in the other way, what do I get out here, the reflection, or what do I get out the other side? So I typically want to measure transmission and reflection. And what you will see then is that these curves versus frequency are essentially Lorentzians. Far away from the resonance of this device, all the power I send there will be perfectly bounced back to me, so the reflection here should be 1. Then at resonance, I actually manage to put energy into the device, and now it can leak out also on the other side, so I'll find this dip. And then the reflectance will come back up to a value of 1. If I look at the transmittance, I'll see the reverse picture. At first, I can't even get anything in here.
It just bounces off. Then at resonance, I manage to excite this device, and then it can also send out photons the other way, and I actually find a peak in my transmission signal. Now, from such a measurement, I can learn the resonance frequency of my resonator, which already tells me about the Hamiltonian. But I can learn more: I can learn the total dissipation I have, which is given by the linewidth, and the ratio of the coupling rate to the total rate, which is given by the depth of this dip here. So from such a spectroscopy measurement, I can really learn all the relevant parameters of my device. Now, just to add some commonly used terminology: people usually don't talk about loss rates, necessarily. They very often talk about quality factors. But those are just trivially related, by taking the resonance frequency and dividing it by the loss rate. This is the so-called quality factor. And keep in mind what we've already mentioned: gamma really describes the dephasing rate of the resonator, so this gamma doesn't only mean energy loss. Really, the dephasing and gamma are Fourier transforms of each other. And this quality factor can include the coupling losses, which very often are actually desired, because I want to be able to get signals in and signals out. But we'll also have these internal losses, and those are typically undesired. Ideally, it should be a perfect oscillator, and only my coupling should make it worse. And there are many, many reasons why we can have these losses, and I'll talk about those in a little bit. So I think I just have some time to show you some actual devices. This is actually a picture from Leo DiCarlo's group in Delft. Those meandering lines here are actually resonators. They are of a slightly different form than what I talked about; they are so-called coplanar waveguide resonators.
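To make the spectroscopy picture concrete, here is a minimal sketch of the Lorentzian transmission of a symmetric two-port resonator, using a simple input-output model. The linewidth gives the total loss rate, and the on-resonance peak height gives the ratio of coupling to total loss. All rates below are assumed, illustrative numbers, not values from the lecture:

```python
import math

def s21_mag(delta, kappa_ext, kappa_int):
    """|S21| of a symmetric two-port resonator at detuning delta (rad/s):
    a Lorentzian of half-width kappa_tot/2 (simple input-output model)."""
    kappa_tot = kappa_ext + kappa_int
    return (kappa_ext / 2.0) / math.hypot(kappa_tot / 2.0, delta)

# assumed, illustrative rates
omega0 = 2 * math.pi * 6e9       # resonance frequency
kappa_ext = 2 * math.pi * 1.0e6  # combined external coupling rate
kappa_int = 2 * math.pi * 0.2e6  # internal loss rate
kappa_tot = kappa_ext + kappa_int

Q_total = omega0 / kappa_tot               # quality factor = f0 / linewidth
peak = s21_mag(0.0, kappa_ext, kappa_int)  # on resonance: kappa_ext / kappa_tot
```

On resonance the transmission is kappa_ext over kappa_tot, so a shallow peak (or a shallow reflection dip) is the signature that internal losses dominate over the coupling, exactly the diagnostic described above.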
For these lines here, I've drawn a sketch down there of what they are. You have a ground plane. Essentially, what you do is take a coax cable and squish it flat onto a two-dimensional plane. So you have a ground plane here on the top, a ground plane here on the bottom, and a center pin actually carrying your microwave signal. Now what I do to that center pin is interrupt it at two locations and make the length between those two interruptions exactly lambda-half. What I get then is a standing wave of the electric field right here. This is then my fundamental resonance, just a lambda-half resonance. But I'll of course also get resonances at two times the frequency, three times the frequency, and so on; I get a whole comb of resonances for this device. That's like a cavity, like a Fabry-Perot cavity in some sense. Yes. And you can really view those gaps here as your mirrors, and depending on how big you make those gaps, you pretty much change the reflectivity of your mirrors. So I can send microwaves in, I can get microwaves out. The typical length here is lambda-half, so a few centimeters. As for the other dimensions, the gap here is a few tens of micrometers, and the film thickness is at most a couple hundred nanometers. Now, with these kinds of resonators, the best thing people can do, state of the art, is quality factors of about 10 to the six, somewhere in that range. Another variant of a resonator is: let's make a box out of a superconductor. Well, I don't necessarily need to use a superconductor; it will just improve my quality factors. So this is just a three-dimensional box, a so-called waveguide microwave resonator. And in this box, I just have to fulfill Maxwell's boundary conditions. So if you look at the fundamental mode of this resonator, it looks something like this: I have an electric field maximum in the middle, the electric field goes to zero on the sides, and it points upward.
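The comb of lambda-half resonances is easy to estimate: the n-th mode sits at n times the phase velocity divided by twice the length, where the phase velocity is set by the effective permittivity of the waveguide. A small sketch with assumed numbers, a 1 cm line and an effective permittivity of about 6.45, roughly what a CPW on a silicon substrate would give; both are illustrative values, not from the lecture:

```python
import math

c = 299792458.0  # speed of light in vacuum (m/s)

def cpw_harmonics(length_m, eps_eff, n_modes=3):
    """Resonance frequencies (Hz) of a lambda/2 coplanar-waveguide resonator.

    eps_eff is the effective permittivity of the line; for a CPW on silicon
    it is roughly (1 + eps_r) / 2, about 6.45 (assumed illustrative value).
    """
    v_ph = c / math.sqrt(eps_eff)  # phase velocity on the line
    f1 = v_ph / (2.0 * length_m)   # fundamental: length = lambda/2
    return [n * f1 for n in range(1, n_modes + 1)]

# a 1 cm long resonator: fundamental lands around 6 GHz, with
# harmonics at exact integer multiples, the "comb" of resonances
modes = cpw_harmonics(0.01, 6.45)
```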
And I again have many, many modes in this device, just like you would have in a drum. I can again send microwave signals in and out: you can put some couplers here, which attach to SMA cables, so you can send microwaves into the device and get microwaves out. Again, you'll have multiple resonances, and the resonance frequencies are actually given by this equation here. Now, what is nice about these devices is they are very easy to make. You can literally go into the workshop yourself if you know how to operate a mill, machine out those two halves, bolt them together, and cool them down, where cold means, of course, below a kelvin. And you'll get quality factors of like 10 to the six to 10 to the seven. So this is actually quite nice. There are other variants. You can do something similar with what I call a coke-can resonator. It literally looks like a coke can; it's also almost about the same size. And with such a device, if you think about how to do it right, you can get quality factors in excess of 10 to the eight. Those are among the highest quality factors anyone has achieved for microwave resonators. The only exception are superconducting accelerator cavities, which are even better. And well, then there are other variants; whatever you want to do, you can pick the resonator that suits you. There are resonators which pretty much take a coax cable, short it out at one end, and make the center pin lambda-quarter long. So this would be a lambda-quarter resonator. People have seen quality factors of 10 to the seven with those. You can also make the actual device we have discussed, which is this lumped-element resonator. You have your meandering line here, which is an inductance, and you have your finger capacitors. So the resonance frequency here is not determined by a geometric length; it's really set by inductance times capacitance.
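The "equation here" for the box modes is the standard rectangular-cavity formula: the frequency of mode (m, n, l) is c/2 times the square root of (m/a) squared plus (n/b) squared plus (l/d) squared, with a, b, d the inner dimensions. As a sketch, with assumed dimensions of 30 x 8 x 40 mm (placeholder numbers, not the lecture's), the fundamental TE101 mode lands in the usual few-GHz range:

```python
import math

c = 299792458.0  # speed of light in vacuum (m/s)

def box_mode(a, b, d, m, n, l):
    """Resonance frequency (Hz) of mode (m, n, l) of a rectangular
    microwave cavity with inner dimensions a x b x d (meters)."""
    return (c / 2.0) * math.sqrt((m / a) ** 2 + (n / b) ** 2 + (l / d) ** 2)

# assumed inner dimensions of a machinable box: 30 x 8 x 40 mm
f_te101 = box_mode(0.030, 0.008, 0.040, 1, 0, 1)  # fundamental mode
```

Just like the drum analogy, there is a whole ladder of higher modes; any index you raise pushes the frequency up.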
And in this case, the whole size of that thing is a few hundred microns, so much smaller than the wavelength. The disadvantage is that these have the lowest quality factors of the whole bunch, only about 10 to the five. And I guess with that, I should end. So in the next lecture, we'll give you a quick idea of why we have these big differences in quality factors and where they come from. And then we'll actually start talking about how we can use the same tricks we have used so far to build superconducting qubits, understand a little bit how they work, and see how we can couple them to other devices. Yes, please. In principle, yes. In practice, I'd say the community uses only a few materials: aluminum, niobium, niobium-titanium-nitride, and that's it. There are a couple of reasons. For example, people don't use high-Tc superconductors because they are typically ceramics; they are not easy to machine, not easy to grow, and it seems like they wouldn't give an apparent advantage. People have tried building, say, coplanar waveguides out of high-Tc materials, and they didn't see a difference. So it's not worth the effort of a much more complicated fabrication for no gain. Oh, yes. No, no, you can totally use classical fields. Absolutely. It turns out your harmonic oscillator is a quantum system, yes, but it's a very classical quantum system. Just by itself, if you only have a harmonic oscillator at your disposal and no other non-classical input state or non-classical device, you'll never ever see anything but classical physics, even though it's a quantized system. If you drive it with a classical drive, you just see a coherent state in it. But still, classical spectroscopy is just fine for getting the parameters out: I can find the resonance, and this is pretty much all I need to fully determine my Hamiltonian.
And if I want to know more, say what I need for a master equation, I can look at the losses, and this is also given by this classical spectroscopy. Yeah, so let's go back a little bit. You mean the ground-state fluctuations? Oh, well, those are just given because it's a quantum state. If you look at your regular quantum harmonic oscillator, then in your ground state, if you calculate the expectation value of position and momentum, that is well defined, it's zero. But if you look at the variance, then the ground state has a finite extension, and it's just the same here. Instead of the ground state having an extension in position and momentum, for such a device it has one in charge and flux. Okay, but otherwise there's no real difference here. Or maybe I misunderstood the question. Ah, sorry, yeah. Sure, okay, yeah, yeah, yeah, okay. Yeah, but that's all going towards the question of how to improve coherence times, pretty much. Yes, yes, of course. I mean, this was the biggest initial problem with this whole technology, that the initial devices had coherence times of a few nanoseconds. Meanwhile, we are up at like 100 microseconds, even up to a millisecond nowadays for some devices. So this is plenty to do everything we want, because gate operations are really fast. Sorry. The lectures are available. Yes, absolutely.