I will start today; the morning is for me. The morning is about numerics for quantum transport, and we will start with the first part. In this lecture I will show you the background of this: I will explain a bit more what quantum transport actually is. What I'm dealing with is a little different from many of the other lectures you will see in this school, where it's often about finding ground-state properties of an interacting, finite system. Quantum transport is a bit different, because we are usually dealing with non-interacting systems. But instead, the systems are not closed, they're open: there is some connection to the outside world, which makes things quite a bit different. I will explain to you what quantum transport is for these open systems and give you a general background. I do this on a relatively hand-waving level, so don't expect that I go into very much detail there. Then I will go a bit into the numerical methods that we are using. I think this is also useful for those of you who do not do quantum transport, because the methods I'll present are quite general. For example, what I call "Hamiltonians for the computer", tight-binding models: these are methods for getting a continuous Hamiltonian into matrix form, and that's what you always have to do if you need to deal with a Hamiltonian on a computer. And finally, when I show you how we can calculate quantum transport properties numerically, I'll also show you two numerical techniques that are quite general. For example, we will have to solve nonlinear eigenvalue problems, which sounds more difficult than it is in the end, and we will have to solve very large sparse linear systems, where I just want to show you some tricks for doing that very efficiently. Then, in the end, I will show you a little bit of physics again: some quite prominent quantum transport phenomena, because those are the ones we will simulate in the second part of the lecture, in the hands-on tutorial. There I'll show you how to use our computer program Kwant, which implements the algorithms I show here, and you can use it to calculate these quantum transport phenomena, see them yourself, play with it yourself, and change parameters to see how the physics changes in these systems, OK?

So let's start. What actually is quantum transport? Let's first start with: what is transport? Transport means that something is being transported, something is flowing, and in particular what we are looking at here is the flow of charge, that is, current flow. Transport could also be heat or these kinds of things, but I'm not going to deal with that here. Now, of course, everybody knows current flow from the classical world, and in school we all learn about resistors. This is a typical diagram you would see in school: you apply a voltage, you have some resistor, and then a current flows. What you learn is that the value of the resistor is given by the amount of current that flows with respect to the voltage; in particular, resistance is defined as voltage divided by current, $R = V/I$. Now, it just turns out that you could also have defined it the opposite way.
And that quantity is called the conductance; it is defined as the current divided by the voltage, $G = I/V$. It just turns out that in quantum transport this is the more natural object to work with, so we will work with the conductance. If you see conductance and you're not familiar with it, just remember it's the inverse resistance. What it means is that the more conductive a material is, the larger the conductance: a metal has a higher conductance than an insulator. Just remember that.

Now, in the classical world, we all know there's Ohm's law, and Ohm's law for the conductance takes this form: the conductance is given by a proportionality factor, the little sigma here, the conductivity, and it grows linearly with the cross section, $G = \sigma A / L$. So the wider you make your sample, the larger the conductance; that makes sense, you can just pass more current through it with the same voltage. And it goes down inversely proportionally to the length. That's Ohm's law: if you make your system longer, if you add two resistors in series, the resistances add up, resistance goes up, conductance goes down. Now, Ohm's law comes from classical physics. We live in a classical world, so that's what we're used to. But if you look at how it's derived, it comes from the Boltzmann equation, where you have little electron balls bouncing around in a metal classically. And this is only valid if the system dimensions are much larger than the wavelength in the system, the quantum wavelength, typically denoted by $\lambda_F$. And also only if the system dimensions are much larger than the phase coherence length. If you want quantum effects, the phase information of your particle has to be conserved, and the length over which it is conserved is called the phase coherence length. It's determined, for example, by phonon scattering, which would destroy your phase. So the phase coherence length is typically a function of temperature, and at room temperature it's pretty short, maybe 10 nanometers or something, I don't know. But if you go to low temperature it becomes very large; it can become micrometers or even more.

Consequently, when we think about these restrictions, we see that with the advances of the last 20 years, people have been able to fabricate small nanostructures where this matters. I'll just show you two examples here. This one is a nanowire; this little thing here is a nanowire with a diameter of about 100 nanometers, so it's actually not that small if you think about it. Here, this scale is a micron, and there are all kinds of gates around. This is a sample where I was also involved with some of the theory; this is measuring weak localization, that's important. Or another example here: this is a ring, a ring made out of metal, a typical system for seeing a certain quantum effect, the Aharonov-Bohm effect. I just want to show you these nanostructures. Well, actually, they're not really nano, they're more microstructures, because you see the scales here are microns. But what people do in the lab is cool these structures down in a so-called dilution refrigerator. This is the inside of one; you put your sample here.
And this plate here is cooled down to about 10 millikelvin; that's the state of the art. At 10 millikelvin you have a phase coherence length on the order of microns, so these systems are actually phase coherent, even though they're relatively large. Plus, they're usually made out of semiconductors, so you have a wavelength on the order of, let's say, 50 nanometers or so. That means the system dimensions, for example this lateral dimension here, or this one, are comparable to the wavelength, so you expect all kinds of quantum effects to show up.

So in the lab, seeing quantum effects is quite normal, I would say. But is there any way we would see quantum effects in daily life, at room temperature? Well, we're probably not quite there yet. But what I show you here is a TEM picture of a transistor in a recent Intel CPU, the Broadwell CPU class, where Intel has adopted a so-called 14-nanometer process. It's actually funny if you look it up: there are these processes to which they give numbers, like 14 nanometers, and the one before was 22. It has nothing to do with actual sizes on the chip; it has to do with some scaling factor they quote. It's about making transistors overall smaller, and at some point the number did correspond to a gate length, but nowadays it doesn't anymore. OK, that's just a side remark. But still, what you have here are pretty small device dimensions. For example, what you see here is a silicon fin, a so-called FinFET transistor: a silicon fin wrapped by a dielectric, this thing here, and then a metal gate on top. That's the active part; these are the channels the current flows through, and the gate switches it on and off. Right now these things sit at a distance of 40 nanometers from each other, so you see, this is pretty small; this is maybe less than 10 nanometers here. Still, in these devices quantum effects usually don't play a role yet. But by coincidence I was talking to somebody from Intel just last week, and they told me that, because Intel plans way ahead in their production, they are already doing quantum simulations of transistors now, because they know that within five or ten years this will be relevant even at room temperature. So quantum transport is a thing that maybe even engineers should be interested in now.

OK. So now let's see how we actually make a model for this. The model we use for quantum transport, for an open quantum system, is in a sense a simplification, but it's still quite a general model. If you look, for example, at this structure that I showed you before, the interesting things are happening here in the middle. When you look at how the current gets to that middle region, you see some wires attached to it; in this case they're also pretty small. These are so-called leads; that's basically the name for a wire that connects some element to the outside world. This is where the electrical conduction happens. And if you go further out, you see that this is attached to a huge, gigantic gold bond pad, which is attached to an, on that scale, gigantic wire, which then goes into some current or voltage measurement setup.
So this little object here in the middle is connected to the outside world with these leads. The way we model this in theoretical physics is to say: we have some interesting internal region, the so-called scattering region, and the scattering region is connected to leads. Because we cannot really deal with the complicated setup where everything goes up to a big bond pad and a measurement setup or whatever, and because it shouldn't actually matter, we model it by attaching wires that extend to infinity. So we have these leads that are just wires going off to infinity. Now, in this particular case I drew it such that the wires are much smaller than the scattering region. That doesn't have to be so. For example, you could imagine a wire which is very, very big, then you make it somewhat smaller, and something happens here; in that case the scattering region would be that part, and the leads would be much larger than the scattering region. This is just a picture here.

OK, so that's the model: a scattering region attached to infinitely long leads. What we first have to do is work out the properties of the leads. The reason we make these leads infinitely long is that they are somewhat simple and we can solve them analytically. We're interested in what happens in the scattering region, so we don't want a difficult problem where we don't even know what comes in; you want what comes in to be as simple as possible, because the complicated part will already be the scattering region. And the reason one takes an infinitely long wire as a lead is that an infinitely long wire has translational symmetry, and if it has translational symmetry, you can apply Bloch's theorem. Note that I always show you two-dimensional examples in this lecture, just because they're easier to draw, but all of this also works for three-dimensional systems. So here I have a wave function in x and y, but you could also have it in z; it doesn't matter. I will always rotate my coordinate system such that in the leads the x direction is along the lead, along the wire, and y and z are perpendicular. Now, Bloch's theorem tells you that the wave function in this lead has the form of a transverse wave function, depending only on the y coordinate, times a plane wave along the wire: $\psi(x, y) = \chi(y)\, e^{ikx}$. That's Bloch's theorem. So along this direction we have a plane wave, free motion, whereas across the wire we have confinement, so we have some quantized states in the wire. In general this transverse wave function, as we call it, will depend on some quantum numbers; because it's a confined state, these will usually be discrete quantum numbers. It could, in principle, also depend on the momentum. In the examples we're looking at now it will not, but in the tutorial you will see examples where it does. And what I want to do now, just to keep you awake, is to have you solve this problem: for the example of a free particle, so just a Hamiltonian $p^2/2m$, in a wire of width $w$ with hard-wall boundary conditions.
I would like you to calculate both the transverse wave function, don't bother about normalization, and the energies. I'll give you three minutes for that, and then I'll show you what the result is. I'm quite impressed; most of you are really writing. I'm proud of you, no? I think it's good to be a bit interactive. So I'll give you half a minute more, then I'll show the result, and then we'll discuss what it means.

What you have in this system, with a free particle and hard-wall boundary conditions in one direction, the y direction, is that the problem of course separates. The longitudinal direction just gives us the Bloch plane wave that we know it should. And in the y direction, the problem is just a particle in a box; it's really this classic problem. You have free motion in between, and at the walls the wave function has to be 0. So the transverse wave functions will be sines: a sine which is 0 here and 0 here, and in between it can have some nodes, $\chi_n(y) = \sin(n \pi y / w)$. Now, a particle in a box has quantized eigenenergies. The quantum number is n, the number of oscillations you have, and the energy due to the transverse confinement, what we typically call the confinement energy, is just the particle-in-a-box energy, $E_n = \hbar^2 \pi^2 n^2 / (2 m w^2)$. So the energy eigenvalues consist of two parts: on the one hand the part that comes from the confinement, from the transverse wave function, plus a kinetic energy that comes from the motion along the wire, from the plane wave. That's just the free-particle energy, $\hbar^2 k^2 / 2m$.

So this is a relatively simple system. In a sense, you have a wave function which propagates along the wire and, across the wire, has some shape, for example like here; this is the first sine. In a sense these wires are waveguides; it's quite a bit like light propagation, it just guides electron waves. Now, we calculated this eigenenergy, and you can always do that: for all the solutions of the lead you can write down an energy as a function of k and some quantum number. If you plot these eigenvalues as a function of momentum, this defines the band structure of the lead. This is just like when some of you do, say, a DFT calculation, where you also calculate the band structure of a crystal, for example. Here the momentum is the momentum along the lead; it's a one-dimensional problem in our case. These band structures are quite useful when you do simulations, because you can look at a band structure and immediately see most of the important properties of the lead. For example, usually we'll do a calculation at a given energy; let's say it's this energy here. Then, to see how many modes or channels I have in this lead, I just have to see how many bands it crosses. Here I see it's three bands, so there are three propagating modes in the system. And you can just look here: the outermost one, that's the one that has, in our example (this is now the wave function squared), just this one maximum. The one from this band here will have two maxima, and the one in the middle will have three maxima.
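Just to make this concrete, here is a minimal sketch (my own illustration, not the lecture's code) of reading the number of propagating modes off this band structure; the units $\hbar = m = w = 1$ and the example energy are assumptions:

```python
import numpy as np

# Toy illustration of lead properties from the band structure
# E_n(k) = hbar^2 pi^2 n^2 / (2 m w^2) + hbar^2 k^2 / (2 m).
hbar = m = w = 1.0

def band(n, k):
    """Energy of transverse mode n at longitudinal momentum k."""
    confinement = (hbar * np.pi * n) ** 2 / (2 * m * w ** 2)
    kinetic = (hbar * k) ** 2 / (2 * m)
    return confinement + kinetic

def count_modes(energy, n_max=100):
    """Propagating channels: bands whose bottom lies below the energy."""
    return sum(band(n, 0.0) < energy for n in range(1, n_max + 1))

def velocity(k):
    """Group velocity v = (1/hbar) dE/dk = hbar k / m for these bands."""
    return hbar * k / m

print(count_modes(60.0))  # -> 3: three bands are crossed at E = 60
```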
So it's often important to know how many propagating channels the lead has, and you can just count that. You can also look at the band structure and see what the velocities of these modes are, because the velocity is just the derivative of these bands with respect to momentum, $v = (1/\hbar)\, dE/dk$; that's the usual thing. What you see here is that these slopes have a positive derivative, so a positive velocity; these are modes which move to the right. Whereas these modes here have a negative derivative, a negative velocity; these are modes which move to the left. So you see, in these leads you always have modes moving in one direction and modes moving in the other direction, and there always has to be an equal number of each, actually, because of current conservation.

Now we kind of understand what these leads are. And this is important because of the following: in this lead here, I know that I can write any wave function as a superposition of these lead modes. These lead modes are the solutions of the Schrödinger equation; they form a basis of my Hilbert space, they give a resolution of unity, and any solution will be a linear superposition of these basis states. This is why these lead modes are so important. What we want to do now is solve the Schrödinger equation for this open system. We can write down the Schrödinger equation in the different parts, in these leads here and in the scattering region, and what we have to do is connect them to solve the problem. I'm not going to do that part yet; I'm not going to solve it yet. What I'm going to do first is just write down the general form of the scattering wave function that I want to solve for later. In particular, it is very useful to define a scattering wave function with the following boundary condition: in principle the wave function could be any superposition of incoming and outgoing modes, but I choose my linear superposition such that there is only one particular incoming mode. So in one lead, there is one mode coming in, but there can be lots of modes going out, in all the different leads. Here I have one incoming mode, and here I have a linear superposition of all possible outgoing modes, in all the different leads. If you think about it physically, this makes sense: a wave comes in, it is scattered, and it can go out in all the different leads, in all kinds of channels, because I don't know yet how it's going to scatter. Plus, I also have some wave function inside the system; that's of course the complicated part, but right now I just want to keep the wave function in this form, the scattering wave function. And this means I have a lot of scattering wave functions: one scattering wave function for every incoming mode in every lead. These are my different scattering wave functions. In principle, the task we have is just to solve the Schrödinger equation. There's one caveat, though: usually, when you solve the Schrödinger equation, like the eigenvalue problem I show you here, what you usually do is calculate some eigenenergies.
Now, as I showed you before, these leads, because they're infinite, have a continuous spectrum; they have this band structure. So for every energy I will find a solution. And indeed, for every energy I will also find a scattering wave function, because in the worst case I have one incoming mode here and it's just reflected straight back, with nothing happening in the other leads. But for every energy there will be a scattering wave function. So what we actually have here is that we do not want to solve for E. What we want to do is take the energy as a parameter and solve for all the scattering wave functions at that given energy. It's just a conceptual difference, but it's different for open systems: you don't calculate the energy, you take the energy as a given parameter. And in a real system, this energy is typically just the Fermi energy; you just look at what happens at the Fermi energy.

Okay, now you see that I wrote down some coefficients here, which I denote by $S_{ll',mn}$. The little l's denote the leads; the term written here is the part which comes in through lead $l'$ and goes back out through lead $l$, and in this case both are the same lead. The m, n here mean that n is the incoming mode and m is the outgoing mode. The way you read these things is: this is the amplitude for being reflected, or scattered, from mode n in lead $l'$ to mode m in lead $l$. You just have to read it from right to left, like matrices. This term is basically coming in through this lead and going out through the same lead, whereas the other ones go into the system and are transmitted into another lead.

Now, about these coefficients S: you might note, and it might look a little weird at first sight, that I write here the mode divided by the square root of its velocity. This is just to make the S coefficients nice. It's about current conservation, because what needs to be conserved in such a scattering wave function is current. Since it's an infinite system, it's hard to normalize the wave function in the usual way, so you normalize by current instead: everything that comes in has to go out. And because the different modes have different velocities, they carry different currents; by dividing like this, we normalize all the modes so that they carry unit current. You don't have to do it this way; you could also absorb these factors into the S coefficients, right? It's just a choice that I make. I don't cheat here, it's just a choice. It's just nice that if I do it like this, then, oh, actually, let's skip over that, then the matrix I build up from these coefficients is a unitary matrix. The reason it's unitary is that I shoot in unit current here, and it gets distributed over lots of other channels, but if I sum up the probabilities of these currents, unit current has to come out again. In the end this is the same as saying that $S S^\dagger = 1$. So this is why it's a nice normalization. Now, if you look in the literature, you will sometimes also find other notations.
For example, a block between the same lead is often called a reflection block and is denoted by r, because it describes going back into the same lead. Blocks between two different leads are typically called transmission blocks, because they're about the wave function being transmitted from one lead into another. Just to note: these coefficients are amplitudes, not probabilities. And if you have a two-terminal structure, then often the notation is just like this: you write the scattering matrix in terms of a reflection block, a transmission block, a transmission block prime, and a reflection block prime. Two-terminal means there are just two leads, and the rest of the lecture will only consider systems with two leads.

Now let me go back once to the thing that I skipped before. I would guess there are quite a lot of people here who do Green's functions. Is that so? Who has done something with Green's functions; who is familiar with them? Yeah, okay, good, quite a number. Now, this scattering wave function formalism at first sight looks very different from Green's functions, but actually it's absolutely the same. Just to show you what happens in a Green's function formalism: if you have such an open quantum system, you have the Hamiltonian of your system, and then you have these leads, and the leads induce a self-energy in the system. When you calculate the Green's function, you calculate $G = (E - H - \Sigma)^{-1}$. In particular, the self-energy of an infinite lead has imaginary components. Once you have imaginary components, this means the following: if you have an isolated system, one not connected to the outside, you have discrete levels. But once the self-energy has an imaginary component, these levels are broadened. You no longer have well-defined discrete energy levels; you have a continuous spectrum at all energies. And that's just the same as having these scattering wave functions at every energy. In particular, one can show, this is a little more technical, that the imaginary part of the Green's function is related to a sum over all possible scattering wave functions in the system. You might know that if I set $x = x'$, this quantity is just the density of states, and indeed the density of states in the system is equal to the sum of the wave function probabilities: at $x = x'$ this becomes the sum of $|\psi(x)|^2$ over all possible scattering wave functions. I think this is actually very intuitive: you have a system connected to the outside, you look at all the possible things coming in, and that gives you the density of states. One can also make similar connections to non-equilibrium Green's function formalisms, for example. These things really have to be equivalent, but here you can also see it directly.

Now, let's turn back to the scattering matrix. For the rest, I will only focus on scattering matrices of the simple two-terminal form. So this is just a pictorial description: you have modes coming in, and they can be reflected or they can be transmitted.
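Written out, the two-terminal scattering matrix with these blocks, together with the unitarity we just discussed, looks like this (my compact restatement of what is on the slide):

$$
S = \begin{pmatrix} r & t' \\ t & r' \end{pmatrix}, \qquad S^\dagger S = S S^\dagger = 1 .
$$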
And in this situation, the conductance is given by a very simple formula, the so-called Landauer formula. The conductance is $e^2/h$ (h, not h-bar) times the sum of all possible transmission probabilities from the left to the right: $G = \frac{e^2}{h} \sum_{mn} |t_{mn}|^2$. So the conductance is $e^2/h$ times the total transmission probability through the system. This is very simple, and it's also somewhat intuitive: the more that can go through the system, the more current flows, the larger the conductance. Now, if you look at the literature, sometimes you see a $2e^2/h$ here. That's just a matter of definition. What I do is say that my transmission also counts spin, so if I have two degenerate spins, my transmission is twice as large; I absorb that factor into the transmission probability. I do that because I usually work with magnetic fields, where you have Zeeman splitting and then there is no spin degeneracy anymore.

Having this Landauer formula, one can actually derive Ohm's law from it, basically by assuming there is no quantum coherence. I'm not going to do that here. But one can also ask the opposite question: what happens if I have a perfect ballistic conductor, just a perfect wire, the thing we had here? It's ballistic, there's no scattering, so classically you would probably say the conductance should be infinite, the resistance zero. But the Landauer formula tells us: if I have a perfect ballistic wire with N channels, each of these channels is perfectly transmitted, because the wire is just perfect. So my total transmission will be N, the number of channels, and the conductance is $e^2/h$ times the number of channels. So here, in the quantum regime, this perfect conductor does have a resistance, one over the conductance, and it is finite. Now, there are of course discussions about where this resistance comes from; essentially, people will say that it drops at the contacts, that it's a contact resistance. But I'm not sure these discussions are really very useful. This is simply what you will measure if you measure the conductance of a perfect wire in an experiment: you will see that it has a quantized conductance. We will come back to that later. Because of this quantized conductance, this $e^2/h$ is called the conductance quantum.

Now, so far I've just shown you some background. As I told you, we have to calculate the scattering wave functions, and we have to do that on a computer. For that, it's not so good to use the continuous model Hamiltonians I showed you before, because on a computer we cannot deal with continuous degrees of freedom. We need to transform the problem into a matrix problem, with discrete degrees of freedom. There are different ways to arrive at that, and I keep all of them under the common name "tight-binding model", although tight-binding specifically means something more particular.
What I mean by a tight-binding model, as you'll see in the next slides, is just that I end up with a Hamiltonian which is a matrix. I will show you two examples in the next two slides. One method that is generally very useful is the method of finite differences. What we typically deal with are Hamiltonians containing differential operators; the kinetic energy, for example, is just a $d^2/dx^2$. One can approximate these differential operators by finite differences. In a sense, one just does what one does to define the derivative in mathematics: remember, it's defined by a limiting procedure where you take the difference between two points and divide by the distance between them. Here we simply keep a finite distance. We choose a finite lattice spacing a, and then we can approximate the first derivative as $f'(x) \approx [f(x+a) - f(x-a)] / (2a)$, which is accurate up to order $a^2$. We can also do that for the second derivative; then it's $f''(x) \approx [f(x+a) + f(x-a) - 2 f(x)] / a^2$. Be a bit careful about the factor of two; this is a typical source of mistakes: there is no $2a$ in the denominator of the second derivative. This is also valid up to order $a^2$. There are very many of these finite-difference formulas; you can check Wikipedia, for example. You can derive them all from a Taylor expansion; that's where they come from, it's very simple.

Let's apply this approximation to the example of a free particle in two dimensions. The free particle in two dimensions has the Hamiltonian $H = -\frac{\hbar^2}{2m} \left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} \right)$. Now I just apply the finite-difference formula, this second one. I see that the derivative in x and the derivative in y each give me minus two times the wave function at the point itself; together that's minus four, and there's another minus sign in front, so I can absorb that. So I get $4 \hbar^2 / (2 m a^2)$ as the onsite prefactor. Because that quantity comes up quite often, I define it as t: I just say $t = \hbar^2 / (2 m a^2)$; it has units of energy. And I also have to take these terms here, and don't forget the minus sign: the x derivative gives me $-t\, \psi(x+a, y) - t\, \psi(x-a, y)$, and I have the same in the y direction. So this is how I can approximate the Hamiltonian.

Now, how do I turn that into a matrix equation? There are different ways of doing that; what I usually find useful, just for myself, is to write it down in this way. We have our result here. Now, a tight-binding Hamiltonian: I like to write it in the form $H = \sum_{ij} H_{ij} |i\rangle\langle j|$, a sum over some discrete states, with $H_{ij}$ being my tight-binding matrix and these being bra and ket basis states. Note that I write this in first quantization. In papers you often see it written in second quantization, with $c^\dagger$ and $c$ operators. It doesn't matter; it's the same, because for the quadratic Hamiltonians we are looking at here the two are equivalent. I actually find it more confusing that way, so I don't like it; I like to do it in first quantization. And just to have some nomenclature: you have terms here which can be diagonal,
and then we call that an onsite term or an onsite energy; or you can have terms between different states, and that is then called a hopping from j to i. Again, you read that from right to left: hopping from j to i. What you can see now is that if we apply this operator to a state and project onto some basis state, we get $\sum_j H_{ij} \psi_j$. This is already quite similar to what we have here; we just have to note that applying H to psi with our finite-difference formulas is, in this nomenclature, the same as applying the H operator to psi and then projecting onto one point (x, y). So now we can read off what the matrix elements are: the i is just (x, y), one lattice point, and what's written in here, that is the j. Maybe sit down for a little while afterwards and write it out yourself; it's really simple. What one has to be careful about is the order. And if you look at this, you can read off the tight-binding model for free particles: we have an onsite energy of 4t, and we have a hopping from (x+a, y) to (x, y) which is $-t$, and the same for all the other directions.

Okay, so this is how you can derive tight-binding models with finite differences, and you can do it for problems as complicated as you wish. Once you have a Hamiltonian with some derivatives in it, you always do this, and you always end up with a simple tight-binding model on some square lattice. What can happen is that the hoppings become matrix-valued, for example if you have spin or some more complicated degrees of freedom. Okay, let's do a quick pop quiz. If I did what I just did, but for free particles in 3D, what would be the value of the onsite energy? Would it be 2t, 4t, or 6t? Who's for A? Who's for B? You have to choose, right? Who's for C? Yes, very good. C is the correct answer. I ask this because in numerics one very often treats two-dimensional systems, since they're experimentally very relevant and 3D is always costly to compute. So people are used to the 2D tight-binding model, and when they go to 3D, they often forget that they have to change the onsite energy. It doesn't matter that much, you just shift your whole spectrum, but if you forget it, you can get crazy results.

Okay, this finite-difference approximation is, as the name tells you, an approximation. So what is its range of validity? It turns out there are some relatively simple rules of thumb one can follow. In particular, for this free-particle model we know that the solutions will in both cases be plane waves: if I consider a free particle in infinite 2D space, I always have Bloch's theorem, and whether it's a continuous translational symmetry or a discrete one doesn't matter, I always have plane waves. Then I can just put that plane-wave ansatz into my Hamiltonian equation. In the continuous model, we already saw that you just get $\hbar^2 k^2 / 2m$, the wave vector squared. Now, if you do that in the tight-binding model, you see that you get terms like $e^{ik(x+a)} + e^{ik(x-a)}$.
I can take out $e^{ikx}$; what's left is $e^{ika} + e^{-ika}$, and that's a cosine. So instead of the quadratic dependence on the momentum, I get cosine terms: the energy in the tight-binding model is $E(\mathbf{k}) = 4t - 2t \cos(k_x a) - 2t \cos(k_y a)$. If you do a Taylor expansion, you see that for small momentum it reduces to the continuous spectrum, simply because a cosine is a constant minus something quadratic. And you can indeed expect this approximation to be valid for small momentum, just because we make an approximation on the lattice scale: if the wavelength is much larger than the lattice scale, the approximation has to be good, because then everything behaves very smoothly and the finite-difference approximation works.

Now, if you plot these two dispersions on top of each other, you see the tight-binding model as the black curve, with its cosine dispersion, whereas the other one has the quadratic dispersion. A very pragmatic approach is to look at this and ask: up to where do they agree? A good rule of thumb is: up to one in units of t. Up to 1t it's pretty good. If you think about it, this means that you have on the order of $2\pi$ points per wavelength, and that's really what you need to properly resolve a sine. Okay, so this is another thing you should remember with tight-binding models: if you want a good approximation to the original continuous system, better stay at small energies. You often see, even in research papers, people not doing that and going all the way up the band. But the physics happening, for example, here in the middle is very different, and here, for example, we have the opposite curvature; these are actually holes, not electrons anymore. There can be reasons why it's good to go there, but if you just want to describe your continuous free electrons, don't do it, because then you go beyond the limits of validity and you can get really crazy results.

Okay, that was the method of finite differences. Now, I said that this yields a tight-binding model, but if we go back to what the name tight-binding model means, it was of course originally made for different systems: originally it was about atomic orbitals on a crystal lattice. What people originally took, and this is still valid today, is that you have your atoms, and you know that certain atomic orbitals play a role, the ones with the valence electrons. You know the atoms are positioned on a certain lattice, and the electrons can hop between the atoms. They're tightly bound, and that's why it's called the tight-binding model. And you can, for example, use these atomic orbitals as your basis functions. Before, in the finite-difference approach, we in a sense used discrete space points as the basis functions of our Hamiltonian; you can just as well use atomic orbitals. One particular example that is that simple and was very prominent in recent years is graphene. Graphene is carbon atoms arranged in a hexagonal lattice, as shown here. The carbon atoms are bound together in this hexagonal sheet by sp2-hybridized orbitals. But those are really bound electrons; they do not move, they do not carry current, for example.
If you want to look at the current-carrying electrons, those are the electrons which sit in the $p_z$ orbitals that stick out of the plane. You can have hopping between these different $p_z$ orbitals, and you get a tight-binding model. It's actually pretty simple, because there's just one orbital, and of course all the carbon atoms are equal, so there can only be one kind of hopping energy, which is on the order of $-2.7$ electron volts. The onsite energy you can choose as anything you want; it's just a shift, so typically one sets it to zero. Doing that, you arrive at a tight-binding model of atoms sitting on this hexagonal lattice, with onsite energy zero and all hopping terms equal to t. Well, it's a different t than before, but we always use t to denote the hopping, okay? Now, you can do this for very complicated models too. For example, in semiconductors, people look at zinc-blende structures, three-dimensional crystals, but there they have to go to quite a number of orbitals. One uses the so-called sp3d5s* orbital set: s is one orbital, p is three, d is five, I think, right? So that's nine; s* is one more, ten; and with spin, that's 20 orbitals per site. That's quite a number of parameters. You can always reduce the number of parameters by symmetries, but I'm not an expert on that, so I never did it. In principle that's also possible, and it also gives you a tight-binding model; the hoppings will then be matrix-valued, because you have different orbitals on the different sites. Okay, so these are typical examples of tight-binding models.

Now, there's an aspect here that I would like to mention. These tight-binding models, if you think about it, live on some sort of regular lattice: in the finite-difference case it was the square lattice, because you go from x to x plus a, x minus a, and so on; in graphene it's this hexagonal lattice. For us it's quite easy to look at this and say: okay, the hopping here has the value $-t$; this thing belongs to this one; this is the onsite energy, it was 4t; here the onsite energy is zero. That's easy to keep track of for us humans. But now you want to turn this into a Hamiltonian matrix, and of course the way you do that is that you have to number your points consecutively with integers, and then you know: okay, this is the hopping between points one and two, so this is an entry at (1, 2). But that's quite tedious, you know, and in fact it's work that can really be done by the computer. If you look at all of these lattices, what you have is a so-called graph structure, and there is a one-to-one correspondence between graphs and matrices: a graph is a number of nodes with connections in between, and the matrix structure is directly related to the graph, in the sense that for every connection here there is an entry in the matrix. And what you can do then on a computer, and this is what we do in Kwant, is that when you define your system, you work with lattices such as these and define hopping energies on them: you say the hopping here has this value, you work with positions, with hoppings from x to x plus a.
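Just to illustrate, here is a minimal sketch in the spirit of the Kwant tutorial (my own example; the lattice size, energy, and parameter values are made up, and API details may differ between versions):

```python
import kwant

a, t = 1.0, 1.0
lat = kwant.lattice.square(a, norbs=1)

# Scattering region: W x L rectangle of the 2D free-particle model,
# onsite energy 4t, nearest-neighbor hopping -t.
syst = kwant.Builder()
W, L = 10, 30
syst[(lat(x, y) for x in range(L) for y in range(W))] = 4 * t
syst[lat.neighbors()] = -t

# A lead: the same model, with translational symmetry along -x.
lead = kwant.Builder(kwant.TranslationalSymmetry((-a, 0)))
lead[(lat(0, y) for y in range(W))] = 4 * t
lead[lat.neighbors()] = -t
syst.attach_lead(lead)              # left lead
syst.attach_lead(lead.reversed())   # right lead

fsyst = syst.finalized()            # here the sites get numbered
smat = kwant.smatrix(fsyst, energy=0.5)
print(smat.transmission(1, 0))      # Landauer transmission, lead 0 -> 1
```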
And then internally, what the program can do is number all these lattice sites and generate the matrix for you. It's just something you shouldn't do yourself; it's very tedious if you want to do it for a complicated structure. Actually, this analogy between graphs and matrices is quite useful, and it's used a lot in sparse linear algebra: people can take results from graph theory and apply them to matrices.

Okay, so now we have a discrete Hamiltonian; we have a starting point for doing a calculation. Let me come back to the scattering wave function. I'll probably skip over a couple of slides at the end; don't worry, we'll finish on time. This was the scattering wave function I showed you. I was actually cheating a bit here, because I told you in the beginning that I can write the wave function in the lead as a linear superposition of all the propagating modes. That's actually not quite true: I should also take into account all the decaying modes, the evanescent modes. For the theory it doesn't matter, because you can always say: well, I go very far out into the lead, where the decaying modes have decayed, so they're not present anymore. And for the current the decaying modes don't matter anyway, because a mode which is decaying cannot carry current. But if you want to do numerics, it's not so nice to go very far out. So in the numerics we do take these evanescent modes into account; you really need them to do the mode matching between the leads and the scattering region. And just to introduce some notation: I now absorb the square-root-of-velocity factor into the modes, so in this definition the modes are now normalized differently. And I will denote modes which can be either propagating or evanescent by u; this is just to have a compact notation.

Now, when we have a tight-binding system, we can write down the Hamiltonian of the system as a matrix. For simplicity I will now consider a system with a scattering region and just one lead. This doesn't remove any generality, because I can fictitiously collect all the leads into one lead, but the notation is much simpler. So we have a system with a certain Hamiltonian, which can be quite complicated, and we have a lead, which must have translational symmetry. In matrix form, this means the lead must consist of a repetition of blocks which all have the same matrix, the unit cells of the lead, with always the same hopping in between. The global matrix then looks like this: here, in the lower right corner, I put the system matrix, and the lead is simple, because it's this block tridiagonal matrix which always has the same blocks on the diagonal and on the off-diagonal. So this is my Hamiltonian matrix. And my scattering wave function in this basis is the scattering wave function in the system, plus the scattering wave function in the lead, which I number cell by cell: cell one, two, three, and so on. Okay, so if I act with this Hamiltonian on this wave function, I have my Schrödinger equation. And going back, I will now use a compact notation for the scattering wave function: since I have only one lead, I only have this upper part here.
And I write this as a sum over m of the outgoing modes, times $e^{i k_m j}$ for each outgoing mode, abbreviated as $\lambda_m^j$, it's just a compact notation, times the scattering amplitudes, plus the incoming mode. You see here you have a sum over one common index, so you can write it as a matrix product, which is easier for the computer: the matrix of all the modes, times the diagonal matrix of the $e^{ik}$'s raised to the power j, times the vector of these amplitudes, $\psi_j = U \Lambda^j s + u_{\rm in} \lambda_{\rm in}^j$. So this gives me a compact form of these scattering wave functions. And what you already see is that the problem we have here is an infinite system, which we cannot solve on a computer. But if we look here, once we plug in this form of psi, we have only a finite number of unknowns, because only this vector s and the system wave function $\psi_S$ are unknown. So we can later cast it into a finite-size form.

But before doing so: I was just writing down these u's, saying these are the modes, but we first have to calculate them. To do that, we now solve this infinite problem, where the lead matrix is very simple; I have just this block form with all the entries the same. I again know that the wave function has to have the Bloch form, so I plug in a wave function of this form. When I do that, and for convenience I now write my Schrödinger equation as $(H - E)\psi = 0$ (I can do that because E is given; remember, E is a parameter, so I treat it as a number from the beginning), I see that the onsite Hamiltonian acts, for example, on $\psi_0$, and I have these hopping Hamiltonians connecting to the left and to the right: $V_L$ connects to the left, so that gives $V_L \psi_{-1}$, plus $V_L^\dagger$, which connects to the other side, times $\psi_1$, and that has to be zero. Plugging in the Bloch form, this becomes $(H_L - E)\, u + V_L e^{-ik} u + V_L^\dagger e^{ik} u = 0$.

Before we continue, we have to think once more, because I already told you E is given. So what are we actually calculating here? We are calculating the mode, u, of course. But the wave vector k is also unknown for us; this is something we have to compute. This is basically the computational form of what I showed you before with the band structure: given the bands, we fix the energy and want to know at which momenta k the energy crosses the bands. Of course you could do that with the band structure and some search; that would be possible, but it's not very efficient. It's better to do it the way I show you now. So what I'm going to do is define, again just for compact notation, $e^{ik} = \lambda$. Then you see that I get here, oh, no, it didn't do that, then I see I have here a $\lambda^{-1}$ and a $\lambda$. And then I multiply the whole equation by $\lambda$; then I have $(H_L - E)\, \lambda u + V_L u + V_L^\dagger \lambda^2 u = 0$. And the unknowns, again, are $\lambda$ and u.
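As a sketch of how one can actually solve this equation on a computer, here is my own minimal illustration, with random matrices standing in for a real lead and an arbitrary example energy; it uses the linearization trick explained just below:

```python
import numpy as np
from scipy.linalg import eig

# Linearize the quadratic eigenvalue problem
#   (H_L - E) lam u + V_L u + V_L^dag lam^2 u = 0
# by doubling the vector to (u, lam u).
n = 4
rng = np.random.default_rng(0)
H_L = rng.normal(size=(n, n))
H_L = H_L + H_L.T              # Hermitian onsite block of the lead
V_L = rng.normal(size=(n, n))  # hopping between neighboring cells
E = 0.3
I, Z = np.eye(n), np.zeros((n, n))

# Generalized eigenvalue problem A x = lam B x with x = (u, lam u):
# the first row says lam*u = lam*u, the second is the equation above.
A = np.block([[Z, I],
              [-V_L, -(H_L - E * I)]])
B = np.block([[I, Z],
              [Z, V_L.T.conj()]])
lam, vecs = eig(A, B)   # eigenvalues lam = exp(ik)
u = vecs[:n]            # the upper halves are the lead modes u
# For a real lead, |lam| == 1 gives the propagating modes,
# |lam| != 1 the evanescent ones.
print(np.sort(np.abs(lam)))
```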
Now, if you look more closely at this equation, you see it's kind of like an eigenvalue problem: there's an eigenvalue lambda and an eigenvector u, except that the eigenvalue lambda appears both linearly and quadratically. So this is a true quadratic eigenvalue problem. If you see this for the first time, you might ask: can we actually solve that? And it turns out to be very simple. It's the same as what you might remember from your math courses: if you have a differential equation of second order, you can cast it into a differential equation of first order. You do that by a trick, and the trick is to define a new vector consisting of your function and the first derivative of your function. We do the same thing here: we double the degrees of freedom and introduce a new eigenvector, which is (u, lambda u). Then this form can be cast into a generalized eigenvalue problem. The first row here just says that lambda times u equals lambda times u; that's the same as what you do for the differential equation, where you also say derivative of f equals derivative of f. And the lower row is then just our equation. But now you have a linear problem, because this whole doubled vector is our unknown, and lambda only appears linearly. You can do this for any polynomial eigenvalue problem, up to any order: if it's of order n, you can reduce it to a linear eigenvalue problem of n times the original size. What we get here is a so-called generalized eigenvalue problem, because there's a matrix sitting on the right-hand side. If this V is invertible, you can also bring it to the other side and have a normal eigenvalue problem. But you can solve both the normal and the generalized eigenvalue problem very efficiently using standard dense linear algebra, LAPACK for example. This whole subject of nonlinear eigenvalue problems is relatively new in the field of numerical mathematics. So what I show you here is something which always works, but it might not always be the best way of solving the system in terms of stability. Recently, people have been studying how certain such problems can be solved more efficiently. But for practical applications, it's usually really enough to do it like this; it works pretty well.

Now we have solved this problem; this allows us to calculate both the eigenmodes and all these momenta. What we can do next is cast the original problem, where I now know this u and this lambda, into a finite-size problem. And just to show you that it's very simple, I'll just do the math here. If you don't want to follow it, you can also take a one-minute power nap or so; it's not important for the rest. Basically, what you do is take this Hamiltonian and apply it to this vector here, and we know that this row times this column gives us this equation, and this row times this column gives us this equation. What we can do now is, on the one hand, use this previous equation here, which is basically the lead eigenvalue problem, to reduce this to $-V_L$ times $\psi_0$. Then we can plug in the known form of psi, so that this becomes $V_L U$ times the vector s. And then, because for simplicity I assume here that $V_L$ is invertible, you see there's always a $V_L$ in front here.
I can just take that one out, and then I get this kind of equation. I can do the same for the second equation. One thing to note: before, you saw that I always had a zero on the right-hand side; everything was of the form $(H - E)\psi = 0$, which is of course always solved by $\psi = 0$, and numerically that's not so nice. But when we plug in this form of the scattering wave function, only the s is unknown, and here we have just a constant, a known vector, not an unknown. So that one goes to the right-hand side, and you see I have a system of linear equations for s and $\psi_S$. Because I have an incoming mode in my scattering wave function, I have a boundary condition which gives me a right-hand side in my linear system. You can do the same for the second equation, then you get that, and then you get a very compact matrix form of the scattering matrix problem, where now only the scattering matrix amplitudes and the wave function in the system are unknowns. Now, in the tutorials you won't have to do this yourself; that's done inside Kwant. I mean, you could do it yourself, but I think it's kind of pointless, because it's just a lot of bookkeeping and takes a long while.

So the question now is how to solve this. This is a linear system of equations, but of course it's a pretty big one: even if you take just a hundred-by-hundred scattering region, you have a 10,000 by 10,000 matrix, which you might still be able to solve with dense linear algebra, but it's pretty big. However, since $H_S$ is a tight-binding Hamiltonian, and that's the large part of the matrix, most of the matrix is sparse. The mode matrices U are typically dense, but they only sit at the lead, and that's just a small part of the system, much, much smaller than the rest. The biggest part of the matrix is just the tight-binding system, which is typically sparse, because you only have, say, nearest-neighbor or next-nearest-neighbor hoppings. So what you actually have to do is solve a sparse linear system of equations. You can think a lot about how to do that, but it turns out the best way, the most efficient and most stable way, is just to use existing linear algebra software for it. I was really surprised, because I didn't know this originally, but there is quite a number of existing libraries for solving sparse linear systems. We use MUMPS, but there are others: SuperLU, UMFPACK, PARDISO, and so on. There's a large number, because mathematicians have spent quite some time over the last 20 years developing numerical linear algebra for sparse linear systems, and they got extremely good at it. You cannot beat what they did in 20 years, I think. You can easily solve systems up to 10 million by 10 million; for a sparse matrix from this kind of tight-binding grid, that's typically no problem at all. Eventually, at some point, you run out of memory; this is one of the cases where you are memory limited, not limited by computational time. I rarely saw a computation take longer than an hour, basically. Now, there are some caveats when you do this, and I want to show you one, because I think it's a bit instructive.
Now, there are some caveats, though, when you do that, and I just want to show you one of them, because I think it is a bit instructive. How efficient these sparse linear system solvers are depends on the order in which you give them your matrix. This is very counterintuitive, because for dense linear algebra it does not matter: I can permute my matrix and pass it to LAPACK, and LAPACK will always take the same time. For sparse linear system solvers, that's different. What you have to do first is find a good ordering of the matrix. Actually, often you can just pass a flag so that the sparse solver does this itself, but you have to be aware that this issue exists. The reason the order of the matrix matters is that these sparse solvers compute an LU decomposition of your sparse matrix, and they make use of the fact that this LU decomposition will also be sparse. It will have more non-zeros than your original matrix, but it will still be sparse, and the solvers are smart enough to know not to calculate the zeros; that is how they save time. But in an LU decomposition, the number of non-zeros depends critically on the initial ordering of the matrix, if you forget about pivoting for a moment. To show you that, here is an example: a tight-binding system where I did something stupid, or actually where Kwant currently does something that we will change in a later version, namely numbering the sites randomly. With a random numbering of the sites, I get a sparsity pattern like this: it is a pretty sparse matrix, but there are black dots all over it. Now, if I apply an LU decomposition to that matrix, what happens is so-called fill-in. The LU decomposition is basically Gaussian elimination: you start with the first row and eliminate unknowns, which adds some additional non-zero elements, and as you go along the matrix you add more and more of them; you see there is a huge black blob here where everything has become non-zero. That is called fill-in, and it is very severe here because of the random ordering of the matrix. What people in physics essentially did before, I mean they did not do it in exactly that way, but effectively this is what it boiled down to, is a bandwidth reduction of the matrix: you try to move all the non-zeros close to the diagonal. And that helps: you get a different sparsity pattern for the fill-in, with fewer non-zeros than before. In fact, one can show that the fill-in is then confined to a band of twice the original bandwidth. So that reduces your fill-in.
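If you want to see this effect yourself, SciPy ships a classic bandwidth-reduction ordering, reverse Cuthill-McKee. The sketch below scrambles the site numbering of a square-grid tight-binding matrix and then reorders it; the grid size and the little bandwidth helper are my own choices, and the fancier orderings are normally selected inside the solver itself rather than by hand.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

# square-grid tight-binding matrix, then a random site numbering
L = 30
strip = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], (L, L))
hop = sp.diags([-1.0, -1.0], [-1, 1], (L, L))
H = (sp.kron(sp.eye(L), strip) + sp.kron(hop, sp.eye(L))).tocsr()
p = np.random.default_rng(0).permutation(L * L)
H_rand = H[p][:, p]

def bandwidth(M):
    C = M.tocoo()
    return np.max(np.abs(C.row - C.col))

# reverse Cuthill-McKee pushes the non-zeros back towards the diagonal
perm = reverse_cuthill_mckee(H_rand.tocsr(), symmetric_mode=True)
H_rcm = H_rand[perm][:, perm]
print(bandwidth(H_rand), '->', bandwidth(H_rcm))   # large -> order L
```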
But it turns out there is another ordering, called nested dissection, which is even better. That is an ordering where, well, it's a bit hard to see here, but I'll show you on the next slide, you get this kind of funny tree-like pattern in the fill-in. And one can actually prove that, at least for two-dimensional systems, this is the best ordering you can use for a tight-binding matrix. Just to explain, I will very quickly walk through it: I will show you that solving the system with the bandwidth-reduced ordering has a certain complexity, and that the complexity of nested dissection is one order better. The bandwidth-reduction ordering basically corresponds to a block-tridiagonal ordering of your matrix. Looking at the original system, and I now consider a square with N points along one side and N points along the other, it corresponds to ordering your sites into slices. What the LU decomposition then essentially does, or what people were using in earlier times for quantum transport, the recursive Green's function (RGF) algorithm, which is closely related, is to build up the system slice by slice. Each slice has N sites, so each step involves matrix multiplications and inversions of N by N matrices; that is dense linear algebra at a cost of N to the power of three. And because the square has N slices, you have to do that N times: N times N cubed equals N to the power of four operations, where N is the side length of the square. Nested dissection, on the other hand, is a kind of divide-and-conquer method. It reorders your matrix so that there are two disconnected blocks, with all the connections between them collected at the end, and then it continues recursively: you see, here and here these blocks are again disconnected, and so on. This is what produces the tree-like structure that you saw before in the fill-in, remember? That's where it comes from. In real space it gives this weird pattern, and it's actually easier to see the other way around: you cut your system in half, then halve the halves, and keep going until you are down to a single lattice point. Now count the operations. At the top level you have to do N cubed operations. Going one level down, you operate on systems of size N over two, so each costs N cubed over eight operations, and if you look closely, you have to do that for six systems, which gives six over eight times N cubed. So the first two steps together cost N cubed plus six-eighths of N cubed, and if you do all the math correctly, you will see that this is a convergent sum, a geometric series that converges to a constant times N cubed. The total complexity of nested dissection is therefore N to the power of three, one order better than the block-tridiagonal ordering.
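Written out, the operation count of this recursion is just a geometric series (the six-eighths is the factor from the counting above, and the prefactor of four is the "constant" I mentioned):

$$
\text{cost} \;=\; N^3 \left( 1 + \tfrac{6}{8} + \left(\tfrac{6}{8}\right)^2 + \cdots \right) \;=\; \frac{N^3}{1 - 6/8} \;=\; 4 N^3 \;=\; \mathcal{O}(N^3),
$$

compared with $N \cdot N^3 = N^4$ for the slice-by-slice (RGF-style) elimination.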
Now, the prefactor of nested dissection is a bit larger, so you need to go to somewhat larger systems to see the benefit, but for the systems that you are interested in, where N can be a thousand or so, it always wins. Scaling always wins. And just to show you that, here is an actual calculation where we compared our Kwant software, which uses this nested dissection ordering, with one of these RGF solvers from earlier times, which uses the block-tridiagonal ordering. Kwant is the red dots here, and the RGF solver is the black dots. I should even say that this RGF solver was a C solver, specialized to a plain wire geometry with no bookkeeping overhead at all, so it is about the fastest you can get with the block-tridiagonal ordering, whereas our Kwant software is pretty general in what kinds of shapes you can make, and it is all Python, so you would expect quite some overhead. But you see that even at moderate sizes, around 100 or even 50, nested dissection just beats this RGF solver through its scaling. And especially for the systems that you are interested in in research, where you typically have a few hundred sites on a side, you can easily be an order of magnitude better. So that really is a large improvement. In principle, nested dissection is nothing specific to quantum transport; it is just a method for making sparse linear solvers faster when you have a structured grid, and in physics we almost always have that. It does not have to be a perfectly regular mesh; if you have some sort of grid, it works pretty well. The difference is not as good in 3D as it is in 2D: in 3D nested dissection also gains you one order of magnitude, but the overall scaling is worse, so there the RGF approach would be N to the power of seven and nested dissection N to the power of six, and from seven to six the difference is not as big as here. Still not bad, though. And there are also other orderings; mathematicians have put a lot of work into this, and most sparse solvers have several of these orderings built in. You can play with them to see which one gives you the best performance, and sometimes it really makes a big difference. Okay, that was all about numerics. Now I just want to come back to physics very briefly, because I want to flash some slides showing some very prominent quantum transport phenomena from, let's say, the last 20 years. These are the examples that I want you to do calculations on in the second part, so I first want to introduce them a little. But before we do that, let's have a little pop quiz again, to see if you were all attentive. If you have a perfectly clean wire with n open modes, where n also counts spin, what conductance does it have? Is it (a) n/2 times e squared over h? Is it (b) n times e squared over h? Or is this a trick question and I actually didn't give you enough information, which would be answer (c)? So, who's for (a)? Who's for (b)? Who's for (c)? Okay, some people are still undecided, but the majority is right: it is (b), this wire indeed has a conductance of n e squared over h. Now the question is whether you can actually see that in an experiment. Because it means that if I change the number of modes in the wire, which I can do, for example, by changing the energy, since that changes how many subbands are occupied, then I should actually see the conductance changing in steps. Sounds kind of weird, right? Now, it turns out that you do not want to do this experiment with a really long wire, because then typically something in the middle is not perfect, which makes it much harder to see; you would have to make a really, really good system. Nowadays people can see it in longer and longer systems, but the first experiment did it with a very short wire, and that was actually enough. They took a two-dimensional electron gas and put some gates on top to deplete the electron gas underneath the gates, leaving only a small constriction, a little piece of wire, not long and extended, just something being gently pinched off.
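Since this is exactly what we will program this afternoon, here is a rough preview of what such a quantum point contact could look like in Kwant. The geometry, the Gaussian pinch, and all parameter values are my own ad-hoc illustration, not the tutorial script:

```python
from math import exp
import kwant

lat = kwant.lattice.square(a=1, norbs=1)
t, W, L = 1.0, 21, 41

def constriction(pos):
    # a straight strip whose width is smoothly pinched in the middle,
    # mimicking the depletion under the gates
    x, y = pos
    width = W / 2 - 8 * exp(-(x - L / 2) ** 2 / 50)
    return 0 <= x < L and abs(y) < width

syst = kwant.Builder()
syst[lat.shape(constriction, (0, 0))] = 4 * t   # onsite energies
syst[lat.neighbors()] = -t                      # nearest-neighbour hoppings

# semi-infinite straight leads on the left and on the right
lead = kwant.Builder(kwant.TranslationalSymmetry((-1, 0)))
lead[(lat(0, y) for y in range(-(W // 2), W // 2 + 1))] = 4 * t
lead[lat.neighbors()] = -t
syst.attach_lead(lead)
syst.attach_lead(lead.reversed())

fsyst = syst.finalized()
# conductance in units of e^2/h (spinless) is just the transmission
for energy in (0.1, 0.2, 0.3):
    print(energy, kwant.smatrix(fsyst, energy).transmission(1, 0))
```

Sweeping the energy (or, more realistically, a gate potential) should then trace out the conductance staircase.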
And then they just applied a voltage, measured the current, and plotted the conductance. And the conductance indeed goes up in steps, like you see here. It goes up in steps of 2 e squared over h, and the two is there because of spin degeneracy: the number of modes goes up in steps of two. So here you see steps of one, but in units of 2 e squared over h. I think this is one of the most beautiful experiments in mesoscopic physics that I know, because it really nails it down: it tells you that all of this is real. And this is actually an experiment that was done in Delft. The head of the department where I am working now, Leo Kouwenhoven, an experimentalist, did this for his master's thesis; kind of nice if you get that out of a master's thesis. There are even some funny effects in there. I know that a lot of you are doing interacting physics, and here there is no interacting physics, just the steps. But you see a little kink here, which was only understood much later: the so-called 0.7 anomaly, which people nowadays believe is related to interaction physics. So you can also see interacting physics in quantum transport, but you have to look a little bit harder. People like Jan von Delft, who are experts on these things, have done some very heavy calculations on it. I am not sure it is all settled yet what the origin of that effect is, but people are pretty sure it is some interaction effect. You can also make other kinds of quantum point contacts. A cool one is where people made one on the atomic scale, in a so-called break junction: you have a little metal wire that has already been narrowed down quite a bit with lithography, to maybe 50 nanometers, and then you pull this piece apart, very slowly, until eventually a wire is formed out of single gold atoms; each of these dots here is one gold atom. If you do that, you also see quantization of the conductance through this one-atom chain. That's pretty cool. Or you can use quantum point contacts in more complicated setups, for example as probes for qubits. Because the steps are so well defined, you can sit on one of the steep slopes, where small changes in gate voltage give a large signal, and use them as sensors, basically. And what I want to do with you this afternoon, and this is the first thing we are going to do together, is to program a quantum point contact in Kwant. There will then be a couple of additional exercises from which you can choose. One of them is about quantum billiards. We are not going to do anything quantitative there; most of what we do in the afternoon stays on a level where you look at wave functions and can see certain things intuitively. Quantum billiards are billiard systems where you have free motion in the middle and then a hard wall where everything is reflected, like the little shape sketch below.
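In Kwant, a shape function like the following would let you interpolate between a circle and a stadium; the radius r, the half-length s of the straight section, and the trick of measuring the distance to a segment are my own sketch, not the tutorial code:

```python
import kwant
import scipy.sparse.linalg as sla

lat = kwant.lattice.square(a=1, norbs=1)
r, s, t = 20, 10, 1.0   # s = 0 gives a circle, s > 0 a stadium

def stadium(pos):
    # inside if the distance to the straight segment [-s, s] on the
    # x axis is less than r
    x, y = pos
    x = max(abs(x) - s, 0)
    return x ** 2 + y ** 2 < r ** 2

syst = kwant.Builder()
syst[lat.shape(stadium, (0, 0))] = 4 * t
syst[lat.neighbors()] = -t

# closed system: compute a few eigenstates near some energy and
# look at their wave functions
ham = syst.finalized().hamiltonian_submatrix(sparse=True)
energies, states = sla.eigsh(ham.tocsc(), k=6, sigma=0.5)
```

You could then feed abs(states[:, i])**2 to kwant.plotter.map, for example, to compare the regular and the chaotic patterns.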
And in classical mechanics, we know that these different shapes behave quite differently. If you have a circle, you get this kind of regular pattern of paths: there is a conserved quantity, a sort of angular momentum, and the trajectory just keeps bouncing around like that. Whereas if you insert a little straight piece here, to make a so-called stadium billiard, the dynamics is actually chaotic: two trajectories starting in the stadium billiard with similar initial conditions will move apart exponentially fast, whereas in the circle they stay together. Now, this classical dynamics is also reflected in the quantum dynamics, in the sense that if you look at the wave functions, the wave functions in the circular billiard look very regular, because there is a conserved quantity, while in the stadium billiard you get the very irregular wave functions of quantum chaos; you see here just some complicated-looking wave function. What I want you to do in the afternoon is compute some of these wave functions, make the crossover from the circular billiard to the stadium billiard, and see how the wave functions change. Of course, for quantum chaos proper you would then look quantitatively at wave-function statistics and these kinds of things, but that is too much for one afternoon. Then there is a second example; of the exercises you will choose between, the first one, the billiard, is I think the simplest one to implement, and the second one is a little bit harder. It is about the quantum Hall effect. Maybe some of you have already dealt with the quantum Hall effect, so I thought I would include it. It is about the effect of a magnetic field on electrons, and I am talking here about the orbital effect: if you have an electron in a magnetic field, its trajectory is bent into a circle by the Lorentz force, on the classical level. There is also the Zeeman effect, which gives you spin splitting, but I am not going to consider that here. Now, a funny thing happens if you have a very small nanosystem, where your electrons are quantum, and you apply a strong enough magnetic field, which for semiconductors means of the order of a few tesla, very easy to do experimentally. You then get very small cyclotron orbits, which can be smaller than the system size. That means an electron in the middle sits on a cyclotron orbit but is not really moving forward; on the classical level, the electron in the middle is not moving at all. Quantum mechanically, this is where the system forms Landau levels: flat bands whose states are localized. That is essentially what is happening in the middle. However, an electron close to the edge is also on a cyclotron orbit, but when it hits the edge it is reflected, and then it can move along in a so-called skipping orbit. And you see that the skipping orbits on one side go in one direction, and the skipping orbits on the other side go in the other direction. Now, this is a classical picture, but you can also understand the quantum mechanics with it.
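On the lattice side, the standard way to put such an orbital magnetic field into a tight-binding model is the Peierls substitution: every hopping picks up the phase of the line integral of the vector potential along the bond. Here is a minimal sketch of how that could look in Kwant; the Landau gauge, the units (lattice constant and e/hbar set to one), and all parameter values are my own assumptions:

```python
from cmath import exp
import kwant

lat = kwant.lattice.square(a=1, norbs=1)
t, W, L, B = 1.0, 20, 40, 0.1   # B = Peierls phase per lattice plaquette

def hopping_x(site1, site2):
    # Landau gauge A = (-B*y, 0): bonds along x get a y-dependent phase
    y = site1.pos[1]
    return -t * exp(-1j * B * y)

syst = kwant.Builder()
syst[(lat(x, y) for x in range(L) for y in range(W))] = 4 * t
syst[kwant.builder.HoppingKind((1, 0), lat)] = hopping_x   # bonds along x
syst[kwant.builder.HoppingKind((0, 1), lat)] = -t          # bonds along y
```

Attach leads as before, and as you turn up B you should see the flat Landau levels and the edge states develop.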
And at the edge of the system, you have these skipping orbits, which give rise to so-called edge channels that flow along the edge in one direction only. So you have something like a little quantum wire at the edge that carries current only one way; it's a one-way street, and you have to go to the other side of the sample to travel in the other direction. Later in the tutorial, for those of you who choose this exercise, I would like you to see how these edge channels develop and how, for example, they squeeze through a quantum point contact, these kinds of things. There is of course much more to this, because this physics is essentially the basis of the integer quantum Hall effect, which was measured by von Klitzing: there you have a somewhat more complicated setup, not just two terminals but six, and you measure the Hall voltage and the Hall resistance. You then find a quantization of the Hall resistance, which is nowadays used as a resistance standard. This was also one of the first experiments to see this kind of quantized behavior, and it earned von Klitzing the Nobel prize. In principle, you can also reproduce that numerically, but it is a bit too much for one afternoon. Still, if anybody feels up for it and wants to do it during the rest of the week, or the rest of the next two weeks or so, I would actually be happy to help you; it is a rather nice thing to see coming out of a calculation. With that, I want to stop here. I should also mention, I forgot to put it on the slide, that there is another tutorial people can already try, on topological insulators, because I think a couple of you are working on those. That is also a nice one; I think it is a little bit harder, you have to do a bit more, but okay. I am going to stop for coffee now, and later we go to the lab, where we will use Kwant to do some of these calculations. But I would also like to do a little experiment. There is this service called SageMathCloud (cloud.sagemath.com) where you can get an account for free, use the IPython notebooks that I told you about yesterday online, in the cloud, and do the computations there. They have some tools for teaching, so that students can share notebooks and results, and the teacher can look at them and give corrections and so on. I always wanted to try this. We are not going to use it for the tutorial now, because we have everything installed on the local computers, and the cloud version is a bit slower because it runs over the web; also, I am not sure what happens if 100 people access it at the same time from this connection. But if you want to play with Kwant and you would like me to give some input or help, or even just with Python, it does not have to be Kwant, then make an account there and send your email address to my email address, so that I can add you as a student of an ICTP course and we can try it. I have no idea yet how it works, but I just want to try it, and if somebody wants to be a guinea pig: in the worst case you simply don't hear from me, nothing bad can happen. But with that, let's stop and go for coffee.