Thank you very much. I wanted to start by thanking the organizers for the opportunity to speak here today. I'll still be looking at optoelectronic properties of materials, like the previous few talks in this session, but the angle I'll take is looking at the effects of the lattice on the optical properties. And hopefully the examples I'll give you will convince you that this is sometimes important to take into account. We can use a very simple toy model, such as a diatomic molecule, to understand why including the effects of the lattice is technically more challenging than not including them. To understand that, we just look at a diatomic molecule which has some equilibrium bond length L0. What we usually do is fix the ions at that equilibrium bond length and then solve for the electrons with whichever method we want. That electron solution is represented by the cloud here. Then we calculate the properties we're interested in, and a relevant property in this context might be the dipole matrix elements. Now, when we want to include the effects of ionic motion or temperature, we need to take into account the fact that the ions are no longer stationary at their equilibrium positions, but move about. For most of the talk, I will assume the adiabatic principle, which means that when we look at the different configurations that the atoms explore, such as shorter or longer bond lengths for the molecule, the electrons instantaneously relax to each new configuration. And if we now fix the ions in this new configuration, solve for the electrons, and calculate the property of interest again, in general we'll find a different value for that property. So the question is, what is the actual value of the property when you take this lattice dynamics into account? And the adiabatic answer is very simple: it's just the average over all configurations of the system.
And this average needs to be weighted by the appropriate probability of finding any given configuration of the system. So this very simple model immediately tells us why it is more challenging to include the effects of lattice dynamics, or dynamics in general: we no longer have to do a single calculation at the equilibrium bond length, we have to do multiple calculations to understand how the property of interest varies as the atoms move around. If we write this in a somewhat more formal manner, I consider some general electronic observable O, and I'm interested in calculating it at some finite temperature T. What we're doing in the adiabatic picture is calculating the expectation value of that observable with respect to a vibrational wave function, which I represent by this chi here. This could be something like a product of Gaussian functions if we assume the harmonic approximation for our system. And then we have the standard Boltzmann factor and partition function. I explicitly write the u's here: the u's are the configurations of the atoms, which we'll usually describe, say, in a phonon basis if we assume the harmonic approximation. So the challenge is, how do we evaluate this type of quantity? There are various ways, but perhaps the ideal way would be using something like molecular dynamics or path integral molecular dynamics to generate the configurations which the system explores and then average the quantity of interest over such a path. And we heard this morning how machine learning techniques can help accelerate these types of calculations. But if we're really only interested in equilibrium properties of systems, rather than dynamical properties, then one could use other methods which are computationally simpler. One of them comes from realizing that this is, after all, a high-dimensional integral, and the best way to evaluate high-dimensional integrals is using stochastic methods.
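Written out, the adiabatic thermal average described here takes the standard form (a sketch in the talk's notation, with s labelling the vibrational eigenstates chi_s of energy E_s and u the atomic configurations):

```latex
\langle O \rangle_T \;=\; \frac{1}{Z}\sum_{s} e^{-E_s/k_B T}\,
  \big\langle \chi_s(\{u\}) \,\big|\, O(\{u\}) \,\big|\, \chi_s(\{u\}) \big\rangle,
\qquad
Z \;=\; \sum_{s} e^{-E_s/k_B T}.
```

Each vibrational state contributes the expectation value of the observable over its nuclear density, weighted by the Boltzmann factor.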
So what one could do is use Monte Carlo integration, for example. In the particular case where we assume the harmonic approximation for the lattice dynamics, it's actually very simple, because these objects are just Gaussian functions: you can directly sample Gaussian functions stochastically and then average over all the configurations you generate that way. Now, an even simpler family of methods, which I label here as quadratic methods, is one in which you expand the property of interest to some low order. So if we look at this electronic observable at some general configuration U of the system, where U equals zero is the equilibrium configuration, then I can write down an expansion in terms of, say, phonon modes if I have a harmonic picture. This expansion in principle has terms at all orders, but then I decide to truncate it at some order, say second order. The expressions that result from this are relatively simple. For example, if I again work within the harmonic approximation, then the nuclear density is an even function, so when I overlap that even function with this expansion, all the odd terms in the expansion will integrate to zero. That means that to third order the expression is relatively simple: you just get that the value of your observable of interest at some finite temperature is equal to the static lattice value, plus a quantum zero-point contribution and a thermal contribution. These both essentially involve the Bose-Einstein occupation factor; then this is the phonon frequency, and this is the coupling constant, which is essentially the curvature of the property with respect to a displacement of the atoms. So the argument I made at the beginning about the computational expense of including lattice dynamics is here exemplified by this sum, where you need to sum over all the degrees of freedom in your system.
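The Monte Carlo route can be sketched in a few lines. This is a toy model, not the actual production code: the phonon frequencies, the quadratic/quartic coefficients, and the observable itself are all made up for illustration, and everything is in dimensionless units with hbar = kB = 1.

```python
import numpy as np

def sigma(omega, T):
    """Gaussian width of a harmonic mode at temperature T (hbar = kB = 1):
    sigma^2 = (2n + 1) / (2 omega), with n the Bose-Einstein occupation."""
    n = 0.0 if T == 0.0 else 1.0 / np.expm1(omega / T)
    return np.sqrt((2.0 * n + 1.0) / (2.0 * omega))

def thermal_average(observable, omegas, T, n_samples=200000, seed=0):
    """Monte Carlo estimate of <O>_T: sample the mode amplitudes u_nu from
    the harmonic (Gaussian) nuclear density and average the observable."""
    rng = np.random.default_rng(seed)
    widths = np.array([sigma(w, T) for w in omegas])
    u = rng.normal(0.0, widths, size=(n_samples, len(omegas)))
    return observable(u).mean()

# Hypothetical toy observable with quadratic and quartic dependence on the
# mode amplitudes u.
omegas = np.array([1.0, 2.0, 5.0])
c2 = np.array([0.3, -0.1, 0.2])
c4 = np.array([0.05, 0.0, 0.0])
O = lambda u: 1.0 + u**2 @ c2 + u**4 @ c4

estimate = thermal_average(O, omegas, T=2.0)

# Analytic check for a Gaussian density: <u^2> = sigma^2, <u^4> = 3 sigma^4.
s2 = np.array([sigma(w, 2.0)**2 for w in omegas])
exact = 1.0 + c2 @ s2 + 3.0 * c4 @ s2**2
```

In a real calculation, `observable(u)` would of course be a full electronic structure calculation at the distorted configuration u, which is exactly why reducing the number of sampled points matters.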
In molecular systems you just have the different phonon modes; in solids you also have the q vector, which is the momentum of the phonons, that you need to sum over. So these are the type of expressions that we want to use when we want to incorporate lattice dynamics into our calculations. And the approach I want to describe, and that we actually use, is based on finite differences. Expressions like these are also amenable to linear response methods, which have many advantages, such as the ability to use a single primitive cell for the calculations. But finite difference methods also have some advantages, and this will prove essential for some of the applications I'll show. In particular, the advantage is that you can use them very easily with any electronic structure method you want, because you calculate the coupling explicitly by distorting the atoms in your system, and then you can use GW, hybrid functionals, or whatever you want in that context. But finite displacement methods are computationally expensive, so we've been working a little bit on trying to make them somewhat cheaper, and I just want to give you an example of some of the ideas that we have. This particular one is relatively simple, and it concerns this problem of having to sample many points in configuration space. The idea comes from very basic mathematics, the so-called mean value theorem for integrals, and you have to imagine the following. You have a function f(x), in red, that we're trying to integrate from A to B. If we want to evaluate that integral numerically, what we would usually do is evaluate the function at many points between A and B and then take some sort of average of those points, and that's exactly equivalent to sampling the configurations of your system, calculating the property of interest, and averaging over that.
Now, the mean value theorem for integrals tells you that for any such function there's always at least one point C somewhere in the integration interval for which the value of the function at that point is actually equal to the value of the integral you're trying to calculate. Another way of putting that is that the area under the red curve here is equal to the area under the flat blue line there. Now, this sounds very nice, because if we knew what that point C was, then all we would have to do is calculate the function at that point, and we'd have solved the integral. Of course, the challenge is that in general we don't know what that point C is before we do the integral. But nonetheless, this motivated us to try and find very good approximations to such a point, and this turns out to be quite possible in the electron-phonon problem. Here I'm writing one such approximation, the one you get when you assume that your system is harmonic and treat the electron-phonon interaction to lowest order. When you make these two assumptions, perhaps unsurprisingly, the solution tells you that if you give each phonon mode of the system an amplitude equal to its root-mean-square displacement, then the value of the property at that particular configuration will be equal to the thermal average over all configurations. In this way you replace potentially thousands of calculations over different configurations with a single calculation on this particular configuration. So this is the solution for the harmonic, low-order case. We now have extensions that relax both the harmonic condition on the lattice dynamics and the low-order treatment of the electron-phonon interaction; in those cases we cannot find solutions with a single point, but we can find solutions with just a few points, of the order of 10, say.
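The single-configuration idea can be checked on a toy model. For an observable that is exactly quadratic in the mode amplitudes (the low-order regime described above), evaluating it once, with each mode displaced by its root-mean-square amplitude, reproduces the full thermal average. All frequencies and coefficients below are made-up illustrative numbers in dimensionless units (hbar = kB = 1).

```python
import numpy as np

def msd(omega, T):
    """Harmonic-mode mean-square displacement sigma^2 (hbar = kB = 1)."""
    n = 0.0 if T == 0.0 else 1.0 / np.expm1(omega / T)
    return (2.0 * n + 1.0) / (2.0 * omega)

# Hypothetical observable that is quadratic in the mode amplitudes u --
# the regime in which the single special configuration is exact.
omegas = np.array([1.0, 2.0, 5.0])
coeffs = np.array([0.3, -0.1, 0.2])
O = lambda u: 1.0 + coeffs @ u**2

T = 2.0
sig = np.sqrt([msd(w, T) for w in omegas])  # RMS amplitude of each mode

# Brute force: thermally average O over many sampled configurations...
rng = np.random.default_rng(1)
samples = rng.normal(0.0, sig, size=(100000, len(omegas)))
brute_force = np.mean([O(u) for u in samples])

# ...versus a single evaluation at the "mean-value" configuration, where
# each mode is displaced by its root-mean-square amplitude.
one_shot = O(sig)
```

The point of the method is that `one_shot` costs one electronic structure calculation where `brute_force` costs thousands.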
So this is an example of the sort of tricks that we use to reduce the computational cost of these calculations. We also very carefully choose which supercells we use, in order to capture the relevant vibrations of the lattice, but I won't go into more detail about this. For the rest of the talk, what I want to do is just give you a few examples of how we can use this to do some physics. The first example is perhaps very simple: the quantity we're interested in here is just the pure single-particle electronic energy of the system. The example actually comes from a collaboration with the group of Aaron Walsh, in the photovoltaics context, on a class of materials called kesterites. These are quaternary materials which are essentially the next evolution from the so-called CIGS compounds, with the representative example being copper zinc tin sulfide, CZTS. The advantage of these materials for solar applications is that all these elements are earth-abundant and non-toxic, so it would be very nice to be able to make solar cells out of these readily available materials. The idea behind this family of solar cells is that they're a derivation from CIGS cells, which are about 20% efficient, so this type of material looks promising. However, their efficiencies are not as good as those materials, and there are quite a few competing theories as to why that might be the case. One possible explanation is the band alignment between the absorbing layer and the transport layer, and this is the sort of thing we wanted to look at here. What we looked at specifically was the temperature dependence of the band alignment in this type of material; here the example is CZTS and cadmium sulfide. So what we do is calculate the temperature dependence of the energy levels of the material, and here I dissect what it looks like for CZTS.
So this is the correction to the band gap of the material at any given temperature. We have a correction even at zero temperature, arising from quantum zero-point motion, and then we have a further thermal correction. The red line corresponds to the correction arising from electron-phonon coupling, which we calculate using the methods I've just been describing. We of course also have a contribution from thermal expansion, which we simply calculate within the standard quasiharmonic approximation, and the overall correction is a combination of both. The interesting thing is that it actually does make a big difference to the band alignment. In this particular case, if we ignore temperature altogether we get a band offset of about 0.10 eV for the conduction band minimum, and at room temperature, which is perhaps more relevant for solar applications, we get about a 70% change in the band alignment. So these are effects that could be important in this type of material. The second example moves away from individual electronic energy levels and looks at dipole matrix elements; the connection between this and the first example is single-particle optical absorption. The example I want to look at is indium oxide. Indium oxide is a material that's used for transparent conducting applications. This is the primitive cell, the cubic primitive cell, with about 40 atoms in it, so it looks rather complex. And this is the band structure, and it's very easy to see why it's very useful for transparent conducting applications. This is the valence band; there is a big band gap here; and then it turns out that it's very easy to dope this material to move the Fermi level up here, and then these electrons can provide the conduction. But there is also then a big band gap between here and here, and therefore while these materials conduct they also retain some transparency, which makes them good for transparent conducting applications.
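The decomposition of the gap correction into a zero-point and a thermal piece follows directly from the quadratic expression given earlier, where each mode contributes its coupling times (2n + 1). A minimal sketch, with entirely hypothetical phonon energies and couplings (not the CZTS values):

```python
import numpy as np

kB = 8.617e-5  # Boltzmann constant in eV/K

def bose(omega_eV, T):
    """Bose-Einstein occupation of a phonon of energy omega_eV at T in K."""
    if T == 0.0:
        return 0.0
    return 1.0 / np.expm1(omega_eV / (kB * T))

def gap_correction(omegas_eV, couplings_eV, T):
    """Quadratic-theory correction to the band gap: each mode contributes
    its coupling times (2n + 1) -- the +1 part is the zero-point piece,
    the 2n part is the thermal piece."""
    return sum(c * (2.0 * bose(w, T) + 1.0)
               for w, c in zip(omegas_eV, couplings_eV))

# Hypothetical phonon energies and second-order couplings for a toy solid;
# negative couplings give the usual gap redshift with temperature.
omegas = [0.010, 0.025, 0.040]        # eV
couplings = [-0.015, -0.010, -0.005]  # eV per unit of (2n + 1)

zero_point = gap_correction(omegas, couplings, 0.0)    # correction at T = 0
room_temp = gap_correction(omegas, couplings, 300.0)   # correction at 300 K
```

Even at T = 0 the correction is finite (the zero-point term), and it grows in magnitude with temperature as the occupations switch on, which is the structure of the red curve described above.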
So this material has been used for a very long time in this context. The reason we're interested in it, and this is work we did with Andrew Morris, is that if you look a bit more closely at the band gap, which is around the gamma point, you realize the following. Inversion is one of the symmetry operations of indium oxide, and that means that you can classify the states at the gamma point according to their parity. Now, it turns out that the parity of the conduction band minimum and the valence band maximum is the same in this material. And if we look at the dipole matrix element: the dipole operator is an odd operator, and therefore if it connects two states of the same parity then the matrix element vanishes. That means that in reality absorption should only start from lower-lying bands that have the opposite parity to the conduction band minimum. So if we do the calculation of the absorption coefficient of this material at the static lattice level, we get this dashed black line, and that's the experimental result. A few things here. This line is the so-called optical gap, the point from which strong absorption is observed both experimentally and theoretically, and it corresponds to the lower-lying band that has the opposite parity to the conduction band minimum. But theoretically we do actually get some absorption all the way up to the actual valence band maximum. That's because, although the states at the gamma point can definitely be assigned a parity, as soon as you move a little bit away from the gamma point parity is no longer a good quantum number, and therefore you get some contribution of the other parity. Although the states still largely retain the parity character they have at the gamma point, you still get some transitions, and that's what this represents.
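The parity selection rule can be made concrete with a one-dimensional toy: since the dipole operator x is odd under inversion, its matrix element between two states of the same parity integrates to zero. The wavefunctions below are harmonic-oscillator-like stand-ins, purely for illustration.

```python
import numpy as np

# Toy 1D wavefunctions of definite parity (harmonic-oscillator-like),
# standing in for the Gamma-point states; purely illustrative.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
psi_even_a = np.exp(-x**2 / 2)                    # even parity
psi_even_b = (2 * x**2 - 1) * np.exp(-x**2 / 2)   # even parity
psi_odd = x * np.exp(-x**2 / 2)                   # odd parity

def dipole(psi1, psi2):
    """<psi1| x |psi2> on the grid; x is odd under inversion, so the
    integrand is odd (and integrates to zero) for equal-parity states."""
    return np.sum(psi1 * x * psi2) * dx

same_parity = dipole(psi_even_a, psi_even_b)   # even * odd * even -> 0
opposite_parity = dipole(psi_even_a, psi_odd)  # even * odd * odd -> finite
```

Perturbing the states (by moving away from gamma, or by a phonon displacement that breaks the symmetry) mixes the parities and makes the forbidden matrix element finite, which is exactly the phonon-mediated mechanism discussed next.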
But then the conundrum here really is that, although we do get some absorption there, it's actually orders of magnitude weaker than what's observed experimentally. So this is again where phonons come in: phonons can mediate this transition, and if we include phonons at various temperatures then this is the absorption spectrum that we get. It increases the absorption compared to the static lattice result by orders of magnitude and moves it closer to the experimental results. The other important thing to note is that we also capture a redshift in the absorption onset of this material with increasing temperature. So let me go to the third example. The third example is to do with phonon-assisted luminescence, and this is more an advertisement of the work of two other people, really. This is the work of Elena Cannuccia and Claudio Attaccalite. Elena actually had a poster on this last night and she's still around at the conference, so please do speak to her about this. They were interested in looking at hexagonal boron nitride and luminescence in this material. This is bulk hexagonal boron nitride, and that's what the band structure looks like: you have the top of the valence band near the K point and the bottom of the conduction band near the M point, so it's an indirect band gap semiconductor, but there's actually luminescence across this indirect gap. They were interested in trying to capture that theoretically, and they wrote to me about using finite differences in this context; I gave them very little advice and they very nicely included me in their work.
So what we ended up doing here was, in order to describe luminescence, you need to solve the out-of-equilibrium Bethe-Salpeter equation, and that's a rather hefty computational task. So unlike the other examples, where we included all the phonon modes in the system to look at the temperature dependence, in this particular case we limited ourselves to the specific phonon modes that have the correct momentum to exactly bridge the momentum difference between the conduction band minimum at the M point and the valence band maximum at the K point. That happens to be this phonon wave vector here, if we look at the phonon dispersion of this material. So these are actually the only phonons we included in the calculations: all the phonon modes at this wave vector. In this particular case we actually went beyond the adiabatic approximation, using time-dependent perturbation theory in the same way as Hall, Bardeen, and Blatt in the 1950s to describe phonon-assisted optical absorption, but here in the excitonic basis. You can then derive an expression for the photoluminescence intensity: this is essentially the exciton dipole matrix element, and what we do is displace along the relevant phonon in the system and look at the change in that; these are just the occupation factors and energy conservation terms, and crucially we have the phonon contributions in the energy-conserving terms, and these are the Bose-Einstein factors. And these are the results. So there are actually two excitons in this material, split by about 0.01 eV I think, and these are dark in the calculations if you don't include phonons. Experimentally you do see a little bit of signal, which probably comes from some defects in the material, but if you calculate the intensity using this equation, then we get the blue curve here, which very nicely reproduces the
experimental curve, and we have all the resonances, each corresponding to one of the various phonon modes at this wave vector, and we reproduce relatively well both the positions and the relative intensities of the spectrum. So I'm actually done; that's really all I wanted to say. I hope I convinced you that it can be important, when looking at the optoelectronic properties of materials, to include the effects of temperature, as they can have important consequences for what we observe. Thank you.
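The structure of the phonon-assisted luminescence result (one emission replica per phonon mode at the bridging wave vector, each weighted by an exciton-phonon coupling and the (n + 1) emission Bose factor) can be sketched with a toy spectrum. All numbers are hypothetical, only loosely inspired by hBN, and this is not the published expression, just its qualitative shape.

```python
import numpy as np

kB = 8.617e-5  # Boltzmann constant in eV/K

def phonon_assisted_pl(E, E_exciton, phonons, T, eta=0.002):
    """Toy phonon-assisted emission spectrum: one Lorentzian replica per
    phonon mode at E_exciton - omega, weighted by an exciton-phonon
    coupling strength g2 and the (n + 1) emission Bose factor."""
    intensity = np.zeros_like(E)
    for omega, g2 in phonons:
        n = 1.0 / np.expm1(omega / (kB * T))  # Bose-Einstein occupation
        weight = g2 * (n + 1.0)
        intensity += weight * eta / ((E - (E_exciton - omega))**2 + eta**2)
    return intensity

# Hypothetical (phonon energy in eV, coupling strength) pairs for the
# modes at the bridging wave vector, and a ~5.95 eV indirect exciton.
phonons = [(0.020, 0.2), (0.095, 0.5), (0.160, 1.0), (0.180, 0.8)]
E = np.linspace(5.6, 6.0, 2000)   # photon energy grid, eV
spectrum = phonon_assisted_pl(E, 5.95, phonons, T=55.0)
```

Each resonance sits one phonon energy below the (dark) exciton, and the temperature enters through the Bose factors, mirroring the spectrum described in the talk.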