Welcome back to the course on statistical mechanics. We shall continue our study of computer simulations as a primary method to implement the principles of statistical mechanics. In part one, which was the last lecture, we discussed not just the importance of computer simulation in statistical mechanics, but also the basic principles, and how these allow us to study highly complicated systems, including biological systems like proteins and DNA. We mentioned there that statistical mechanics has two major simulation techniques: one is molecular dynamics simulation, the other is Monte Carlo. They are entirely different approaches, and they are tied to our two postulates of statistical mechanics. One invokes time averaging; that is molecular dynamics simulation. The other invokes ensemble averaging; that is the Monte Carlo method. In this sense, they are very nicely tied together. And again, the issue of the ergodic hypothesis also arises here, because the quality of the sampling in both methods very much determines the outcome and the accuracy of the result. So in today's class we will continue with that. I mentioned in part one, the last lecture, that this is a vast subject. There are many, many books on computer simulations, and there is also a large number of lectures on the internet. I strongly recommend that students read some of the books and watch some of the lectures. In our book, we have devoted chapter 29 to the basics of computer simulation, and chapter 30 to more advanced topics of computer simulation. We have done it somewhat differently from other people, but it is tied very well to this course, as I will explain to you. I already explained how we use the Ising model to explain the necessity and the implementation of periodic boundary conditions, and we also used the Ising model for the minimum image method, the technique we use to count the interactions and minimize the load on the computer.
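As a small illustration of the minimum image convention mentioned above, here is a minimal sketch assuming a cubic periodic box; the function name and the numbers are illustrative choices, not from the lecture:

```python
import numpy as np

def minimum_image(r_ij, box_length):
    """Wrap a displacement vector so that each component lies in
    [-box_length/2, +box_length/2): the nearest periodic image."""
    return r_ij - box_length * np.round(r_ij / box_length)

# Two particles near opposite faces of a cubic box of side 10:
box = 10.0
r1 = np.array([0.5, 9.5, 5.0])
r2 = np.array([9.5, 0.5, 5.0])
dr = minimum_image(r1 - r2, box)  # nearest-image separation vector
```

With the raw difference the particles look almost a full box apart, but their nearest images are only a distance of about 1.4 apart, and that is the separation one should use when counting interactions, which is exactly how the computational load is kept down.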
Today we take up again the same classification, molecular dynamics and Monte Carlo, but now we go into the methods in detail. As I told you, for the time averaging we have to generate a temporal trajectory, and the way we generate a temporal trajectory is by solving Newton's equation of classical mechanics: force equals mass times acceleration. So the method is just numerically propagating a second-order differential equation. You go to each particle and find its position, its velocity, and its orientation if it is a molecule like water; then you find the force and torque on the molecule; and then, from time t to the next time t + Δt, you propagate by integrating. This is the method of quadrature, solved numerically, and over hundreds of years many techniques have been developed. All of them come in handy here, because it is very important that we propagate the equation for a very long time: as I told you, maybe over 100 nanoseconds, sometimes into microseconds, in individual time steps. Many of you have used finite difference methods for solving differential equations. The difference here is, first, that you have a large number of degrees of freedom: even with only 1,000 argon atoms, there are 3,000 position coordinates and 3,000 momentum coordinates. Second, in a liquid the forces change very quickly, so the time step we can use is very small; time steps are in femtoseconds, even less than a femtosecond. Let us say we take half a femtosecond. So now you see the enormity of the problem: I have to propagate this equation, with its thousands of coupled degrees of freedom, using a time step of half a femtosecond, out to two microseconds. The step is of order 10⁻¹⁵ seconds and the trajectory of order 10⁻⁶ seconds, so I have to take something on the order of 10⁹ to 10¹⁰ steps: the step and the trajectory are separated by nearly ten orders of magnitude.
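A quick back-of-the-envelope check of that step count, using the numbers quoted above (half a femtosecond per step, two microseconds of trajectory):

```python
dt = 0.5e-15         # time step: half a femtosecond, in seconds
total = 2.0e-6       # trajectory length: two microseconds, in seconds

n_steps = total / dt
print(f"{n_steps:.0e}")  # 4e+09 -> about four billion integration steps
```

So even this modest trajectory length already implies billions of force evaluations, which is why the cost and stability of the integrator matter so much.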
So this number is huge, and that is the real difficulty. How do we integrate? As I said, by a finite difference method. One of the popular ones is the position Verlet algorithm, where you propagate the positions. First you write the Taylor expansion of the position at t + Δt; then you write it at t − Δt. The whole idea is to add the two and get rid of the linear term. Adding them, the linear term, and with it the velocity, drops out, but the acceleration a(t) remains: r(t + Δt) = 2r(t) − r(t − Δt) + a(t)Δt². This is called the position Verlet algorithm, a very, very neat technique. In this technique the velocity is obtained after you get the trajectory: since you know the positions at t + Δt and t − Δt, you know the velocity, v(t) = [r(t + Δt) − r(t − Δt)] / (2Δt). And you do have to compute the velocity, not only to check the kinetic energy but because many properties require it. In this way I get the full phase-space information along the trajectory, and the advantage is that I do not have to store the velocity at every step if I do not need it immediately; that is a sought-after simplification. Then there is the velocity Verlet algorithm. This is a very clever way of doing things: now you propagate both position and velocity together, and the cleverness is in the velocity term, which combines the accelerations at the two ends of the step, [a(t) + a(t + Δt)]Δt/2. The whole idea is essentially to minimize the error of the propagation. The leapfrog algorithm is another interesting scheme; in all of these, the calculation of the velocity is the tricky part. In leapfrog you obtain v(t + Δt/2) in terms of v(t − Δt/2), so velocity and position are staggered by half a time step.
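As a concrete illustration of the velocity Verlet update described above, here is a minimal sketch for a single particle in a harmonic well; the force law, mass, and step counts are assumed toy values for illustration, not the lecture's own code:

```python
def force(x, k=1.0):
    """Toy harmonic restoring force, F = -k x."""
    return -k * x

def velocity_verlet(x, v, dt, n_steps, m=1.0):
    """Propagate position and velocity together: a full-step position
    update, then the half-step combination [a(t) + a(t + dt)] * dt / 2
    for the velocity."""
    a = force(x) / m
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * a * dt**2   # position to t + dt
        a_new = force(x) / m               # force at the new position
        v = v + 0.5 * (a + a_new) * dt     # velocity to t + dt
        a = a_new
    return x, v

# Integrate roughly one full period (omega = 1, so T = 2*pi):
x, v = velocity_verlet(x=1.0, v=0.0, dt=0.01, n_steps=628)
```

After one period the trajectory comes back very close to its starting point, and the total energy ½v² + ½x² stays near its initial value of 0.5; this long-time stability is the reason these integrators are preferred for the billions of steps estimated above.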
And that staggering has the advantage that the accuracy increases. If I remember correctly, when the two Taylor expansions are added the odd-order terms are eliminated, so the error in the position update is of order Δt⁴, and that gives remarkable accuracy. At one time, when computers were not that powerful, these techniques were extremely important, because the success of your simulation depended on which technique you used, and we have used many. Then, regarding the step size: as I said, in classical molecular dynamics simulation it is half a femtosecond to two femtoseconds. In ab initio molecular dynamics simulation, where you solve the time-dependent quantum problem, the time step is exceedingly small because of the quantum degrees of freedom and quantum coherence, so those simulations cannot be run for more than a few picoseconds, even now. Many times, when we deal with very big systems, we use a coarse-grained description. Coarse-graining is essentially this: you take a lens and defocus it, meaning you no longer keep the very detailed information. For example, remember we did Landau theory, where we wrote down the free energy as an expansion in the order parameter; that is an example of a coarse-grained description. It means we do not go to full position and momentum detail; instead we set a grid on the system, say a cubic grid in a three-dimensional system, and average over each grid cell. From the averaging we find, for example, the order parameter; then we derive an equation of motion for the order parameter, and we propagate that order parameter. This has great merit, and it is used for systems where we do not want the atomic details. Sometimes, these days, coarse-graining is also done by eliminating only four or five degrees of freedom at a time. For example, suppose I want to model a polymer.
In the polymer I have CH3, CH2, CH2, CH2, and so on. I can coarse-grain it: first, each CH2 becomes a single united atom centered on the carbon, so I eliminate the hydrogens. I can go further; for example, in a protein, each amino acid side chain becomes a sphere. Of course, you have to pay somewhere: when you develop a coarse-grained description, finding the force field, the intermolecular potential, undergoes a lot of change, because you no longer have the interactions in terms of pairs of atoms. So you also have to develop the force field, and a lot of effort goes into that. However, coming back: these are the kinds of things we do, the different integration and quadrature techniques, the different propagators with their short time steps, which generate the trajectory. Once we generate the trajectory, statistical mechanics tells us what to do with it, because we know everything as a function of time. You can calculate the pressure; you can calculate the specific heat from the energy fluctuations; you can calculate the radial distribution function; all these things we can calculate, and then our job is done. So, a summary of the steps. First, create a finite box, as we discussed in the last lecture; remember the Ising example. Then calculate the force on each atom and molecule. Then go through each atom and molecule and, depending on the force it is experiencing, use some quadrature like the Verlet algorithm to move the position and momentum of each molecule; this is called updating the configuration. Then you have the trajectory, and you do the calculations. That is molecular dynamics on the computer.
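The summary of steps just listed, compute forces, update the configuration with some quadrature, then accumulate properties along the trajectory, can be sketched as a skeleton main loop. The force function here is a placeholder (independent harmonic wells), not a real force field, and all names and numbers are illustrative:

```python
import numpy as np

def md_loop(r, v, m, force_fn, dt, n_steps):
    """Skeleton MD driver: force evaluation, configuration update
    (velocity Verlet quadrature), and property accumulation."""
    kinetic = []
    f = force_fn(r)
    for _ in range(n_steps):
        r = r + v * dt + 0.5 * (f / m) * dt**2   # update positions
        f_new = force_fn(r)                      # forces in the new configuration
        v = v + 0.5 * (f + f_new) / m * dt       # update velocities
        f = f_new
        kinetic.append(0.5 * np.sum(m * v**2))   # one property per step
    return r, v, kinetic

# Placeholder force field: F = -r for each coordinate
r0 = np.array([1.0, -0.5])
v0 = np.zeros(2)
m = np.ones(2)
r, v, kinetic = md_loop(r0, v0, m, lambda r: -r, dt=0.01, n_steps=100)
```

In a real simulation the force routine would implement the intermolecular potential with periodic boundaries and the minimum image convention, and the accumulated properties would include the pressure, energy fluctuations for the specific heat, and the radial distribution function.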
Molecular dynamics is straightforward and often the method of choice, and most of the packages that exist in the literature, packages like GROMACS and CHARMM, are essentially molecular dynamics codes. Molecular dynamics simulation, as I explained to you, is rather straightforward: in classical simulation it is the solution of Newton's equations of motion, and Newton's equations of motion I can propagate very easily. However, the next technique, a powerful technique and often the method of choice for complex systems if you do not want dynamical properties, is Monte Carlo simulation. As I discussed, the name Monte Carlo comes from the famous casino there. The method has to do with a lot of random number generation, and it is based on the ensemble-average principle. But we have to let the system go to all the different microscopic states. Remember, whether we take the ensemble average or the time average, the basic idea is that a time trajectory has to visit all corners of phase space; in a Monte Carlo ensemble, I have to take a sufficient number of configurations so that all the relevant microscopic states are captured. So both methods have their demands. So now we discuss Monte Carlo simulation, which is more commonly used to calculate structural and thermodynamic properties of the system. Monte Carlo, as you can read here, is a popular gambling area in Monaco; I thought it was in France, but Monaco is an independent principality bordered by France. The Monte Carlo method became famous through the Metropolis algorithm, where this name was introduced, and it rests on probability theory, which guarantees many of these properties. Now, as before, I will explain things in terms of the Ising model, again the one-dimensional Ising model.
So I write down the one-dimensional Ising model with the Hamiltonian that we have used, H = −J Σᵢ SᵢSᵢ₊₁. With a ferromagnetic interaction, the energy is lower when two neighboring spins are parallel. It is a wonderful thing to do a Monte Carlo simulation on this, and to explain Monte Carlo simulation with it. First I generate an initial configuration: I go to each lattice site and call a random number with a value between zero and one; if it is less than 0.5, I make the spin up, and if it is greater than 0.5, I make it down. So I can create an arbitrary configuration; this is the beauty of the Ising model. Now I want to do a Monte Carlo simulation: from there, of course, I sample the equilibrium configurations. The one-dimensional Ising model has the additional advantage that we have the analytical results, so whatever I get from the simulation I can check. It is actually a very good starting point, and I recommend that all of you do this simple calculation that I am going to outline. Randomly select a spin with the random number generator; then, among its neighbors, there are three possible arrangements. If the tagged spin is up, both neighbors may point downward, which is an unfavorable configuration; one neighbor may be up and the other down; or both neighbors may be up as well.
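The simple calculation recommended above can be sketched end to end; the acceptance rule it uses is the Metropolis rule discussed below. The chain length, temperature, and sweep counts are arbitrary illustrative choices, and the result can be checked against the analytical one-dimensional answer, ⟨E⟩/N ≈ −J tanh(βJ):

```python
import math
import random

def init_chain(n, rng):
    """Random initial configuration: spin up if r < 0.5, else down."""
    return [1 if rng.random() < 0.5 else -1 for _ in range(n)]

def delta_e(s, i, J=1.0):
    """Energy change if spin i flips, for H = -J sum_i s_i s_{i+1}
    with periodic boundary conditions."""
    return 2.0 * J * s[i] * (s[i - 1] + s[(i + 1) % len(s)])

def metropolis_sweep(s, beta, rng):
    """N single-spin-flip attempts: accept downhill moves with unit
    probability, uphill moves with the Boltzmann penalty exp(-beta*dE)."""
    n = len(s)
    for _ in range(n):
        i = rng.randrange(n)
        dE = delta_e(s, i)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            s[i] = -s[i]

def energy(s, J=1.0):
    return -J * sum(s[i] * s[(i + 1) % len(s)] for i in range(len(s)))

rng = random.Random(0)
spins = init_chain(100, rng)
beta = 0.5
samples = []
for sweep in range(4000):
    metropolis_sweep(spins, beta, rng)
    if sweep >= 1000:                 # discard equilibration sweeps
        samples.append(energy(spins))
e_per_spin = sum(samples) / len(samples) / len(spins)
# compare with the large-N analytical result, -tanh(0.5), about -0.462
```

Because the one-dimensional model has the exact answer, this comparison of the simulated energy per spin against −J tanh(βJ) is exactly the kind of validation recommended above.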
Now, the energies of these three cases are different. If the selected spin is up and both its neighbors point down, as we start here, the energy decreases when the spin flips. By the way, this is exactly the example that has been worked out in my book, and I did it all in terms of the Ising model so that it is easy for students to understand. Now, there is something very important in a Monte Carlo simulation: you make the moves with a bias; that is, you favor those moves that lower the energy. You can easily see that, with a ferromagnetic interaction, if I flip the tagged spin when it is antiparallel to its neighbors, the energy decreases, and such a move is accepted with unit probability. If the energy remains the same after the flip, which happens when one neighbor is up and the other is down, then I take the changed configuration with probability one half. However, if the energy increases, there are two possibilities, and this is very interesting: you may still accept the move, but you accept it with a penalty. By making these choices you bias the system toward low-energy configurations. Why?
Because Boltzmann told us that the properties of the system are weighted by e^(−βE). So here, as I told you, the tagged spin has two options: it flips, or it does not flip. If flipping lowers the energy, the flip is accepted with unit probability. If flipping raises the energy, say a down spin going up against its neighbors so that the energy change is ΔE > 0, then you still accept it, but with a penalty: with a probability chosen such that total probability is conserved. The flip probability is p(flip) = e^(−βΔE) / (1 + e^(−βΔE)), and then I keep open the option of no spin flip, which has the probability p(no flip) = 1 − p(flip). So the two must add to unity; probability must be conserved. If the energy would go up, I have the option of staying in the same configuration, and that is given by the second equation. (To remind you, in the figure the spin at the extreme left should actually be down.) What I have described so far is the version with discrete spin dynamics. As you know, for a Lennard-Jones liquid, or for water, we have a potential that is continuous. In that case what we do is very interesting: we take one of the N particles and compute all its interactions with the other particles. Then we alter its coordinates by random amounts, moving the particle a little bit, and recalculate the energy. If the energy goes down, we accept the move; if the change is zero, we again accept the move. One more thing: in the spin model we had the no-flip option. Here, "no flip" means that we do not make the change to the 3N position coordinates. Instead, we call a random number r, and only if e^(−βΔU) is greater than r do we accept the uphill move; that means the move is accepted only when the energy increase is not very large. Why do we do that? Because if we keep rejecting all uphill moves, the sampling becomes very, very demanding and we do not get anywhere: many times we have to go up the hill a little before going down. If I reject every move where the energy goes up, that might trap me in a very small minimum, and my statistics become poor. So even when the energy goes up, we apply the Boltzmann factor e^(−βΔU) and accept the move when ΔU is positive but not very large. This technique that I just explained is essentially the Metropolis algorithm. What we ultimately need is the calculation of the partition function, or of averages; that is what statistical mechanics is, calculating averages with the Boltzmann factor. The average can be estimated over a set of points generated in configuration space, and in the Metropolis method we simply avoid the regions where the value of the integrand is negligible: when the energy goes up very high, we neglect that move, make another move, and so create a Markov chain, and we continue. It is as if we are propagating a trajectory, but the trajectory does not follow Newton's laws of motion; it follows the Boltzmann factor and a random number called again and again. These random moves, as I said, form a Markov chain, in the sense that successive steps are independent of each other, and the random numbers allow me to move every atom in its own three-dimensional space in a random direction by a random amount, after which I calculate the energy. So this is the Metropolis
algorithm, as I said. This is what I have described in the flow chart; it is a little small, but you young people will be able to read it: you generate the move, calculate the energy, accept the move if the energy goes down, and if the energy goes up, accept it with the Boltzmann probability; and so it goes on. There is a little more detail, because one thing that we want to implement is what is called microscopic reversibility. That means if you go to a place, you should be able to come back; and since we are at equilibrium, this is essential. This concept was introduced by Onsager as microscopic reversibility. In chemistry we already knew it as detailed balance, but microscopic reversibility is more detailed and much more profound. Microscopic reversibility means that the probability of the initial state times the rate of going to the new state equals the probability of the new state times the rate of coming back. This is what we use in chemistry all the time: P₁K₁₂ = P₂K₂₁, and that going back and forth has to be maintained. So it is a very important condition, and it has to be implemented in choosing the scheme, in the transition probability from one state to another, because, and this is the important point, it guarantees that you are at equilibrium: microscopic reversibility is the condition of equilibrium. Every move must respect microscopic reversibility, otherwise you can drift away from equilibrium. And the Monte Carlo algorithm, is it something we can study only on equilibrium systems? We can study non-equilibrium systems too, but that is far more demanding, and we are not going to discuss it; what we discuss here are equilibrium systems, where microscopic reversibility must be enforced. Many studies, both analytical and numerical, went wrong because microscopic reversibility was not maintained. So here are the acceptance and rejection rules, very similar to what I showed for the Ising model, so I am not going to go into detail: it is the same random-number scheme I have described a couple of times. But one has to make sure that you are at equilibrium, and that you also increase the rate of exploration of configuration space; you do not want to move so slowly that you do not go anywhere. If you do not accept any uphill moves, for example, you are just not going anywhere much of the time. Here again is the flow chart of the Monte Carlo simulation of the one-dimensional Ising model that we did. Now, there is an important technique that one often uses, called importance sampling. The name refers to the following problem. As I told you, the trouble with Monte Carlo simulation, and also with molecular dynamics simulation, is that once the system falls into a trap, a deep minimum, it does not come out. This is a terrible, terrible problem; many molecular dynamics simulations were wrong because the simulated system was stuck in a minimum, even though that minimum is only a small part of the configuration space, and many other domains need to be explored to get a proper average. These kinds of things are handled by importance sampling. It also matters for real events like chemical kinetics, where reactant and product are separated by a large barrier, a high free energy barrier. As we have emphasized, Monte Carlo with the Metropolis algorithm essentially ensures a Boltzmann averaging over configurations, a Boltzmann sampling of the configurations. Importance sampling, on the other hand, tries to go where the Boltzmann probability distribution is very small, where the conventional Metropolis algorithm does not work. As we already said, these are processes like phase transitions, where one has to go over a large free
energy barrier; even obtaining the free energy barrier itself is very important. The way it is done is actually quite smart: you define a new distribution and sample from that other distribution. You write the expectation value as an integral and multiply and divide by a distribution g(x) of your choice: ⟨f⟩ = ∫ f(x) p(x) dx = ∫ [f(x) p(x) / g(x)] g(x) dx. So the average becomes an average of the quantity f(x)p(x)/g(x) over the distribution g(x), and g(x) you now have the freedom to choose; you choose it so that it is not small in the region you want to probe. The most important case is rare-event sampling, where you otherwise never get to the barrier region. As shown on the right side, if you have to cross a large barrier, unbiased sampling can take days to years. However, you can devise a biased distribution, defined here through a factor e^(−βW(x)), and that pulls the system toward the region of interest. The bias W(x) shown is a harmonic well placed exactly at the barrier maximum, so that the minimum of the biased potential sits exactly at the barrier; the two added together allow you to sample the barrier region. I am not going to go through the mathematics, but you can see how the new factor e^(−βW(x)) is introduced and later divided out again, and then you get the sampling over the region of importance. These are more advanced techniques, but it is good to know them. Now there is another practical issue: how do you keep the temperature and the pressure fixed? These quantities fluctuate, and these are small systems. Temperature is easy to understand: you control it by rescaling the velocities, as described here. From the equipartition theorem I know that the average of the velocity squared gives the temperature; so when the temperature goes up, you scale the velocities back. There are many ways of doing this, but controlling the temperature can be done by velocity rescaling; similarly, if the temperature falls, I can scale the velocities up. However, it is easier said than done, because you have to do it systematically, without perturbing the trajectory, and for that different methods have been developed. The first was by Hans Andersen, the Andersen thermostat. He considered an external heat bath that pervades the system; each molecule undergoes collisions with the bath at a collision frequency given by a probability, and that maintains the Maxwell velocity distribution of the particles. Then, how do we control pressure? Pressure is a little more tricky, and a little more dangerous. When we do NPT simulations, or canonical or grand canonical ensemble simulations (much of the time we do microcanonical), we need to get the pressure correct. Many times what we do is keep the pressure fixed, run the trajectory, stabilize the system, and see what the volume is; then we remove the pressure coupling. But before that we need the pressure, and the pressure is fixed by using a barostat, as it is called. As we know from experiment, pressure is kept constant in the NPT ensemble by rescaling the box vectors: just as pressure in the laboratory is applied by a piston, here you control the volume of the simulation cell. The barostat uses a technique very similar to the thermostat: you couple the system to the barostat and, at each time step, instead of moving only the atoms and molecules, you also change the coordinates of the box. In the more general formulation you change not only the size but also the shape of the box. There is a very nice piece of history here: the first computer simulation of a solid-state phase transition was done by Aneesur Rahman with a box that changes shape, and that method was then developed much further by Parrinello, who worked extensively with Rahman. So you
are now changing the shape and the volume. By doing that, your system is responsive, and you can hold the target pressure. As I said, there are lots of packages, but most of them are molecular dynamics packages, like GROMACS, LAMMPS, and NAMD. As I told you, for Monte Carlo simulations most of the time you have to write your own code; there are some codes available in different books and papers, but there are no packages in the same way, and one usually has to be more creative in doing Monte Carlo simulation. Now, here is a question experimentalists have often asked me when I give a talk: do you imagine that at some stage simulations will replace experiments? That will never happen. But it does tell you one thing: simulations are getting more and more powerful. More importantly, simulation gives you information that many times experiments cannot; it gives you microscopic information, and it shows you how molecules are interacting. By comparing theory, simulation, and experiment, you learn whether your force field, your interactions, are right, and that is a lot: once you know how the molecules and atoms interact, you can change the interaction by some substitution and create new materials. So there is also real technique in all of this, and I have done only a part of the job here; I strongly recommend that you read the books, and many other books are there.