streaming, so I think we can stop. We can wait; it's probably not yet two-thirty. Let's wait a few minutes past two-thirty, if Roberto agrees. Sure. Or you tell us when you are ready. For me, we can start. We can start. Yes. Do you see me? Yes, yes we do. Okay. Good morning everyone, and welcome to this first keynote lecture of the school, which will be given by Roberto Car from Princeton. Roberto hardly needs any introduction to any molecular simulation practitioner, so I will just say a few words about the many recognitions and awards he has received along his long and very productive career. You all know, of course, the groundbreaking work that he did in Trieste in the winter of 1984-85 with Michele Parrinello. That work immediately attracted huge attention from all over the world and started to earn them many prestigious prizes, the first of which, if I remember well, was the Hewlett-Packard Europhysics Prize of the European Physical Society, awarded in 1990. The two of them have received a number of other prestigious prizes, including the Dirac Medal of the ICTP, and many others. Roberto was elected a member of the National Academy of Sciences in 2016. The last link of this long chain of awards and prizes has been, this year, the Benjamin Franklin Medal, which is awarded yearly by the Franklin Institute in Philadelphia to scientists and entrepreneurs in the United States. The Franklin Medal is probably the most prestigious scientific prize awarded in the USA, and previous awardees include the likes of Albert Einstein, Maria Skłodowska-Curie, Stephen Hawking, and many others. The latest interests of Roberto include the physics of water and other hydrogen-bonded systems, with attention also to the role of quantum effects. These are very time-consuming molecular simulations, much more so in the case of quantum simulations that include nuclear quantum effects via path integral molecular dynamics.
And this, I think, is one of the reasons that brought Roberto in contact with modern machine learning techniques, which make it possible to combine the accuracy of the ab initio molecular dynamics that he pioneered 35 years ago with the economy of classical force fields. Let me conclude this presentation with a few personal recollections about Roberto and the strong links that have always tied him to the Quantum ESPRESSO project. The great work of Car and Parrinello, as I said and as all of you remember, was done in the winter of 1984-85, when Roberto was fresh from the United States. He arrived in Trieste in the fall of 1984, coming from the IBM laboratories in Yorktown Heights, and among the many pieces of knowledge that he brought back to Europe was a plane-wave code that was, at the time, pioneering in many ways. The main features of that code were the use of FFTs to do real-space convolutions, which is one of the key technical features of modern plane-wave-based codes, and a very smart use of symmetries. These two features still exist, more or less untouched, in the modern distribution of Quantum ESPRESSO.
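As an aside for readers following along: the dual-space trick mentioned here — applying a local potential by switching between reciprocal and real space with FFTs, rather than evaluating the convolution directly in G-space — can be illustrated with a small NumPy sketch. This is a 1-D toy with illustrative names, not code from Quantum ESPRESSO.

```python
import numpy as np

def apply_local_potential(psi_g, v_r):
    """Apply a local potential to a wavefunction given by its plane-wave
    coefficients psi_g: transform to real space, multiply pointwise,
    transform back.  This equals the G-space convolution (V*psi)(G),
    done in O(N log N) instead of O(N^2)."""
    psi_r = np.fft.ifft(psi_g)       # reciprocal space -> real space
    return np.fft.fft(v_r * psi_r)   # multiply, then back to G space

# Check on an 8-point toy grid: the FFT route must reproduce the
# explicit circular convolution with the potential's Fourier coefficients.
n = 8
rng = np.random.default_rng(0)
psi_g = rng.standard_normal(n) + 1j * rng.standard_normal(n)
v_r = rng.standard_normal(n)
v_g = np.fft.fft(v_r) / n            # Fourier coefficients of V(r)
direct = np.array([sum(v_g[(g - gp) % n] * psi_g[gp] for gp in range(n))
                   for g in range(n)])
fast = apply_local_potential(psi_g, v_r)
assert np.allclose(direct, fast)
```

The same pattern, in three dimensions and with careful grid handling, is what makes the application of the local potential scale as N log N in plane-wave codes.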
Shortly after, Paolo Giannozzi and myself started to work on more traditional self-consistent approaches to DFT, and we combined ideas from Car and Parrinello with those more conventional approaches, implementing what I think was, at the time, the first attempt to avoid matrix diagonalization and replace it with modern iterative techniques. Then density functional perturbation theory came, and soon after, the many students who passed through Trieste continued their scientific journeys, taking back home, or to other destinations, their own versions of the codes, which very soon diverged. So all of the codes that are now in the Quantum ESPRESSO distribution originated from that first plane-wave code that Roberto brought to Europe from Yorktown Heights. Soon those individual codes stopped talking to each other and became incompatible with one another. So when I visited Princeton in 2002 — at a time when many other characters that all of you know were, or had shortly before been, in Princeton, including the likes of Paolo Giannozzi, Nicola Marzari, and Ralf Gebauer — in that summer of 2002 Roberto and the others, including myself, decided to attempt a kind of grand unification of the main codes that originated from the first plane-wave code brought back to Europe by Roberto in 1984. That was the birth of the Quantum ESPRESSO project. At first the project didn't have its name; we tried a few names that, for one reason or another, didn't survive, but the final name was found in 2004, as Ralf reminded us this morning, while we were at the summer school on electronic structure theory in Beijing, waiting at the Beijing airport to board a flight to Xi'an. I think the originator of the present name is Nicola Marzari, to whom go the credit and our gratitude for finding the name. I have been talking for too long already, so let me conclude this introduction by thanking Roberto for accepting to be our first distinguished speaker, and I wish all of you happy listening to Roberto's lecture on machine learning and ab initio molecular dynamics. Please, Roberto.

Thank you, Stefano. Let me see if I can share the screen. Can you see the screen? Yes. Okay, very good. So, I would like to thank Stefano for the very nice introduction and for all these recollections; in fact, the recollection on the name of Quantum ESPRESSO is very nice, and I just remember that all the names we came up with before were just awful. Finally Nicola was able to find a very good name, and I think that makes this project really distinct from other projects of that kind. Now, in my lecture today I am going to discuss mostly machine-learning-based ab initio molecular dynamics, which is the topic I have been working on in the last five years. You all know what ab initio molecular dynamics does: essentially, it solves Newton's equations with forces that are derived from the quantum mechanical state of the electrons. You see in the animation here the growth of a crystalline nucleus in water; that is a simulation done with metadynamics using a standard force-field model for water. Until very recently, that is something that could be done only with empirical force fields, and the reason is that, even though one just does classical molecular dynamics to find the trajectories that give rise to the nucleation of a crystalline nucleus, ab initio molecular dynamics is very expensive even with the most powerful computers, so this kind of simulation was definitely out of reach with ab initio MD. On the other hand, recently there has been very important progress based on marrying ab initio molecular dynamics with machine learning. In fact, by learning the potential energy surface from quantum mechanics, machine learning methods make possible simulations of ab initio molecular dynamics quality at force-field cost. Essentially, what machine learning does is learn the local patterns of atomic configurations that give rise to the potential energy surface, and it interpolates the results obtained from the learned patterns to patterns that were not included in the training data set but are close enough to the learned ones, which is in fact what happens in molecular dynamics simulations. There are several methods to do that, and I will discuss the method that was developed in Princeton. This method is called the Deep Potential method, and it uses deep neural networks to model the potential energy surface and other ground-state properties that are accessible to ab initio molecular dynamics. It was developed in the PhD thesis of Linfeng Zhang, who is now back in China; Linfeng received his doctoral degree in May of last year, and several other people contributed to the effort. I just want to mention that, besides Linfeng, Professor Weinan E in the math department at Princeton and myself were the directors of Linfeng's thesis.

Okay, let me move on and give some detail. Essentially, we have a physical property — in the examples that I will show, the potential energy, the polarization, or the polarizability surface. This is an observable property O that can be a scalar (the energy), a vector (the polarization), or a tensor (the polarizability). What the Deep Potential method does is represent this property as a sum of local properties: the index i runs over the atoms in the system, and each local property depends on the coordinates of the atoms that belong to the neighborhood of that particular atom. These are the local patterns I was talking about before, so this is a local representation. However, the local properties depend on the
atoms that are included in several coordination shells of atom i. Within that neighborhood, the function f is a truly many-body function of the atomic coordinates: the property f is represented by a symmetry-preserving, continuous, and differentiable function of the atomic coordinates in the environment, which contains a variable number of atoms. There are some distinctive features in this approach. One is the flexibility of the deep neural network representation, which allows us to model essentially any function, provided the function can be represented in terms of local properties. And this is a very important feature: representing the property as a superposition of local properties makes it extensive by construction, which is important for extending the simulation to systems of much larger size than those used to learn the local environmental dependence from density functional theory calculations. Another very distinctive property of our approach is that the learning process can be done on the fly, using an efficient approach called DP-GEN that I will describe later in a little more detail. Finally, the approach has been implemented on high-performance computing platforms, particularly platforms based on GPUs, and has very high computational efficiency there.

Let me move on. As I said, the first observable property that we represent is the energy — the potential energy surface, given here in the first line. Another property that we represent is the cell dipole, which I call M here; in the case of the cell dipole I specialize the description to water. We have a dipole associated with the oxygen; the charge is six because we use a pseudopotential representation, so six is the charge associated with the ionic core of the atom and its six valence electrons; we have a charge of one for the hydrogen; and then we have the electronic centers of charge, which are the Wannier centers. Actually, in water — irrespective of whether water is still in its molecular structure or is partially dissociated into ions — these Wannier centers are always associated with an oxygen, so we can construct the average of the four Wannier centers, each one carrying two electrons, associated with an oxygen; we call that the Wannier centroid. This quantity gives the cell dipole, and the derivative of this quantity with respect to an externally applied electric field, at fixed positions of the nuclei, defines the polarizability tensor alpha, which also has the local representation given here. All this is described in the paper quoted below, and it allows us to do a number of things that can be done with standard ab initio molecular dynamics, but with much higher computational efficiency.

So let me move on and spend a few words on the Deep Potential generator. In a neural network representation — in this diagram I show a network with three layers of nodes, or neurons — a certain property A, which depends on the descriptors s (in our case, functions of the atomic coordinates), is given by a complex functional form that depends on the parameters w and b. This is a convolution of a convolution of a convolution, so it is a rather complex function that depends in a highly nonlinear way on the parameters of the network, which have to be learned by minimizing the distance between the predictions of the network and the actual DFT calculations. In the DP-GEN procedure, rather than first performing a large number of DFT calculations, or of ab initio molecular dynamics, and then trying to describe the results with a neural network, we exploit the nonlinear dependence of the network on the network parameters, which means that
if we do a minimization to find the network parameters — the minimization is done with a stochastic gradient descent procedure, which finds a local minimum — we will typically get different local minima, that is, different network parameters, if we start from different initializations. So what we do in DP-GEN is use a Gaussian distribution for the initialization of the network parameters, and then we select a few neural networks generated with different parameters. In principle there is a whole ensemble of neural networks that can be generated by simply varying the initialization; in practice this ensemble is already well represented by a few members, three or four. Now, what happens if the network is well trained? Even though the network parameters after the minimization are different, all the members of the ensemble will represent the physical property of interest equally well. So we can explore the phase space with the deep neural networks trained so far, and we keep following this exploration procedure, which is done with molecular dynamics — but molecular dynamics guided by the neural network, which is computationally very efficient, much less costly than a quantum mechanical calculation — while measuring how close the predictions obtained with the different members of the ensemble are. When we reach a point where the predictions start to diverge, it means that the neural network is no longer good enough to describe the property at that thermodynamic condition. So we label that configuration, we do a quantum mechanical calculation for it, we add it to the library of training data, and we improve the training of the model with this additional configuration. We proceed in this cycle until the network is sufficiently good over the entire range of thermodynamic conditions of interest. This is the procedure that we use.

Let me move on. Having described what deep potential molecular dynamics does, I will now show a few examples that are definitely beyond the reach of direct ab initio molecular dynamics simulation. The examples I will talk about today are the phase diagram of a SCAN-based DP model, the autoionization of water, the homogeneous nucleation of ice, and something to do with vibrational spectra. Let me just say that most of the calculations I will show today are based on classical MD simulations, but we are also working on including nuclear quantum effects via path integral simulations, as Stefano mentioned in his introduction.

The first example is a deep potential model for water based on the SCAN DFT functional. This work is going to appear soon in Physical Review Letters and is available as an arXiv preprint. In this work we were able to model water over a vast range of thermodynamic conditions, including the crystalline phases, in a range of pressure going from ambient pressure to 50 gigapascals, and for temperatures extending, at low pressure, from zero to 400 kelvin and, at high pressure, beyond 2000 kelvin. In this vast range of pressure and temperature there are many different crystalline phases, indicated here — ice Ih, XI, II, VI, VIII, VII, and VII″ — and a fluid phase that is just the standard molecular fluid at low pressure and becomes an ionized fluid at high pressure. The red lines indicate the boundary lines between the different phases obtained with the DP model.
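Before continuing with the phase diagram, the DP-GEN loop just described — an ensemble of networks trained from different random initializations, cheap model-driven exploration, and labeling of the configurations where the ensemble predictions diverge — can be sketched in code. This is an illustrative toy, not the actual DeePMD-kit/DP-GEN implementation: the three analytic "models", the trust-window values, and all names are invented for the example.

```python
import numpy as np

def max_deviation(models, conf):
    """Model deviation: the spread of the predictions of an ensemble of
    networks trained from different random initializations.  In DP-GEN,
    a configuration is sent to DFT for labeling when this spread exceeds
    a trust threshold."""
    preds = np.array([m(conf) for m in models])
    return float(np.max(np.std(preds, axis=0)))

def select_for_labeling(models, trajectory, lo=0.05, hi=0.30):
    """Keep configurations whose deviation falls in the trust window
    [lo, hi): below lo the ensemble already agrees, above hi the
    configuration is too unreliable to be a useful training point."""
    return [conf for conf in trajectory
            if lo <= max_deviation(models, conf) < hi]

# Toy ensemble: three "networks" that agree near the training region
# (small |x|) and disagree far from it, mimicking extrapolation error.
cs = [-0.05, 0.0, 0.05]
models = [lambda x, c=c: np.sin(x) + c * x**2 for c in cs]
trajectory = [np.array([0.1]), np.array([0.5]),
              np.array([2.0]), np.array([6.0])]
picked = select_for_labeling(models, trajectory)
# only x = 2.0 lies in the trust window: the deviations grow as x**2
```

The upper bound of the trust window reflects the point made in the lecture: a configuration where the models diverge wildly is not merely unconverged, it may be unphysical, so one labels configurations just past the edge of the models' agreement.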
The gray lines indicate the correct boundary lines as obtained from experiment. We can see that overall the qualitative picture is quite good, but there are deviations between the predictions of the model — which are essentially the predictions of this particular DFT functional for this system — and experiment. In this small inset I show the phase diagram predicted by one of the best available empirical potential models for water, compared with experiment. We see that there are deviations from experiment with this model too, and it becomes particularly bad at high pressure, because at high pressure the ice rules that characterize water structure at low pressure are broken much more often, giving rise to partially ionized configurations. These partially ionized configurations are not possible with a rigid water model such as TIP4P, whereas they are possible with ab initio MD and with the deep potential model. I want to mention a couple more things. Here we have ice VI, but experimentally, at low temperature, ice XV would be the most stable structure; within the deep potential model, ice XV is only metastable. This may be related to the difficulty of stabilizing the ice XV phase with standard semilocal functionals. Also, ice III, which is stable in this small region here experimentally, is only metastable in the deep potential model; that may be due to the fact that the boundary between hexagonal ice and ice II is pushed to higher pressure in this model. You can also observe that the melting line of hexagonal ice is pushed to higher temperature by approximately 40 kelvin compared
with experiment. Now, these simulations were all done classically, so there will be corrections due to nuclear quantum effects, but I expect those corrections to be smaller than the deviation of the model from experiment. What is interesting is that a single deep potential model can describe water over this vast range of temperature and pressure. In order to learn the local patterns over this range, we explored this huge phase space with the deep potential, and we had to perform fewer than 35,000 DFT calculations to construct the potential energy surface — approximately 0.05 percent of the total number of configurations visited with the DP-GEN technique. The error in the free energy is estimated to be approximately one millielectronvolt per molecule; this is much better than chemical accuracy, although of course it is the error relative to the DFT reference. So the deep potential representation is really quite accurate.

I want to mention one more point, something that I think has been possible for the first time with these simulations: the study of the transition from ice VII to ice VII″, and then to the ionic fluid. Suppose we move along an isobar over here. As we increase the temperature, ice VII, which is still a molecular system, starts developing ionic diffusion; the ionic diffusion grows exponentially with temperature and eventually saturates. Right before saturating, there is a more rapid variation in enthalpy and volume, but this variation, in simulations done here with a 432-molecule cell, appears reversible — we can go up and down — and yet there is still this more rapid variation. What we could do with these simulations, which could not be done before, is study what happens to this rapid variation as we increase the size of the cell. We went to approximately 3500 molecules, and what we see is a sharpening of the rapid-variation region, both in the enthalpy and in the volume. This is consistent with a weakly first-order phase transition between ice VII and ice VII″, and indeed, a year or so ago, it was also found experimentally that this superionic transition between ice VII and ice VII″ is weakly first order. I will not spend more time on that; it is just to say that issues like finite-size scaling, which are important to assess the nature of a thermodynamic transition, become accessible with deep learning.

Let me move on and discuss another example, one that has not yet been published, because we are still studying nuclear quantum effects on it; here I will discuss it only at the classical level. We know that the self-ionization of water is what gives rise to the pH of water, and the quantity Kw, the ionization constant of water, is given by the product of the concentrations of the ions produced by the dissociation of the water molecule. I have here an animation that shows the recombination of these two ions. This animation was made by Chunyi Zhang at Temple University, and it shows a recombination process as first described in a paper by Ali Hassanali and Parrinello's group, published in the Proceedings of the National Academy of Sciences in 2011. Now we can describe these processes with deep potential molecular dynamics. Even though recombination can occur spontaneously if the ions are close enough, the dissociation requires enhanced sampling techniques, and
in particular, we use metadynamics, with a reaction coordinate s that was recently introduced in Parrinello's group and is based on a Voronoi tessellation of the disordered structure of the oxygens in the system. The difficulty in this simulation is that, as the simulation proceeds, the protons keep diffusing all over the place, so it is difficult to keep track of which ion is which. With traditional collective coordinates based on distances, one has spherical distributions, and if we use coordinates that depend only on spherical distances, there is an overlap between the region associated with the hydronium and the regions associated with neighboring water molecules; if we are in the overlap, we do not know whether the hydronium is here or there. With the Voronoi tessellation, this problem no longer occurs, and we can study the process with a reaction coordinate of the form suggested in the paper from Parrinello's group. We can then compute the pKw, which is the negative logarithm of the quantity Kw, and we can study it as a function of system size. In fact — this corresponds to a system of 500 molecules, and this to a system of 1000 molecules — we see that we achieve convergence with respect to size only with sizes that are beyond what would be possible with ab initio MD. We do this at 330 K and ambient pressure, using a deep potential model based on the SCAN functional. For the pKw we did the comparison with heavy water rather than light water — even though for us it would be the same to use one or the other — simply because this is based
on classical simulations, and heavy water should be closer to the result of a classical simulation. We see that the agreement of the pKw with experiment is rather good. These are difficult properties to predict from first principles, but now it becomes possible to do it.

Let me move on and describe another example. Here we consider the nucleation of ice from seeding simulations. A seeding simulation is one in which nucleation is studied by inserting a crystalline seed inside the liquid, as I will describe in the next couple of slides; one then uses the results of the seeding simulations together with classical nucleation theory (CNT) to obtain the nucleation rates. But before doing that, we need the difference in chemical potential between solid and liquid at various temperatures, from enhanced sampling simulations for a SCAN-based deep potential model. The chemical potential difference as a function of the supercooling of water is shown here. The supercooling is the supercooling that one has in the model: in experiment, the supercooling is relative to the melting temperature of ice, which is 273 kelvin; in the DP model, the melting temperature is around 310 kelvin, as shown here by this coexistence calculation, so it is the supercooling with respect to that melting temperature in the model; and similarly, the supercooling for TIP4P/Ice is with respect to the melting temperature of ice for TIP4P/Ice. We can see, as a function of the supercooling, the difference in chemical potential obtained with enhanced sampling simulations — techniques, again, developed in
Parrinello's group, and I have been very happy to have the opportunity to collaborate with Pablo Piaggi, who received his PhD under the direction of Michele and is now a postdoc at our center at Princeton University. You can see that the deep potential model, when compared to experiment in this way, actually describes the difference in chemical potential slightly better than TIP4P/Ice, in spite of the fact that TIP4P/Ice has much better agreement with experiment for the melting temperature.

Now let's go to the seeding simulations. First, let me say that I haven't had time to check how the different quantities are indicated in this animation. The rates are derived from this expression, the one given by classical nucleation theory. Essentially, the difference in free energy upon inserting a solid nucleus into the surrounding liquid reflects the fact that one gains free energy in the bulk solid but loses free energy at the interface. So one has this expression for the free energy difference, where the r-cubed term gives the dependence on the volume of the nucleus, which is assumed to be spherical, and the r-squared term gives the dependence on the surface — a free energy cost due to the mismatch between the solid and the liquid at the interface. For a sufficiently large volume, the stable bulk phase wins. Indeed, if we perform simulations, as shown here in the upper diagram on the right, monitoring the number of ice-like configurations at different temperatures, we see that when the temperature is low the ice-like structures decrease with time — time is of the order of nanoseconds here — whereas the structures grow at higher temperature, and they remain stable in a small range of temperatures.

Now, the expression for the rate is given here. The free energy difference at the size of the critical nucleus — the subscript c indicates the critical nucleus, whose size can be found in the simulations I just described — gives the thermodynamic factor, but there is also a dynamical factor that depends on Z and f⁺, in addition to the density of the fluid. Z is the so-called Zeldovich factor, and f⁺ is the kinetic prefactor, something one can also observe in the simulation: it is the accretion rate of molecules by the crystalline nucleus. Within classical nucleation theory, the Zeldovich factor has an analytic expression in terms of the chemical potential difference, the temperature, and the critical number of crystalline molecules; essentially, it gives the probability that, sitting exactly on top of the barrier separating the region where the nucleus is unstable from the region where it is stable, one goes to the growing side of the nucleus rather than dissolving by going back — similar to what one does in rate theory for defects in solids, for instance. And here are just two simulations: this one includes 12,000 liquid water molecules in total, with 600 molecules in the cluster, and one can see that at this temperature the cluster reduces its size, whereas at this other temperature the cluster increases its size. Now, the point is that the lower the supersaturation — that is, the smaller the supercooling, i.e., the higher the temperature — the simulations need
to be done with larger system sizes and larger clusters, because the critical nucleus will be larger at these higher temperatures; in fact, we have simulations running now with more than 100,000 water molecules, all at ab initio MD quality. Let's see the results for the nucleation rate. These are the experimental data points, these are the results with TIP4P/Ice, and these are the two simulations that we have done so far; interpolating with classical nucleation theory we get this green line. This is the result obtained with mW, another empirical potential model for water. And what is this green region here? The green region is the uncertainty in our simulations due solely to the uncertainty in the temperature estimation: we estimate that uncertainty to be plus or minus two kelvin, and with plus or minus two kelvin you see that the nucleation rate could fall anywhere within the green region. The important result is that the ab initio results for the nucleation rate and the surface free energy are in the right ballpark — indeed, not in bad agreement with experiment, given the uncertainty in this very delicate quantity.

Now let me move on. I mentioned vibrational spectroscopies, infrared and Raman. Using the cell dipole, or the polarizability, one can compute time correlation functions, and from these time correlation functions one can obtain this quantity, which is related to the imaginary part of the dielectric function that gives the absorption. You see the comparison between theory and experiment here — again, the temperature dependence of the Raman scattering cross section at very low frequencies, where
In this region the intensity in the Raman spectrum is approximately two orders of magnitude lower than at the frequencies of the stretching modes, which give the highest intensity in the spectrum, so it is very difficult to get results there. But not only could we get results, we could also study the temperature dependence and compare with experiment, with simulations based on the deep potential model. Let me spend a few words here on the effect of the DFT approximation. Here I do not apply any shift: I report experimental data for the density of liquid water and of ice. The data points for ice should go up to the melting temperature, 273 K, but anyhow, that is how the density behaves, and you see that in experiment there is a discontinuity in the density going from the solid to the liquid. That is the result with SCAN: the discontinuity has the right sign, but it is smaller than in experiment, and everything is shifted to higher temperature. We also see that in the liquid there is a temperature of maximum density, approximately here; with SCAN there is also a temperature of maximum density, approximately over here. If we go to a hybrid functional approximation, PBE0-TS, we see that it improves a little with respect to SCAN; and here we have the hybrid version of SCAN, in which we include 10 percent of exact Hartree-Fock exchange, and we see that the discontinuity is approximately right and the results are pushed down to lower temperature. We also have some preliminary data using path integral molecular dynamics, with which we see that the temperature of maximum density is pushed down to approximately 290 K, only about 10 K above the experimental result, so we
are quite confident that by climbing the Jacob's ladder of density functional theory one can improve also on these delicate properties. Since we are talking about Quantum ESPRESSO here, I want to report a couple of slides that I received from Hsin-Yu Ko and Robert DiStasio. Rob is a former postdoc at Princeton who is now a professor at Cornell University; Hsin-Yu is a former student at Princeton who is now a postdoc working with Rob; Biswajit Santra was also a former student at Princeton. Within Quantum ESPRESSO they have developed a linear-scaling technique to compute the exact exchange, based on a representation of the wave functions in terms of maximally localized Wannier functions. The code is available: this is the alpha version of exx, currently available in the CP module of Quantum ESPRESSO. It allows one to do NVE, NVT, and NPH/NPT simulations with a variable cell, including the contribution of the variable cell to the exact exchange, and the code scales quite well on massively parallel computers. There is a version that they are developing right now, which will be accessible in the near future, in which they have modified the way of exploiting the localization of the Wannier functions so that it is much more efficient: it achieves a speedup of 6 compared to the alpha version for liquid water, a speedup of 15 for a TiO2-water interface (the more heterogeneous the system, the more effective the beta approach is compared to the alpha one), and a factor-of-30 speedup for a metal-doped molten salt. So that is a capability that will be added to Quantum ESPRESSO in the near future. At this point I think I have used all my time, so I want to just spend a few words
on conclusions and outlook. Deep potential molecular dynamics is several orders of magnitude more efficient than direct ab initio molecular dynamics, and it scales linearly with size thanks to its local representation, opening the way to studies well beyond the reach of ab initio molecular dynamics: complex phase behavior, reactions in solution, nucleation, dynamic response. The deep potential is a proxy for density functional theory, and quantifying the model's deviation from the reference is important, as I have shown; in the deep potential method, data-selection algorithms such as DP-GEN that achieve this goal are crucial, for example through the spread of a network ensemble. Reweighting of deep potential data I have not talked about, but since the deep potential is so close to ab initio molecular dynamics, configurations sampled with the deep potential model can be reweighted to recover the true ab initio predictions. More properties become accessible to first-principles calculation, and this therefore provides a wider check of DFT approximations. There are a number of open issues. The local representation does not describe long-range electrostatics explicitly, which may be important in highly heterogeneous systems. The whole issue of quantum corrections is open: quantum corrections to statics can in principle be treated exactly with path integral simulations, but there are also quantum corrections to dynamics and spectroscopy, which require approximations because of the sign problem in sampling the appropriate correlation functions. And then there is the issue of chemical accuracy, with DFT and beyond. With that, I thank you all for your attention. Of course there are many people whom I should acknowledge, some of whom were mentioned here. I want to recall again Linfeng for his fantastic work in developing the deep potential method, and also Han Wang, another collaborator from China, who has been behind many of these applications.
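The model-deviation idea mentioned in the outlook, using the spread of an ensemble of independently trained networks to decide which configurations need new ab initio labels, as in DP-GEN-style concurrent learning, can be sketched as follows. The threshold values are illustrative placeholders, not the ones used in practice.

```python
import numpy as np

def max_force_deviation(forces_ensemble):
    """Model deviation for one configuration.

    forces_ensemble: (n_models, n_atoms, 3) forces predicted by an
    ensemble of deep potential models trained from different random
    initializations.  Returns the maximum over atoms of the standard
    deviation of the force vector across the ensemble.
    """
    f = np.asarray(forces_ensemble)
    mean_f = f.mean(axis=0)                                  # (n_atoms, 3)
    # per-atom spread: sqrt(<|f - <f>|^2>) over the ensemble
    dev = np.sqrt(((f - mean_f) ** 2).sum(axis=2).mean(axis=0))
    return dev.max()

def select_for_labeling(dev, lo=0.05, hi=0.20):
    """Trust window: below `lo` the models already agree (nothing to
    learn); above `hi` the extrapolation is too wild to trust.
    Thresholds here are illustrative, in arbitrary force units."""
    return lo <= dev < hi
```

Configurations falling in the window are sent back for a new DFT calculation, which is how the training set grows only where the ensemble disagrees.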
I also thank the people currently at the CSI center in Princeton, and of course the Department of Energy for supporting us. Thank you.

Thank you very much, Roberto, for this fantastic lecture. I don't know exactly how to proceed with the discussion; probably the best thing would be for participants wishing to ask questions to do so in the Zoom chat, and I will read the questions to Professor Car. Let me just break the ice. Would you tell us something, Roberto, about the prospects for the open issues that you have mentioned? Is there any progress you see around the corner concerning quantum corrections, corrections to dynamics, or long-range interactions? Can you mention a few lines of research?

In fact, long-range electrostatics is essentially there. We have a code, again in collaboration with Linfeng, Han, and Weinan, that is not yet public, but it works. Essentially, it is based on using the ionic charges and the electronic charges, the latter placed at the Wannier centers, to describe the long-range electrostatics. Treating just the long range with these charge centers is enough, because at short range everything is taken into account by the deep potential. So one can separate out the long range, which is handled with an Ewald-type technique, and deal with all the rest through the deep potential. This kind of scheme works, and it shows that in typical bulk situations the deep potential is somehow able to capture the effect already through its short-range part. Even in the bulk, though, this is not completely true, because if you are looking at the longitudinal optical splitting of the phonons
in ionic materials, that will require long-range electrostatics. For most of the properties in the examples I considered before, this long-range electrostatics is not important, but as I said, there are cases in which it is, and we find that it can be taken care of, again, using the Wannier functions. In principle we could also use information on the spread of the Wannier functions, but so far we have not found that to be necessary: the centers alone are enough. Then there is the issue of quantum corrections, which is a tough problem. Quantum corrections to statics we can do, and we are studying how they affect properties by doing path integral simulations; there, again, the combination of the functional and the quantum corrections is important for understanding these effects. In the case of quantum corrections to dynamics, unfortunately, one has the so-called dynamical sign problem, due to the fact that the dynamical correlation function one is interested in is a complex quantity that oscillates wildly, so there is no way of doing the calculation exactly, at least so far. There are various methods, like ring polymer molecular dynamics, centroid molecular dynamics, or the linearized semiclassical approximation for the Wigner dynamics. These are all semiclassical approximations which, however, may work for a system like water. We are studying them, and we are also combining them with the maximum entropy method in order to include effects beyond these approximations, which are unfortunately uncontrolled, to give some control, let's say, over the uncontrolled approximations of the various methods that I described. So we are studying these things.
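The separation described above, short range absorbed by the deep potential and long range carried by point charges at the ionic positions and Wannier centers, rests on the Ewald-type split 1/r = erfc(beta*r)/r + erf(beta*r)/r. Here is a minimal sketch of the smooth long-range piece for a set of point charges, assuming open boundary conditions for brevity; a periodic implementation would evaluate this term with a reciprocal-space Ewald sum instead.

```python
import numpy as np
from math import erf

def long_range_energy(positions, charges, beta=0.4):
    """Long-range electrostatic energy over point charges (ions plus
    Wannier centers).  The erf(beta*r)/r kernel is the smooth piece of
    the Ewald split; the complementary erfc short-range piece is
    assumed to be absorbed by the deep potential.  Open boundaries,
    arbitrary units; `beta` is an illustrative screening parameter.
    """
    e = 0.0
    n = len(charges)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            # smooth kernel: tends to 1/r at large r, stays finite at r -> 0
            e += charges[i] * charges[j] * erf(beta * r) / r
    return e
```

At large separation the kernel reduces to the plain Coulomb interaction, while at short range it is smooth and small, which is what makes it safe to let the learned short-range potential take over there.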
I think that for systems like water, it may well become possible in the future to include these effects.

Thank you very much, Roberto. There are a number of questions flowing through the chat, and I will take the liberty of selecting among them. There are two general questions, one directly related to your talk, the other to the general field of ab initio molecular dynamics; you may want to take them together, or just one of the two. Two participants are asking for some clarification about the relative merits of Car-Parrinello and Born-Oppenheimer molecular dynamics. The other questions are in the same style: the relative merits of deep neural network machine learning techniques versus kernel methods based on Gaussian processes.

These are good questions. Let me start with the one on Car-Parrinello versus Born-Oppenheimer. Well, I don't see Car-Parrinello and Born-Oppenheimer as two separate ways of doing ab initio molecular dynamics, because the Born-Oppenheimer condition can be satisfied sufficiently accurately with Car-Parrinello molecular dynamics (in fact, in the limit in which the fictitious mass goes to zero, it is satisfied exactly), and also because the iterative machinery of Car-Parrinello molecular dynamics is essential for doing Born-Oppenheimer molecular dynamics as well. Nowadays we have schemes, also available I think within Quantum ESPRESSO, in both the CP and the PW modules, by which one can enforce the Born-Oppenheimer condition in an iterative approach. Now, at the beginning, when we developed the Car-Parrinello method, we also tested imposing the Born-Oppenheimer condition exactly, but it was much more expensive within our
iterative approach, much more expensive than doing Car-Parrinello with a reasonable value of the fictitious mass, so at that time we did not even write anything about it, though we should have. One more thing here: nowadays, with the much more powerful computers that are available, it is important to satisfy the Born-Oppenheimer condition as closely as possible, particularly as we move to these very accurate calculations; this is particularly important for constructing the potential. Now, regarding the second question. Of course there are various methods for using machine learning, and various groups have developed methodologies to do that; as I said at the beginning, I was only going to discuss the methodology I was involved in. Again, one can distinguish two general groups: one based on deep neural networks, and one group of methodologies based on Gaussian processes, on kernel methods. One advantage that I can see with deep neural networks is that they can represent really very complex functional behavior; in fact, they can represent essentially any function. The difficulty is then mostly in the learning process, because learning can be an NP-complete problem, so the learning process is hard, while the deep neural network representation itself can express essentially any function. Now, I don't know how well that can be done with kernel methods, although from the results I have seen, kernel methods seem to work quite fine. That is the only thing I can say. I must add just one comment: I asked my friend Weinan, who is a professor of mathematics and understands the subtleties of these things far better than I do.
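As a toy illustration of the kernel family contrasted with deep networks above, here is a minimal Gaussian kernel ridge regression fit to a one-dimensional energy curve. This is a generic textbook sketch, not any of the production methods discussed; the hyperparameters `sigma` and `lam` are illustrative.

```python
import numpy as np

def krr_fit(x_train, y_train, sigma=0.5, lam=1e-6):
    """Toy Gaussian-kernel ridge regression: learn y(x) from samples.

    The model is y(x) = sum_i alpha_i k(x, x_i) with a Gaussian kernel,
    and alpha solves (K + lam*I) alpha = y.  This is the closed-form
    training step that distinguishes kernel methods from the iterative
    optimization of deep networks.
    """
    k = np.exp(-(x_train[:, None] - x_train[None, :]) ** 2 / (2 * sigma**2))
    alpha = np.linalg.solve(k + lam * np.eye(len(x_train)), y_train)

    def predict(x):
        # kernel between query points and training points
        kx = np.exp(-(np.asarray(x)[:, None] - x_train[None, :]) ** 2
                    / (2 * sigma**2))
        return kx @ alpha

    return predict
```

The flexibility of the fit is controlled by the kernel choice rather than by a network architecture, which is one way to phrase the trade-off discussed here.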
One thing he told me is: while I understand why deep neural networks work, I still don't understand why kernel methods do. That was his comment; maybe we should keep discussing it to try to understand it better.

There are a couple of related questions on molecular adsorption on surfaces, and another related question about the present or future ability of neural network potentials to mimic weak intermolecular forces, such as van der Waals or intermediately weak forces.

Yes. What was the issue on molecular adsorption?

It is related, I would say: one participant was asking about the ability of neural network potentials to predict molecular adsorption; the other was more methodological, but the two are related, I think.

Okay. Well, of course, adsorption can be chemisorption or physisorption.

Let's say physisorption, or intermediate.

Okay. So, of course, the neural network does not yet discover new quantum mechanics, so it gives in output what it receives in input. So far, the examples I presented used networks that were trained mostly with the SCAN functional. SCAN is a meta-GGA developed in the last decade by John Perdew and collaborators, and it has the ability to describe also what they call intermediate-range van der Waals interactions; in fact, we get a decent description of water without having to add extra van der Waals interactions, which I like because it is a seamless approach. However,
while this seems to work sufficiently well in a system like water, it does not work well for systems in which the van der Waals effects are even more crucial and the interactions that hold the system together are weaker, like physisorption, or organic crystals, crystals made of organic molecules. In some organic crystals that we tried in the past, SCAN, although slightly better than PBE, would not work, and in fact adding van der Waals interactions is important there. Now, with van der Waals interactions it is important to do something better: the way van der Waals interactions are added in most calculations is just as a sum of two-body terms, which is already a drastic approximation, because a system is really described by two-body terms only if it is two-body. Going beyond that is difficult; for instance, Alexandre Tkatchenko, and also Rob DiStasio and others, have made progress in that direction. I think these techniques could also be included to some extent in the deep potential: for the local region of the deep potential, or machine-learned potential, in the examples I have shown today we used a radius of six angstrom for the local environment, but one could go beyond that, say to eight angstrom or so, and it would still not be exceedingly expensive. So that is something that can be done: the van der Waals effects within this range could be learned, and the effects beyond it, which are much weaker, could be included in the way we include the long-range electrostatic interaction in the case of the Ewald summation. So there are possibilities. We are not involved in that, as far as van der Waals is concerned, but I am quite sure that there are people around the world who are
working on that right now.

Okay, it is painful to cut such a lively discussion, but I think it is high time to close this afternoon session. I would like to thank Roberto again, and all the participants, for having stayed with us thus far. Thank you very much to all; we reconvene tomorrow morning, European time, with Roberto. I hope we will soon meet in person. Thank you very much again, and have a nice day, or night, or whatever it is at your place. Goodbye, everyone.