What I'm interested in is using machine learning to start from a large pool of material or crystal candidates that you would then screen with DFT calculations to find thermodynamically stable crystal structures. There are actually several databases already available for this, such as the Materials Project and the OQMD, both of which contain several hundred thousand calculated structures. However, the problem with relying on DFT alone is that it is quite slow: it can take several hours to calculate the properties of a single crystal structure. What you typically want is an initial filter that screens out structures that are unlikely to be relevant for further evaluation. The way this is typically done nowadays is with element-based heuristics: you substitute elements in already known crystal structures, using estimated probabilities that swapping one element for another will yield something stable, and then you run the DFT calculations on the results. This works, but it generally has a hit rate of only around 10 to 15%, meaning only 10 to 15% of the suggested structures turn out to be new stable structures. So instead I'm going to talk a little bit about using energy-based machine learning models to accelerate the screening: a machine learning model trained to predict formation energies for crystal structures, and whether such a model can improve the screening. The question is then what kind of machine learning model you would use for this task. You need the right tool for the job, essentially.
But before I go further into what model you would use, I will quickly talk about convex hull stability, which is essentially the first measure you use to assess whether a crystal structure is thermodynamically stable or not. The figure here shows the convex hull for a binary toy system consisting of two elements, A and B. You plot the formation energy of each crystal structure against the ratio of the elements in the structure. To construct the convex hull, you take the crystal structures with the lowest energies and draw lines between them, a piecewise linear interpolation; this envelope is known as the convex hull. Any crystal structure that lies above this hull is very unlikely to be stable: it will likely decompose into a linear combination of the two neighboring crystal structures that lie on the hull. And if you find something new that lies below the convex hull, then you have something interesting that might actually be made. Of course, you would then need further stability analysis, such as analyzing the phonon spectrum and distorting the structure, but I won't go into those details in this talk. The next question is what the best representation for this kind of job is. In this talk I will not focus on any specific representation at first, but rather talk about different levels of coarse-graining of representations. By coarse-graining I mean how much information you include in the model, and I will cover anything ranging from full atomistic representations that contain complete coordinate information to representations that contain essentially no structural information, only the composition of the system.
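The hull construction just described can be sketched in a few lines. This is a minimal illustration for a binary system, not the workflow code from the talk: it builds the lower convex hull of (composition fraction, formation energy) points and returns each structure's energy above the hull, assuming the pure elements at x = 0 and x = 1 are included.

```python
import numpy as np

def energy_above_hull(x, e):
    """Energy above the lower convex hull for a binary system.
    x: composition fractions of element B; e: formation energies.
    Returns one distance per input structure (0 for on-hull points)."""
    pts = sorted(zip(x, e))
    hull = []
    for p in pts:  # monotone-chain sweep for the lower hull
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # Pop the middle point if it lies on or above the new segment.
            if (x2 - x1) * (p[1] - y1) - (p[0] - x1) * (y2 - y1) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    hx = [p[0] for p in hull]
    hy = [p[1] for p in hull]
    # Distance of each structure above the piecewise-linear hull.
    return [ei - np.interp(xi, hx, hy) for xi, ei in zip(x, e)]
```

A structure with a positive value here is predicted to decompose into the neighboring hull phases; a new candidate is interesting when its value would be negative against the current hull.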
Before I get to crystals, I will briefly talk about representations for molecules, since that is a slightly more developed field. What we have here (sorry, the pointer is a bit weird) is a hierarchy of representations for molecules, ordered by resolution. At the top you have coordinate-based representations, which encode your system using its exact coordinates and typically encode all atom positions. On the second tier you have graph-based representations, which use something like molecular graphs, bonds, or SMILES to encode your structures. And on the lowest tier you have representations that contain essentially no direct chemical or structural information, only chemical properties such as the dipole moment of the molecule, the partial charges in the molecule, or its composition. You can draw a similar hierarchy for materials: at the top you would have similar coordinate-based representations of your material; on the second tier something graph- or prototype-based, where you use crystal prototypes to encode your structure; and on the bottom tier the composition-based models. The question is then which kind of representation would work best in a high-throughput screening setting. So let's begin by looking at coordinate-based representations. As you can see in this figure, taken from a review paper by Musil et al., there are a lot of different representations; in that paper they characterize many of the currently developed coordinate-based representations for machine learning. It is a very well developed field, with representations ranging from
atom-centered distribution functions, to potential fields, to simply encoding the internal coordinates of your system. A lot of these tend to be very accurate, and most of these coordinate-based representations have high structural resolution, which makes them great for predicting structural properties such as dipole moments, forces, or energies, and they can be used to construct interatomic potentials and force fields. Some applications where these representations have been used are, for example, simulating amorphous materials, modeling grain boundaries in materials, and simulating polymers. Finally, they have also been used to predict the convex hull stability of crystal structures, which is what we're interested in. However, there is a caveat with using these coordinate-based models for predicting crystal stability. Typically, what people have done in the literature is to predict the energy of a crystal from its relaxed structure. You can get very good accuracy with a model this way; however, to obtain the relaxed crystal structure you would need to do the DFT calculation in the first place. And as we can see from this high-throughput screening diagram, you want to do the DFT calculation after the ML screening, because otherwise the point is a bit moot: you have a circular problem where you need a relaxed structure to predict the energy, but for that you need a DFT calculation. So I would say that this is a little bit of cheating. What you could do instead, if you had a good enough coordinate-based machine learning model, is to relax the crystal structure into its relaxed state with the machine learning model itself.
However, I don't think the field is developed enough yet to have coordinate-based models that can do this across a very diverse chemical space. Finally, what you could do, and people have tried, is to use unrelaxed crystal structures to predict relaxed energies; however, as I will show later on, this tends to reduce the performance quite a lot. So I don't think these coordinate-based models are really ideal for use as a filter in this high-throughput screening setup. What about composition-based models? Since you typically just encode the composition of your crystal structure, they are very easy to customize to the specific task you're interested in, easy to interpret because they are so simple, and for the same reason very efficient in the low-data limit. A lot of models have already been developed for this task in the literature, spanning several different machine learning frameworks and representations that use compositions to predict formation energies for crystal structures. However, using only the composition has a fundamental problem: the representation itself is not unique. For example, as you can see here, you can have multiple crystals with the same composition but completely different energies. One way people typically handle this is to keep only the lowest-energy crystal structure per composition for training. However, even if you do this, you can have distinct compositions that lie very close to each other, so you get a very rough surface to interpolate, which makes things difficult for the model.
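The deduplication step just mentioned, keeping only the lowest-energy polymorph per composition, can be sketched like this. The tuple fields are illustrative, not from any specific database schema:

```python
def lowest_energy_per_composition(entries):
    """Keep only the lowest-energy structure for each composition.
    `entries` is a list of (composition, energy, structure_id) tuples.
    Returns {composition: structure_id of the lowest-energy polymorph}."""
    best = {}
    for comp, energy, sid in entries:
        # Replace the stored entry only if this polymorph is lower in energy.
        if comp not in best or energy < best[comp][0]:
            best[comp] = (energy, sid)
    return {comp: sid for comp, (energy, sid) in best.items()}
```

This makes the training targets single-valued per composition, but as noted above it cannot fix the roughness of the resulting energy landscape.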
Nevertheless, as you can see in these learning curves, where models are trained on the OQMD dataset I mentioned earlier and the mean absolute error is plotted against the number of training samples for several different composition-based models, the error consistently goes down as you increase the training set size, and they achieve quite low errors. So how do these models then perform at actually classifying structures that lie below the convex hull? As was found in the paper by Bartel et al., which tried to do a critical assessment of how well these models actually work: they are quite good at classifying structures that lie above the convex hull, but very poor at identifying structures that lie below it, so they could barely find any new stable structures. What they did in these figures is exclude lithium manganese transition metal oxides from the training of the models and then use the models to predict the convex hull energies. So in these figures you have the model's predicted distance from the convex hull plotted against the actual distance from the hull. You would want the model's predictions to lie either in the top box or the bottom box, because that is when you get a correct classification of whether a structure is stable or not. And as you can see, all the models correctly classify structures above the hull, but at the same time almost all of them fail, for almost all crystal structures, to correctly classify the ones that lie below the hull.
In addition, even if you had a composition-based machine learning model that could correctly classify structures on the convex hull, you have a uniqueness problem: you still have to realize the crystal structure from the composition afterwards, but one composition can have many different corresponding crystal structures. So you would then have to screen a bunch of different crystal structures to actually find the one that lies on the convex hull. So I would say that what you need for this kind of task is some kind of graph- or prototype-based representation. You need the representation to have sufficient resolution to actually be able to reconstruct the crystal structure from the representation itself, while at the same time being coarse-grained enough that relaxed and unrelaxed crystal structures map to the same representation. But there are barely any representations for materials in this class, so I think this class of representations needs a lot more focus. I think I can take some questions now, if there are any, before I move on to talk about my research.

Nice talk. The question I have: I'm not quite sure how you came up with the number of a 10 to 15% hit rate. When you talk about suitable substitutions, in the slide where you have some structure and then you replace, say, silicon with germanium, things like that. Go back, go back, the very initial one. Yeah, this one. So the 10 to 15% hit rate, what do you mean by that?

This is just when they use these heuristics for substitution.
I will get to that a little bit later, because there is a dataset we will discuss in the research part where they actually use this kind of heuristic to simulate the same materials workflow that the Materials Project is using, and they find that with this workflow 15% of the structures they generate lie on a new convex hull of stability. I think there has been some other research in the literature on this as well, so these methods typically have this kind of performance, from what I know.

Something else, if I'm allowed to ask. You talked about the representations of molecules, using coordinates, SMILES, or chemical properties, and in the other case graph- or composition-based. My question is: what about a database that combines both? Like, in some part of the data you use the atomic structure, and in other cases a mix-and-match of them, so that it's not too demanding on the dataset. Am I clear in what I'm asking?

I'm sorry, I'm not really following.

The resolution increases depending on what you choose, from atomic structure down to composition-based. Do you think it's a possibility to mix and match between them?

So, use a mix between coordinate-based and non-coordinate-based? I mean, of course it can be done, I guess, but I haven't really thought about how you would do it. How would you suggest mixing them?

Okay, I thought I would get an answer. Sorry, I shouldn't be asking you questions. Yeah, we can talk about it afterwards. Thanks.

Kevin, do you have a question? I'm online. Could you specify whether this set of techniques is just for high-throughput screening, or could it be applied more generally?
Specifically which set of techniques, for molecular systems or for compounds in general? I'm still not sure exactly which technique you're asking about, the representations or the DFT workflow. I think it's the same for molecules as well: you suffer from the same issues when you train and predict on unrelaxed structures. Fantastic.

One question here. Thank you for the presentation. When you discussed, if you go to the next slide, the comparison of predictions for structures on the convex hull versus above the convex hull: this looks like the classic imbalanced dataset problem, since you have many more structures above the convex hull than on it. Can you comment a bit on how they addressed this? Because when your data has many more negative cases than positive cases, it's natural that the model tends to go that way.

I guess I'm not sure if they actually did any rebalancing of the model, for example weighting samples by their distance from the convex hull; probably not in this case, which could maybe improve it slightly. But I don't know exactly how they did it in this paper.

Hello, thank you very much for the talk. My question is regarding the representations. Is the uniqueness problem solved when we use a graph-based representation, the second step of your pyramid, let's say?

It depends on what you mean by uniqueness, because you wouldn't be able to use a coarse-grained representation to create a force field, for example, since it doesn't have the resolution.

For materials, you mean? Yeah, because if I have a molecule I can have different conformers of the same molecule, and in that case I should take that into account. Do there exist graph-based representations that take that into account?
There are a few; I will get to it in my next talk, where I will discuss such a representation, so perhaps I will answer your question soon. Okay, I look forward to that.

Other questions? No? But you're making me run, guys. Thanks for the presentation. I have one point that I didn't understand when you were talking about the coordinate-based methods. Could you go back to that slide? Yeah, this one. I can understand that you need to do DFT to close the circle, but why do you need the energy of the relaxed structure? When you relax the structure with DFT you get its energy anyway, and you just need a self-consistent calculation to get the energy of a random structure to fit your machine learning model. So why does it need to be relaxed, because relaxation goes to the ground state?

Typically, how you do these DFT calculations for crystal structures is that you do a symmetry-constrained calculation, so you relax only along the symmetry-allowed directions, and in addition you can end up in a local minimum, so it won't relax into the ground state in most cases. Okay, thank you.

Any other questions right now? I think you can continue with the second part and take more questions at the end.

So, just to reiterate very quickly what representation you would use: the composition-based representations are not unique, which makes them unsuitable, and the full structure-based representations suffer from the circular problem that you need the relaxed structure in order to predict the energies. You would want something like a graph- or prototype-based representation; however, there hasn't been that much work on this in the literature. You could use representations like Voronoi tessellations to encode your crystal structure.
But I'm going to talk a little bit about using the symmetry operations of your crystal for the representation, using something called Wyckoff sequences. Here we have a toy crystal structure, with different atoms placed at different positions. You can essentially encode your structure by which symmetry sites the atoms lie on: the yellow crosses would correspond, for example, to one symmetry position, or one Wyckoff position, the blue pentagons to another, and the red circles to yet another. You can then take these symmetry positions of your crystal structure and feed them into a regressor to predict your energy. These representations tend to be very stable under relaxation of the crystal structure, because generally the only way for them to change symmetry is for the structure to relax into a more symmetric one, and that is relatively rare. To go into a little more detail on how we actually build this representation, or how we encode the atoms: we encode the element type using a Matscholar embedding, an embedding vector that encodes the element. Then, for the atomic position, we encode the crystal system as well as the Bravais centering of the crystal to represent the lattice itself, and we use a multi-hot embedding to encode the atom's Wyckoff position on the crystal lattice. We then pass these Wyckoff representations into a graph attention network, which updates the embedding for each atom based on all the other atoms around it, and we call this framework Wren, a Wyckoff representation network. The output of this is used to predict the actual energy of the system.
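The key property of this encoding, that symmetry-preserving relaxation leaves it unchanged, can be illustrated with a toy sketch. This is not the actual Wren implementation; the function and field names here are purely illustrative:

```python
def encode_wyckoff(spacegroup, sites):
    """Toy coordinate-free encoding of a crystal: space group number
    plus (element, Wyckoff letter) pairs. Free coordinates along a
    Wyckoff site's line or plane are deliberately dropped, so any
    symmetry-preserving relaxation maps to the same key."""
    # Sorting makes the key invariant to the order sites are listed in.
    return (spacegroup, tuple(sorted(sites)))

# Rock-salt-like toy example: the same sites in a different order
# (e.g. before and after relaxation bookkeeping) give the same key.
relaxed = encode_wyckoff(225, [("Na", "a"), ("Cl", "b")])
unrelaxed = encode_wyckoff(225, [("Cl", "b"), ("Na", "a")])
```

A coordinate-based descriptor, by contrast, would change whenever the free coordinates move during relaxation, which is exactly the circularity problem described earlier.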
What is very nice about this is that you can quite easily get an unrelaxed crystal structure back from the representation, and it is completely enumerable, so it is very easy to search through your space. In most cases, as I said, these representations are unique; we found some edge cases where you can construct multiple crystal structures from one Wyckoff sequence, but in practice this seems to happen only for very high-energy crystal structures that lie far above the convex hull, which we're not very interested in anyway. Finally, you can do a DFT relaxation on the crystal structure to get the final structure you want. Our discovery workflow is then: you start from a huge library of enumerated Wyckoff positions (oh, this got weird when I moved this line, apparently, but anyway). You put these crystal structures through the Wren model to get the formation energies of the crystals. Then you compare them to the current convex hull, select the ones that lie below it, reconstruct them into crystal structures, and finally validate with DFT whether they actually lie below the current convex hull. To see how this model performs, we first looked at two different datasets. One is the Materials Project, which is a very large database of crystal structures, and the other is the WBM dataset, where they essentially tried to mimic the Materials Project discovery workflow using the heuristic-based substitution method I discussed earlier. If we look at the learning curves first, where we have the mean absolute error of the model on out-of-sample data plotted against the number of training points, we can see that the error goes down.
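The workflow steps above can be sketched as a small pipeline. All four callables here are placeholders (a trained energy model, a convex-hull lookup, and a DFT validation step), not real APIs from the talk:

```python
def screen(candidates, model, hull_distance, run_dft):
    """ML pre-screen followed by DFT validation.
    model(c): predicted formation energy of candidate c.
    hull_distance(c): current hull energy at c's composition.
    run_dft(c): DFT-computed energy relative to the hull (<0 = stable)."""
    # Step 1: keep only candidates the model predicts to lie below the hull.
    selected = [c for c in candidates if model(c) - hull_distance(c) < 0.0]
    # Step 2: only this reduced set is passed to the expensive DFT check.
    return [c for c in selected if run_dft(c) < 0.0]
```

The point of the design is that the cheap model call runs over the full enumerated library, while DFT only sees the small pre-selected fraction.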
The reason we have two plots here is that one is for a single model and the other uses an ensemble of multiple models, where we take the average of the ensemble predictions as the energy. You can get very low errors. But what is perhaps more interesting than the error itself, because as you saw with the composition-based methods, a model can have a good-looking error and still not perform well, is the error of the model plotted against its distance from the convex hull. Here we have the error of several different models plotted against their distance to the convex hull: at the top the Wren model, the model we just talked about; then a Voronoi tessellation model, which is one of the few other coarse-grained representations for crystal structures; and CGCNN, which is a full structure-based neural network for crystals. The cone here basically marks the region where the mean absolute error is smaller than the distance to the convex hull. You essentially want your model to be good inside this cone, because the better it performs inside the cone, the better you will be able to classify structures as lying above or below the convex hull; it doesn't matter that much if it performs poorly far from the cone, because those structures are very unlikely to be stable anyway. What we can see first is that all the models have their lowest error inside this cone, which is good, and what we want. The second thing is that the CGCNN model performs better than the other models by quite a substantial margin. However, this model is trained on relaxed structures.
If you instead train it on the pre-relaxed structures, the model error deteriorates quite substantially, to this red curve here, and it actually becomes one of the worst-performing models. Whereas if you look at the Voronoi-based model, which is coarse-grained: while it initially performs poorly compared to the other models, its performance doesn't change much when you train it on pre-relaxed structures. We also tried this for the Wren model, but there was essentially no difference in performance between training on pre-relaxed and relaxed structures, which is why we only show one curve for it. So how well does the model then perform at actually classifying stable structures? What we have here is four histograms, where the green and red histograms correspond to structures the model predicts to lie below the convex hull, and the orange and blue to structures the model predicts to lie above it. The green ones are true positives: correctly classified as lying below the convex hull. Orange is false negatives: the model misses structures that do lie below the hull. Red is false positives: the model thinks they lie below the hull, but they actually don't. And blue is true negatives: correctly classified as lying above the hull. You can see here that, unlike the stoichiometry-based models, which barely captured any structures below the convex hull, we manage to capture a substantial portion of the structures that do lie below it: the model correctly classifies 38% of the structures that lie below the current convex hull, which is more than 2.5 times better than the heuristic models.
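The four histogram categories just described reduce to a standard confusion count over hull distances. A minimal sketch, with "stable" defined as a hull distance at or below zero:

```python
def stability_confusion(pred_dist, true_dist, threshold=0.0):
    """Count true/false positives/negatives for stability classification.
    pred_dist: model-predicted distances to the convex hull.
    true_dist: DFT-computed distances. Values <= threshold count as stable."""
    counts = {"TP": 0, "FN": 0, "FP": 0, "TN": 0}
    for p, t in zip(pred_dist, true_dist):
        pred_stable, truly_stable = p <= threshold, t <= threshold
        if truly_stable:
            counts["TP" if pred_stable else "FN"] += 1  # green / orange
        else:
            counts["FP" if pred_stable else "TN"] += 1  # red / blue
    return counts
```

The 38% figure quoted above corresponds to the recall on the stable class, TP / (TP + FN).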
Finally, we wanted to see how this workflow would perform in a more prospective manner. We started from a bunch of initial structures, did substitutions of the elements in these crystal structures, used the model to pre-screen the energies, and then validated with DFT the compounds that the model predicted to be stable. We started with roughly 415,000 structures, and the model predicted 37,000 of them to lie below the convex hull of stability. Since we used an ensemble model, we could also get an uncertainty from the model, so if we additionally require that the predicted convex hull distance plus the uncertainty of the prediction lies below the hull, we end up with around 5,700 structures. After validation with DFT (for various reasons we only managed to finish around 4,700 calculations), we found that 33% of the completed calculations lie below the convex hull, which is still a big improvement. Finally, I will just quickly mention that we are now working on an even more prospective application, where we are trying to optimize several properties simultaneously by looking for interesting dielectric compounds, which have applications in, for example, flash storage, CPUs, and RAM. What we are trying to optimize is both the band gap of the crystal structures and their total dielectric constant, and we are also collaborating with experimentalists to see if we can actually synthesize some of the compounds the model suggests. With that, I would just like to summarize: I have given a very cursory overview of which representations work in high-throughput screening, and I think there needs to be a lot more focus on coarse-grained models for materials screening.
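The uncertainty-tightened selection step above (predicted hull distance plus uncertainty still below the hull) can be sketched like this. A minimal illustration of the idea, not the actual ensemble code:

```python
import statistics

def passes_uncertainty_filter(ensemble_hull_distances):
    """Each ensemble member predicts a distance to the convex hull
    (negative = below the hull). A candidate passes only if the mean
    prediction plus one standard deviation is still below the hull."""
    mu = statistics.mean(ensemble_hull_distances)
    sigma = statistics.stdev(ensemble_hull_distances)
    return mu + sigma < 0.0
```

This is what cut the 37,000 model-selected candidates down to roughly 5,700: only predictions that are both stable on average and confidently so survive the filter.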
I also discussed one of these coarse-grained models that we have developed, which can be used for these high-throughput tasks. Finally, I would like to thank, first of all, Rhys Goodall, the PhD student who did most of the work I just talked about. I would also like to thank Alpha Lee, my group leader in Cambridge, and finally Rickard Armiento and Abhijith Parackal, our collaborators from Linköping University. You can find the preprint of the paper through this QR code. Okay, thank you.

So we have plenty of time for questions if you have any. Thanks, Felix, very nice. Can I just ask about the generation step? How do you deal with the fact that most Wyckoff sites have one or two degrees of freedom, so there's a line or a plane you can put things on? How do you choose where on that line or plane to put things?

Typically, what we found in practice is that there is only one minimum along these lines, so the structure will relax into the correct one. There were some edge cases where it doesn't and you can have multiple minima, but this seems to be very rare, and we generally found them for mostly high-energy structures, so it doesn't seem to be a big problem in practice.

Can I just ask, could you go back a few slides, keep going, yeah, okay, stay there. In the bottom-left picture, say the yellow cross: that could be anywhere along that line, with all four of them obviously moving together? Yeah, for example, the yellow cross here would be along this line, the red circle would be along this line, and you can even have symmetry planes that the atoms lie in, so this one would lie in this plane.
So exactly: in the bottom-left picture, the yellow cross could move inside the circle of blues, for example? No, if you moved it into the blue part, then it would break the symmetry, so it would correspond to a different structure. Okay, so it's no longer on that Wyckoff position: it can move inside the circle of blues but still on the diagonal. Yeah. Okay, thanks.

So we have a couple of questions from the Zoom chat. One was on dataset sizes, and whether there's a lower limit on the data size that you need. That's hard to answer; I think it really depends on what you want to solve with it, but more data is always better, especially for these neural-network-based models, which are generally quite data hungry. If you use something like a random forest, you can typically get away with less data.

The second question was: you talked about energy models, but what about models where you need a derivative, like strain or stress? How would you proceed then? That's a good question, and I can't come up with an answer off the top of my head, but you might be able to encode it into the symmetry positions themselves; I'm not sure exactly how. What I've been mostly concerned with is predicting whether crystal structures are stable or not, for which you only need the formation energy, since if a structure is not going to be stable, you are very unlikely to be able to make it in the end.

Nice talk again. If you can go back to this, slide 32 I guess: can you clarify one thing, what does the symmetry representation with Wyckoff positions add over coordinate-based methods? What is the advantage of doing it this way?
You mean instead of using a coordinate-based representation? Because you don't have to relax the crystal structure. In most cases you get the same representation for the relaxed and unrelaxed structure with these symmetry positions, whereas if you use a coordinate-based model, the representation can change quite substantially when you relax the structure.

Okay, so just to iterate on this question: when you encode with the coordinate-based representation versus the symmetry-based one, is there any kind of generalizability that gets added by doing it the symmetry way?

It depends on what you mean by generalizability.

I mean across different structures, whether the model performs better in general.

If you train them on the relaxed structures, then, as I showed, the coordinate-based models actually perform a lot better. But if you train them on the unrelaxed structures to predict the relaxed energies, the performance of these coordinate-based models goes down quite a lot.

And on the slide, what do you mean by pre-relaxed? Is that the unrelaxed ones?

I don't remember exactly how we did it, but it's a very cheap initial relaxation of the crystal structure, just to make sure that it's not something completely nonsensical.

And could you go to the database slide, the one where the numbers move down? No, no, the one with the database where you have the 33%... sorry, the slide with the data, where the numbers move down. No, the next one. Yeah, this one. Okay, so this question may not make a lot of sense: you start from these novel candidates, like the 415,420, and then you end up with 1,558.
And since you talked about the new JNC data set, where you have substituted new structures: I just want you to comment on what would happen if we deleted some of the data that already existed and did this experiment. Would the predicted novel materials in that case include the already existing ones?

Sorry, could you say that again?

Say we take the data set only up to 2012, but we're living in 2020, and we do the same thing. Will it be able to predict some of the structures that we already knew in 2020?

You mean, if we retrain it and try to do the predictions again? We actually did these experiments, and the model performance becomes worse, simply because there are fewer structures left that lie on the convex hull. So each time you iterate this, while the model gets better, there are also fewer possible candidates left to find.

Thanks. A quick question from Zoom: you showed a model for the band gap and for the total energy. Is there anything else these types of machine learning models can predict?

This is what we have tried so far, but in principle you should be able to apply them to predict any property you want. I cannot say how well the model will actually perform, because that's generally something you just have to try to figure out.

And in this workflow, on this slide, there is this sigma value that you added to decrease the number of candidates. How did you decide the value of sigma?

So, for this workflow we used an ensemble of neural networks, so we trained several models. The mean over the ensemble is the energy prediction that we use, but you also get a standard deviation, which is the sigma. It's basically a measure of how uncertain the model is, of how much the different models disagree with each other.

And you shifted it by just one?
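The ensemble-based filtering described here could be sketched roughly as follows. The model outputs, the number of models, and the stability threshold below are all made up for illustration; in the real workflow the predictions would come from the trained networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical predictions: rows = models in the ensemble,
# columns = candidate structures (energies relative to the hull, eV/atom).
n_models, n_candidates = 10, 5
predictions = rng.normal(loc=0.0, scale=0.05, size=(n_models, n_candidates))
predictions += np.array([-0.10, 0.02, -0.01, 0.30, -0.25])  # per-candidate offsets

mean = predictions.mean(axis=0)   # ensemble energy prediction
sigma = predictions.std(axis=0)   # model disagreement, used as the uncertainty

# Adding one standard deviation makes the cut stricter: a candidate is kept
# only if even the pessimistic (mean + sigma) estimate is below the hull at 0,
# which reduces the number of candidates sent on to DFT.
keep = (mean + sigma) < 0.0
print(keep)
```

Shifting by one sigma, as in the answer above, trades recall for precision: uncertain borderline candidates are discarded rather than passed on to the expensive DFT step.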
Yeah, we shift by one standard deviation. Okay.

One more question. Hello. I have a question about the DFT part. When you say DFT, do you mean periodic DFT?

This is standard GGA-PBE, I think. We used VASP for these calculations.

And do we have any other questions on Zoom? No. Good. So we're a little bit early for the coffee break, that's good. We have one more lecture this afternoon, so please come back for that; our lecturer is already here. Okay, so coffee break now.