We are live — let me give it just a second to finish finalizing these settings, redirecting. And we are good, okay, wonderful. Hello again, everyone, welcome back in. At the start of this block and the next block I want to make a quick reminder: tonight is our virtual tour of the Places & Spaces: Mapping Science exhibit. As I mentioned at the beginning of the meeting, it's the one part of the conference that won't be happening in Crowdcast; it will happen on Zoom, so that we can have a nice relaxed exhibit tour and then hopefully a little social hour, some time for us all to chat. I'll post that Zoom link around an hour before the room opens, in about two hours, so look for it in the Crowdcast chat as well as on the website.

So without any further ado, let me introduce our next speakers. We have with us Maximilian Noichl and Andrea Loettgers, who are going to talk about computational analysis of interdisciplinary model template transfer. And I have to brag on them for a second: they submitted what is probably the prettiest abstract file I have ever seen for anything in my entire life. So I'm really excited just to look at the talk which follows. Please take it away.

Thank you very much. We are very happy to be at this conference and to have a chance to talk about our work. In the next thirty minutes or so, Maximilian and I will present some of our computational analysis of interdisciplinary modeling practices, or model transfer, as the title says. I should mention that this is part of a larger project on scientific modeling practices in philosophy of science, on which I'm working together with Tarja Knuuttila at the University of Vienna.
What we would like to argue with this project, and with our paper here, is that these kinds of computational methods can be applied very fruitfully in philosophy of science — that, at least, is what I have learned over the last months.

Here is the outline of the talk, which consists of three parts. I will start by saying a little bit about our general interest in interdisciplinary modeling practices, and especially about the transfer of models between different scientific domains. In the second part, Maximilian will take over and talk about our computational analysis; given the topic of the conference, that will be the main focus of our paper. And finally we will present some of our results.

So let me start with some general observations and remarks concerning interdisciplinary modeling practice. It is perhaps a very general observation that some models, such as the Ising model or the Lotka-Volterra model, can be found outside their original disciplines, in other scientific domains. But if you think about it a little, some interesting questions pop up from this observation — especially when you ask how these models actually get transferred into those different domains, and how they become applied in domains other than, for example, physics in the case of the Ising model. A second observation, more closely related to model transfer, is that mathematical models such as the Ising model are not transferred just by themselves: the transfer also includes concepts and mathematical and computational tools which are attached to them.
As an example, look at the case of the Ising model — a case study which, I should mention, I did earlier with Tarja Knuuttila. The kinds of concepts that were transferred when the Ising model got carried into, for example, neural networks were concepts such as phase transitions and critical exponents, which by themselves do not immediately have a meaning in this new domain. When it comes to mathematical tools, we find scientists working on neural networks with partition functions and master equations, and among computational tools they make use of mean-field approximations and really complicated techniques such as replica methods.

These kinds of case studies led Tarja and me to introduce the notion of model templates. We took it, in a sense, from Paul Humphreys' notions of computational and theoretical templates, but I don't have time to discuss that here. What we try to capture with the notion of a model template is the intertwinement of a mathematical structure, which is instantiated by the mathematical model, with mathematical and computational tools as well as concepts — a whole package which, taken together, may depict some kind of general mechanism that is potentially applicable to any subject or field displaying particular patterns of interactions. I know this is a lot to swallow and digest, but this is our result from thinking about what is going on in these transfers of models between scientific disciplines.
The next step we want to take is to broaden our view. Together with Maximilian, we decided to try to make use of large-scale computational analyses, which aim to go beyond individual case studies and to explore the dispersion and application of model templates. A similar point was already made yesterday by Henrik Karlsson — I hope I pronounced that the right way. As Maximilian will show in the second part of the talk, going broader means that we try to contrast the thematic contents of scientific articles with the mathematical tools that are used in them; he will say more about that.

One final thing I would like to mention here: once we had decided that we wanted to do this large-scale computational analysis, we had to decide on a new model template to look at. Our choice was to go for synchronized oscillations, partly because they are so beautiful, as you can see with these little fireflies here. In forests in Malaysia there are thousands of these fireflies blinking; at first the blinking looks somehow random, but over time it becomes synchronized. This is one of the favorite examples of the applied mathematician Steven Strogatz, who thinks that synchronized oscillation is a kind of universal phenomenon — one you can find not only in fireflies but in orbiting planets, lasers, and all kinds of other systems. He writes that, at a deeper level, all these different systems are connected by the same mathematical theme: self-organization, the spontaneous emergence of order out of chaos.
Put a little less poetically, in our understanding these are mathematical models which instantiate this kind of self-organization leading to synchronized oscillation. This means, first, understanding these mathematical models, and second, seeing what kinds of concepts are related to them and how they get transferred. There are many models for synchronized oscillations; the one we are focusing on at the moment is the Kuramoto model. As you can see from the equation, because of the sine function it is a non-linear equation, which means non-linear dynamics — which you can see nicely in the animation, because it shows these limit cycles. The coupling is via the phases, which appear in the argument of the sine function. This second animation is a little more complicated because we have a phase shift by this alpha, and you can see that you can use the model not only to study synchronizing behavior: people also use it to study chaos, because here these noisy and random patterns start showing up. So now it's time for Maximilian to go on.

Okay, thank you, Andrea. So, as said at the outset, we were interested in looking at patterns of synchronization and oscillation. To construct our sample of articles, we started by searching these keywords on bioRxiv and arXiv — two preprint servers which, as has been shown, are quite nicely integrated into scientific practice — and we downloaded the respective articles to work with them further. We processed this data in the way depicted in this flow chart.
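For readers following along without the slides: the Kuramoto model is standardly written as dθᵢ/dt = ωᵢ + (K/N) Σⱼ sin(θⱼ − θᵢ). The synchronization behavior the speakers describe can be sketched in a few lines; this is a minimal illustrative simulation (not the speakers' code — all parameter values are assumptions):

```python
import numpy as np

def simulate_kuramoto(n=100, K=2.0, dt=0.01, steps=5000, seed=0):
    """Euler-integrate the Kuramoto model:
    dtheta_i/dt = omega_i + (K/n) * sum_j sin(theta_j - theta_i)."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, n)  # random initial phases
    omega = rng.normal(0.0, 0.5, n)       # heterogeneous natural frequencies
    for _ in range(steps):
        # pairwise coupling through the sine of the phase differences
        coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta = theta + dt * (omega + (K / n) * coupling)
    # Kuramoto order parameter r in [0, 1]: r near 1 means full synchrony
    return float(np.abs(np.exp(1j * theta).mean()))

print(simulate_kuramoto(K=4.0))  # strong coupling: oscillators lock together
print(simulate_kuramoto(K=0.1))  # weak coupling: phases stay incoherent
```

Above a critical coupling strength the order parameter r jumps toward 1, which is the synchronization transition the fireflies illustrate.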
You can see we have the articles from bioRxiv and arXiv, and what we are interested in is contrasting the thematic content of the articles with the mathematical content, to see how mathematical structures — which we associate, to a certain degree at least, with model templates — tie together or divide certain thematic structures. So what we did is extract the first four formulas from each article and parse them into a processable format. In our case that means converting pictures of formulas from bioRxiv into LaTeX (from arXiv we can luckily use the LaTeX directly), and then converting this LaTeX into MathML, a structured mathematics format; I will show in a minute why this is useful.

On the one hand, we process the texts of the articles in a way that tries to bring out the thematic structure. What we did there was, I think, a relatively uncontroversial, usual machine-learning routine for text processing. We split the articles into text vectors, where we basically count the words that appear in each article and build a big table out of that; then we apply TF-IDF scoring to it; then we use a singular value decomposition to reduce it to around 300 dimensions; then we apply UMAP, the mapping algorithm by McInnes and Healy; and then we cluster on the internal k-NN graph of the UMAP using Louvain clustering. What this gives us in the end is, I think, a quite useful mapping, which hopefully appears here in a second or two. It does not — that's unhappy; that's not what I wanted to have happen. I will try whether I can open it externally. I cannot — ah, I can do it here. Okay, great. Sorry for the technical hiccup. Here's the mapping that you should have seen right in the browser before; I don't know why it didn't work. In this map, each point represents one of the articles.
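The counts → TF-IDF → SVD stage of the routine just described can be sketched with scikit-learn. This is an illustrative sketch with toy documents and tiny dimensions, not the speakers' code; the subsequent UMAP and Louvain steps are only indicated in comments because they depend on additional packages:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

# toy stand-ins for article full texts
docs = [
    "coupled oscillators synchronize via phase interactions",
    "phase synchronization in networks of oscillators",
    "gene expression patterns in the genome of a species",
    "protein networks and gene regulation in the genome",
]

# word counts with TF-IDF weighting in one step
X = TfidfVectorizer().fit_transform(docs)

# SVD down to a low-dimensional space (the talk uses ~300 dimensions)
Z = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)
print(Z.shape)  # (4, 2)

# In the full pipeline, Z would then be mapped with UMAP (McInnes & Healy)
# and clustered with Louvain community detection on UMAP's internal
# k-NN graph, e.g.:
#   import umap
#   emb = umap.UMAP().fit(Z)   # emb.graph_ holds the k-NN graph
```

On real data the SVD step (often called latent semantic analysis) denoises the sparse TF-IDF matrix before the nonlinear UMAP embedding.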
Hang on, we're still not seeing it yet. I think you need to switch back which window you're sharing in Zoom. Oh dear, yes, of course — sorry for that. No worries, no worries. Then I will just switch into Firefox for a second. Can you see it now? Yes, got it. Lovely.

Okay, so again, sorry for the hiccup. Here we are. This is the thematic map that is given to us by UMAP. Each little dot represents one of the articles in our sample, and each dot is close to the articles that are similar to it — in terms of cosine similarity, in this case. This gives us a broad overview of the themes present in our sample. For example, towards the right we have three clusters which are basically drawn from astrophysics and which are almost entirely composed of papers from arXiv. If you look at the little bar charts on the side, which show the temporal distribution of these clusters, you can see that they have been present from the very beginning in the 1990s, because arXiv is of course quite an old preprint server. Up here, the same story holds for the general physics clusters. In this more pinkish color we have a cluster that, after investigating, we have identified with applied mathematics, where people seem to explore topics of synchronization and oscillation — like the Kuramoto model which Andrea mentioned earlier — in a rather theoretical fashion. Down here we have a cluster with the keywords algorithms and synchronization, more interested in informatics. And as we go down here, we have this little bridge which one might associate, to a certain degree, with bioinformatics; there are also still a lot of arXiv papers in there.
Then down here towards the lower right we see the clusters which are mostly drawn from bioRxiv, with keywords such as neuron, brain, stimulus, cortical, gene, protein, species, genome. So these are actual life-science clusters, and I think this quite nicely shows us the thematic distribution of our sample. Now I will have to switch back to the presentation — give me a second, let's see whether the next one works. Okay, one moment please — there you go. You can see this again? Yeah, great.

So now, how to get at the mathematical structure of our sample? We have all these formulas, and we have parsed them into a unified format. What we then do is apply Tangent-CFT to them. That's a technique developed by Mansouri and colleagues — and I think they are still working on it — in trying to build a search engine for mathematics, which should basically also be useful as an interface to, for example, arXiv. How does that work in detail? We parse each formula into its parse tree, that is, into its order of operations, and then we extract from this parse tree each tuple of two linked entities. For example, here we have the formula x + y² = 0: we can parse it and split it at the equals sign, then split it at the plus, and then the square is of course applied to the y, and so on. We get all these tuples out of there, and we encode them in a form that keeps both entities as well as their relation — which one is above the other in the parse tree. That gives us a representation of the formulas which takes care not only of what is in the formula in terms of symbols, but also of how these symbols are related and which symbols apply to which other symbols.
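The tuple-extraction idea can be illustrated with an ordinary expression parser. This sketch uses Python's `ast` module on the talk's example x + y² = 0 (written in Python syntax) purely as an analogy — the real Tangent-CFT pipeline operates on MathML trees, not Python code:

```python
import ast

def parent_child_tuples(src):
    """Parse an expression and emit (parent, child) node-type tuples —
    a rough stand-in for Tangent-CFT's tuples over formula parse trees."""
    tree = ast.parse(src, mode="eval")
    pairs = []
    for parent in ast.walk(tree):
        for child in ast.iter_child_nodes(parent):
            pairs.append((type(parent).__name__, type(child).__name__))
    return pairs

# "x + y**2 == 0" stands in for the formula x + y^2 = 0
print(parent_child_tuples("x + y**2 == 0"))
```

Each tuple records which operation sits above which other entity in the tree, so two formulas with the same symbols but different structure produce different tuple sets.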
That is, I think, a rather clever way of representing formulas which Mansouri and colleagues came up with. What we can then do — since we now have just lists of little tuple strings — is pass them to a language model. We pass them to fastText and try to figure out, through closeness relations, which tuples commonly appear in the context of which other tuples. fastText has the nice extra that it also takes care of subword components, so the internal structure of each tuple is used in the analysis as well. We arrive at a vector representation of each tuple, and then of course we can average these tuple vectors into vectors representing whole formulas, with the hope that these final formula vectors incorporate similarity relations between formulas. Then we can again reduce these vectors with UMAP into a two-dimensional mapping, and we have a map of formulas.

Let's see if it works this time — great, it works at least reasonably well. Here's the mapping we get. Each data point here is a formula. In this case we have applied HDBSCAN clustering to it — a somewhat different clustering algorithm which has a notion of noise, which is quite nice because, as you can see, this UMAP is much noisier, much less clean than the one we had earlier. And now we can just investigate this mapping. For example, we want to look for the Kuramoto model, which, as Andrea mentioned, has the sine term in it. So we can look for formulas that contain a sine — and I'm cheating here a little, because of course I already know where to look. We can zoom in on this region up here and have a look: what are the formulas in this corner of the map? Now we will have to wait a little second. Oh, there it is already. I hope you can see this pop-up.
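The averaging step — from tuple vectors to one formula vector — can be sketched as follows. The three-dimensional toy vectors and tuple names here are made up for illustration; in the actual pipeline they would come from fastText trained on the corpus of tuple strings:

```python
import numpy as np

# toy vectors standing in for fastText embeddings of tuple strings
tuple_vecs = {
    "Add#x":  np.array([1.0, 0.0, 0.2]),
    "Add#y":  np.array([0.9, 0.1, 0.1]),
    "Pow#y":  np.array([0.0, 1.0, 0.3]),
    "Sin#th": np.array([0.1, 0.9, 0.8]),
}

def formula_vector(tuples):
    """Average a formula's tuple vectors into one formula vector."""
    return np.mean([tuple_vecs[t] for t in tuples], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

f1 = formula_vector(["Add#x", "Add#y"])   # a 'sum-like' formula
f2 = formula_vector(["Add#x", "Pow#y"])   # shares a tuple with f1
f3 = formula_vector(["Sin#th", "Pow#y"])  # a 'sine-like' formula
print(cosine(f1, f2), cosine(f1, f3))     # f1 sits closer to f2 than to f3
```

Averaging is crude (it ignores tuple order and weighting), which is exactly the limitation the speakers raise later when asking for a smarter aggregation.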
Here you can see that in this region there are a lot of similar formulas — versions of the formulas we saw just a moment ago in the presentation. These are all formulas that can be used to introduce Kuramoto models. And if you look at the titles of the papers from which these formulas are drawn, you can see that the Kuramoto model indeed appears quite often even in the titles. So the next step is to zoom in a little closer on this part of the map, where the Kuramoto model lives, and project onto these formulas the colors of the thematic clusters we identified earlier. Each formula now gets colored in the color of the cluster of the paper from which it was drawn. What we see is, in a sense, a picture of how thematically coherent the application of the Kuramoto model is in the sample we've looked at. I think we can give a two-part analysis of this. On the one hand, there is clearly one cluster that heavily dominates: the more pinkish cluster which we earlier identified with applied mathematics. That is where a lot of the applications, a lot of the usages of this Kuramoto formula come from. But on the other hand, we can also see a range of colors present in this region — these little yellow and bluish clusters — so there is also a broad array of system-specific applications of the Kuramoto model, which we can now of course go into detail and explore. The whole thing gives us a general idea of the coherence of this model template.

So, to sum up our current results — and this is very much work in progress; we are still in the course of analyzing these maps, and there are quite a few other things we want to do with them.
In this talk we have presented a method for operationalizing model templates. Going back to the more philosophical outlook on all of this: the degree to which these attempts at operationalization, and at application to large samples of articles, are successful provides an argument that model templates are indeed a good way of analyzing at least this type of knowledge transfer between scientists. As a more specific conclusion from our talk, we can now say that the Kuramoto model functions as a model template which is applied in both subject-dependent and subject-independent contexts. There are people working with — or, if you will, playing in a serious sense with — the Kuramoto model in a very abstract way, as well as people applying the Kuramoto model to very specific contexts.

Okay, so that is the end of the talk. For the discussion I also want to throw out some ideas we have for further work, and to invite the ideas of our listeners and viewers, because I suppose they have opinions on what you can get out of these methods. The first thing we are still thinking about and struggling with is the contrast between qualitative analysis with these mapping algorithms — where you basically run the map and then work through it — and quantitative analysis. How do you deal with the results of UMAP or t-SNE or some other algorithm that gives you a nicely interpretable visualization; how do you move from that to a quantitative analysis and a quantitative result? What we've been thinking about especially is the question of how to compare different embeddings. For example, one thought I was having was to cluster on them and then compare the clustering solutions.
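For the "cluster and compare the clustering solutions" idea, chance-corrected agreement measures are one standard option. A brief sketch with scikit-learn and made-up labels (illustrative only, not the speakers' analysis):

```python
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

# hypothetical cluster labels for the same eight articles, obtained from
# two different embeddings (e.g. thematic vs. formula-based)
thematic = [0, 0, 0, 1, 1, 2, 2, 2]
formulas = [1, 1, 1, 0, 0, 2, 2, 0]

# Both scores are invariant to label permutation; identical partitions
# score 1.0, while unrelated partitions score near 0 (ARI is additionally
# corrected for chance agreement).
print(adjusted_rand_score(thematic, formulas))
print(normalized_mutual_info_score(thematic, formulas))
```

This addresses only label agreement, not the deeper worry raised later in the Q&A that the choice of clustering granularity and algorithm itself shapes the result.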
The other question is, of course, whether somebody has an idea for a smarter way of getting from the tuple vectors to formulas, and then to papers, than just taking averages — that is usual and quite commonly done with word vectors too, but I'm not sure it's the best way. Maybe somebody has an idea and wants to tell me about it. So thank you for your attention, and I think we can move to the discussion of the questions.

Fantastic, this is really impressive stuff. I'm really floored by the moment when you zoom in and find the cluster of similar formulas — that's a heck of a shot, just amazing, really great stuff. I'm waiting on more ideas to come in; I'm sure they will in the chat. Mostly the chat is filled with people whose minds are blown by how cool this was. But I do have one question already from Eugenio Petrovich, who asks — it's kind of a disciplinary-norms question — do all the papers that use the model tend to explicitly cite the mathematical formulas associated with it? Or are you thinking about trying to find informal invocations of the model, ways to capture less structured uses of these model templates?

So, should I take this? Yeah, you can take it. Okay, so actually in the case of the Kuramoto model, I think it is really, really common that people just slap the formula in as the first or second equation of the paper. And actually, as a funny side note, we have found people who obviously copy-paste the formulas between different papers. You see the same formula very close to each other, you check the authors, and you think: ah, okay — somebody didn't take the trouble to change the variables or something like that. So I think that especially for the analysis of model templates, we can be quite optimistic that very similar formulas will often be used, because people introduce them that way.
But in the long run, of course, we want an analysis which takes care not of one specific formula but of a larger grouping of formulas with a lot of co-mentions, if you will, which should bring us to more of a family-resemblance notion of coherently used formulas. Then, hopefully, it shouldn't matter that much whether one specific formula shows up, if the grand scheme of things remains similar. But we will see about that. Thank you.

Great, thanks. I'll jump in with a question of my own, picking up on something in Henrik's talk last night. In your larger visualization, have you had a chance to click around and, if you will, be surprised? Have you found places where similarity metrics have lumped together traditions of modeling, or drawn connections between domains, that you really didn't expect — in cases other than the Kuramoto model? Have you seen things like that yet?

Yeah, we are really at the beginning of it. I've started to go through this map, and sometimes, yes, I'm surprised at what I see. At the moment we look at the Kuramoto model, but then I find it in the modeling of earthquakes and all these kinds of things. I wasn't expecting those kinds of connections. So I think it's really, really helpful to see how these models really get used by scientists, and to ask why a particular scientist recognizes some kind of similarity between synchronized oscillators and how earthquakes form — it's not obvious to me. These are the kinds of questions where you can really start to dig in and go deeper, and also get a richer notion of this model template concept, which we try to develop in the long run with this method.

Sure, that's great, thank you. Another question from John Muncie, who says: absolutely interesting work, thanks for the paper.
I was wondering how you deal with different notational variants of the same PDE, where the same mathematical idea is used but the notational formalisms are rather different — for example, a matrix formulation versus a scalar formulation of the same dynamics. Is that a problem you're worried about?

To a certain degree. My answer would come from two sides. On the one hand, Mansouri and colleagues in their system try to take care of that to a certain extent by using this relational way of looking at formulas: if something stays the same between the notations, there should still be at least some sort of overlap. One thing we found quite interesting when we played around with the algorithm in the early stages of testing was that it quite nicely captures that the plus sign and the sum sign — the sigma — mean the same thing, or very similar things. That's something the algorithm can actually get at. On the other hand, I am not entirely decided on whether I want it to. I obviously want to know to what extent it does. But both differences in notation and internal similarity are interesting facts for our descriptive analysis of what scientists are doing. So I will stay somewhat vague in my answer here, yes — but if it does not capture a variant, that's also interesting for us.

Sure, yeah, that's nice, I like that. Let me let the chat catch up for a second here. I wanted to ask a little bit — and this is going to necessarily be kind of vague — oh wait, sorry, let me not take two questions in a row, because someone else popped a question and I don't want to monopolize the time. A question from Catherine Herfeld, who says: such a great project.
Could you reflect more generally on how exactly you see your descriptive analysis of these particular templates helping to answer conceptual questions about template transfer in philosophy of science more broadly — what kinds of questions do you have in mind? Do you want to talk about it? Okay.

So, at the present stage it's a descriptive analysis of where we can locate these different kinds of models — and when we talk about model templates, we also talk about the concepts that come with them and the computational and mathematical tools. At least at the moment, we are able to locate them and to point at places which are interesting for the philosophy of science. I think we discussed this before, Catherine: we really have to do more analysis — really looking at where they appear, how they got there, and really following those papers. This is not an analysis that just opens up and shows you why these models are transferred; it means further research, which we have only just started, because we first have to orient ourselves in this huge amount of data, and that will take a while. But the first results we are seeing are very promising: this will help us become more concrete about the model template concept Tarja and I suggest, which may be a useful concept for understanding this transfer.

Great, thank you. Another question coming in. Oh, Catherine adds a comment on the question: fully agreed, I'm just very curious, because this is just great. So let me pick up the next question, from Christophe Malaterre, with an amendment from Eugenio Petrovich as well — let me combine them. Christophe says: wonderful to see how you tackle formulas from a textual point of view.
Did you investigate how the word 'Kuramoto' is spread throughout your corpus — in particular, whether the word itself is strongly correlated with the presence of the formula? So that's an interesting question about cross-referencing your different analysis methods. And Eugenio adds: perhaps a specific cited reference is associated with invocation of the formula. What did you see along those lines?

On the one hand, clearly the word 'Kuramoto' figures more commonly in some clusters than in others — it appears in the cluster keywords for one specific cluster but not the others. So there is some cluster difference here. Maybe as background: the first thing we tried when building these formula representations was actually to ignore the formulas and just use the context — the two sentences before and after the formula — to give a representation, and to contrast that with the broad thematic structure of the whole paper. And that worked so-so, I would say. Just using the words works to a certain degree, but I think it's not as satisfactory as contrasting the formulas themselves. More specific to the question — whether the word 'Kuramoto' correlates with the usage of this specific formula — I haven't tried that yet. I suppose you would ask how large the overlap is between mentions of the word 'Kuramoto' and this specific cluster in the formula clustering; I haven't tried it out. This also ties into the second point, which you can maybe still see on the slides here: it's not entirely clear to me —
— and I don't think it's super clear to anyone — how you actually compare these structures in a rigorous way. Of course you can do one clustering and another clustering and then just count co-occurrences. But a clustering is just one among many possible clusterings: you could look at different granularities, you could use different clustering strategies, and you would get very different results. So I think — maybe I'm wrong about this — there is not yet a really rigorous way of correlating or associating the substructures of these embeddings with each other. I hope that in the very near future we will see work on that, and then I hope to incorporate it. But it's not obvious to me how you would actually do it in a way that is rigorous and not too endangered by the vagaries of clustering algorithms and different layouts.

Yeah, that's a really nice point — in some sense it's a theme that's been running through all the talks so far today: we have these different ways of breaking apart these networks, and it's never fully clear how to manipulate more than one of them at the same time. That's a really nice point. Okay, I can still get in my second question without being problematic, so I will. This is a vague question, but I think it's important, because I think it speaks to one of the real strengths of the work you've been doing. I was wondering about your thoughts on the interplay between the formal analysis work and the visualization work you've been doing. I can tell that's a really important part of the way you approach and think about these projects — they're really tightly tied.
You have clearly put a lot of effort into visualizing this information, and I wonder what your thoughts are about that relationship and what it's like when you're doing your work. This is really an invitation to ramble, because I know it's not a very clear question, but your visualizations are gorgeous and I want to be let in on the process a little.

Okay, so if I may ramble on that. My thought is this — I think we heard it yesterday, and Andrea mentioned it again today — that case studies in philosophy of science are not entirely unproblematic if they are very focused on limited areas, especially if you think not about science 200 or 100 years ago, but about science as it is currently developing, at an incredible, barely surveyable pace. If philosophy of science wants to say something about that, I think there is a lot to be gained from more rigorous quantitative analyses. But on the other hand — and this is where the visualization comes in — I think a lot of important insight is to be drawn, indirectly, from having the right intuitions about how coherent or incoherent different modes of science actually are: how tied together certain areas of science are, how broad, how disjoint. And these are intuitions you can only arrive at through contact with large data sets, because everything you can directly capture by reading papers will be just way too small to have a good chance of being indicative of the whole thing that is going on.
And I think that is really where these visualization methods shine. Of course you could do a quantitative analysis where you never see a map and never interact with anything, and in the end you could get a score or a p-value or whatever out of it, which could be useful for hypothesis tests. But on the one hand, that would miss a lot of the value for philosophy of science; and on the other hand — and this is something Mel mentioned yesterday — when you're working with machine learning algorithms, it's really important to always have a clear view of what's going on. These visualization methods can trick you in important respects, and you must be aware of that, but they can often help you figure out problematic or degenerate cases. For example, if something in your scraping went wrong and twenty papers in your data set are just the same word repeated a hundred times — you sometimes meet that if you are using JSTOR data — you will see it at once in a visualization: there will be a weird cluster, you won't know what's going on there, and then you can check it. This interplay — on the one hand providing correct intuitions about these big data sets, and on the other hand catching problems in your analysis — is where these mapping algorithms really shine and help.

Fantastic, that's a great answer, thanks — really helpful, I appreciate it. With that, we are out of time. So let me thank you again: extremely cool stuff. In five minutes we will be back with our next talk. Thank you all very much, and we will be back soon.