All right. So hello everyone. My name is Max-François Guerre and I am a second-year particle physics graduate student. It's my pleasure to welcome you today to the first talk in our new series of machine learning and particle physics seminars. We will be back every Thursday of the term in this Zoom room at the same time, 3 p.m. The talks are going to be recorded and shared on our YouTube channel; I'm going to share the link in the chat a bit later. Today we welcome Dr. Ricardo Vinuesa, who is an associate professor at the Department of Engineering Mechanics at KTH Stockholm. He is also a researcher at the AI Sustainability Center in Stockholm and vice director of the KTH Digitalization Platform. He received his PhD in mechanical and aerospace engineering from the Illinois Institute of Technology in Chicago. Among his many achievements, he has notably received the Göran Gustafsson Prize for young researchers. One of his many interests lies in combining numerical simulations and data-driven methods to understand and model complex wall-bounded turbulent flows. This is going to be the main topic of his talk today, titled "Artificial Intelligence, Computational Fluid Dynamics and Sustainability." Thank you for joining us, Ricardo, and the virtual floor is yours. I would just like to remind everyone to keep their microphone off except when speaking during the seminar. Thank you very much. Thank you very much for the kind introduction and for inviting me to this seminar. It's really a pleasure for me. As mentioned in the introduction, I am at KTH in Stockholm, at the Department of Engineering Mechanics, and I also work within SeRC, the Swedish e-Science Research Centre. So in today's talk, I'm going to address two aspects. The first one is how we can use artificial intelligence for sustainability; there will be a short introduction, and that will motivate some of the computational fluid dynamics (CFD) problems in the second part of the presentation.
So for the first part, we asked ourselves whether there is any published evidence of AI acting as an enabler or an inhibitor of the 17 Sustainable Development Goals (SDGs) of the United Nations. To do this, we had to assemble a quite complete multidisciplinary team spanning a very wide range of knowledge areas, so as to be able to assess the literature and have a meaningful discussion. This was published last year, actually now almost two years ago, in Nature Communications; you can see the reference here, with all the details of the study. I'm just going to highlight some of the main results in this talk. You can have a look at the team, which spans a very wide range of researchers: experts in applied AI, fundamental AI, social interaction, ethics, economy, biodiversity, and also sustainability and energy systems. So you can really see the very broad range of areas that we covered, which of course matches the range of areas in the 17 SDGs. The first thing that we did was to divide the 17 SDGs into three main areas, having to do with the society, the economy and the environment. This is quite a standard classification, and it lets us carry out the analysis a bit more broadly within these three areas. Generally, our results indicate that, based on published evidence, AI has a mostly positive effect on the SDGs: 79% of the targets of the United Nations can be positively affected by AI, whereas 35% can receive a negative influence as a consequence of AI. The environment is perhaps the area with the largest positive potential, whereas one can argue that society is the one with the most negative potential, although this can be understood in more detail by going through each of the SDGs.
So what we can see on the left is, for each of the SDGs, the percentage of targets that is positively affected by the development of AI, and on the right, the percentage of targets from each SDG that receives a negative influence as a consequence of the development of AI. We can focus first on the thin lines here, but what we can also do is analyze the strength of the evidence of the references used to establish these positive and negative relations. To do that, we classified all the references that we found into four categories. The A-type references are the ones based on the most reproducible and robust methods, and they have a weighting factor of one; this goes all the way down to the D-type references, with a weighting factor of 0.25, which are the more speculative and perhaps less generalizable ones. If we go back to this slide, the thicker lines are the ones obtained by taking into account the weighting factors. So the thicker lines are of course smaller than or equal to the thin ones, right? And by comparing the thin and the thick ones, we can see how strong the evidence is for the positive and negative effects of AI. What we see is that the positive effects of AI on the environment and on society are quite robust, because the thin and thick lines are very close to each other. However, when one looks at the economy, we see a quite big decrease in the thicker lines. This suggests that there is a big knowledge gap when it comes to the positive effects of AI on the economy, and that those positive effects are perhaps based on more speculative work; this really highlights an important area of research.
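[Editor's note: the evidence-weighting scheme described above can be sketched in a few lines of Python. This is a toy illustration; the intermediate weights for B- and C-type references are assumed here, since the talk only quotes A = 1 and D = 0.25, and the target labels are hypothetical.]

```python
# Weighting factors per evidence category: A = most robust, D = most speculative.
# A = 1 and D = 0.25 are from the talk; B and C values are assumed for illustration.
WEIGHTS = {"A": 1.0, "B": 0.75, "C": 0.5, "D": 0.25}

def weighted_share(evidence, n_targets):
    """Fraction of SDG targets affected: unweighted, and weighted by evidence strength."""
    raw = len(evidence) / n_targets
    weighted = sum(WEIGHTS[cat] for _, cat in evidence) / n_targets
    return raw, weighted

# Toy example: 4 of 5 targets have positive evidence, but mostly of the weaker kinds,
# so the weighted share (thick line) drops well below the raw share (thin line).
evidence = [("target1", "A"), ("target2", "D"), ("target3", "D"), ("target4", "C")]
raw, weighted = weighted_share(evidence, n_targets=5)
```

A large gap between `raw` and `weighted`, as in this toy case, is exactly the signature of a knowledge gap described for the economy-related SDGs.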
Now, some examples could include SDG 1, on no poverty, where there has been some work on the analysis of satellite data with convolutional networks to track and assess areas of poverty. There are negative effects on SDG 10, on inequalities: of course, if the future is going to rely heavily on AI and not everybody has the same access to this technology, then we are going to exacerbate existing inequalities. Some more negative effects can be associated with polarization and increase of bias, with election outcomes, and with phenomena that we have been observing much more pronouncedly now in the context of the coronavirus pandemic. So what we argue is that regulatory oversight should be preceded by regulatory insight: we should have experts, people who know what they're doing, as part of the committees that are regulating and supervising the deployment of these AI technologies from a legal framework. Another example has to do with the COVID contact-tracing apps, where we developed a socio-technical framework to assess whether the data management and the deployment of these apps were compliant with technical solutions, but also with regulatory bodies. You can see the results in this article in Results in Engineering. Maybe the summary of this part can be put in terms of these three agents: the technology, the individuals and the governments, where the developments of the technology are very fast, the individuals are lagging behind, and the governments are certainly much slower when it comes to the regulations. Everything happens in the context of the environment, on which we are also, of course, having an important impact in this cycle. In this figure, the thickness of the arrows indicates the speed of change: the changes in technology are much faster than the reaction speed of the governments with respect to that technology. And one area that I want to focus on is SDG 11, on sustainable cities.
We want to highlight that 800,000 people die in Europe every year because of exposure to high pollution levels, and AI can really help in the context of developing more robust ways of measuring the pollution in cities and coming up with more effective mitigation strategies. This is one of the areas where we can connect with the physical interpretation and the fluid mechanics of these urban environments, which will be the second part of the talk. You can see here a very detailed simulation of the flow around an obstacle, a building. What we want to do is to use sparse measurements to reproduce the three-dimensional flow fields, which also include the pollution concentration and the temperature, by means of neural networks and deep learning. We can do this in different ways, but essentially we can use convolutional networks to reproduce the planes of the flow above the measurement location, and we can also use, here I'm listing recurrent neural networks, but also transformers or other methods, to reconstruct the temporal dynamics. You can see here the level of detail that we have in these simulations, where we reconstruct all the important flow structures in these environments. One more aspect that is important is the interpretability of the deep learning models. Of course, deep learning models are essentially black boxes, and despite the number of explainability methods one can develop, the more classical methods, in a recent article in Nature Machine Intelligence we highlight the importance of really developing interpretable deep learning models when they are deployed in applications such as medicine or decision-making. And there is a way to try to do this in deep learning models, for example the approach by Cranmer and others, with inductive biases and basically genetic programming, in order to develop symbolic expressions that can generalize quite well.
So now we're getting into the more technical part of the talk: how we can actually leverage all this knowledge about the possibilities of AI for sustainability, and how we can apply it to concrete applications. To do this, we're going to do non-intrusive sensing. Here I'm mostly using convolutional networks, but we're also going to show results with GANs (generative adversarial networks) at the end of the talk. Essentially, what we want to do is this problem where we have the building and the flow, and we use the information at the wall to predict what happens above the wall. There has been quite some literature on this. There have been a number of studies dealing with linear methods, for example linear stochastic estimation. In our recent work in the Journal of Fluid Mechanics, we showed by using transfer functions that nonlinear prediction methods are actually better for this estimation: using information at the wall to predict the flow above the wall. This is because turbulence is essentially nonlinear. More concretely, it has two different parts. One is a scale-interaction part which is mostly linear, the so-called superposition of the large scales close to the wall, and this linear part can be very well reproduced by linear methods. But of course the nonlinear interactions, which are mostly due to modulation, require nonlinear methods to be properly represented. That's where the neural networks come in, and we can use them to predict the flow properties quite well, depending on the wall-normal location we are trying to analyze, based on the wall information. So, to do this study we start with a simpler geometry than the urban environment. We consider an open channel. We do direct numerical simulations with the Fourier–Chebyshev code SIMSON, from KTH in Stockholm. In this first case we consider a low Reynolds number, a friction Reynolds number Re_tau of 180.
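[Editor's note: the linear stochastic estimation baseline mentioned above amounts to fitting the best linear map from wall quantities to the flow above, so anything nonlinear, like the modulation effect, ends up in the residual. A minimal numpy sketch with synthetic data, not the actual DNS fields:]

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: wall measurements X (n_samples x n_wall) and flow-field values
# above the wall Y (n_samples x n_flow). Real inputs would be wall-shear-stress
# and pressure fields; these are synthetic stand-ins.
n, n_wall, n_flow = 200, 8, 4
X = rng.standard_normal((n, n_wall))
true_L = rng.standard_normal((n_wall, n_flow))
Y = X @ true_L + 0.5 * np.tanh(X[:, :n_flow])  # linear part plus a nonlinear residual

# Linear stochastic estimation: least-squares linear map Y ~ X @ L.
L, *_ = np.linalg.lstsq(X, Y, rcond=None)
Y_lin = X @ L                  # the linear ("superposition") prediction
residual = Y - Y_lin           # what only a nonlinear model (e.g. a CNN) can capture
```

The residual here plays the role of the modulation-type interactions that the talk says linear methods attenuate.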
We then consider a number of simplifications, including an open boundary on the upper side, so as to simplify the dynamics of the larger scales when we only have one wall. A few interesting things: one can use fully connected networks for predicting flow features. The problem, of course, in addition to the very large number of parameters, is that you don't have a way to exploit the spatial information in the data, which in the case of fluid mechanics problems is very prominent. These spatial distributions are very important, and that's what we try to exploit with convolutional neural networks, which are extensively used in computer vision. I guess most people in the audience are familiar with this, but essentially we use a kernel to carry out the convolution operations, and with a reduced number of parameters in the first layers we can obtain very meaningful predictions. So, schematically, what we are doing is the following: we have this open channel, and we use the information at the blue plane, which is the wall, so basically the two wall-shear-stress components, to predict the information on the yellow plane above. We do this with convolutional neural networks, which are going to exploit the spatial features in the data. In this first example we will use the two wall-shear-stress components to predict the streamwise velocity fluctuations above the wall, and we are doing this with closed-loop control in mind. This is one of the motivations: if we can predict the flow very well, we can use this information to develop better control strategies. What we do here is design the convolutional neural network architecture: we have two planes as the inputs, the two wall-shear-stress components, we have a number of hidden layers, and in this case the output is only one plane, the streamwise velocity fluctuations.
All the results from this part can be found in this article by Guastoni and others. We use the ReLU as the activation function. You can see that the size of the input is 128 x 128 grid points, so that's the number of grid points that we're using in this low Reynolds number case. This number indicates the number of filters that are applied in each of these layers; they are stored as feature maps that are going to be used for the prediction in the next layer. In this first layer, what we have is a filter, or kernel, with a size of five by five and a depth of two, which of course is much less, in terms of the number of parameters, than the equivalent that you would have in a fully connected network. This is one of the reasons why CNNs are so widely used in the context of computer vision. When we go deeper into the network, each of these filters is identifying features in the input map; in the case of wall-bounded flows, these features can be streaky structures, regions of circulation, regions of intense fluctuation. And as we go deeper into the network, the depth of the filter needs to match the number of feature maps of the previous layer. What we also observe, because of the definition of the convolution, is that as we go deeper into the network we can identify progressively larger features. This is interesting because turbulence has a hierarchical structure, in the sense that smaller structures make up larger features that are observed farther away from the wall. Therefore, this hierarchical way of learning of the convolutional networks is very nicely suited to the way in which the scales in turbulence actually interact. So the layers toward the end of the network are identifying larger features from the input map, and they are able to predict the larger scales that we have in the output.
Now, we can use CNNs in different ways. If we want to have a global output, which could be a classification problem, where we have an image and we want to label it as, say, a car, then we will have a fully connected layer at the end. In our case we don't want a fully connected layer at the end: we want a local output. We want to see, for each of the points of the input, what the corresponding point of the output would be. With this we have a fully convolutional network. So towards the end we still do convolutions; we need to adjust the size of the input to that of the output, and there is some padding because of the periodicity of the domain, but essentially the output has the same size as the image that we want to predict. That's why we use fully convolutional networks, which give us this local output that we are actually interested in. What I'm showing here are the characteristics of this turbulent flow. You can see these streaky structures that are very typical of wall turbulence. The first row is the linear result, that is, the linear stochastic estimation; the second one is the reference; and the third one is the neural network, so the nonlinear prediction. This is the streamwise velocity fluctuation at y+ = 15, the near-wall peak of the fluctuations, the region of strong turbulence intensity. What we want to match is the second row. You can see that the neural network agrees quite well with the reference, whereas the linear method attenuates the intensity of the fluctuations significantly and also smoothens the characteristics of the streaks, in a way that shows that the nonlinearities are not so well reproduced. So this is actually interesting. Again, these results can be seen in more detail in this reference over here.
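[Editor's note: the two ingredients just described, a purely convolutional "local output" and padding that respects the periodic domain, can be illustrated with a single hand-rolled convolution layer in numpy. This is a pedagogical sketch, not the architecture from the paper:]

```python
import numpy as np

def periodic_conv2d(x, kernel):
    """'Same'-size 2-D convolution with periodic (wrap) padding.

    x: (C_in, H, W) input planes; kernel: (C_in, kH, kW). Returns an (H, W) plane,
    i.e. a local output with one prediction per input grid point.
    """
    c, h, w = x.shape
    _, kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((0, 0), (ph, ph), (pw, pw)), mode="wrap")  # periodic domain
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(xp[:, i:i + kh, j:j + kw] * kernel)
    return out

# Two wall-shear-stress-like planes in, one velocity-fluctuation-like plane out,
# with the same spatial size as the input.
rng = np.random.default_rng(1)
wall = rng.standard_normal((2, 16, 16))
kernel = rng.standard_normal((2, 5, 5)) / 25.0
pred = periodic_conv2d(wall, kernel)
```

Because of the wrap padding, a cyclic shift of the input simply shifts the output, which is the property that makes this padding natural for periodic channel-flow domains.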
It's interesting because you can see that, as expected, we can outperform the linear methods quite significantly by using CNNs, which can account for the nonlinearities of the flow. This also happens if we go farther away from the wall, at y+ = 30. This plane is still close to the wall, but it is more in the buffer layer, so you can see some different features of the turbulence. You can see the reference in the second row, and the neural network still reproduces pretty well the nonlinear patterns that are present in this flow, whereas the linear prediction is much more attenuated, and you can see that the streaks are very, very smooth here. This is quite interesting because, of course, when you are farther away from the wall, the fields are less correlated, so the predictions are more difficult. There is still a linear footprint, which is what the linear stochastic estimation is reproducing, but the neural network is able to significantly outperform this linear method. And at y+ = 50, which is farther away from the wall, the neural network starts to predict worse, but it maintains the structure of the field and the level of the fluctuations. You can see that the linear method is basically not predicting anything, which is a quite interesting result, because if you want to target the larger scales, to control them or to analyze how they are correlated with the wall, then you certainly need nonlinear methods, and deep learning in this case is really helping us. So this is what we can see when it comes to the predictions. There is actually an effect of the time step between the snapshots, because if you are looking very close to the wall, the time scales of the structures are very short.
So a very small delta t will allow me to see many different structures, but if you keep the same delta t farther away from the wall, where the scales are larger and slower, then the information will be much more redundant, so you won't actually see that much new data. In this particular case, at y+ = 50, a larger delta t improves the predictions and reduces overfitting. So one has to be aware of the importance of choosing the right delta t, so that you capture the right scales when you choose the training data for your predictions. And this is what we see when it comes to turbulence statistics: on the left we have the mean flow and on the right the streamwise velocity fluctuations. You can see in orange the small delta t, in blue a larger delta t, and in green the linear method. Close to the wall we get errors smaller than 3%, which is very nice, and the linear method goes from 12% to 45% error as we go farther away from the wall. So in all the cases, the neural-network-based approaches outperform the classical linear methods. Now, what I have shown you so far is the low Reynolds number case, a friction Reynolds number of 180, which is turbulent but doesn't really exhibit a significant separation of scales. What I will show now is Re_tau = 550, which is a more, let's say, reasonable Reynolds number, a higher Reynolds number where we can understand much better the features of the separation of scales and the effects of a more turbulent flow at a more realistic condition. In this case we consider three input fields, which are the two wall-shear-stress components and the wall pressure; so instead of two inputs we actually have three, and you can see the fields over here. As an output I have the velocity fluctuations in the three directions: the streamwise, wall-normal and spanwise velocity fluctuations. These results can be found in a recent article in JFM.
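[Editor's note: the redundancy argument about the sampling interval delta t can be made concrete with a correlation check on a toy slowly varying signal; the signal and time scales here are invented for illustration only.]

```python
import numpy as np

# Toy signal with a long time scale, standing in for a large, slow structure
# far from the wall: successive samples at small delta t are highly redundant.
t = np.arange(0, 200, 0.1)
x = np.sin(0.1 * t)

def redundancy(x, stride):
    """Correlation between samples separated by `stride` time steps."""
    a, b = x[:-stride], x[stride:]
    return np.corrcoef(a, b)[0, 1]

small_dt = redundancy(x, 1)     # nearly 1: consecutive snapshots are redundant
large_dt = redundancy(x, 100)   # lower: snapshots are more independent training data
```

Picking the stride so that this correlation drops appreciably is one simple way to think about choosing delta t for the slower scales, as the talk suggests.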
So you have the reference here; this is Guastoni and others, who extended the previous work to this one, so you can get quite a nice description of all the deep learning tools and also of the features of the flow that we are trying to predict. As mentioned before, we use periodic padding, which is used because the flows are periodic, and we can exploit this in the CNN prediction. Now, in addition to the method that I have been showing so far, which is what we call the FCN method, the fully convolutional network, where I have one plane as an input, I predict a whole plane as an output, and that's it, there is another method that we have also developed; you can see it in the same article in JFM, but also in Güemes and others in Physics of Fluids you can see a bit more of how this method was developed. This is what we call the FCN-POD method. In this case, what we do is conduct proper orthogonal decomposition, the POD, which you can see here. That means that the u vector, containing the three velocity fluctuations, is decomposed into spatial modes and temporal coefficients. In this sense, what we need to do is to predict those temporal coefficients instead of predicting the whole signal. We actually do it in a tessellated domain: we divide the domain into smaller squares, thus making the predictions easier, because for each of these squares any scale larger than the subdomain gets lumped into the zeroth mode. So this is more practical from the prediction point of view. To summarize, we do the POD of the flow, we have the spatial modes, and then what we need to predict is these temporal coefficients over here. This has its advantages, but it also has its disadvantages. Once the model is trained, it's a matter of its evaluation.
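[Editor's note: the decomposition into spatial modes and temporal coefficients described above can be computed with an SVD; this is a generic POD sketch on synthetic snapshots, not the flow data or the tessellation from the paper.]

```python
import numpy as np

rng = np.random.default_rng(2)

# Snapshot matrix: each row is one velocity-fluctuation snapshot flattened in space.
n_snap, n_space = 120, 64
U = rng.standard_normal((n_snap, n_space))

# POD via the SVD: the fluctuation field is decomposed into spatial modes Phi
# (rows) and temporal coefficients A (left singular vectors scaled by the
# singular values), so that Uc = A @ Phi.
Uc = U - U.mean(axis=0)                   # fluctuations about the mean
W, s, Phi = np.linalg.svd(Uc, full_matrices=False)
A = W * s                                 # temporal coefficients

# Truncating to the r most energetic modes: a network (the FCN-POD idea) then
# only has to predict r coefficients per subdomain instead of the full field.
r = 10
U_r = A[:, :r] @ Phi[:r]                  # rank-r reconstruction
energy_kept = (s[:r] ** 2).sum() / (s ** 2).sum()
```

For real turbulence data far from the wall, where a few large scales dominate, `energy_kept` would be much higher at small `r` than for this random example, which is the basis of the efficiency argument made later in the talk.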
The evaluation is reasonably straightforward and reasonably fast, but it's important to keep in mind that in some regions of the domain one method is going to perform better than the other. What I'm showing here, in the first column, is the streamwise velocity fluctuations at y+ = 15; this is again the near-wall peak, the region of strong fluctuations. In the second column I'm showing the results at y+ = 100, which is in the outer region. This is, as I mentioned, at Re_tau = 550, which already has separation of scales, so we actually have this outer region where we can identify larger scales. Now, the third row is the DNS, the direct numerical simulation; this is the reference that we're trying to match. The first row is the extended POD, and the extended POD is a linear prediction method that is formally equivalent to the linear stochastic estimation, so essentially this is the linear method. The second row is the FCN-POD, where we use the network to predict the POD coefficients, and the last row is the FCN, the case where we are just using the convolutional network to make the prediction. What is interesting, first of all, is that the linear method of course exhibits attenuated fluctuations. As before, if we compare the first and the third rows, the linear method is not performing so well because it can only capture the linear interactions of the flow, in particular the superposition of the large scales onto the near-wall region. The fourth row, which is the FCN, has an excellent agreement at y+ = 15. So close to the wall, using the fully convolutional network to make the predictions gives me excellent results, and I will show you in a minute the error levels. Interestingly, this is more efficient than doing the POD, the proper orthogonal decomposition, and then predicting the coefficients.
If we compare the second and the third rows, what we see is that farther away from the wall all the predictions degrade, because the output is less correlated with the input, but in the case of the FCN-POD we get a better agreement than with the FCN alone. Now, why is that? Well, one could argue that in the case of the FCN-POD you are first encapsulating part of the spatial information into the spatial modes, and then you don't need to predict the whole signal; you only need to predict the temporal coefficients, that is, only part of the information. When you have small scales close to the wall, the range of scales present there, and the energy contained in them, is really not so easy to encapsulate in a few POD modes; there are no dominant large structures taking up all the energy. But farther away from the wall you have the dominance of these larger scales, and it is in principle possible, by doing POD, to encapsulate a significant fraction of the energy into these larger scales. Then it is computationally more efficient to just predict the temporal evolution of those larger scales than to carry out the full prediction of the plane with the FCN, with the large and the smaller scales, the eddies, and all the mess that happens. So the message is: close to the wall, the most efficient approach is the FCN; far away from the wall, the most efficient is the FCN-POD, okay? If we look now at the statistics, we see in these three figures the streamwise, wall-normal and spanwise velocity fluctuations. The black line is the reference from the DNS, the orange triangle is the fully convolutional network, the FCN method, the blue dot is the FCN-POD, and the green square is the extended POD. What we see is that close to the wall the orange triangles, the FCN, perform extremely well; I mean, the streamwise velocity fluctuation has less than 1% error close to the wall when using the FCN method.
Of course, both FCN approaches perform better than the extended POD at the other locations, except perhaps in the wall-normal case, where the three of them perform in a similar way. But what is interesting is this crossover, right? When I go to the farthest location, the FCN-POD has better performance than the FCN. The error that we obtain with FCN-POD very far away from the wall is a bit more than 25%, which is still quite remarkable given the difficulty of these predictions. And this is something that would allow us to really do control of these large scales with a real-time prediction based on wall information. So the FCN-POD performs very well far away from the wall, which is a quite interesting and remarkable result for us. Something that I also want to comment on is the possibility of improving the training performance. This is essentially transfer learning, right? We show two examples of transfer learning, and I guess many people in this audience are familiar with it: essentially, it is to transfer the weights of one model to another in order to exploit the knowledge that has already been learned in the first configuration. What we are doing here is that we have a network trained to make predictions at y+ = 15, and then, for predictions at y+ = 50, so farther away from the wall, we freeze the first three layers. So we transfer the network from 15 to 50, we freeze these first three layers, and we only train the last three layers. Now, what is the intention of doing this?
When we go farther away from the wall we will have both larger and smaller scales, and in principle, since the first layers are identifying and predicting the smaller scales, those will probably not change so much; the smaller scales are probably quite similar close to and farther away from the wall, whereas the larger scales change, and those are the ones that we can identify with the last layers, which we train after the transfer. The results, which you can see here, show that we can obtain the same level of accuracy with transfer learning at one fourth of the computational cost. In this case, we measure the GPU time used to train the model. So essentially we can reduce by more than a factor of four the GPU time required to train the model to make these predictions. We are effectively exploiting the information that we know about the smaller scales when we are trying to predict farther away from the wall, benefiting from some of the most interesting features of the process. Another example of transfer learning, which is also quite promising, is going from low to high Reynolds numbers. This is an example that I like very much, and this is again shown in the JFM paper, Journal of Fluid Mechanics, so you can see the full reference here. For the ones who are not familiar with direct numerical simulations: the simulation at Re_tau = 550 is significantly more expensive than the one at 180; we're talking orders of magnitude more expensive, because of how the cost of the simulation scales with the Reynolds number. So what we do here is train a network to make predictions at 180, and then we do transfer learning: we initialize the network for the high Reynolds number predictions with the weights from the 180 case. Then we compare the loss function as a function of the training batches for the network at 550, considering 100% of the training data set.
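[Editor's note: the freeze-the-early-layers idea can be caricatured with a tiny two-stage model in numpy: a frozen "feature extractor" standing in for the pretrained first layers, and a trainable last stage fitted for the new target plane. The shapes and the least-squares stand-in for gradient training are illustrative assumptions, not the paper's setup.]

```python
import numpy as np

rng = np.random.default_rng(3)

# Frozen "early layers": weights W1 pretrained on the y+ = 15 task are kept fixed.
W1 = rng.standard_normal((8, 16))

def f1(x):
    """Frozen feature extractor (the transferred first layers)."""
    return np.tanh(x @ W1)

# Synthetic data for the new target plane (y+ = 50 in the talk's example):
# the target is realizable from the frozen features, mimicking the idea that
# the small-scale features transfer across wall-normal locations.
X = rng.standard_normal((400, 8))
Y = f1(X) @ rng.standard_normal((16, 4))

# Only the last stage is trained: a least-squares solve over the frozen
# features, standing in for retraining just the final layers.
F = f1(X)
W2, *_ = np.linalg.lstsq(F, Y, rcond=None)
err = np.linalg.norm(F @ W2 - Y) / np.linalg.norm(Y)
```

Because only `W2` is fitted, the "training" touches far fewer parameters than the full model, which is the source of the roughly fourfold GPU-time saving reported in the talk.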
That is, 100% of the training data set, and then with transfer learning, 50% and 25%; 25% would be this brown line that you see over here. What we can see with these results is that the case where we transfer the weights and use 25% of the data set has error levels that are very similar to the ones that we obtain originally without any transfer learning. In other words, we can obtain the same results with 25% of the data, which is actually quite remarkable, right? Because the simulation at Reynolds number 180 is very cheap, you can run it and obtain millions of instantaneous fields for this training. And by doing transfer learning, we can reduce by a factor of four the amount of data that is needed at the higher Reynolds number. So we can reduce significantly the computational cost of the high Reynolds number training data set while preserving the performance and the quality of our predictions. I would say that this is a quite interesting application, which of course we're now exploring in the context of transferring among geometries and also among different turbulent flow configurations, and it seems to work quite well. So this is the potential of transfer learning that we are observing. Another technique which I believe is interesting concerns what happens when the information at the wall is sparse, right? The first example that I showed you, for the building, was one where on the ground we had only a few sensors, and we tried to predict the flow from those sensors. However, in many of the examples that I have been showing you, we are considering the whole information at the wall: not only a few measurements but the whole plane as the input. In reality, that's not the case; you don't have access to such detailed input information, you only have a few sensors. What we did here was to use generative adversarial networks, GANs, which, as you can see here schematically, have two parts: a generator and a discriminator.
The generator produces high-resolution images based on lower-resolution ones, and these images respect the statistical properties of the reference data set. The discriminator has to differentiate which high-resolution image is true and which one has been produced by the generator. These two parts of the network are trained together, using game theory, in such a way that both get better at their jobs: the discriminator gets better at differentiating which images are fake, and the generator gets better at producing realistic high-resolution images. Using this approach, what we did was to create a network which has two steps, as you can see here. The first part of the network takes the three inputs, namely the two wall-shear-stress fields and the wall pressure, with significant downsampling, so you can see very coarse fields here, and obtains a high-resolution version of those fields using the GAN. In the same network there is a second step which uses these high-resolution inputs to produce high-resolution predictions of the three velocity fluctuations away from the wall. So we are using low-resolution data from the wall to produce high-resolution predictions away from the wall. Now, what is interesting about this I can show you here. This work, by the way, is published in Physics of Fluids; this is a paper from last year by Güemes and others, so if you want more details you can just go to that article. But you can see something quite interesting. Here I'm showing you the instantaneous wall shear stress at the wall, so this is wall information, and this is the DNS, the reference data. The first row shows results after downsampling by a factor of four, by a factor of eight and by a factor of 16 in each direction, so you can see how the input data gets coarser and coarser.
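The adversarial training described above can be sketched as follows. This is a minimal sketch under stated assumptions: the layer sizes, the upscaling factor and the random stand-in fields are hypothetical, and the published architecture is considerably more elaborate (in practice the generator loss also includes a content term comparing against the reference fields).

```python
import torch
import torch.nn as nn

# Minimal generator/discriminator pair for super-resolving wall fields.
upscale = 4  # e.g. recover fields downsampled by a factor of 4

generator = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=upscale, mode="bilinear"),
    nn.Conv2d(32, 3, 3, padding=1),
)
discriminator = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 3, stride=2, padding=1),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # one real/fake score per image
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

coarse = torch.randn(8, 3, 8, 8)     # downsampled wall measurements
real_hr = torch.randn(8, 3, 32, 32)  # high-resolution DNS reference

# Discriminator step: tell real high-resolution fields from generated ones.
fake_hr = generator(coarse).detach()
d_loss = bce(discriminator(real_hr), torch.ones(8, 1)) + \
         bce(discriminator(fake_hr), torch.zeros(8, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: produce fields the discriminator accepts as real.
g_loss = bce(discriminator(generator(coarse)), torch.ones(8, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(generator(coarse).shape)  # high-resolution output fields
```

A second model with the same structure would then map the super-resolved wall fields to the velocity fluctuations away from the wall, giving the two-step network described in the talk.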
And this one over here, with a downsampling factor of 16 (FD16), resembles the few sensors that I was showing before for the urban-data case. You can see that in this case the input does not really resemble the original data much; it has been downsampled 16 times in each of the two directions, so this is really a very coarse input field. On the second row, what we have is the prediction, by the GAN part of the network, of the wall-shear-stress field based on this input. You can see that both at four and at eight the agreement with the reference is significant; we actually get very good agreement. Even at 16, which of course exhibits some attenuation (the fluctuations with FD16 are much weaker than the ones we see in the reference), you can see something interesting: the locations of these streaky structures. In turbulence, close to the wall you have elongated structures which are called streaks, and here both the locations and the sizes of these elongated structures are respected. So this architecture, from the very coarse input that you can see here, is able to obtain quite good predictions of the reference flow at this location. When we are doing flow control, for example, the idea is that we want to have the information from the wall to predict what happens above the wall, and based on this information I want to set up the control. What happens is that I don't necessarily need a perfect prediction of the flow here. Most likely, what I want is just an idea of where the large, energetic scales are, so that I can try to suppress those in a robust flow-control approach. So the flow that I'm predicting with the GAN approach is not exactly correct, but it is physical.
It is physical, and it gives me the information that I will require for a closed-loop control strategy. So this is actually quite a remarkable approach, and I think it is quite beneficial in the context of future studies on non-intrusive sensing. And if we run these models in the context of urban environments, where we are managing to get very good predictions of the urban flows, this really has the potential of producing very robust predictions in urban environments. Regarding some future applications on which we are working: we have shown the feasibility of using neural networks to predict turbulent shear flows in the context of spatial information, and I also have some publications where we have done this for the temporal dynamics, so we can predict pretty well the temporal dynamics of these complex turbulent-flow cases. There is also quite some work on developing machine-learning-based boundary conditions for turbulent simulations and on generation of inflow conditions, in order to obtain computationally efficient spatially developing simulations, as well as on temporal tracking of turbulent structures: this is something we are working on, to be able to learn the temporal dynamics of these turbulent structures via machine learning. We are also trying to use this for flow control, combining the non-intrusive sensing and the flow-control approaches based on deep learning. And we are using autoencoders for model reduction. This is another area in which we are quite active: with autoencoders we can obtain non-linear modal decompositions. Classical methods are linear superpositions of the modes; the advantage of the non-linear approach is that we can, in a way, exploit the possibility of having much fewer modes.
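The autoencoder idea just mentioned, compressing each flow snapshot into a handful of non-linear "modal" coefficients, can be sketched as below. This is a minimal illustrative sketch, assuming flattened snapshots and arbitrary layer sizes; it is not the architecture used by the group.

```python
import torch
import torch.nn as nn

# Minimal autoencoder for non-linear model reduction of flow snapshots.
# The bottleneck plays the role of the (few) non-linear modes.
n_grid, n_modes = 256, 4   # flattened snapshot size, latent dimension

encoder = nn.Sequential(
    nn.Linear(n_grid, 64), nn.Tanh(),
    nn.Linear(64, n_modes),          # non-linear "modal" coefficients
)
decoder = nn.Sequential(
    nn.Linear(n_modes, 64), nn.Tanh(),
    nn.Linear(64, n_grid),           # reconstruction from few modes
)

snapshots = torch.randn(100, n_grid)  # stand-in flow snapshots
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

for _ in range(5):  # a few illustrative training steps
    recon = decoder(encoder(snapshots))
    loss = nn.functional.mse_loss(recon, snapshots)
    opt.zero_grad(); loss.backward(); opt.step()

codes = encoder(snapshots)
print(codes.shape)  # each snapshot compressed to n_modes coefficients
```

The contrast with classical linear methods such as POD is that the decoder is a non-linear map, so far fewer latent coefficients can reach a given reconstruction accuracy, at the price of extra effort to keep the modes interpretable and near-orthogonal.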
So we can have a more compact reduced-order model while retaining some of the interpretability and the orthogonality of the modes that we have been obtaining with this modal decomposition. There are many directions within fluid mechanics, and in particular computational fluid dynamics, where we can benefit from machine learning, and in particular deep learning, also in the context of optimization and speed-up of some of these computational-fluid-dynamics applications. To conclude: I have first shown that, based on our work, AI can help to achieve 79% of the SDG targets, but it can be an inhibitor for 35% of them, so there is a much-needed global debate to be able to harness all the positive potential of AI. CNNs are very useful tools for reconstructing and predicting turbulent fields; we have obtained excellent predictions, especially close to the wall. When we want to control the near-wall region, we have obtained less than 1% error in the turbulent fluctuation fields, which means that we can obtain significantly better results than with linear reconstruction methods. And we are looking at some improvements, like taking into account the structure inclination, refining the transfer-learning strategies, and also more complicated architectures like transformers, both for the temporal and for the spatial predictions of these interesting flow cases. I would like to thank you very much for your attention. You have my contact information here: my email, the website of my group and also my social-media information. I'm always happy to discuss if you have any questions or possible collaborations, and I'll be very happy to address your questions now. Thank you very much. Thank you very much indeed for this very interesting talk. I guess we are all joining in giving you a round of virtual applause at the moment. If anyone has a question, please don't hesitate to jump in.
All right, otherwise I actually had one myself. You seem to show that using information at intermediate layers helps you train the network to predict at a plane farther away from the initial plane. Have you considered using a recurrent neural network, with the CNN as its cell, to try to model planes at successive distances from the initial inputs? Yeah, that's a good question. That's something that we are exploring with different approaches. What I have shown here is always from the wall to the target location, either close to or far away from the wall. But we are now trying to use architectures where we can get intermediate predictions that can help improve the prediction I ultimately want far away from the wall. The recurrent approach can be a very nice method, and we are exploring different alternatives to do this, because of course the challenging prediction is the plane far away from the wall based on this input information. So if you can get intermediate planes, you can hopefully improve the performance of that prediction. But that's currently under investigation. That's very interesting. And also connected to this: have you tried to reverse the direction? So when you've predicted something at a certain distance, try to predict back the input to see if there's closure. That's a good question. That's something that we have tried in another study. It does not work as well if you try to predict the wall information based on information far away from the wall. There are two things that are different. One is, of course, the size of the receptive field. Far away from the wall you have very large scales, and you're trying to predict something very small. When you predict from something small at the wall and build towards something big, you are exploiting the hierarchical learning of the network, so in that sense it's easier to make that prediction.
To do it the other way around is a bit more difficult, because you try to predict something big based on something small. We are also playing with normalization of the inputs and with reorganizing or segmenting the fields so that you can capture some of those features with a smaller receptive field, but that is a data-management aspect. I would say there is also another, physical, aspect: the wall receives all the information from the larger scales. There is a causality there, so the wall contains that information as an input. But the other way around is not necessarily true: the larger scales far away from the wall do not contain as much information about what happens at the wall. So there is some lack of information for that prediction. But it is something interesting; for example, for developing boundary conditions and things like that, we are looking at doing that reverse prediction, but so far the results are not as good as with the direct prediction. Thank you very much for your explanation. So if anyone has a question, please do jump in. Otherwise, I think we can thank our speaker again. Thank you very much for inviting me today. This was really a pleasure, and I'm very happy to have had this discussion with you. Yeah, likewise, it was a pleasure to discover a bit of fluid mechanics, so thank you. Well, have a good day everyone, and thanks again. Goodbye. Thank you.