Good morning, everybody. For the last day of the summer school, for those here, congratulations. You survived up to now, and you even made it to the early morning talk. As you can see in the program, we tried to go gradually from the pathophysiology further to the different models, data extraction, and so on. So on the final day, what we want to do is move a little bit closer to some of the applications. As you heard in some of the talks before, what we do this type of research for, in the end, is obviously first of all to get more understanding of some processes. But ultimately we always want to see that we can do something that helps the patient: helps understand the patient, or helps us develop or test new treatments and therapies. So the talks this morning will go in that direction, trying to see which applications we can build, trying to go closer to the clinic, closer to the patient, using patient data. And for that, it's my great pleasure to invite Maxime Sermesant, who is a senior researcher at the Inria research institute in Sophia Antipolis, near Nice. I think he's one of the people in Europe with the longest experience in cardiac modeling while trying to be as realistic as possible, rather than to have as nice models as possible. So that's why we asked him to talk about his experiences in cardiac modeling and how far you can approach the patient. Maxime.

Thank you, Marc. So as Bart said, it's the last day, so I'll mostly show illustrations of what I'm doing and what we're doing at Inria. But if you've got questions on the techniques, don't hesitate to ask.

What I want to present here is work trying to combine two different approaches. Because if we try to have clinical applications, so we focus on helping the patient, there are in a way two different routes to the patient. You can either try to build up from mathematical models of the cells, the tissue and the organs, and try to simulate the heart of this patient to predict therapy or to do some prognosis. That's what I've been doing in biophysical modeling of the heart for many, many years: trying to build up a full-scale model with multiple physics and all these details. The other way to approach it is to start from statistics. You've got a group of patients and you try to find the common factors between these patients; you do these group-wise analyses. Then you compare your given patient to this average behavior, and you can say: is this patient doing better or worse than the average patient with this pathology?

I think these two approaches are very interesting because they are also very complementary. If you do group-wise analysis, you'll be able to extract which phenomena are important for this group of patients. And if you want to model something but don't know what is important to model, you can go in any direction; there are so many details you can put in a computer model. So I think the statistical approach is a very good way to focus your biophysical modeling. Also, if you introduce some statistics in your modeling, you can do more probabilistic modeling and look at uncertainty in the predictions of your model. On the other hand, if you just do statistics, it's hard to interpret some types of results.
So the biophysical modeling can help you have some kind of mechanistic understanding of what's happening, because you really model the physical phenomena, so you can explain the processes that have produced the data you observe statistically. And also, as I will show you, the statistical approach and all the deep learning, all these things, are nice, but you need a lot of data. And biophysical modeling can also be a way to generate data. Because if you trust your model, if you have validated how it behaves, then you can generate a whole lot of different cases for a given pathology, or a whole lot of different pathologies, and do some analysis on this synthetic data.

So I'll try to go through this in three parts: first the biophysical modeling, then the group-wise statistics, and finally some examples of how we try to couple these.

I don't think I need to motivate the modeling part much. The idea is to combine different physics and to look at the data you have on a given heart, so that you can adjust your generic model to this heart. Then you can help characterize the pathology, so you help diagnosis; or you can test some therapies on your computer model and try to help in therapy planning. The kind of pipeline we've been using is a bit of a causal one. We start from the shape of the anatomy. Then on this shape you need to put structure, so the muscle fibers of the heart. Once you have this, it's your computational domain for your model, so you can simulate the electrophysiology. To personalize this, we've been using a lot of endocardial mapping data, so from catheters. Once you have this, you can then simulate the contraction, so you can look at motion. You can extract motion from data like echo, 3D echo, or cine MRI, and try to estimate the mechanical parameters. And ideally you've also got some mechanical data like pressure; otherwise you work with what you have. By combining all this, you can try to have a kind of fully personalized model, at least of the electromechanical parts, to see if you can simulate different therapies and how this heart behaves. In each of these stages there are still lots of questions and lots of difficulties, so I'll show you some parts where there are advances that I think are interesting.

So, on the patient-specific fiber architecture: this is the work of Nicolas Toussaint on actually measuring the fiber architecture of a patient. I don't know how familiar you are with magnetic resonance imaging, but the idea is to use a very strong magnetic field to measure the density of water molecules. And it's been shown that, by using different directions of gradients, you can also measure the principal directions of diffusion of these water molecules in the tissue. By doing this, you can more or less measure the anisotropy of the tissue by MRI. What people usually do when they have all these gradient measures is approximate the result by a tensor, so you've got three principal directions, and the first one is said to be the muscle fiber direction in the heart. The problem is that if you want to acquire the small motion of a water molecule in a beating heart, you see that the scales of motion are very different, so it's still a very challenging way to measure fibers. It's been shown to be very efficient in ex vivo hearts: you put a fixed heart for hours in the scanner and you get a beautiful image.
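As a quick aside, the "first eigenvector of the tensor" step mentioned above is easy to write down. A minimal sketch, with an invented toy tensor rather than data from the talk:

```python
# Sketch: fiber direction as the principal eigenvector of a 3x3 diffusion tensor.
# The tensor itself is assumed to have been reconstructed from the gradient
# measurements elsewhere; the values below are purely illustrative.
import numpy as np

def fiber_direction(D):
    """Return the eigenvector of the largest eigenvalue of a symmetric tensor."""
    w, V = np.linalg.eigh(D)   # eigenvalues in ascending order for symmetric D
    return V[:, -1]            # column of the largest eigenvalue

# Toy tensor: strongest diffusion along a direction tilted in the x-y plane.
D = np.array([[3.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]]) * 1e-3   # units roughly mm^2/s
print(fiber_direction(D))
```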
If you want to do it on a beating heart, it's still very challenging. There has been a lot of work on this over the last six to eight years, and I think there is a lot of progress. So now you can acquire one slice of in vivo DTI reasonably well, but it's still very challenging to do it in 3D. So what we've been working on is setting up a model so that you can combine different acquisitions of different slices in order to reconstruct a full 3D fiber architecture of the heart.

To do this, we just took advantage of the shape of the heart. The heart can be approximated by a truncated prolate spheroid, a kind of half ellipsoid, and there is a clear symmetry around the axis of the ellipsoid for the directions of the fibers. I don't know how familiar you are with cardiac muscle fibers, but they actually change direction mostly across the wall: when you go from the inside to the outside of the heart, the fibers rotate, but if you go from one place to another in the other directions, the orientation is quite stable. So the idea is that if you use a Cartesian interpolation scheme for interpolating tensors, you will get very strange tensors in the middle, because you don't take advantage of the knowledge of the shape and of these variations. But if you adapt your interpolation scheme to the shape of the heart, it's much more robust and you get a much better reconstruction of the fibers. So we take a few slices, like five or six, and we transform everything into prolate spheroidal coordinates, so that we do all the interpolation in this coordinate space; then we move back to Cartesian coordinates to get a fully dense fiber architecture of the heart.

So we did some analysis of these kinds of reconstructions, of the changes of angle across the wall, but also at different instants, because you have a beating heart: the sequence we used was able to acquire two images per cycle. So we had both an image when the heart is full of blood and one when the heart has ejected all the blood, so diastole and systole. We had fibers at these two instants and we could look at the distribution of angles. In fact, it's not changing that much between these two instants, because the deformations are more or less along the fiber directions, so the fiber angle is quite preserved. This, for instance, is a kind of fiber tracking. It's just for visualization purposes, because there are not really huge fibers rotating all around the heart; but once you have these tensors, you can put a particle and build up streamlines along these directions to try to reconstruct the fibers. That's the fibers we had at diastole and systole.
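Going back to the shape-adapted interpolation for a moment, here is a minimal sketch of the coordinate change involved, assuming the long axis of the ventricle is aligned with z and that the focus distance a of the ellipsoid has been fitted to the anatomy beforehand (both are my assumptions, not details given in the talk):

```python
# Sketch of mapping points into prolate spheroidal coordinates so that
# interpolation can be done in a frame adapted to the ventricle's shape.
import numpy as np

def cartesian_to_prolate(p, a):
    """Map a 3D point to prolate spheroidal coordinates (xi, eta, phi)."""
    x, y, z = p
    r_plus  = np.sqrt(x**2 + y**2 + (z - a)**2)   # distance to focus (0, 0, +a)
    r_minus = np.sqrt(x**2 + y**2 + (z + a)**2)   # distance to focus (0, 0, -a)
    xi  = np.arccosh(np.maximum((r_plus + r_minus) / (2 * a), 1.0))  # transmural-like
    eta = np.arccos(np.clip((r_minus - r_plus) / (2 * a), -1, 1))    # apex-to-base
    phi = np.arctan2(y, x)                                           # circumferential
    return xi, eta, phi

def prolate_to_cartesian(q, a):
    xi, eta, phi = q
    s = a * np.sinh(xi) * np.sin(eta)
    return np.array([s * np.cos(phi), s * np.sin(phi),
                     a * np.cosh(xi) * np.cos(eta)])
```

Interpolating fiber orientations component-wise in (xi, eta, phi) rather than in (x, y, z) respects the fact that fibers rotate mainly across the wall (the xi direction) and are stable in the other directions.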
The next level is that in the heart you've got the fibers, but also a laminar structure: the fibers are organized together in small sheets. And the question was, can we also go down to this level? In a way, it's the second eigenvector of your tensor: the first eigenvector gives the main direction, and if you add the second eigenvector, this laminar structure should appear. And there seems to be some signal there; we're sure that it's not completely noise, not completely random, and we tried to do some tracking of this laminar structure. Actually, there are much bigger changes in the laminar structure orientations between diastole and systole than in the actual fiber directions. But it's still tricky, because when you try to measure that, it's also perturbed by the strain, the deformation of the heart, and the sequence has to be adapted. So it was still a bit preliminary.

On this part, I wanted to show you that there is active research around using MRI to measure cardiac fibers, and I think there are already very interesting results. And as you will see, all along the pipeline of cardiac modeling you've got these fibers, which impact everything: they impact the electrophysiology, they impact the mechanics. So being able to measure them on patients would be really nice, to ensure that the first block of your pipeline is already kind of personalized. But it's still tricky to acquire, it takes time, and it needs to be used on more patients, because often this kind of data is shown on volunteers, who breathe normally and can hold their breath easily. When you have patients who are a bit breathless in the scanner and who don't want to stay too long, the data becomes much more challenging. If you've got a question on this part that you'd like to ask, don't hesitate; maybe it will be easier if you interact at some point.

So then, once you've got this, the idea is to simulate the electrophysiology. This is the work of Jatin Relan on ventricular tachycardia. The idea is that once you've got this nice mesh of the heart and you've got some fiber architecture, you want to simulate the propagation of the electrical wave that creates the contraction at each heartbeat. You've probably seen a lot of this during Blanca's talk: there has been a lot of work over decades on modeling the cardiac cell and all the ionic channels involved. And that builds up to very computationally expensive models if you want to simulate the whole heart. It's even worse if you want to personalize it, because basically you want to optimize your model to data, so you need to simulate it many times to adjust your parameters and fit your data. So that's where the choice of the complexity of the model can be critical, to ensure that you manage to compute what you want to compute.

The global idea is that you have your mesh with your fibers, you select your ionic model and whether you use the monodomain or bidomain formulation, so you select the kind of equations you want to solve. And then you've got this kind of simulation, where you have the propagation of the transmembrane potential over the whole heart. From this you can extract activation maps, so at which time each point of the heart was activated, and action potential duration maps, for how long each point of the mesh was activated, because that's also an important point in arrhythmia.

So, as I said, the choice of the model is very, very important, and there are different levels of complexity. If you take the most detailed ones, you can get very nice predictions at very fine levels, but they are very expensive to compute. There are some intermediate ones, which are in between in cost and in predictions. And then there are the Eikonal ones, which mostly simulate the timing, just the activation time, but it's hard to really predict arrhythmia with them. On the other hand, there's also the number of parameters you have to set: it decreases a lot as well, from the detailed models, which can have around 100 parameters, down to only a few for the simplest ones.
So, as I said, if you want to use it on clinical data, you need something that is not too expensive, because you will need to iterate to optimize your model; but you still need a prediction that answers your question, because if you simplify too much, then you can't answer the original question. In our lab, we chose the Mitchell-Schaeffer model, which is kind of intermediate, because it's still a reaction-diffusion model, so it still properly represents the physics in the equations, but it has averaged currents, more or less what goes in and what goes out. It has only two variables and not too many parameters. But still, with that, you can reproduce this kind of transmembrane potential trace, and you can control more or less its four important phases: the upstroke, the plateau, the downstroke and the refractory period. In the end, I think at the macroscopic level, for clinical data, that's mostly what you can hope to observe.

So the kind of data we have are these activation maps you get from a catheter. You can see the catheters here in blue and yellow. They sweep the catheter around the heart, and they have these multipolar catheters where you can have up to 20 electrodes per catheter, so you can really get quite dense activation maps. The idea is to fit the activation map from the model to this kind of data, so you need to set up an algorithm for that. In our case, we chose to adjust the diffusion parameter, which has a big impact on the conduction velocity, and to do it in a multi-scale way: we start with a number of regions and fit the parameter in each of these regions, and when there is a region where there's still a lot of error after adjustment, we subdivide it and iterate.

Then, as I was saying, the second important point is the action potential duration, for how long the cell is activated. For this, you need more observations at different pacing frequencies, because what you really want is not just the APD map at one heart rhythm, but how it varies if the heart beats faster, because that's what is dangerous for arrhythmia. This is called the restitution curve, which gives you how the action potential duration varies depending on the heart rate. One of the good points of the model we are using, the Mitchell-Schaeffer model, is that you've got an analytical formula for this curve depending on the parameters of the model. So if you've got data on this curve, you can directly fit it. That's what we did, with a fit of the analytical curve to the different data points we have: if you pace at different frequencies, you can fit these restitution curves to the data.
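To make this concrete, here is a minimal sketch of the two-variable Mitchell-Schaeffer cell model and of fitting a restitution curve of the analytical form it admits. All parameter values and data points below are illustrative only, not from the study:

```python
# Mitchell-Schaeffer model: transmembrane potential u and gating variable h,
# integrated with forward Euler; plus a direct fit of the restitution curve.
import numpy as np
from scipy.optimize import curve_fit

TAU_IN, TAU_OUT, TAU_OPEN, TAU_CLOSE, U_GATE = 0.3, 6.0, 120.0, 150.0, 0.13  # ms

def mitchell_schaeffer(t_end, dt=0.01, stim_t=(1.0,), stim_dur=1.0, stim_amp=0.5):
    n = int(t_end / dt)
    u, h = np.zeros(n), np.ones(n)
    for k in range(n - 1):
        t = k * dt
        j_stim = stim_amp if any(t0 <= t < t0 + stim_dur for t0 in stim_t) else 0.0
        j_in = h[k] * u[k]**2 * (1 - u[k]) / TAU_IN   # fast inward current
        j_out = -u[k] / TAU_OUT                        # slow outward current
        u[k+1] = u[k] + dt * (j_in + j_out + j_stim)
        dh = (1 - h[k]) / TAU_OPEN if u[k] < U_GATE else -h[k] / TAU_CLOSE
        h[k+1] = h[k] + dt * dh
    return u, h

def apd_restitution(di, tau_open, tau_close, h_min):
    """APD as a function of diastolic interval (analytical form of this model)."""
    return tau_close * np.log((1 - (1 - h_min) * np.exp(-di / tau_open)) / h_min)

# Fitting the curve directly to measured (DI, APD) pairs, as described above;
# the "data" here is synthetic, with a bit of noise added.
di = np.array([50.0, 100.0, 200.0, 400.0, 800.0])                    # ms
apd = apd_restitution(di, 120.0, 150.0, 0.2) + np.random.normal(0, 2, di.size)
popt, _ = curve_fit(apd_restitution, di, apd, p0=(100.0, 100.0, 0.3))
print("estimated tau_open, tau_close, h_min:", popt)
```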
So then, you've got your model, you've got your algorithms to estimate the parameters, but how do you validate this? We started to work with a group in Toronto, with Mihaela Pop at the Sunnybrook centre, because she set up a very nice experimental protocol where, from a perfused heart, an extracted heart, mostly porcine or rabbit hearts, she paced the heart and then measured the propagation through optical imaging. The idea is that you inject a fluorescent dye into the heart, and if you use special cameras at special wavelengths with special filters, you can see these kinds of images where, when it turns white, it means that the tissue is activated. Additionally, she then put some markers on the heart and put the heart in the MR scanner, where she measured the shape and the fibers of this heart, and through the markers she can fuse both. So in the end we've got a 3D mesh with fibers for this heart, and also, projected on the surface, the activation times and action potential durations for this precise heart, measured through optical imaging.

So if you run the algorithms I just presented, you can get this kind of difference. This is if you take just the generic parameters of the equations, and this is after personalization. You can see especially the repolarization, which is very complex here, is well captured by the personalized model, while here it's very homogeneous, so the repolarization just follows the depolarization. Then the second question is: OK, it's nice to be able to reproduce the data, but you already have the data, so how much can you predict with your personalized model? The idea is that once we had estimated these parameters, we then paced from another place, and she had also measured pacing from this place. So we could compare the prediction of the pacing at this location to the actual data. And we did that for different locations, but also different pacing frequencies, because we had also estimated the restitution.

So that's nice, because it shows that you can personalize to experimental data and predict from there; but that's still very rich data. We had the fibers, we had this very detailed optical mapping. So how far can you go with clinical data? We tried this during a European project with UPF and King's College and others, where there was a work package on ventricular tachycardia, and there was a very unique protocol that used a non-contact mapping of the heart: you put a balloon of electrodes inside the heart and it maps the electrical activity on the whole endocardium at every beat. And they used this mapping during the whole VT stim protocol. This protocol is that they put a catheter in the right ventricle and they pace faster and faster until they get an arrhythmia. They try to see: if they go very fast, does the heart get into an arrhythmia? So, is the person at risk of sudden cardiac death, of getting a tachycardia and potentially fibrillation? Or is the heart robust enough that even if you pace it fast it doesn't go into arrhythmia, and there is less risk?

The imaging data was MRI. In MRI there's what is called late gadolinium enhancement: you inject gadolinium and then, if you wait for 15 minutes, the gadolinium, which is a contrast agent, remains only in the scar tissue. The dead tissue is not flushed by the blood, so the gadolinium arrives there and stays there. So what appears very white and bright here is the dead tissue; it's fibrosis that you can segment and integrate into your model. And we had this balloon of electrodes I was mentioning. We also put some electrodes on the torso to try to look at the inverse problem as well, but I won't present this here.
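On the scar segmentation step just mentioned, a common simple rule is intensity thresholding against remote, healthy myocardium. A minimal sketch; the masks, array names and the 5-SD rule here are my assumptions, not details given in the talk:

```python
# Hedged sketch of scar segmentation from a late gadolinium enhancement volume,
# using the "mean + n standard deviations of remote myocardium" rule.
import numpy as np

def segment_scar(lge, myo_mask, remote_mask, n_sd=5.0):
    """Voxels inside the myocardium brighter than remote mean + n_sd * std."""
    remote = lge[remote_mask]                     # intensities of healthy tissue
    threshold = remote.mean() + n_sd * remote.std()
    return myo_mask & (lge > threshold)           # boolean scar mask
```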
So then the next step is to fuse all these data, which in itself is often a challenge. But in the end, if you do all this, you have your mesh, your scar from late gadolinium enhancement MRI, and your activation times, in color here, from the catheter mapping. This is the kind of data we had. You can see here, these are all in normal sinus rhythm, so for all these beats we have this kind of mapping and we see the activation wave. Then the protocol started, so they paced the heart from these locations, the dots are where the pacing catheter was, and we also mapped the activation for this. And then, at some point, the heart went into tachycardia; you can see here the signals getting very fast. So we also have a map of the circuit and how the tachycardia evolves in this heart.

So here are the results. What is important in this kind of data is what they often call the exit points, because the tachycardia is often a circuit: the wave is rotating somewhere, often between different parts of fibrosis, and the exit point is where it starts again for the next rotation. And what they say is that if you burn this tissue (the kind of therapy they aim for is catheter ablation: you go in with the catheter and burn some cells to try to prevent the heart from getting into tachycardia), if you cut this line here, if you burn where the exit point is, the wave won't be able to rotate again, so this heart cannot go into tachycardia again, at least not into this circuit. And this is the bull's-eye plot, a projection of the heart onto a disc, because it's easier to read than 3D. The red is the early activation, so that's the exit point, and the wave is rotating like this when it's in tachycardia. In this case it's indeed there. And that's the second patient where we had this data, again with the exit point and the rotation of the tachycardia for this patient.

So what we did is run the same algorithm that I showed on the experimental data on this data. We used the activation times from the pacing, and also the action potential duration for different pacings, and once we had these restitution curves and conductivity parameters, we simulated the stimulation protocol: we put the virtual catheter in the right ventricle and started pacing faster and faster to see if this heart would get into tachycardia. And we did get this tachycardia in the computer model. So the question is, how close is it to the actual tachycardia that this patient had? Actually, it was not too bad, because the exit point was quite close to the one that was observed here. And the advantage is that the clinical data is just on the endocardium, but here we've got the full activation in the whole myocardium. For the second patient it was quite well located as well. Interestingly, in our model the wave was rotating in the other direction: it was the same circuit, but in the other direction. But in the end it's often kind of symmetric, because the equations we're using are also close to the eikonal, light-path equations, which don't depend on the direction. What is nice also is that you can have, as I said, the full 3D circuit, and it can help to know where the ablation could be the most useful. So where should you cut this path? In particular, the catheters come from the inside, from the vessels, so they're inside the heart; but sometimes the best place to cut this path is reached from the outside.
So now there are these epicardial interventions, where in fact they approach the heart from the outside: they put the catheter on the external surface of the heart and they can then ablate from the exterior. So this kind of data could maybe help decide whether they should approach the circuit from the inside or the outside. Another application is that here we just test what they call inducibility: is this heart at risk of getting into tachycardia? But when they do that, they just pace from one place; they put the catheter somewhere and pace faster and faster. Maybe if they paced from somewhere else the result would be different, but they cannot go anywhere in the heart and pace. On the computer model, you can pace from anywhere. So you can reproduce this stimulation protocol from any point of the mesh and see what the envelope is, what the total number of exit points you find is, and which areas you should really go and ablate. That's the kind of risk map, or target map, that ideally could be produced by this kind of computer model.

So again, some short conclusions, which I probably already mentioned. Clinical data is challenging, and in this case it was very rich clinical data again: long procedures with full mapping. So the next step is, can we go to less invasive data? Currently we are trying to look at torso data, because there's a company now, and there has been a lot of interest for some time, around reconstructing the cardiac activity just from electrodes on the torso. So you just put on a jacket of electrodes and, by solving the inverse problem, you measure more or less what's happening in the heart. And then, yeah, we need many more patients. Any questions on this?

So then you've got your electrophysiology, which actually is there to create the contraction, to control the motor. So now we want to look at the mechanical aspects. The pathology we've been looking at was heart failure, when the heart doesn't manage to pump enough blood. One very promising therapy is what's called cardiac resynchronization therapy, where you put leads in the heart so that you resynchronize it, so that everything contracts at the same time. Because in some heart failure patients the problem is that the heart is not contracting all at the same time, and it's not efficient if some parts are contracting while others are not; it's not really a good pump to eject the blood. But there are still lots of questions about this therapy, so maybe cardiac modeling could in some cases help optimize it.

So again, the complexity of the model depends on the question you want to answer. Here we're looking at more global efficiency aspects of the heart, so maybe we don't need a very detailed biophysical model of the cardiac cell. We used the Bestel-Clément-Sorine cardiac model, which is based on the Hill-Maxwell rheological model but has been built from a kind of multi-scale approach. In the end it doesn't have that many parameters. It may not be very nice to look at, but compared to other mechanics equations for representing this kind of active, non-linear, visco-elastic, anisotropic material like the myocardium, it's not that complex. And the parameters have more or less a physiological meaning, so you can interpret a bit what you find, which is important.
But the thing is that even if you've got all these complex equations, you are just halfway there, because the way the heart behaves mechanically is also very much controlled by the boundary conditions: all the pressure from the blood, and what's happening around the heart. So you need to manage all the different cardiac phases, the isovolumetric phases when all the valves are closed, the ejection, the filling, and for all of these you need different boundary conditions, and you need to manage how your model moves from one phase to the other. So in the end you need to compute the blood flow too, and how the heart behaves with respect to the surrounding structures. Often you see nice simulations of a heart beating by itself, but in the body there are lots of things around it; there is the pericardium, an envelope around the heart, which influences how it moves. There are many things influencing the cardiac motion, so you need to take this into account as well. And we implemented this in SOFA, which is a kind of medical simulation framework. So we did some simple tests of changing the afterload or the preload, to see how the contraction changes and how it changes the PV loops of the heart, just to look at the global behavior of the model.

And then the idea is that you want to adjust your model more finely. You've got your simulation, and you can extract observations like volume curves or pressure curves; and then you've got your clinical data, and again you want an algorithm to minimize the difference. The way we do it is that first we do a calibration to estimate global parameters; then we go to regions, again we subdivide, we estimate per region, and we iterate. Again, the data here was from collaborative projects where we had both MR data and catheter data. This is from the global calibration, which is a way to first adjust the model globally, more or less, to the data. And the first question was, for the seven parameters we calibrated, is there any specificity with respect to the data we're looking at? All the box plots are from volunteers, like 15 volunteers, so this gives us more or less the envelope of these parameters on volunteer data. And then we had two patients, the blue triangle and the red dot, and we wanted to see how the parameters of these patients varied with respect to the normal distributions, and we found quite reasonable results: this is the contractility, this is the relaxation, this is the stiffness, and this is the peripheral resistance.

So that was for the global parameters. Then we subdivide the heart into regions and we want to estimate the contractility for each of these regions. There has been a lot of work on this too. We use a data assimilation process, something which has been very well developed for weather forecasting (that's how they adjust the parameters of the weather forecasting software), and we use the same kind of approach for the heart. The question was what to use as an observation. In our case, we looked at the regional volumes, which is the volume spanned by a given region during the cardiac contraction. Because in the end, from this kind of data, it's hard to get much more than that; if you would like detailed local strain or things like that, it's not really visible. But this kind of radial motion of a region is more or less observable, and even in echo, in 3D echo, you can get good measurements of it.
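As a sketch of what such a "regional volume" observation could look like computationally, here is one plausible construction: sum the signed tetrahedra formed by each region's endocardial triangles and a reference point. The mesh layout, labels and barycenter reference are my assumptions, not details given in the talk:

```python
# Regional volumes from a labeled endocardial surface mesh (toy construction).
import numpy as np

def regional_volumes(vertices, triangles, labels, n_regions=17):
    """vertices: (n,3); triangles: (m,3) vertex indices; labels: (m,) region ids."""
    ref = vertices.mean(axis=0)                       # crude cavity reference point
    vols = np.zeros(n_regions)
    for tri, lab in zip(triangles, labels):
        a, b, c = vertices[tri] - ref
        vols[lab] += np.dot(a, np.cross(b, c)) / 6.0  # signed tetrahedron volume
    return vols
```

Tracking these per-region volumes over the cardiac cycle gives the kind of curves that the personalization then tries to match.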
So again, these are the measurements of these regional volumes, and these are the regional volumes we found after personalization. And again, we looked at the parameters. Now these are for the 17 different regions of the left ventricle, and we look at the distribution of these parameters. Again, the box plots are from the volunteers, so it's more or less homogeneous around these values, and we found that the heart failure patients did have a lower contractility for these parameters. Interestingly, there is one who still had quite high parameters, and he was a non-responder to this therapy. So maybe this kind of parameter could also be informative for trying to predict the response to therapy.

So then the question is, again: you've got a personalized model, so you managed to represent the data well, but you'd like to use the model to predict what will happen with therapy. Is it predictive? For this, we used a personalized model like this, with electrophysiology, motion and contraction adapted to the data of the patient. Here we had pressure for this patient: this is the pressure curve, with the data in red and the simulation in dashed blue. And this is the time derivative of the pressure, because clinicians look at this peak, the maximum value of the slope, as an index of the efficiency of the heart and of whether the therapy works. So we personalized on the baseline data of this patient and then we simulated the stimulation. As I said, resynchronization therapy is to put electrodes in the heart and stimulate it so that it contracts at the same time. So we added a stimulation electrode here, simulated the electrical activation that would happen with such therapy, and simulated the resulting mechanical contraction. And we found the change in the slope of the pressure predicted by the model. For this case, we had a very good agreement with what was actually measured in the patient when they did pace this heart at this location. And we did it for different pacing conditions in this heart with very nice results, and also in a second heart, where the results were quite good as well.

Just a slide on the extensions we are working on: to be more robust and faster in the personalization, we try to couple such a 3D model with a 0D model of the heart, like the ones some of you played with during the hands-on with CellML and OpenCOR, so that we use the fast and robust aspects of the 0D models to speed up the 3D personalization. So again, these are nice pictures on two or three patients, but if you want to be convinced, and to convince clinicians, you need a bit more than that, so there is still a lot to do. Again, the choice of the model is always a bit critical, and it needs to be made for each precise question of your clinical application.

So now I'll switch to the second part, which is on the statistical approach. It's more group-wise: if I've got a group of patients, how can I analyze a patient at the group level? The first example is on statistical models of shape, and using that to look at the remodeling of the heart with a pathology; this was the PhD work of Tommaso Mansi. So I'll start with a question.
It's nice to say you want to do statistics on shapes, but how do you actually do statistics on shapes? If you want to compute the mean of two shapes, say a square and a circle, how would you do it? No idea? It's Friday, after a long week. Well, there have been different ways to tackle this problem, but a way that has been emerging more and more in the community, and that we are using, is that rather than trying to compute statistics on the shapes themselves, you work on the deformations between these shapes. The idea is that it's hard to compute the mean of these two shapes, but if you compute the deformation from one shape to the other and you stop halfway, then maybe you've got a good approximation of the mean shape, in a way. So one way of looking at this problem is to look at deformations, do your statistics on the deformations, and then, by applying these statistics to one of the shapes, you get your mean shape, and you can get the main directions of variation in the same way. If you want to do a principal component analysis and all these kinds of things on shapes, which are difficult to handle, maybe working on deformations is easier.

Now, deformations are not that easy to work with either, but there are ways to parametrize deformations so that you are in a kind of vector space and you can do these kinds of computations. So what we use is a kind of forward model approach, where we say that the shape of a given patient is a mean shape that you deform towards this patient, plus some residual. So you have some kind of mean shape, and to describe each patient, you describe them through the transformation that maps this mean shape to the given patient's shape. To do this, there has been a lot of work on algorithms with very nice properties; you would like the transformations to be diffeomorphic, because otherwise it's difficult to work and compute with them. And LDDMM is one of these frameworks, which is what we use for the computation. So you have all these transformations between your atlas, your mean shape, and all the different patients of your group.

Then you still need to compute this deformation, the transformation between two meshes. And the problem is that when you work from medical imaging or from different sources, you don't have a one-to-one mapping between the points of your meshes: one point in one mesh doesn't correspond to the same point in the other mesh, you may not even have the same number of points, the points have nothing to do with each other. You need ways to work between meshes without assuming correspondence between points. And that's what has been developed over the years through the mathematical currents, which use an idea of the flux of vector fields through the mesh. It doesn't depend on where your points are; you just look at the flux of vectors through your mesh. It's a nice way to be independent of whatever discretization of your shape you have, in whatever mesh.

Then, as I said, in this case I wanted to use this to look at remodeling, so how the shape evolves with the pathology. It's very difficult to have the same patient and scan him every year and build a really longitudinal model of this heart. So what we use is what is called a cross-sectional approach: you have a group of patients of different ages.
And from this group, you try to build a longitudinal model: how would the shape vary across time? So we did that for tetralogy of Fallot patients. It's a congenital heart disease: babies are born with a quite severe problem of the heart, so they are operated on at around six months of age. But during the operation, the pulmonary valve is damaged, or there's even not much of a pulmonary valve left. So the right ventricle will start to dilate as they get older, and at some point you need to implant a prosthetic valve, so that the right ventricle doesn't dilate too much. But once you start implanting valves, you need to replace them regularly; the person is then in a cycle of surgery, so you would like to wait as long as possible before implanting. On the other hand, the idea when you implant the valve is that the heart will come back towards its normal shape, because it's able to function better, so it doesn't have to be so dilated; but if you wait too long, it's so dilated that it cannot really go back to its shape. So you don't want to wait too long, but you don't want to implant too early. That's why clinicians were interested in having a kind of average evolution of the shape of these ventricles, and in being able to detect when a patient is really getting into the zone where he should be implanted.

These are all the different shapes of the right ventricles of the patient group we were working on, like 50 patients. You can see it's very variable, and the ages were spanning from around seven or eight to 23 years old. So the cross-sectional idea is: can we build one model of the right ventricle evolution across this time scale? With the method I just described, we computed a mean shape over all these right ventricles. And how do we evaluate this atlas? We looked at which patient's shape is the closest to this mean shape, and actually the closest patient was the one with the mean age and the mean body surface area. This tells us that the mean shape is probably not completely stupid, because we found a shape that is close to the middle of the group in age and size. Then what we did is a regression between the main modes of shape variation we found in this group and the BSA, or more or less the age. And by doing this regression and moving along this mode, we can reconstruct a kind of average time evolution of the right ventricular shape for this group of patients across this age span.

So we found this. It's nice, but then how do we evaluate it? We showed it to the clinicians, and by looking at this remodeling model, they could reinterpret it through what they knew about this pathology, and they knew that some of these aspects were things they were observing in their patients: how the septum shape changes, or how the different valves evolve. So that was the kind of evaluation we did. The next stage, which we are working on now, is that actually the best validation would be to have a patient that you follow across time, with an image at one age and an image at a later age: if I apply my evolution model to this patient, do I predict the right shape when you image him a few years later? But this kind of approach takes much longer, because you need to wait to have the data. So that's it for the shape.
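To make the mode-regression step concrete, here is a toy sketch: PCA on per-patient deformation parameters (stand-in random data here; the real pipeline works on LDDMM deformation parametrizations, not raw vectors), then a regression of the main mode against age:

```python
# Toy sketch of the cross-sectional idea: PCA over deformation parameters,
# then regressing the dominant mode's coefficient against age.
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_params = 50, 3000              # e.g. stacked deformation parameters
ages = rng.uniform(8, 23, n_patients)
X = rng.normal(size=(n_patients, n_params))  # stand-in for real deformations

Xc = X - X.mean(axis=0)                      # center
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)   # PCA via SVD
scores = U[:, :4] * s[:4]                    # coefficients of the first 4 modes

# Linear regression of the first mode score on age; moving along this fitted
# line and applying the corresponding deformation to the mean shape gives an
# average remodeling across the age span.
slope, intercept = np.polyfit(ages, scores[:, 0], deg=1)
print("mode-1 slope per year:", slope)
```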
But the heart is not a static shape; it's beating. So we'd like to also extend this kind of approach to the deformation, to the contraction of the heart. And the problem is that the deformation has a lot of parameters: if you want a vector field at each time step describing the 3D deformation of the heart, how do you compute statistics on such huge data? So we proposed a kind of reduced model of the deformation, by cutting the heart again into small regions, and for each of these regions we just say there is one affine transformation: the region can be translated, rotated, or take some shearing and scaling. That's what is called poly-affine. We developed a mathematical framework to nicely combine the different affine transformations, so that the resulting transformation is smooth and well-behaved. In a way, it's a kind of projection: you could use any algorithm to track the cardiac motion in your images, so you've got your vector fields of deformations, and then we project this field onto the space of poly-affine transformations. So we find the closest approximation of this field that can be represented by this combination of affine transformations. For the regions, we use the AHA regions, from the American Heart Association, which is the way radiologists and cardiologists cut up the heart so that they can easily standardize discussions and findings. And for each of these regions, we have an affine transform with the scaling-rotation part and the translation part. So in the end, with 17 regions, we've got around 200 parameters per frame. If you compare this to other tracking algorithms, where you can have thousands or millions of degrees of freedom per frame, it's quite reduced. And the accuracy is of the same order of magnitude as what was found in the literature. So that's the result of the poly-affine tracking on this healthy heart.

So that's good for tracking. But the point of reducing this was to be able to do some population-based analysis. And then it's quite easy, because for every patient you cut the same regions, and for each region you just have an affine matrix. So in a way, you just have to average: you compute your statistics on your affine matrices for each region to do your group-wise analysis. So that's the kind of mean model we found; you can apply your mean poly-affine transformation model to an image to see the mean motion applied to this image, to interpret it. But if you want to do a more refined analysis of the parameters, the way we did it is that we used a tensor representation. You've got all these parameters for each region, for each time instant of the sequence, and for each patient, so you can build a 3D tensor, or even a 4D one if you add other parameters, because you can also cut the affine regions differently. And in fact, there's a huge literature on the analysis of this kind of tensor: how do you do statistics on such a tensor to extract the main modes, the most important parts of the tensor? So these are two of the most well-known decompositions of such tensors.
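As an illustration of what such a decomposition looks like in practice, here is a minimal sketch assuming the tensorly Python library, with made-up dimensions (17 regions, 30 frames, 12 affine parameters, 20 patients):

```python
# Tucker decomposition of a (regions x time x parameters x patients) tensor,
# to inspect each dimension's dominant modes separately. Toy random data.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

X = np.random.default_rng(0).normal(size=(17, 30, 12, 20))
core, factors = tucker(tl.tensor(X), rank=[3, 3, 3, 2])

region_modes, time_modes, param_modes, patient_modes = factors
# time_modes[:, 0] is the dominant temporal profile; comparing it between a
# volunteer group and a patient group is how slope/timing differences show up.
print(time_modes.shape)   # (30, 3)
```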
So in our case, we used the Tucker decomposition, to then look at the different dimensions independently and see what's happening in this group. One finding, for instance, concerns the main modes we found in the temporal dimension: if you look through the cardiac cycle, comparing the volunteers and the group of tetralogy of Fallot hearts, we could find differences in the slope, for instance, and in the timing. So this shows that this statistical group analysis does capture differences in the cardiac contraction between the two groups. But you can also look in space; the other dimension is the transformations, our affine transformations. In the same way, you can look at the main modes you find. For the healthy controls, you find the contraction and maybe a bit of the torsion. And for the tetralogy of Fallot patients, we had a very clear motion from the left ventricle towards the right ventricle. Actually, if you look at this tetralogy of Fallot patient image, you can clearly see this kind of motion of the left ventricle towards the right ventricle. So again, this tensor decomposition, this group analysis, captured this kind of group behavior quite well.

So that was to present the more statistical aspects. And now, to finish, I'll go through applications where we try to combine both, the biophysics approach and the statistics approach, and see how they can work together. The first was the work of Adityo Prakosa, where we tried to use models to generate data. The initial question was a bit tricky. It's difficult to measure the electrophysiology of the heart: you can put in catheters, you can try to solve the inverse problem, but it's still not working that well. Could we see it from the images? In the end, you see the heart beating, and you know that the contraction is the result of the electrical activation. So by observing the contraction, could you go back to the electrical activation? But that's a very complex relationship, because it's not just one cell contracting; there are all the effects of the blood, the pressure, the different spatial aspects. So it's not just an easy inversion of a phenomenon. If you want to study it, you need a large data set, especially as we couldn't write a direct inverse model; we thought we would just try to learn it, but then you need a lot of data. So the idea is: can we use the electromechanical model to generate data for this learning application?

What we decided to do is what we called a patient-specific database. It's not that we're going to create one huge database and learn on it; we're going to take a patient, use what we know about this patient, generate a database around this patient, and then do the learning kind of locally, and see if from this learning we can estimate something. And as we are observing through an image, we thought we shouldn't just use the model output for the learning; we should simulate the image and go down to the image level, so that what we learn is really comparable to what we're going to extract from the medical image. There's also a lot of research on simulating synthetic images. Some people approach it from the whole physical process: you know all the laws of physics involved in image acquisition, so you can try to reproduce them with the computer.
In our case, we wanted a faster and simpler approach, so we thought, let's just start from an existing image that we already have for this patient, and modify it so that it corresponds to a synthetic image fitted to our model. So basically, this is an echo image, a real image, and we've got a model. In the end, we want an image where the motion is the one we simulated with the model, and not the original one. To do this, the approach we chose at the time was to first do a registration of the sequence, so that we warp all the images back to the shape of the first image. We get this: more or less, a heart which is not beating anymore. We stabilized the sequence, because we deformed all the images so that they fit the first one. Once we have this frozen sequence, we extract a mesh and run an electromechanical simulation, so we've got a simulated deformation. And then we combine this frozen sequence and this motion to generate the synthetic sequence, where we have different images corresponding to the different positions of the mesh along the sequence. In the end, we've got this kind of result, where this image is beating the same way the model is beating.

So here are some examples on cine MRI. One of these rows is a real sequence and the other one is synthetic. Who thinks the first one is the synthetic one? You just have to vote, you don't have to speak. No? You don't even want to vote? Well, on cine MRI you can see some artifacts; people who know MR would probably guess that this is the synthetic image. And if we superpose the contour of the model that was used to simulate this image, you can see that in this case it fits the image well, while here the original image is not moving like the model.

So, as I said, we don't want to generate a huge generic database; we want to generate a patient-specific database. We have an image of this patient, we get the mesh, and we run EP simulations for different initial positions. We also use what we know about this patient: he had a left bundle branch block, which means that the electrical activation just comes from the right ventricle. So we can test different initializations of the electrophysiology from the right ventricle; we don't have to initialize in the left ventricle. In this case, we simulated 144 cases for this patient. Once we have all these synthetic images, we need to extract the motion from these images again, because that's what we're going to do with the real data. We need to use the same tools to extract the motion from the images, and learn the correlation between what we extract from the images and what we know of the electrophysiology, which we know because we modeled it. As descriptors we used the mean displacement of the regions, and descriptors around the strain tensors of the regions, like the trace and the determinant. So the first thing we've got is a kind of ground truth for all this, from the model directly, and we've got what we can estimate with our image processing tools. And there are some differences, especially in amplitude. So one of the first things we decided was to normalize these features, so that even if there are differences between our simulations and what we extract from images, it will not impact the learning too much. And then we used a kind of ridge regression to learn the link between the activation times of each region and the kinematic descriptors of each of these regions.
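A minimal sketch of this learning step, assuming scikit-learn; the array sizes are made up to roughly match the numbers in the talk (144 simulated cases, 17 regions), and the data is random stand-in:

```python
# Ridge regression from normalized kinematic descriptors to regional
# activation times, trained on the simulated database.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X_train = rng.normal(size=(144, 60))           # 144 simulations, 60 descriptors
y_train = rng.uniform(0, 120, size=(144, 17))  # activation time per region (ms)

scaler = StandardScaler().fit(X_train)          # the normalization step above
model = Ridge(alpha=1.0).fit(scaler.transform(X_train), y_train)

x_real = rng.normal(size=(1, 60))               # descriptors from the real image
print(model.predict(scaler.transform(x_real)))  # predicted activation map
```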
So we then looked at all the descriptors we used, around the strain and so on, the mean displacement, the strain along the main displacement, all the different kinds of descriptors, and we looked at which ones had the most impact on the estimation of the electrophysiology from the motion. That was the first piece of information: which are the important descriptors to use if you want to estimate electrophysiology from motion. And what data did we have to evaluate this? We had the endocardial mapping of the left ventricle, done by non-contact mapping, so again with some noise and uncertainty, but we had this kind of activation map for each region of the left ventricle. And this is what we want to guess from the motion, basically. We had this for three different patients, with the associated cine MR images. And these are the results: this is what the ground truth told us, and this is the prediction from the motion. The accuracy is still limited, but the pattern is more or less captured. And the thing is that we are using cine MRI to describe the motion, and the temporal resolution, and even the spatial resolution through the slices, is not that great. So that's probably not the best motion data to use for this kind of thing, because you would like a very high temporal frequency, as the electrical phenomenon is very fast; but that was the data where we had both the motion and the electrophysiology. It was to illustrate how you can combine modeling and learning for this kind of application.

Then another coupling of statistics and modeling was the work of Ender Konukoglu. I showed you in the first part how to use biophysical models to make predictions; but then you give an answer, you say, OK, it's here that you need to burn. And I know that along the whole process there are lots of errors and uncertainties. In the end you can give a prediction at one point, but it would be good to give some kind of confidence in your prediction, because it could well be influenced by all these steps. So the idea was to introduce a probabilistic formulation in the model, so that the parameter estimation part also becomes probabilistic, and in the end you can give some kind of standard deviation, some kind of uncertainty, on your prediction. As you can probably guess, if you want to go from deterministic to probabilistic, it's going to have a cost, so we had to go down to a simpler model to try this. We used the Eikonal model of electrophysiology rather than the reaction-diffusion model. Here you need to solve this equation, which is actually a static equation, because your variable is the activation time: when you solve this equation, you know at which time each point of your mesh is activated. And we wanted to estimate the local diffusion coefficient, so to map the conduction velocity, and also the onset, where the wave starts from, because that's an important parameter. So we again divided the heart into regions, and we also parameterized a patch of the heart as a potential location for the onset, to find where it starts.
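For intuition, here is a minimal eikonal solve on a toy 2D grid, assuming the scikit-fmm package (a fast-marching eikonal solver); the velocity values, grid and slow "scar-like" patch are invented:

```python
# Eikonal sketch: given a conduction-velocity map and an onset location,
# solve for the activation time everywhere via fast marching.
import numpy as np
import skfmm

n, h = 100, 0.1                        # grid size and spacing (cm)
speed = np.full((n, n), 50.0)          # conduction velocity (cm/s)
speed[40:60, 40:60] = 10.0             # a slow, scar-like patch

phi = np.ones((n, n))
phi[5, 5] = -1.0                       # onset: zero level set around this point
activation = skfmm.travel_time(phi, speed, dx=h)   # activation times (s)
print(activation.max())
```

This kind of model is fast enough (a fraction of a second per solve) to be embedded in the probabilistic estimation discussed next.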
To start with, you need to treat your parameters as random variables, and you write the joint distribution of everything. You have to put assumptions on your observations and on your model, and then you end up with a big formula like this. If you want to compute the posterior distribution of your parameters knowing the data, what you measured, you're supposed to evaluate all this, which in practice you cannot do analytically for this kind of model. So you need sampling methods, but here we had like 20 parameters, so it can explode quite easily if you use sampling in such a high dimension. Even for this very fast model (one simulation took less than a second), the sampling would explode in time. So the proposal at the time was to use a spectral method that was emerging and quite popular then, called polynomial chaos expansion: you represent your uncertainty in a reduced manner. The way you do it is to say that your random variable can be represented by a series of polynomials, rather than having a full distribution; in the end, what you need to estimate is just the coefficients of your polynomials. And depending on what kind of prior you put on your parameters, there is a different family of polynomials that you need to use to ensure that you can represent it well in your probability space. The idea of the polynomial chaos expansion is that your resulting state variable will be represented by the same family of polynomials, so in the end your activation time is also a combination of these polynomials. So what you need to estimate to get your probabilities are these coefficients in front of the polynomials. As I said, it depends on the priors: we took uniform distributions for the diffusion, because we had no real prior on it, so we had to use the multivariate Legendre polynomials, and the same for the onset.

So you reduce your complexity by using these polynomials. But if you want to estimate these coefficients, you still need to solve the problem a lot, a lot of times, and it can be very expensive. So the other nice idea of Ender was to combine this with compressed sensing, which was also quite popular at that time. We know that everything is connected through a model that doesn't have that many parameters in the end; it's not like everything is independent, everything is kind of controlled, so the final behavior is quite constrained. So probably, with only a few data sets, we can still estimate what's happening, because we know that we shouldn't need that many polynomials, or that high a degree of polynomials, for such a model. That is the whole idea of compressed sensing: knowing that you are sparse in some kind of space, you shouldn't need too much data to estimate your model.
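A toy sketch of this sparse polynomial chaos idea in one dimension, assuming scikit-learn for the l1 (compressed-sensing-style) fit; the model function and all values are invented:

```python
# Sparse polynomial chaos fit: expand a scalar model output in Legendre
# polynomials of a uniform parameter, estimating few coefficients from few runs.
import numpy as np
from numpy.polynomial import legendre
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)

def model(theta):                 # stand-in for one fast eikonal simulation
    return np.cos(3 * theta) + 0.1 * theta

theta = rng.uniform(-1, 1, 25)    # only a few samples of the uniform parameter
y = model(theta)

degree = 12
Phi = legendre.legvander(theta, degree)            # Legendre design matrix
fit = Lasso(alpha=1e-3, max_iter=50000).fit(Phi, y)  # l1 promotes sparsity
print("non-zero PCE coefficients:", np.sum(fit.coef_ != 0))
```

The Legendre family matches the uniform prior mentioned above; other priors would call for other polynomial families in a generalized chaos expansion.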
So we ran this with compressed sensing on the data we had, which again was imaging with the scar and some electro-anatomical mapping. In this case, we thought that one way to test this framework was to look at a question that people working in this area can have: if you want to estimate a personalized electrophysiology model of the heart, is it more important to have data from inside the heart or from outside the heart? Because in the end, you know that if you want data from inside the heart, you need to put in a catheter; if you want data from outside the heart, maybe an ECG or electrodes on the torso could be enough, but is that good enough to estimate your parameters? So what we did is use the probabilistic approach to get some notion of confidence in the parameters depending on whether we use data from the inside of the heart or from the outside of the heart.

For this, you also need some kind of uncertainty model of your data. So what we used: all these points here are the catheter data, so that's where the measurements are — where, according to the catheter workstation, the catheter was when the point was measured. And the mesh is the mesh from the imaging, and you can see the points are not really on the mesh, basically. So as our measure of uncertainty on the data, we used the distance between the measurement points and the closest point from the image; that's the color here. If it's more red, it means the point was further away, so you are probably not really certain of the location of this data, because the point was not really matching the mesh, and you don't really know which point of the mesh corresponds to this measurement.

Using this framework, here you can see the mean value of the conductivity, the parameter we estimate, and then these columns are the standard deviations for this parameter. That's the benefit of the approach: we don't just have one value, we also have a standard deviation. This is on the endocardium, where we try to estimate the Purkinje, the fast-conducting system; here you can see that the standard deviation is smaller — red means bigger — so it's smaller if you use endocardial data, which is logical because we are really looking locally at what's happening on the endocardium. But if you look in the volume of the muscle — and we want to estimate the parameters inside the muscle, which is here — you actually see that using the epicardial data only, you get a number closer to what we assume is the best value, which is what you get when you use all the data from outside and inside; that case is very constrained, so we take it as the best value we can get. In fact, you're closer to this value when you use epicardial data than when you use endocardial data. To me, at first, that was a bit of a surprise, because I was thinking that as the electrical wave in the heart starts from the inside, if you really estimate what's happening on the inside, then you control more or less what will happen afterwards, because it's quite causal. But in the end, if you look from the outside, you see the end of the story, which has been impacted by everything that happened while the wave went through the muscle. So maybe you've got more informative data from the outside, because it has been impacted by many more parts of the heart, whereas the endocardial data is just the onset, and whatever happens afterwards you don't really observe. That was also an interesting finding, and a nice approach.
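A small sketch of that data-uncertainty heuristic, with random point clouds standing in for the real catheter points and endocardial mesh (the exponential mapping from distance to confidence is an illustrative assumption, not the original formulation):

```python
# Sketch of the data-uncertainty heuristic described above: the further a
# catheter measurement lies from the segmented mesh, the less we trust it.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
mesh_vertices = rng.normal(size=(5000, 3))       # endocardial surface mesh nodes (stand-in)
catheter_points = rng.normal(size=(200, 3))      # mapping-system measurement locations (stand-in)

distances, nearest_idx = cKDTree(mesh_vertices).query(catheter_points)

# Turn distance into a confidence in (0, 1]: close to the mesh -> ~1, far -> small.
scale = np.median(distances)
confidence = np.exp(-distances / scale)
print("least trusted measurements:", np.argsort(confidence)[:5])
```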
The next one is to try to couple the two parts, shape statistics and biophysical modeling, a bit more directly. The idea was that we have data on pulmonary arteries, also for valve implantation, and the question was: can we look at an average shape, but also at an average behavior, an average model, on this average shape? So the idea is the same, to build an atlas from all these meshes, and once we have this, we combine it with a CFD simulation, so solving for velocities and pressure on a mesh. The point is that it is very lengthy to compute this kind of CFD simulation on a mesh, and maybe we just want to understand the main features, so we want to couple this statistics part with the modeling part and compute a reduced-order model of the simulation.

This is what is called proper orthogonal decomposition in this area; it's a kind of PCA or SVD. You look at the main modes of your variables, and it is solved here with the same kind of approach as an SVD. In the end you get these modes, which are the main distributions of pressure and velocity in your pulmonary artery, and with only 30 modes you can represent the full solution on the 20,000 basis functions quite well.

So the idea was that we have a mean atlas of the shape and a reduced-order model of the CFD on this mean atlas; then how can we use that for different patients? The idea was to transport this mean shape and mean simulation to a patient; then we could use patient-specific boundary conditions and solve with this reduced model on the new geometry. To transport, we already had the registration between shapes, so we could transport the locations of the nodes, but we also need to transport the basis of the reduced model. And for this you need to be careful, because, for instance, you have a divergence-free velocity field in your simulation, and when you transport it you don't want to break this kind of physical constraint, so we had to use a specific transport which preserves this property. For the pressure we could transport it more simply, because we didn't have that constraint.

Here is an evaluation of the full model on this patient geometry compared to the transported reduced model. On the velocity the error is larger in percentage — more than 20%, which is a bit high — but on the pressure it was quite good, and the estimation of the pressure drop, which is also an important parameter, was reasonable. It's a first evaluation. If we look at the results, this is the simulation from the reduced model and this is the original simulation, and you can see that it's quite close. But the question was also how predictive it is for new boundary conditions: if, in this case, we add some regurgitation in the model, is the reduced model able to produce a good simulation as well? And it was fitting quite well with the full simulation with regurgitation too.
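A sketch of the proper orthogonal decomposition step on synthetic snapshot data — sizes are only indicative of the roughly 30 modes versus 20,000 basis functions mentioned above, and the snapshots are random stand-ins for the CFD fields:

```python
# Sketch of POD: stack simulation snapshots as columns, take an SVD,
# and keep the leading modes as a reduced basis.
import numpy as np

rng = np.random.default_rng(3)
n_dof, n_snapshots, n_modes = 20_000, 200, 30
snapshots = rng.normal(size=(n_dof, n_snapshots))    # pressure/velocity fields (stand-in)

mean_field = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean_field, full_matrices=False)
basis = U[:, :n_modes]                               # POD modes

# Reduced representation of a field: project onto the basis, then reconstruct.
new_field = snapshots[:, [0]]
coeffs = basis.T @ (new_field - mean_field)
reconstruction = mean_field + basis @ coeffs

captured = (s[:n_modes] ** 2).sum() / (s ** 2).sum() # fraction of variance kept by the modes
print("captured variance fraction:", captured)
```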
And to conclude, more recent work from Rocío Cabrera, also here, on combining machine learning and modeling more directly. So again, back to electrophysiology: the idea is that some people think that local abnormal signals, when measured with catheters, can be a good signature of locations where you need to burn. They try to find, directly in the heart, specific features in the electrical signals that tell them that this is a pathological location and that probably you need to burn it. But it is difficult if they have to go all around the heart and look at all the signals to see whether they are abnormal or not. So the idea was: can we produce, before the intervention, a kind of target map of the locations where we believe there are probably some abnormal ventricular activities, so that they go and check there first to see if they need to burn?

So, clinical data again from imaging — the scar from delayed-enhancement MRI — and electrophysiological mapping. On the machine learning side, we used random forests, but we tried to include some uncertainty in them. And for the modeling, again the Mitchell-Schaeffer reaction-diffusion model, but here we added the simulation of the catheter signals directly: not just the transmembrane potential, but really trying to reproduce what is seen in the catheters. Delayed-enhancement MRI has been shown to be able to find the fibrosis, so we used it to find the fibrosis areas and what's called the border zone, which is a kind of mix of fibrosis and healthy tissue, and we built a mesh of the heart with the segmentation of the scar fibrosis and this gray zone. Then, for the EP data, we had this catheter mapping data with all these signals, and we also had labels: an electrophysiologist gave us the labels of which signals were abnormal and which were normal, so that we had labels for the learning. And this is just a video where you can see the catheter moving and the signals being acquired. This is localized in 3D, so we can co-register our MRI-derived data to the catheter data.

So the first step is: if we have only the image, before the intervention — we've got an MRI of a patient and we want to guess where the dangerous parts are, where they should go. For this, we compute image features and we try to learn the link between these image features and the labels the cardiologist gave us for which areas are dangerous, introducing some uncertainty. For the features, for a given position of a catheter we say there is a sensing range, so more or less a sphere around it containing everything that influences the data for this catheter, and in this sphere we compute some features of the intensity: mean, max, thickness, transmurality of scar, etc. Then we also compute texture features, using some classical texture features in this area around the catheter measurement. So in the end it gives us quite a lot of features, a big list for each point, and for each point we know whether it is a LAVA (a local abnormal ventricular activity), so abnormal, or whether it is normal.

Then we wanted to introduce uncertainty into this. What are the sources of uncertainty in this kind of data? First, we have a static mesh, but the patient is breathing and his heart is beating, hopefully. If you look at the trajectory of the catheter — because we have the position of the catheter during the recording — you can see that it moves quite a lot. So you've got one signal in the end, but the signal was recorded during a displacement. What we did is fit an ellipsoid to this trajectory, and we estimate the uncertainty by the size of this ellipsoid: was it moving a lot or not while the signal was acquired?
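A sketch of that motion-uncertainty idea, using the covariance of the recorded catheter positions as a simple ellipsoid fit; the volume-based score and the synthetic trajectory are illustrative assumptions, not the original implementation:

```python
# Sketch: fit an ellipsoid to the catheter trajectory recorded during one
# measurement via the covariance of its 3-D positions, and use its size
# as a motion-uncertainty score.
import numpy as np

rng = np.random.default_rng(4)
trajectory = rng.normal(scale=[2.0, 1.0, 0.5], size=(300, 3))  # catheter positions [mm], stand-in

cov = np.cov(trajectory, rowvar=False)
eigenvalues, _ = np.linalg.eigh(cov)
semi_axes = np.sqrt(eigenvalues)           # principal semi-axes of the fitted ellipsoid

motion_uncertainty = semi_axes.prod()      # proportional to the ellipsoid volume
print("ellipsoid semi-axes [mm]:", semi_axes, "-> uncertainty score:", motion_uncertainty)
```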
Then there is also the registration error. Again, as I showed a bit earlier, you've got your measurement from the 3D localization of the catheter and you've got your mesh from the image, but they don't fit each other exactly, due to all sorts of registration and segmentation errors. So looking at the distance between your average measurement position and your mesh gives you an idea of the uncertainty between where you are going to look in the image and what your signal looks like. We combined these two uncertainties in a very crude way, by multiplying them: if the catheter motion is small and the point is close to the mesh, we say we are quite confident in this measurement, and if it moves a lot and is further from the mesh, we are less confident.

So how do we introduce this into the learning framework? When you train your decision trees, your random forest, you estimate the parameters of the split at a node by maximizing the information gain, basically looking at how many of your samples go to one side and to the other. What we did is simply add weights there: each sample is weighted by its confidence, so that it influences the threshold accordingly — it influences the training less if it's a less confident point, and if you are really confident in your data, it influences the training of your random forest more. To look at the results, we looked at the classical true-positive rates and ROC curves, and also at the target map we would produce with this approach. This is just to show that if you introduce the uncertainty, you improve the results: the blue is without putting any uncertainty into your machine learning, and the others use one uncertainty, the other uncertainty, or the combined one — that was to check that using uncertainty in this learning helps. We also looked at the predictions, at where we are failing, and at whether the confidence was low where we were failing, and it was again interesting to have some kind of confidence in your predictions: here, where we predicted abnormal and it was not abnormal, you can see from the colors that it's mostly this blue, which means low confidence in the prediction. But there are still quite a few errors.

So the question was: can we combine this with modeling to improve the results? As I was saying, we take the same kind of electrophysiological model, but we go all the way to modeling the signals of the recordings. We use literature values of the parameters for healthy, border-zone, and scar tissue to simulate the electrophysiology non-invasively from the imaging data, and we change the properties of the gray zone. Then, to simulate the catheter data: there are some very advanced methods where you really simulate the equations of the propagation of the potential through the blood and get a very detailed simulation of the potential at your catheter, but we wanted a fast, approximate method. So what we did is just estimate the local dipole by looking at the gradient of our transmembrane potential: the resulting dipole when the front is moving is more or less proportional to the gradient of the transmembrane potential. Then we assume that everything is homogeneous and infinite, so we don't have boundary problems for the potential, and where our catheter is we just sum the potentials generated by all these dipoles to get the electrical potential at the catheter — that's basically this equation. Here we have the current density, which is proportional to the gradient of the potential that we already compute — we already need this gradient to solve the equation, so basically we've got everything we need. Then it's just a sum with some scalar products and the norm of the distance between your catheter and each of these dipoles, cubed. It's quite a simple formula, but it gives reasonable results.
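Written out, and assuming the standard infinite homogeneous volume-conductor approximation (the slide's exact expression is not reproduced here), that formula would look something like

$$ \phi(x) \;\approx\; \frac{1}{4\pi\sigma}\sum_i \frac{\mathbf{j}_i\cdot\left(x - r_i\right)}{\lVert x - r_i\rVert^{3}}, \qquad \mathbf{j}_i \propto \nabla V_m(r_i), $$

with $V_m$ the simulated transmembrane potential, $r_i$ the mesh points carrying the equivalent dipoles, $\mathbf{j}_i$ the corresponding current-dipole densities, and $\sigma$ an assumed constant conductivity of the medium.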
To evaluate it, we used the five patients where we had this data: imaging, electro-anatomical mapping, and the labeling of abnormal and normal signals. Here are first the simulated signals. For normal, healthy tissue we had normal signals which looked quite OK. We didn't vary the action potential duration, though, so our repolarization wave has the wrong sign: in our simulation the repolarization goes in the same direction as the depolarization. In a normal heart, or even a pathological one, you have a varying action potential duration, so that the repolarization wave often goes in the reverse direction to the depolarization wave; that's why the T wave in the ECG, this wave here, is positive, whereas in our case it comes out negative. But here we mostly focused on the depolarization side, which is this part, and that looks quite similar. And also for abnormal locations — here, for instance, which is close to the scar — we do get some strange signals, and the measured ones as well have many changes of sign: where you should just have a simple biphasic signal, you get many more deflections.

So we looked at different features of these signals to evaluate how different they were between normal and pathological signals. We took all these features that are commonly used in the electrophysiology literature for signal analysis, and we could find differences between LAVA and non-LAVA, and the differences were more or less in the same direction for the simulated signals and the clinical signals. We evaluated this with a statistical test: when it's green, it means that both the simulated signals and the measured signals show a statistically significant difference between normal and abnormal. It's a check that we do find differences in our simulated signals where there are differences in the measurements — but not for all features, so it was also a way to select the kind of features which are probably interesting for discrimination. We can then also try to do the learning just on these simulated signals to predict the targets, and again we have some errors, and the confidence is sometimes a bit high where we are wrong, so this prediction is also limited.

That's why the idea is: can we combine both, the learning from the imaging and the learning from the modeling, and get a bit of the advantages of both approaches? So that's the combined framework, where we compute features from the imaging, compute features from the simulation, combine them, and look at what happens. We tested different ways of combining them, but I'll just present the fusion where we basically concatenate the features: we take the features from the imaging, add the features from the modeling, and go through the learning as if it were one big set of features. So the first column is image-only, the second one is simulation-only, these are different ways of combining features, and the last one is the feature-set fusion where we put all the features together. And we could see that it was improving things, in particular the sensitivity: compared to doing the learning only on the imaging or only on the modeling, doing the learning on this combined feature set improves the results, and the average results over the five patients improve as well.
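A hypothetical sketch of this feature-set fusion, with synthetic arrays standing in for the image-derived and simulation-derived features; the confidence weighting of the random forest reuses the idea described earlier, and combining the two in this way is only an assumption about how the pieces could fit together, not the original pipeline:

```python
# Sketch of "feature set fusion": concatenate image-derived and
# simulation-derived features per mapping point and train a random forest,
# optionally weighting each sample by a confidence score.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_points = 400
image_features = rng.normal(size=(n_points, 12))     # intensity/texture features (stand-in)
simulated_features = rng.normal(size=(n_points, 8))  # features of simulated catheter signals (stand-in)
labels = rng.integers(0, 2, size=n_points)           # 1 = abnormal (LAVA-like), 0 = normal
confidence = rng.uniform(0.2, 1.0, size=n_points)    # per-sample confidence (motion x mesh distance)

fused = np.hstack([image_features, simulated_features])   # the "fusion": one big feature set

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(fused, labels, sample_weight=confidence)           # weights enter the split criterion

cv_acc = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0),
                         fused, labels, cv=5).mean()
print("5-fold CV accuracy on the fused features:", cv_acc)
```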
Some conclusions on this part: MRI can help in predicting these targets, and including uncertainty in the learning helped as well. Even using a simple model of the electrophysiology and a simple model of the catheter measurements, you can still simulate signals that differ between normal and abnormal regions. And again, one idea is: can we generate data with such models to help the learning as well? We could vary the scar, for instance, and the deformations, to go to more sophisticated learning and maybe look at more complex sources for some specific features of the signals. I'll stop now. Thank you for your attention, and if you've got a question...

Thank you. I heard you say a couple of times, yeah, this is a really simple formula here, but you've mentioned so many simple formulas that in the end it becomes rather overwhelming. But this actually illustrates something which I very much like in your approach: there's not just one simple thing that you can do, or one thing to focus on. When we look at this whole field of research, especially when you go towards patients, we say we need to make a model of the patient and then we can do everything, but the model is just one small part of it; it's the whole pipeline that you need, and especially nowadays with machine learning and these kinds of things that you try to combine with it. But what I'm wondering a little bit — and especially if there are some PhD students or future researchers in here — is: how on earth do you combine that? How can you combine these things? How can you be a specialist in all these things, or find a way to collaborate with these specialists, in order to come to these results? How do you do it?

Well, I saw the word humble yesterday; I think it was also mentioned by Blanca before. If you want to work with cardiologists and mathematicians, there are already enough egos in the room, so you just have to try to talk with people and learn from them and try to understand their language — that's how it can work. You won't be a specialist in each of these areas, but you understand what they do, how they do it, and how you can apply it. When you want to use these kinds of techniques, there are always ways to contribute, even scientifically, because you always have to make decisions: it's not just, OK, implement this, or take this paper — it doesn't work like that. You always have to contribute to the modeling, contribute to the learning, to the uncertainty. There are all sorts of ways to contribute, so just talk with people, learn and interact, and then contribute where you can, which is in this integration part.

Yeah, and I also regularly see that when you build your models, you make, in a way, pragmatic decisions, saying, OK, what do we need, which type of model do we need, which approaches do we take, and so on. And this pragmatism — when do you decide it?
Is it more of a feeling, or do you each time test, say, the most complex version and then see whether you can reduce it, or do you say, no, let's go for the fastest, or...?

It's a bit of trial and error. For instance, for the reaction-diffusion, we started with the Aliev-Panfilov model, which is also a two-variable model, but when we started to look at arrhythmias and the restitution curve, we realized that the way the parameters were coupled, we couldn't really fit what we wanted to fit. So then we said, OK, we're going to go up a bit in the parameters, maybe, with Mitchell-Schaeffer, but we're going to get what we want from the restitution curve. And the other way around: with the eikonal, thinking, OK, it's very nice, let's take the eikonal equation and use fast marching, we'll get a real-time simulation. And then you want to do some arrhythmias and you realize, OK, this model is made to simulate, in one computation, the whole depolarization of the whole heart, but an arrhythmia is a circulating wave. So then you say, OK, I can try to trick it into doing it, but maybe I need to go back to reaction-diffusion for some aspects. I think I would use a middle-out approach right now.

And then a lot of what you do depends also on data, having data available from patients or optical mapping or whatever, and on the other hand, data is sparse. You talk about three, four, five patients, maybe, when you're very, very lucky. How are we going to solve this in the future? Because when you look out at the field and you look at medical publications and so on, there is a lot of data. Maybe it's not the richest, but the problem is that it's not available. What do you think we should do? Can we as a community contribute there, or how do you see these things?

I think it needs to become a push-pull between the community and the authorities, like Europe or the FDA, the kind of authorities that can push the clinical community, saying, OK, you've got this nice clinical study, but in the end we need to evaluate it on a larger basis, so you should probably provide more access to this data. I think Europe is trying to push in this direction. In the end it can be beneficial for everyone, but for clinicians this kind of thing is always quite a heavy protocol with lots of people involved, and they always want to make sure they can publish the most out of it before sharing — but it should go that way.

The problem also with these heavy protocols — something I've seen a little bit — is that in large projects proposed by engineers or physicists or modelers, they say, yeah, I need from the clinician this data, and this, and this, and this, and in the end it's a protocol which is almost impossible to carry out. So the clinicians say, yeah, no, there's no way I can do that. But then when you go to clinical reality, of course for every patient there is data, because they treat the patient, but that data is extremely sparse. So what is the best way forward, do you think: do we have to keep pushing to get better data, or do we need to find ways to cope with whatever data there is — because in reality it's sometimes crap data, but maybe it's richer to use that data than to push for other data?

Yeah, actually that's something I'd like to look more into.
I think it's a nice application also of this probabilistic framework, because if you manage to write your whole pipeline in this kind of probabilistic framework, then there are techniques to look at which uncertainty in the pipeline, if reduced, will reduce the uncertainty on the final result the most. And maybe this way you could find which are the crucial parts of the data that you really need in order to answer the question, and which are the parts that, even if you don't personalize them, don't impact your answer. I think that would be the nicest way to do it.

Yes, that's indeed an important thing to do. Some other questions?

Very nice presentation, first of all. I have a couple of questions, first about the work on cardiac resynchronization therapy and also about the cardiac laminar structure. About CRT: in my understanding you run a simulation taking into account the different positions of the catheter, is that right?

Yeah, we did try to predict different pacing locations.

Was there a registration of the position of the catheter at the time of the implant to your model?

Actually, the way it's done is that there is a tracking of the X-ray table which manages to register the position seen in fluoroscopy with the position in the MR. So these pacing sites were not really the pacing of the implant: in the EP study before the implant, they pace directly with the catheters, and through the fluoroscopy we localized these positions and registered them to the MR. So that's how we localized the pacing sites for the CRT; there was a kind of registration between the X-ray and the MR.

In the X-ray you can see just the catheter, right? How do you do that tracking?

There were some markers; we track the patient and the table, and the markers you also see in the MR. It was in the X-ray room at King's College, where it's basically all in the same room, so this way you have both the MR and the X-ray and you just slide the patient across.

And how much time did you wait to see whether a patient is a responder or not?

I think in this study it was six months.

Since you cannot do an MRI acquisition after the implant, is it important, for this kind of work, to take into account the cardiac shape remodeling?

All this work is based on the assumption that if we acutely predict the response in pressure, then we can predict the chronic response in shape. But I think, yeah, the criterion of response, both acute and chronic, should probably be refined, looking at the shape at both stages.

Just because it seems that after the implant the catheter moves from the original position, or in some cases there is a remodeling of the shape, so it changes the model.

Yeah, that's one thing we try to work on, to build more longitudinal models: could we try to couple time scales as well and have a model where the shape also evolves? That is still quite open for me.

OK, thanks. And about the work on the cardiac laminar structure in DTI — maybe I missed it, but how did you perform the registration of the images?

We don't really register the images: we move our data into prolate spheroidal coordinates. These coordinates are kind of standardized for all our hearts, so that's where we do the statistics; in a way, that's how we do the registration — we express all our hearts in a standard coordinate system so that they are all registered in it.
And so this way you created a 3D model starting from the 2D images — OK. Did you use different acquisitions, or just the S0 acquisition with a b-value of zero?

For this work it was the standard in vivo sequence that was described by Gamper and Kozerke. In more recent work they changed to another sequence to better cope with the strain effects.

But was it enough to compensate for the respiration?

Yeah, again, these are volunteers, and it was acquired in Switzerland, where they have professional volunteers who can breath-hold perfectly in any position, so using this on a patient or on less trained volunteers is a...

Thank you so much.

OK, in the interest of time I think we had better go for coffee, so thank you very much.