Okay, good, let's continue with the applications. It's my great pleasure to introduce my closest colleague, Oscar Camara, working together with us here in the group at UPF. He has been working for years and years on modeling, especially electrophysiological modeling, with a big emphasis on trying to go towards the patients: not so much the modeling aspect as such, but trying to create tools so that electrophysiologists in reality can both understand the patient better, look at data in a more integrated way, and at some point use VPH technologies to improve or come up with new therapies. Please, Oscar.

Thank you, Bart. So don't worry, just this talk and the next one and you will be free. As Bart said, I'm Oscar Camara, an associate professor here at UPF, and I co-supervise the PhySense group together with Bart, and I'm supposed to talk about cardiac electrophysiological modeling. So let me explain a bit the story behind it. A regular day in PhySense, the two of us sitting close together, and Bart explains: well, I'm going to help Jerome organize some VPH summer school. And I was really surprised, because Bart is always using the "rubbish" and "bullshit" words, especially with VPH things, and I said, no, no, it's a good idea, because we need to organize some event on cardiac function. And he was like, "cardiac what?" And then, well, whatever, it's too complicated for you, but we could have a Catalan beer tasting. That changed a lot of things. And then: okay, you want to help? No way, I mean, no way I want to help. But okay, at least you need to give a talk. Okay, well, let's see.

Then some weeks passed, and last week I was checking the program and I see "cardiac electrophysiological modeling", and I say: what, I need to talk about cardiac electrophysiological modeling, really, in this VPH program? No way. I mean, am I an expert on cardiac electrophysiological modeling? No way, just ask anyone in the community. It's a really interesting community, basically meeting every year and a half at the Cardiac Physiome events, and you can ask anyone there, and a lot of these names will come up: big communities in France, in Oxford, in New Zealand, in the US also, etc., and a lot of really nice people, most of them. And seriously, I do know most of them. But we'll see why. I mean, one possibility is that Bart is not really good at organization things, so maybe he was wrong, because my background was in image processing: my PhD, I did it in Paris, on non-rigid registration in oncological applications, between CT and nuclear medicine images, and then I moved to London to model the effect of Alzheimer's disease on brain structures.

But then, why have I been at the Cardiac Physiome conference during the last years? I mean, four of the last five; in fact I am in this picture also. Besides the traveling (these are really nice places for traveling), I think it reflects well how the community is changing over the years. I wasn't the only one without a proper, traditional EP modeling background at this conference: people like Alistair Young, Pablo Lamata, Takao, etc., are often at these conferences. And it's really related to what Blanca was explaining the other day, which I like a lot, her more or less philosophical papers on this interplay between modeling and experiments: you really need to understand that modeling is not just for modelers any more; it needs to be an interdisciplinary effort.
And the community needs to start forgetting these attitudes: the experimentalists who say "we need to trust the data, because the models are not good enough", and the other way around, the modelers who say "the data is too noisy for my perfect models". I have even heard, at some point, some modelers saying: okay, I'm sorry, we need to change the data, because my model is right. These things don't lead anywhere, so we need to change this, over and over.

So that is one part. The other part, in particular in cardiac electrophysiology, is that a lot of people have been working for years and years at the microscopic level, but like everything in nature, and in particular the human body, it's a multi-scale problem: we have to deal with what is happening at the cellular level, even at the protein level, but also going through the different scales, at tissue level, organ level, and even systemic level. And this is changing over the years. Everything started with Noble and these people working in the 60s, developing the first cardiac electrophysiological models at the cellular level; then there were really very good initiatives, led by Peter Hunter, things like CellML, to share these models, as well as the now fairly standard monodomain and bidomain tissue propagation models. This is quite particular to the cardiac field; having worked in different fields, like the neuro field, you see that in this respect brain modeling is far from having this kind of agreed standard models to use.

But the problem is that most clinicians work at the macroscopic level: their measurements, when dealing with patients, are basically at the macroscopic level. So why has this community not been working, until recently, at the organ level? I mean, some ideas: the difficulty of getting good data; also, having models and simulations at the organ scale, including the different scales, is very difficult and very time-consuming; and, as Bart was saying before, quite often you don't have the data for all patients, and that's problematic. So historically all this cardiac EP modeling has been quite far from clinical routine. But this has evolved dramatically during the last, let's say, 10 years or so, because there have been better and better imaging and signal acquisition systems in clinical routine, not just in big research hospitals; very helpful open-source initiatives to share code and models; and the advent and use of computational resources: we saw in Mariano's talk how it's now relatively easy to get access to high-performance computing, which helps a lot to speed things up.

So with all this background, I was in London at some point and I met Maxime, and the first day I arrived at KCL he showed me a really nasty video of the acquisition of optical mapping of a dog heart in open-chest surgery. It was so nasty that it was really nice. So then, after several beers and parties, etc., we decided that, apart from beers, we could start doing things together. I started to help him with some image processing for the modeling projects he was working on: basically I was doing a stupid affine registration, while he was doing the hard part, trying to construct a data assimilation pipeline in order to estimate the different conductivities in the heart. And this is already 10 years ago, Max.
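To make the "monodomain" idea mentioned above concrete, here is a minimal, self-contained sketch of a 1D monodomain equation with FitzHugh-Nagumo kinetics, the simplest caricature of these tissue propagation models. All values (grid, kinetics parameters) are illustrative assumptions, not from any personalized pipeline; the diffusion coefficient D plays the role of the conductivity that data-assimilation pipelines like the one just described try to estimate.

```python
import numpy as np

nx, dx, dt = 200, 0.1, 0.01      # nodes, space step (mm), time step (ms); illustrative
D = 0.1                          # diffusion "conductivity": the kind of parameter estimated
a, eps, beta = 0.15, 0.01, 0.5   # FitzHugh-Nagumo kinetics (toy values)

v = np.zeros(nx)                 # normalized transmembrane potential
w = np.zeros(nx)                 # recovery variable
v[:10] = 1.0                     # stimulus at the left edge

activation = np.full(nx, np.nan) # local activation times: first crossing of 0.5
for step in range(20000):
    lap = np.empty(nx)
    lap[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2
    lap[0] = 2 * (v[1] - v[0]) / dx**2       # no-flux (mirror) boundaries
    lap[-1] = 2 * (v[-2] - v[-1]) / dx**2
    dv = D * lap + v * (1 - v) * (v - a) - w
    v, w = v + dt * dv, w + dt * eps * (beta * v - w)
    newly = np.isnan(activation) & (v > 0.5)
    activation[newly] = step * dt

print("activation time at the far end (ms):", activation[-1])
```

Changing D changes how fast the wavefront crosses the domain, which is exactly the observable such estimation pipelines match against measured activation maps.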
Then I came back to Barcelona in 2007, and since then I kind of forgot a bit the image processing side, or rather coupled it with going to the clinic more and more often, in particular locally to the Hospital Clínic, to work with electrophysiologists, but also being involved in large European projects, euHeart and VP2HF. So the objective of my research has moved towards developing computational pipelines, using the data processing part but also the modeling. In the end, the main goal is to help clinicians better understand particular clinical questions (it's quite important to focus), to optimize some of the therapies, and to give them better tools to support the clinical decisions they need to make, a lot of which are related to cardiac electrophysiology. So in fact, well, Bart wasn't that wrong. The problem is that I need to talk about this after Blanca's really nice talk and after 150 slides from Maxime, and on top of that it's the last day, so everyone is super tired. But he insisted I had to do it, so this is what I'm doing, but with no equations, just nice pictures, and obviously Bart needed to pay for this.

So, a disclaimer for the talk: cardiac electrophysiological modeling, but with a subtitle: by someone who is not really a modeler, who has an image-processing background, and who is just trying to integrate macroscopic patient data into models and to use models, not really develop them, talking to the people who really are developing these models. So, the different schemes for how we can go towards image-based cardiac models these days: as always, you need to start by getting nice images and data from patients, do image processing in order to extract a patient-specific representation in terms of a mesh, then add the solvers and the mathematical models that replicate some of the physical phenomena, in this case in the heart, and then try to extract additional information that is not given directly by the data.

The thing is, nowadays we have a lot of information that you could input into a model, and it's not easy, even for the clinicians, for an expert, to really handle all this deluge of data, to know how to interpret it, and even to communicate it to other specialists or to the patient. Even if we just look at the imaging side, we have a lot of different imaging modalities nowadays that can provide very useful complementary information, from echo to CT to MR, or different MR modalities, information about structure and function; but all this needs to be properly integrated if we want to make the models useful at the organ level, and the same goes for models of the different physical phenomena in the heart, from the electrophysiology to the mechanics and the flow. So basically three important steps: not just the modeling, but first extracting the relevant biomarkers and indices from the data, then integrating them in a proper way, including uncertainty, including what to do when you have contradictory information between different modalities, etc. I'm going to show some examples of these three parts, not really a lot on the data processing, because that was Matthew's job the other day. Just as a reminder, in terms of image processing, once you get the raw data from an image of the patient, you need to recognize the relevant structures in it, with different techniques, atlas-based, etc.
Also very important is to extract shape indices and deformation indices, in order to have a better overview of the patient's heart. We've also been involved a lot in signal processing, with the group of Pablo Laguna and Pablo Martínez in Zaragoza, in order to better extract information, in this case from the electroanatomical data that Maxime was mentioning before, which is quite often used during interventions, radiofrequency ablations, to guide the intervention. And to be honest, in the commercial systems these days the signal processing is very, very basic, so there is a lot of room for improvement there to get better data. I won't go into more detail on this, but we now have a patent on some of these techniques, and the clinicians are very happy about it.

So, we need to construct what is sometimes called a digital patient, or patient avatar, where we can integrate patient-specific information from images, signals, etc., but ideally also population-based information: information that can come from the clinical guidelines, information that doesn't necessarily come in a nice image format; everything related to reports and guidelines needs to be integrated there to some extent. I will only touch on the integration of the imaging here, but the rest needs to be considered too.

So, the first example. This is mainly the work of David Soto, who will hopefully defend his PhD in September. Maxime also talked about VTs, so I won't go into detail, but these are quite often related to the heterogeneity of the scar, mainly in the left ventricle, and it's very important to extract information about the heterogeneity of this scar, in terms of the core zone, tissue that really doesn't conduct any more, and the border zone, a kind of mix between healthy and scar tissue, which still conducts, but at lower velocities, and which is supposedly the substrate of these arrhythmias. At Hospital Clínic, in the arrhythmia unit, in particular Antonio Berruezo, they have spent the last years developing optimized techniques for ablating these cases: rather than ablating the whole scar, which was the standard procedure until not long ago, they analyze the core and the border zone (which is quite challenging; you need to be a real expert to visualize these conduction channels of gray zone or border zone) and then ablate just the entrances of these channels, so that you block the circuit and stop the electrical re-entries.

The problem is that, well, this is not a potato, this is supposed to be a heart, but it looks more like a potato, and it is quite difficult to interpret. So in research hospitals what they are using is image-guided ablation, where they acquire a preoperative MR image with delayed enhancement, from which they can get information about tissue properties: where the scar really is, and the heterogeneity of the scar. But obviously we are talking about data acquired before the intervention, which looks like a heart, and data acquired during the intervention, which looks like a potato. So in order to integrate these, the clinician needs to do it a bit in his or her brain, and that's quite complex. What David has been developing during the last years, together with Costa in our lab, is an integration of these two types of data, basically based on a mapping of the 3D data onto a 2D disk, where we can flatten the left ventricle with a conformal, Laplacian-based mapping.
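As a toy stand-in for this flattening idea, here is a harmonic (Tutte-style) mapping of an open triangulated patch to the unit disk: fix the boundary on the circle and solve a Laplace problem for the interior vertices. The synthetic grid "surface", the uniform edge weights, and the boundary ordering are all simplifying assumptions; the actual method described here is a landmark-based quasi-conformal mapping, which this sketch does not reproduce.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Synthetic "surface": a triangulated n x n grid (imagine an unfolded ventricle).
n = 15
idx = lambda i, j: i * n + j
tris = []
for i in range(n - 1):
    for j in range(n - 1):
        tris += [(idx(i, j), idx(i + 1, j), idx(i, j + 1)),
                 (idx(i + 1, j), idx(i + 1, j + 1), idx(i, j + 1))]
nv = n * n

# Order boundary vertices around the patch, then spread them on the unit circle.
ring = ([idx(0, j) for j in range(n)] +
        [idx(i, n - 1) for i in range(1, n)] +
        [idx(n - 1, j) for j in range(n - 2, -1, -1)] +
        [idx(i, 0) for i in range(n - 2, 0, -1)])
t = 2 * np.pi * np.arange(len(ring)) / len(ring)
uv = np.zeros((nv, 2))
uv[ring] = np.c_[np.cos(t), np.sin(t)]

# Graph Laplacian with uniform (Tutte) weights.
A = sp.lil_matrix((nv, nv))
for p, q, r in tris:
    for u, v in ((p, q), (q, r), (r, p)):
        A[u, v] = A[v, u] = -1.0
A.setdiag(-np.asarray(A.sum(axis=1)).ravel())

# Pin the boundary rows to the prescribed circle positions.
rhs = np.zeros((nv, 2))
for v in ring:
    A.rows[v], A.data[v] = [v], [1.0]
    rhs[v] = uv[v]

Acsr = A.tocsr()
flat = np.column_stack([spla.spsolve(Acsr, rhs[:, k]) for k in range(2)])
mid = idx(n // 2, n // 2)
print("central vertex maps to", flat[mid].round(3))  # near the disk center
```

By Tutte's theorem, a convex boundary plus these uniform weights guarantees a valid (fold-free) 2D embedding, which is why this family of methods lets you go back from the disk to 3D without losing much information, as mentioned next.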
So what we did was to analyze it and compare it with the standard, state-of-the-art integration techniques, on synthetic data (we generated simulated electroanatomical maps) and on real data. You can see the process here; we did it for the endocardium and for the epicardium. From the 3D, you need to select some landmarks to guide the mapping, and you go from the original 3D to flattening the structural anatomy of the left ventricle (here, both ventricles) into these maps. And obviously you can do this with anything that looks like a left ventricle, even if it has a more potato-like shape; you can see it animated here. While we need to do some readjustments at the end, just to put the apex at the center, this process is quite nice, because from the 2D disk you can get back to the 3D without losing a lot of information.

We tested on synthetic data, and we were quite happy, because our method, this landmark-based quasi-conformal mapping (QCM), worked a lot better than the other standard techniques, including rigid registration and even non-rigid registration based on currents, basically because no constraints were imposed on that non-rigid deformation. That held for different scars, so we were quite happy to get better overlap measures between the scars (the synthetic scar in green, with the imposed conduction channels). And not just the overall overlap: we developed a set of indices to analyze the conduction channels locally, in terms of the position of the entrance and exit of each channel, and also the continuity of the channels, because with some methods, for instance the two more or less used in clinical routine these days, you sometimes lose the continuity of the conduction channel, and this is not good. We also tested on several real cases, which is a lot more challenging to evaluate because of the noisy nature of the data, but we were quite happy to see that these green dots, which come from the CARTO data and are supposedly the places where you have double potentials, so candidates for being a channel, mostly fell right on the conduction channel when we applied our integration technique, whereas with the other techniques the points were going everywhere, not really following the conduction channel. So we were quite happy about these results, but it's still very challenging to apply routinely in the clinic.

We also applied similar techniques to experimental data. These are some experiments we were involved in at Hospital Clínic for several years, with pigs. Quite challenging to work with these pigs: at some point they were fibrillating and the legs were jumping, etc. Here we have an electrophysiologist acquiring electrophysiological data, here an echocardiographer acquiring mechanical, deformation data, the vet, Gemma, super scared here, engineers there; it was quite a setup. The nice thing is that we got a lot of data: for these pigs we had a preoperative structural image, an MR, but also, at baseline state, we acquired electrical and mechanical data of the pig; then we induced an electrical problem, an LBBB, a left bundle branch block, and again acquired the mechanical and electrophysiological data; and then we implanted a CRT device, a kind of pacemaker, to see what the effect of the electrical abnormality was, and to what extent it, together with the mechanical abnormality, was corrected by the device.

So that means a lot of data, and David applied this 2D conformal mapping to it, because we had very heterogeneous data, meaning different time points with different resolutions, and we needed to integrate them in a common reference space; for this it was quite useful. We got endo and epi data, at baseline, at LBBB, and with the device implanted; we could go from this 2D representation to a kind of bullseye plot, extract indices from these data, and compare between different time points and even between different cases. We could get videos of the activation propagation at the different stages (you see how the propagation at baseline is quite different from when you have a block of conduction, an LBBB, or when the CRT is implanted), and we could also quantify the intraventricular and interventricular delays with these histograms of isochrones.
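For readers unfamiliar with these indices, here is a small sketch of the kind of quantities involved: given local activation times (LATs) sampled on each ventricle, compute total activation times, an interventricular delay, and a histogram of isochrones. The arrays are random placeholders standing in for the mapped data, and the delay formula is just one common definition (difference of activation onsets), not necessarily the one used in this study.

```python
import numpy as np

rng = np.random.default_rng(0)
lat_lv = rng.uniform(10, 120, 500)   # ms, LV activation times (placeholder data)
lat_rv = rng.uniform(5, 90, 400)     # ms, RV activation times (placeholder data)

def indices(lat):
    """Basic per-chamber activation indices."""
    return {"earliest": lat.min(), "latest": lat.max(),
            "total_activation": lat.max() - lat.min()}

lv, rv = indices(lat_lv), indices(lat_rv)
interventricular_delay = lv["earliest"] - rv["earliest"]  # one common definition

# Histogram of isochrones: how much tissue activates in each 10 ms window.
bins = np.arange(0, 130, 10)
hist_lv, _ = np.histogram(lat_lv, bins)

print(lv, rv)
print("interventricular delay (ms):", round(interventricular_delay, 1))
print("LV isochrone histogram:", hist_lv)
```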
So now we are in the process of analyzing all this data, which is quite challenging: it's a lot of data and we have a lot of different markers. We are trying to analyze the relation between the pattern of electrical activation and where the leads of the CRT device were implanted, because some positions are better than others. We also had pigs with scar and without scar, so we are analyzing these differences as well. And together with Siemens, with Tommaso Mansi, they are using this data to validate their electromechanical simulations: they are trying to predict the CRT electrical activation patterns once the model is personalized, either at baseline or at LBBB, and the results are quite promising. It's a kind of data assimilation process, and we are trying different parameters to get this data assimilation done better.
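As a deliberately trivial illustration of that data-assimilation idea, here is a one-parameter forward model fitted by grid search: sweep a conduction velocity and keep the value whose predicted activation times best match the "measured" ones. Every number here is made up, and the real electromechanical personalization mentioned above is of course far more sophisticated than this sketch.

```python
import numpy as np

dist = np.linspace(0, 80, 50)   # mm from the pacing site (placeholder geometry)
# "Measured" activation times: true CV of 0.6 mm/ms plus noise.
measured = dist / 0.6 + np.random.default_rng(1).normal(0, 2, 50)

def forward(cv):
    """Trivial forward model: activation time = distance / conduction velocity."""
    return dist / cv

cvs = np.linspace(0.2, 1.2, 101)  # candidate conduction velocities, mm/ms
errors = [np.mean((forward(cv) - measured) ** 2) for cv in cvs]
best = cvs[int(np.argmin(errors))]
print(f"estimated conduction velocity: {best:.2f} mm/ms (true value 0.60)")
```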
In addition, not just for the left ventricle and the right ventricle: since we like to flatten things, we are trying to apply a similar strategy to the atria as well. We have very preliminary results, and to some extent it's a lot more challenging, because of the pulmonary veins and because of the left atrial appendage. So we try to construct this kind of circular disk for the atria too, but at the moment it's quite dependent on the registration step between a given template we have and the new atria we want to analyze. Still, we have some interesting results comparing the preoperative, let's say, fibrosis, the fibrosis after the ablation, and the CARTO during the ablation. More things to come in the next months.

So this is a bit of the integration. Once we have the data more or less in the same common reference space, we need to build this reference space structurally for the models. And I like this slide, which I stole from Rafa, Rafael Sebastián, in Valencia, because it shows a bit how things were some years ago, and not even that long ago: I remember conversations, 10 or 15 years ago, with people using ellipsoids as the approximation of the anatomical structure of the heart. For some questions that's okay, but if you really want to go to very specific patient modeling, it's limited, depending on the question. Nowadays, or more recently, we get more and more information from the images, and higher-resolution data, and this is reflected in better anatomical definition in the models. Once we have the anatomy, we need to complement whatever we extract from the images with information that cannot be extracted from the images, more related to structures like the fibers or the Purkinje system, or even to function.

So we start from the imaging, we generate a segmentation, and we generate a surface mesh; but we still have a lot of limitations: you often see very, very smooth left ventricles, when this is not reality. This surface meshing is done in a more or less standard way, but with some dilemmas we'll see in a minute; then you go to a volumetric mesh, and depending on how you do it, you can end up with computationally very expensive meshes, if you want to solve all your equations at every node of the mesh. Once you have the mesh constructed, you add additional information based on experimental models, etc.: the Purkinje system, the fibers, or the scar information.

Starting with this meshing dilemma: it depends on the families; the French people like tetrahedra, and the UK and New Zealand people like hexahedra a lot, etc. Even one week ago we were still discussing which is better. Well, I guess it depends on the application, on which properties of these meshes you prefer. Hexahedral meshes are very nice to work with, quite regular, but they are very difficult to construct for human organs: it's not always easy to fit these nice hexahedra into a real human anatomy, and there are some people, like Andrew McCulloch, who just push the maths very hard to construct hexahedral meshes in complex structures like the atria. The other family just builds tetrahedral meshes, which are more flexible and can be better fitted to the data, but are less regular. In the VP2HF European project, the people from KCL spent a year and a half just developing a computational pipeline to construct these tetrahedral meshes automatically from the data, and this was quite challenging: more development work than research, to some extent, but really needed, because this was one of the main bottlenecks to bringing simulations into the clinic, as we'll see later. These are some examples of the meshes generated by the KCL people from real patient-specific image data.
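One standard way to quantify the "regularity" trade-off just discussed is an element quality measure; a common choice for tetrahedra is the normalized radius ratio (3 times inradius over circumradius), which is 1 for a regular tetrahedron and approaches 0 for degenerate slivers. A small sketch with hand-picked example elements, not tied to any of the pipelines above:

```python
import numpy as np

def radius_ratio(p0, p1, p2, p3):
    """Normalized tet quality: 3 * inradius / circumradius (1 = regular, ~0 = sliver)."""
    pts = np.array([p0, p1, p2, p3], float)
    # Volume from the scalar triple product.
    vol = abs(np.dot(np.cross(pts[1] - pts[0], pts[2] - pts[0]), pts[3] - pts[0])) / 6
    # Inradius = 3V / total face area.
    faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
    area = sum(np.linalg.norm(np.cross(pts[j] - pts[i], pts[k] - pts[i])) / 2
               for i, j, k in faces)
    inradius = 3 * vol / area
    # Circumradius from products of opposite edge lengths:
    # R = sqrt((p+q+r)(-p+q+r)(p-q+r)(p+q-r)) / (24 V), p = a*A etc.
    a = np.linalg.norm(pts[1] - pts[0]); A = np.linalg.norm(pts[3] - pts[2])
    b = np.linalg.norm(pts[2] - pts[0]); B = np.linalg.norm(pts[3] - pts[1])
    c = np.linalg.norm(pts[3] - pts[0]); C = np.linalg.norm(pts[2] - pts[1])
    prod = ((a*A + b*B + c*C) * (-a*A + b*B + c*C)
            * (a*A - b*B + c*C) * (a*A + b*B - c*C))
    circumradius = np.sqrt(prod) / (24 * vol)
    return 3 * inradius / circumradius

regular = radius_ratio([0, 0, 0], [1, 0, 0], [0.5, np.sqrt(3) / 2, 0],
                       [0.5, np.sqrt(3) / 6, np.sqrt(6) / 3])
sliver = radius_ratio([0, 0, 0], [1, 0, 0], [0, 1, 0], [0.5, 0.5, 1e-3])
print(f"regular tet: {regular:.3f}, sliver: {sliver:.3f}")
```

Automatic tet-meshing pipelines like the one described typically enforce a minimum on measures of this kind, since very flat elements degrade both accuracy and time-step restrictions in the solvers.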
Then, quite important, there is the structural information that is not visible in the images, needed to understand what's going on in the heart; maybe a bit farther from clinical applications, but still. I was mentioning these smooth ventricles, but reality is not like that: these are some examples where you see the trabeculations in the endocardial walls. If you simulate on a smooth left ventricle anatomy, you will probably get errors when you simulate flow, at some point mechanics, at some point electrophysiology; you need to be very cautious about the assumptions you make. This is just some data from high-resolution MRI and synchrotron imaging that some people in our group generated, and you really see how complex this structure is. Just imagine this full of blood, and then a catheter inside trying to touch a wall: that's quite challenging to model.

Commenting a bit on fiber orientation: Maxime already covered this in terms of acquisition; the point here is how relevant fiber orientation is for modeling, both for the electrophysiology and for the mechanics of the heart. There is a lot of histological data where you can see the different fiber orientations depending on where you look in the myocardial wall, the left ventricle, the right ventricle, etc.; these are images from Damián Sánchez-Quintana in Extremadura. There are also data, in which some people are believers and some are not, based on ex vivo DT-MRI, most of the time from canine hearts, but also from human hearts, where people have been quite active in this field. We are trying to get this information from higher-resolution data, from synchrotron, as well. And then, as Maxime was mentioning, there is the pioneering work on in vivo DTI, even if it only covers some slices. So there are some data out there.

We were trying to answer one particular clinical question when we realized that we needed to look into fibers really properly. This clinical question was related to better understanding outflow tract ventricular arrhythmias, and to generating a computational modeling pipeline to replicate what the clinicians see depending on the site of origin of the ectopic foci of these arrhythmias. These arrhythmias can have a left ventricular or a right ventricular origin, and based on the standard ECG data it is not possible, in a lot of cases, to distinguish whether they come from the left or from the right, because the outflow tracts are physically really close together, so you cannot distinguish them properly in the ECG data. That means the radiofrequency ablation interventions can be very, very long: they go to the right, they try to burn, the arrhythmia is still there, they go to the left, and sometimes still nothing happens; so these can be really long and quite painful procedures.

So the idea was: let's do some EP modeling on whatever geometry, one heart we had in the lab, put different sites of origin in the left and the right ventricle, and see how the activation arrives at the outflow tracts, with standard detailed models, ten Tusscher, etc. And when we showed this to the clinicians, they said: exactly, this is what I see in the data. When the origin is in the non-coronary cusp, the propagation arrives around here, a bit farther from the outflow tract; when it's the left cusp, it arrives at this exact site, etc., etc. This was even presented in a symposium at a clinical conference, so we were quite happy to get the clinicians believing in these simulations. Then we compared our results qualitatively, in terms of some parameters of these simulations, with the literature, and they agreed more or less, to some extent, in terms of the areas of the isochrones, etc.

So we said, okay, let's process some real cases with these pathologies, and we started with three cases that were really awful in terms of data quality: the potatoes in CARTO were really potatoes, and the data were really noisy. We set up the whole pipeline (image processing, mesh generation, parameter setting, and simulations compared with the CARTO data), and the first thing we realized is that we needed to segment the geometry well, and most automatic segmentation techniques, based on atlases, don't reach the outflow tracts. So a PhD student in our group, Rubén, really very painfully, manually, with the help of some undergraduate students, segmented all these outflow tracts; we used 3D Slicer for this, with many manual corrections. We got good results, but it was quite expensive in time. Then constructing the mesh was also very painful, because of the thin wall in the outflow tracts, so a lot of manual corrections with Blender, etc., but in the end we got quite nice meshes.

And then we needed to incorporate fibers. For fibers, you can use standard rule-based methods, like Streeter, like Bayer (a more recent one), or fit whatever DTI data you may have. The problem with these fiber models, and with the fiber data, is that almost all of them look mostly at the left ventricle, and we needed good information on the fibers in the right ventricle. In fact we talked with the anatomists to get histological data, and we saw that at the epicardium these Streeter-like rule-based models work fine; but when we looked at the endocardial right ventricle, something was wrong, because these models give circumferential fibers there, while in the histological data the fibers are quite longitudinal, and in fact it looks like there are two families of fibers: one going from the apex to the pulmonary valve, and one to the tricuspid valve. So we said, okay, let's change the fiber model to make it more realistic in the RV. We are modifying the Bayer rule-based model, defining a reference system whose directions are obtained by solving the Laplace equation between some landmarks; this fiber reference system allows us to define some angles, and to play with these angles to define the different fiber orientations, and we have defined one direction going from the apex to the pulmonary valve and one to the tricuspid valve. We are now in the process of comparing these results qualitatively with histological data; you can see that we have imposed the longitudinal direction here, and we are comparing with all the data we may have, Streeter, Bayer, DTI, etc., and there are some differences.
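A minimal sketch of this Laplace-based, rule-based idea, reduced to a toy slab: solve a Laplace problem across the wall to get a normalized transmural coordinate, then rotate the fiber (helix) angle linearly from endocardium to epicardium. The angles and the local frame are illustrative textbook values, not the modified Bayer model for the RV outflow tract itself.

```python
import numpy as np

nz = 21                                      # nodes across the wall, endo -> epi
phi = np.linspace(0.0, 1.0, nz)              # Laplace solution across a slab is linear

alpha_endo, alpha_epi = np.deg2rad(60), np.deg2rad(-60)  # typical LV helix angles
alpha = alpha_endo + (alpha_epi - alpha_endo) * phi      # linear rotation with depth

circumferential = np.array([1.0, 0.0, 0.0])  # local wall directions (toy frame);
longitudinal = np.array([0.0, 1.0, 0.0])     # in the RV rule described above this
                                             # becomes apex-to-pulmonary / tricuspid
fibers = (np.cos(alpha)[:, None] * circumferential
          + np.sin(alpha)[:, None] * longitudinal)       # unit fiber vector per node

print("endo fiber:", fibers[0].round(3), "epi fiber:", fibers[-1].round(3))
```

On a real mesh the transmural coordinate comes from solving the Laplace equation between the endocardial and epicardial surfaces, and the local frame comes from its gradient plus the landmark-derived directions; the slab just makes that solution analytic.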
The next step, once you have these models, is to look at the influence of the fiber models on the final simulations: whether a particular fiber model makes the simulations closer to the CARTO maps. The issue these days is that these CARTO maps are not very good, but we are obviously seeing differences in terms of the electrical propagation. So we are in the process: we have better cases now, six new cases that we need to process these days, to really assess what is going on there in terms of the fiber models.

The Purkinje system is another very important piece of structural information. Quite often in modeling, these fast electrical pathways are not really modeled at the structural level, and we have some papers showing that, depending on the question you want to answer, for some problems like CRT, it may have an influence, and you really need to consider the structural Purkinje. So Rafa Sebastián developed some techniques to generate these trees based on L-systems. The problem is that you could generate really nice images, figures, and trees, but was this realistic or not? Who knows. So he worked a lot on this: we needed more control, because these trees could grow in whatever way, so we needed to assimilate some data in order to control these models, to control some of their parameters, and make the simulated Purkinje structure more realistic.
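To give a feel for the L-system idea, here is a toy 2D recursive branching generator. Rafael Sebastián's actual generator is far more elaborate (grown on the endocardial surface, with the data-driven control just mentioned); all the rule parameters below (branching angle, length decay, depth) are invented for illustration.

```python
import numpy as np

def grow(pos, direction, length, depth, segments,
         angle=np.deg2rad(35), decay=0.8):
    """Recursively grow two daughter branches per segment, storing line segments."""
    if depth == 0 or length < 0.05:
        return
    end = pos + length * direction
    segments.append((pos, end))
    for sign in (+1, -1):                      # two daughter branches
        theta = sign * angle
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        grow(end, rot @ direction, length * decay, depth - 1, segments)

segments = []
grow(np.array([0.0, 0.0]), np.array([0.0, 1.0]), 1.0, depth=8, segments=segments)
print(len(segments), "Purkinje-like segments generated")  # 2^8 - 1 = 255
```

The point of the anecdote above is precisely that rules like these can produce arbitrarily pretty trees; without measured data constraining angle, decay, and density, "pretty" says nothing about "realistic".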
We also looked at this at the microscopic level, in a collaboration with Frank Sachse in Utah, where we did some image processing to analyze the Purkinje network with graph-theory measurements. All this information was then embedded, in a way, in a model, in order to extract information about the Purkinje tree directly from CARTO maps, because the problem with the Purkinje tree is that you cannot really measure it in vivo. So we were analyzing these activation maps, using the gradients and the divergence, to detect some critical points, and then using geodesics to construct the Purkinje tree. We tried this with simulated data, and even with three cases of fascicular ventricular tachycardia, which is related to abnormalities in the Purkinje system, and we were really quite surprised to see that, even with quite sparse Purkinje trees, we were getting reasonable simulation results.
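A sketch of that critical-point idea on a synthetic activation map: candidate early-activation sites (e.g., Purkinje-muscle junctions) show up as points where the propagation direction field diverges. The assumed LAT field, conduction velocity, and thresholds are illustrative; the real method on CARTO maps, and its geodesic tree construction, are more involved.

```python
import numpy as np

x, y = np.meshgrid(np.linspace(0, 10, 100), np.linspace(0, 10, 100))
sources = [(2.0, 3.0), (7.0, 6.0)]            # two "true" early-activation sites
# Synthetic LAT: distance to the closest source over a conduction velocity.
lat = np.min([np.hypot(x - sx, y - sy) for sx, sy in sources], axis=0) / 0.5

gy, gx = np.gradient(lat)                     # gradient of activation time
norm = np.hypot(gx, gy) + 1e-9
ux, uy = gx / norm, gy / norm                 # unit propagation-direction field
div = np.gradient(ux, axis=1) + np.gradient(uy, axis=0)

# Candidate sources: early-activated points where the direction field diverges.
candidates = np.argwhere((div > np.percentile(div, 99.9)) & (lat < lat.mean()))
print("candidate source pixels (row, col):")
print(candidates[:5])
```

Collision lines between wavefronts show up the opposite way, as strongly negative divergence, which is one reason the early-activation mask is needed on noisy clinical maps.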
Tissue characterization is another important point to take into account. This is not a kebab; this is a human heart that was explanted, because the heart was really in bad shape, and we want to use information at the histological level to validate what is available in vivo in the MRI. So we acquired an in vivo MRI before the explant, and then we sent this heart, preserved in formol, to Damián Sánchez-Quintana in Extremadura, who did a bit of carpaccio with it, with a microtome system. All these things you see coming out are very thin slices of the heart, which go through a staining process in order to get really nice histological images: a lot of 2D slices. The really painful work we needed to do was to reconstruct all these slices back in 3D, to compare with the MRI. David is very fond of flattening things, so he used this flattening technique to build a common reference space for the histology and for the 3D data, and the results are really good, really promising: we can compare, point by point, the in vivo MRI information with the histology, and this is being very helpful to determine a lot of parameters, and how reliable the information about the scar and its heterogeneity in the MRI is. We are in the process of doing this analysis.

So, okay, but what about real clinical routine? You now have all these tools, and a lot of background, so: is this being used in clinical routine? In 2008 we had the euHeart project, where the main goal was really to do exactly that: to have computational techniques, including modeling, for all these big objectives. A large consortium, a lot of money, a lot of years, hospitals, companies, etc.; a very well structured project. So what could possibly go wrong? What happened after the end of the project, in 2012? In terms of computational techniques, we researchers were very happy: we had a lot of publications, a lot of technical advances in data processing and in modeling. And the clinicians were super upset. Why? Because everything had been applied to a very limited amount of data: in four years, Maxime managed to have two patients, and even by 2013 we had only got to eight patients. So the clinicians were saying: look, guys, we cannot do clinical trials with this. What happened here? This is, let's say, speculation, but: large consortiums are quite difficult to work with; the data wasn't good enough, or wasn't acquired for the purpose we wanted for the modeling; the computational models weren't advanced enough; we maybe didn't have fast enough computational techniques, and we had limited computational resources; there weren't very clear and concrete clinical questions; some clinicians didn't even know what a simulation was; and maybe we oversold the product a bit.

But things are definitely improving. You see people like these Japanese, who some years ago nobody knew: they weren't publishing, they didn't care at all about publishing, the website was in Japanese, so it was impossible, with nice pictures, but you didn't know what was happening there (and the animations are a bit broken here, I'm sorry). The thing is, these Japanese are now publishing a lot; they are even rejecting Medtronic: Medtronic wants to work with them and they reject them, just like that; they create their spin-offs, and they are even in the media. Johns Hopkins, our friend Trayanova: she went from having publications in Hindawi journals to a milestone Nature Communications paper this year. This is a picture from that milestone paper, where you can see that she has the fibers wrong in the right ventricle, but it's a milestone paper. And the BSC people, our friends: they went from award-winning videos of beating hearts that looked like toothpaste commercials, nice but completely unrealistic, to really doing consulting work for Medtronic. So that's quite an improvement.

And we went from euHeart to VP2HF, with a lot of similar phases, but we passed from having eight retrospective patients in four years to having one patient every two weeks in a prospective study; that's quite an improvement. The objective of VP2HF is quite similar, but, very importantly, with a unique, single clinical software platform, and using a wise approach to decide where to use modeling or not. We don't need to use modeling for all patients: for easy patients we already have enough data to see whether the patient is going to respond or not, for instance to CRT. So the more resource-intensive tools, like modeling, will be used only where they add real value, and that's quite important. We have set up decision trees in order to have this kind of hierarchical management of the patient, and in particular the KCL people have worked a lot on this workflow, where the patient arrives at the hospital, image data is acquired, and then within two weeks they have a really nice report (I will show you later, well, here), in which the clinician can see the results of the simulation, the results of the different decision trees looking at different parameters, and a recommendation.
In fact, the first stage of these decision trees looks at the clinical guidelines, then at more advanced imaging parameters, and for the cases where that data is not enough, the system recommends going for modeling; then you do the modeling, and the modeling gives an answer. We've done a lot of development work, through the rocket app, to integrate all the different tools into a single platform; it was brilliant work by Carlos Agua, a developer in our lab, together with the Inria people and with the clinicians. This is a snapshot of the tool, in which the clinician can browse the whole database of images and data available for each patient.

Okay, I will just go quickly to the take-home messages. The field is improving a lot, really a lot: clinicians are really starting to be interested in simulations, they want to know what they are; we have better data; we have better computational resources. I think Maxime already stole some of my conclusions: try to be integrative, talk with people with different profiles (biologists, clinicians, computer scientists, modelers, etc.), and do it before starting anything; whenever you have a clinical question, try to design everything together and start talking with people from the beginning; and try to go for a more systems-biology, multi-scale approach to solve clinical questions, because you really need it. The other thing: go as simple as possible; don't try to over-complicate your tools just because it's nice. Well, this guy said a lot of things, though some of them he actually did not. And thanks a lot to the gangs we have around, the PhySense group and our brother group SIMBIOsys, which you will get to know a bit more now, and to the partners from European projects, national projects, funding agencies, clinicians, and a lot of other colleagues nationally and internationally. Okay, thank you very much; thank you for your attention.

Okay, thank you very much, Oscar, for this elaborate overview, extremely complementary to Maxime's, even though you said "I have nothing new to say". One of the nice things you said, which I think is important: okay, keep it simple, but not too simple either. Because one of the things you clearly showed is: you're not a model developer, but you start to use models, and then at some point, when you want to answer a clinical question, you see that something is strange. The question then is: what do you do? Do you say, okay, this clinical question, I'm not going to address it? Or do you say, no, maybe we should go and look for the problem more deeply and try to find a solution?

I think you need both things. It depends, and there is a lot of work for a lot of people; it's very useful that people who don't do patient-specific modeling help us better understand how the models work. We know our BSC people have been doing this for years, and other people are doing more basic science. No one can do everything, so it's a matter of having complementary work and putting the things together. And this work of, let's say, evangelizing the clinicians, we know it is very hard, and even papers like those of our friend Natalia are to some extent helping, because these works, even if some people don't fully believe them, are showing simulation results to clinicians, and they are starting to be aware of them. So I think you need both, and you should not try to oversell everything and say that you have solved everything, as some people do.
And how do you go from talking to a clinician, saying "I have my model that will solve your problem", to, at some point, doing carpaccios in order to look at scars? Well, this was the idea of a clinician, indeed, because they wanted proof that what they see in vivo corresponds to what you see at the histological level. And in fact it was very funny, because the anatomist, who spends all his days looking at the histology and the carpaccios, the first time he saw the MR of the patient, said: no, this is not possible, you are not guiding the ablation with this; I mean, you don't see anything here. It was only after some time doing the 3D reconstruction of the histological data and putting it together with the in vivo data that he started to believe that the in vivo MR data is not too bad. You really need to work a lot on how to present things to the different specialists, and on how you talk with them. So again, as Maxime said, you need to couple different people from different backgrounds, and in a lot of cases, as a VPH researcher, you have to be in between and try to find the right resources to get it done.

Some other questions for Oscar? Well, thank you, Oscar, for the great presentation. I wanted to ask: do you find the scar information you obtain from CARTO really relevant, or do you think there may be other, better predictors, let's say for certain arrhythmias, for example? Well, for VT it's crucial, because there are heterogeneities, conduction channels, that are too small to be seen at the resolution of the imaging. And the clinicians, the electrophysiologists, until very recently, what they've done all their lives is look at 1D signals; in fact this is what they use to detect these double potentials or fractionated electrograms, and this is done with the CARTO data: not with the 3D reconstruction, this map, but by looking at the signal at every acquired point. That's why it's very important to have better signal processing, and these days it's awful: they just detect the maximum peak at every point, and this is complete nonsense; there is huge room for improvement there. And then there is the question of how to couple this with the imaging information: you can imagine a really nice system where the preoperative information guides the acquisition of the different CARTO points, doing the signal processing in an integrated way, and detecting inconsistencies between the MR data and the CARTO data. I think you need both, and with the new electroanatomical systems, where you can have thousands of points, you really need to go for advanced signal processing, and think about how to couple it with the imaging.
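To illustrate the improvement hinted at here: instead of taking the maximum peak of the electrogram as the local activation time, one can take the steepest negative deflection (maximum negative dV/dt), the classical criterion for unipolar electrograms. The signal below is synthetic, with shape and parameters invented for illustration; clinical signals additionally need filtering and QRS windowing.

```python
import numpy as np

fs = 1000.0                                    # sampling rate, Hz
t = np.arange(0, 0.3, 1 / fs)                  # 300 ms trace
true_lat = 0.12                                # true activation at 120 ms
egm = np.tanh((true_lat - t) * 80)             # sharp downstroke at activation
egm += 0.3 * np.exp(-((t - 0.05) / 0.01) ** 2) # far-field bump with a higher peak
egm += 0.01 * np.random.default_rng(2).normal(size=t.size)  # measurement noise

lat_peak = t[np.argmax(egm)]                   # naive "maximum peak" criterion
dvdt = np.gradient(egm, 1 / fs)
lat_dvdt = t[np.argmin(dvdt)]                  # steepest negative deflection

print(f"max-peak LAT: {lat_peak * 1e3:.0f} ms, "
      f"max -dV/dt LAT: {lat_dvdt * 1e3:.0f} ms (true {true_lat * 1e3:.0f} ms)")
```

On this toy trace the peak criterion locks onto the far-field bump at 50 ms, while the dV/dt criterion recovers the true 120 ms downstroke, which is the failure mode the speaker is describing.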
Another question was about AF and the atria, and fibrosis in the atria; well, it's partially related to that also. I think one really important thing, given that you work with all these different people and you have your own way of presenting or modeling something, is that you always have to translate it into the way people are used to working. That means that if you show electrophysiological modeling to an electrophysiologist, make sure you put the 1D signals next to it, because they are not necessarily going to understand the things that you like; they have never been used to working with them. So I think that's important: do these translations, not only talking the same language, but also showing the same data, even if you're not convinced that's the best way of doing it, because they have so much experience with it, and pattern recognition is still the thing that people are best at, at this moment.

A very stupid example of this, from some weeks ago, which is quite hard to believe: we were showing simulation results of the right ventricular outflow tract to the electrophysiologists, and Rubén put up some views of the 3D model, and they didn't know where the right ventricle and the left outflow tract were; "is this the aorta?", they were asking us, because these weren't the standard views in which they are used to looking at the data, and they said: we are going to stop working with you if you don't show us the 3D model in the right orientation. Okay, thank you very much.