Okay, so we can continue. Maybe just a very practical thing: be a little bit careful with your belongings, with your computers and so on, and maybe also try to wear your badges the whole time, and if you see people without badges, people that you don't know, let us know. Sometimes, unfortunately, random people come in and out and try to grab computers; it has happened before. We have people that control the room, so I think the risk is low, but let's not tempt fate. Okay, I think we are ready to move on. As we said before, today is the day to look at how to get our data, how to manage data, how to process data in order to get ready for modeling. And since we are also in the cardiovascular field, and that's one of the things that we do, we actually asked Mathieu De Craene here; he is one of the people involved in the CardioFunXion project that is partially organizing this as well. Mathieu has a background in engineering, did a PhD in medical imaging, worked for a while here also, but is currently working at Philips Research Paris. So it's actually interesting to get a little bit of the industry perspective on this too, where you see that real research is not only happening in academia but also in industry, and although industry in the end wants to make a product, in order to make products slightly more basic research is of course also needed. So it's very nice to have Mathieu give us an overview of cardiac image analysis, from the basics towards industry, I hope. So please, thank you Mathieu. We'll see, is the microphone working? Is the microphone... yeah, connected now. Okay. So what I would like to do now is to take back a lot of the things we saw yesterday about the heart; we had the clinical point of view, the physiological point of view, but now look at them a bit more from the perspective of image analysis. Actually, one thing I'd like to ask,
at least of Bart, is to be quite interactive, because when we do quantification we could get things wrong, and that's very important to understand well from the start: maybe we are not quantifying things accurately. We could get quantification errors because of a lot of noise in the image, because of the processing, because of too much regularization, but it could also be that the index we are quantifying is not very useful. So that's a bit what I would like to go over. It is of course a lot of very different things, so it will be more of an overview than something very detailed. I'd like to talk first about how we can extract from images information regarding the anatomy. I have to go back... this is okay. So about the anatomy first: of course the four-chamber geometry, so the two ventricles and the atria, but also the coronaries that we talked a lot about yesterday, and also fibers, because we saw a lot of beautiful images of fibers yesterday, but what about seeing this in real patients, and is it feasible or not? That's for the anatomy, so the static part. But of course the heart moves, and so we also want to quantify this function, and for the function of the heart the main indices that are quantified and used in practice are the motion and the strain. So we'll be talking about that, about what global motion is and how we can go from a global description to a finer level. Very quickly we'll talk about flow and perfusion, and I will finish by showing one example; yes, it's a company example, you can say, one example not in the sphere of diagnosis but in the sphere of intervention and imaging, about how image processing can be useful to merge information. So let's start with this first part about diagnosis, and let's start with the quantification of the four chambers of the heart. We saw yesterday normal and pathological hearts, and you've seen that the heart can adapt in shape
when there is a problem, be it hypertension, be it a valve that is not functioning. And we've seen that, of course, there are a lot of possible variations, but the topology of the heart is quite consistent over subjects. You have deformations in terms of dilation, you will have thickening of the wall; this can be global, so a global change in volume, or it can be local, just the thickness of the wall. We have seen the basal septum, that's a typical signature of hypertrophy, but still it's quite a consistent geometry. That means that, from the image processing side, what's mainly been done is to take a model of the heart, sometimes called an atlas, and to map this very generic average model to each patient. Of course, when we do this, there are three questions that arise. Honestly, the first one is more, I would say, a hobby for image processing people; it had a lot of interest some years ago and I think it has progressively disappeared, but it's: how do you get the average anatomy, what is an average? Then, when you want to take this average to an image, the question is how you can very grossly estimate where the heart is; sometimes in an MR or a CT you have a lot of things around, and you want to get a bounding box around the heart. And then, how do you really do the refinement of the adjustment? So, about how to obtain an average anatomy: it's actually some research that's been done at UPF, so that's why it's nice mentioning it. Mostly what's involved here is a lot of registration, image registration, so you try to align images from different subjects. And to get an average that's not too biased towards one specific subject, what you do is that you start by picking one subject in the database, you align everything to it, and then you can average these aligned images, and you get something like this: a blurry image that of course looks very much like your reference, because you've been aligning everything to this
reference. So we say that there is bias in this average, and to remove the bias you can also use all the transformations, because what's interesting in this procedure is not so much the final image that you get, but actually the whole set of transformations that map every subject to this reference; that's where the information about the variability in the dataset is. So you can also take the average of these transformations and apply it to the averaged image, and that's a way to remove the bias. And of course you can do this iteratively, to progressively remove the bias that was in your initial choice of reference. You see that in the red circle here I put something about the log; this is just to mention that when you do statistics on deformations you need to be careful, because, for instance, if you take the vector fields associated with rotations and you just average them, you might get something that's not the average rotation. So there are some mathematical tools, the log and the exponential, which are operations you do on a vector field to send it to a space where you can do averages; then you can come back, and you avoid some artifacts that actually appear when you do statistics on big deformations. Then our second question is how to localize the heart. Here I'd like to show two possibilities. The first one, on the top, is really common, and I mention it because it's actually what's in the Philips product, so I think it's worth mentioning: the idea of the generalized Hough transform. The idea is simply that if you take your reference shape, you can index it, so you can go along your shape and look at the normal of the contour, and for each value of this normal you can build a lookup table, the table you have on the right there, where for each orientation you store the direction of the center of the shape. And now, when you want to find this same
shape in an image, what you do is run an edge detector; you will get normals by looking at the gradient, and you can use your table, which is kind of your representation of the shape, to look at directions. This is a voting approach: in this direction in the image you will count, and you will get a vote for each voxel, so you increment along the ray that goes from the point in the direction of the normal; on this ray you increment the vote of all points. And of course, when you repeat that for all the parts of the shape, the one that should get the most votes is the center. So it's quite a robust way to detect the center of a shape, and it's called a voting-based approach. Of course you don't have to do it that way; there are many other possibilities, so I just put another one there, more based on registration, where you also detect some key points and then you can take just an average shape that's been computed from a set of images; because, if we take a four-chamber view like this, the heart is more or less always in the same place, this is already a good estimate, and sometimes it's sufficient to then deform to the image and detect the heart like this. Then of course there is a lot to do to take this generic shape and align it with a good level of detail to the image, and that's where it's interesting to talk about point distribution models, often referred to as PDMs. Typically these point distributions are computed using a PCA, which is a very standard tool in machine learning, I would say one of the first tools in the toolkit somehow. The idea is that you build a covariance matrix of all the points in the shape, and you try to diagonalize this matrix, so you come up with eigenvectors that you store in a matrix called here phi. Then, what's important is what you can do with it: if you take an input shape, say x, you subtract the mean, because that's very important; it's always
a way to center your data, and I'm sure Gemma will agree that no matter what you do in machine learning it's important to normalize the data first, and it's not always easy; the simplest normalization is to subtract the mean. So you take your shape, you subtract the mean, and when you project it on the eigenvectors, this phi matrix, you get a vector of coordinates. That's a very simple way, really the simplest one, to take a shape as input and project it into a space that has no real physical meaning but where you have a set of coordinates, and every one of your shapes is mapped to a point. That's very interesting, because it's a way, conceptually, to take a big population of shapes and map it to a space where you just have points; then you can navigate in there, you can click and view the associated shape. So I think it's highly interesting for developing navigation tools when you have a lot of data. And then you can come back: from these coordinates you can always reconstruct the shape using exactly the same phi matrix; it's the same matrix to project and to come back. This means that, in the context of segmentation, what we have here is actually a deformation model, a transformation model, because I can formulate the problem of segmentation as: I want to find the optimal b values to fit my image. So that's a way to deform this very generic mean and to use not only the average, which is not the most interesting part of the atlas, but all the statistical information that's included in my atlas. And often this is not enough, so it's often combined with simpler transformations, just geometric transformations. Then you use this transformation as a regularization: what's typically done is, for every point of the surface, you look along the normal, you find the closest edge in the image, and you get a displacement that's very noisy, of course, because you can get a lot of errors in this process.
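As a side note, the project-and-come-back machinery of a point distribution model can be sketched in a few lines. This is a toy illustration with made-up shapes; the names phi and b follow the slide's notation, and nothing here is the actual product code:

```python
import numpy as np

# Toy training set: 30 "shapes", each a vector of 20 stacked landmark
# coordinates (all numbers invented for illustration).
rng = np.random.default_rng(0)
mean_shape = rng.normal(size=20)
shapes = mean_shape + 0.1 * rng.normal(size=(30, 20))

# Center the data (subtract the mean), then diagonalize the covariance:
# the right singular vectors of the centered data are its eigenvectors.
mean = shapes.mean(axis=0)
_, _, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
phi = Vt.T                      # columns of phi = modes of variation

# Project an input shape x into the space of coordinates b ...
x = shapes[0]
b = phi.T @ (x - mean)
# ... and come back with the very same matrix.
x_back = mean + phi @ b
```

If you keep only the first few columns of phi, the reconstruction becomes an approximation constrained to the main modes of variation, and that truncation is exactly what tames the noisy edge displacements just mentioned.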
But because we have this transformation model, which is some kind of statistical regularization, we can regularize all these very noisy displacements and deform the heart to the best possible fit in the image. So now I have a video; I don't know if there should be sound, but it's not really important anyway. This is the promotional material for this heart model: these are all the concepts we have seen, but you see them working on images, so this is the average image being deformed to the patient, really marketing. Okay, so you see that these concepts, which are research concepts (I think it's looping, so we can stop), having been developed in universities, are now part of a product, and I think it's an interesting example. Bart will tell you if that's useful or not; that's a completely different story. Okay, just to mention, here I was a bit biased, because, as Bart said, I was at UPF and then Philips, so I saw a lot of this model-based segmentation. There are other methods, I would say more traditional in the image processing community, like the level-set-based ones, and this is actually a nice example, done at CREATIS by Olivier Bernard in collaboration with Leuven; it's a level-set method. I don't know if you're familiar with what a level set is, but it's a way to represent a surface. Typically, when you want to extract an object in an image, it means somehow you want to express one coordinate, let's say x1, the first of your coordinates, as a function of the others. And because this is a function, you may represent it in different ways, and here one way they used is a sum of basis functions, radial basis functions. What's interesting is that they did it in a spherical coordinate space, and I put it here just to mention that sometimes the physical space, let's say xyz, is not the best space in which to represent the shape; we'll discuss that again when
we see strain. But here, clearly, it made more sense: because the surface we are representing is topologically quite close to a sphere (you could deform a sphere to get it), it makes more sense to use a coordinate space that's well adapted to the geometry of the problem. And then they use some quite classical level-set machinery: you minimize a functional that is a sum of an inside energy, inside the contour, and an outside energy, outside the contour, and each energy is like a variance, so you're trying to get an intensity that's as constant as possible inside the contour. It works quite well, and they managed to apply it quite quickly, because this representation with basis functions speeds things up quite a lot. So this was for the geometry, the four chambers, but of course yesterday we also saw a lot about the coronaries, and we saw how important it is to look at which part of the myocardium might not be getting the right level of oxygen. So from the image processing point of view you want to quantify whether there is a stenosis in the coronary tree, but first you want to reconstruct it. This is again one example, you have the references there, where what they needed to do is define a lot of seed points in the image and then try to connect the dots, to connect these seed points, by having again a level set. It's the same thing we just saw, but here it's the idea of a front that propagates from a starting seed point to an end seed point, again optimizing some energy; I will spare you the details. What's interesting in this type of segmentation problem is that sometimes you might get stuck and just not reach the final seed point, so to make it more robust you need to consider a lot of hypotheses, and what they were actually able to do in all these works is to have a framework that tests different combinations of parameters and then
chooses the best one. I think that's quite an interesting concept for us: sometimes we should not restrict ourselves to just one processing pipeline; we should run different ones and be able to fuse them and take the best. That's also a concept that's getting more and more generic in image processing, and it was already there. Then, of course, you can do this in two iterations; that's what you see on the left: first they estimate a centerline that's actually a bit wrong, but then they can correct it by estimating the radius, so you can go with a multi-scale approach. And what's typically done is, once you have your coronary reconstructed, you can unfold the image along this coronary and you get just this straight image on the right, and that's the image where they look for any calcifications. Then, yesterday we had Marta telling us quite a lot about perfusion; she mentioned SPECT, she also mentioned ultrasound (if you remember, these are the bubbles), but she was also quite careful about how useful this is. She also said that MR is a new modality to look at that, or at least an alternative. So the idea is that you inject contrast, typically gadolinium, and you look at the contrast uptake. You see that, of course, the cavities appear very bright first, but what you're actually interested in is the contrast evolution inside the myocardium, and that's a very challenging problem on the image processing side, because these images are taken while the patient is breathing, so there is a lot of motion. It's also very noisy data, because they want a sequence that's really continuous, so they cannot take a lot of cardiac cycles and average them to reduce the noise. That means that you first need to do an alignment of all these images to compensate for the breathing, and that's already a tough registration problem.
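To give a feeling for that alignment step, here is a deliberately minimal sketch: a rigid, integer-shift registration of two 1-D intensity profiles by exhaustive search over a sum-of-squared-differences (SSD) criterion. Real perfusion registration is 2-D or 3-D and usually non-rigid; the profiles and function names below are invented for illustration.

```python
# Toy rigid registration: find the integer shift between two 1-D "image
# profiles" that minimizes the sum of squared differences (SSD).
# All values are made up for illustration.

reference = [0, 0, 1, 5, 9, 5, 1, 0, 0, 0]   # profile at one time frame
moving    = [0, 0, 0, 0, 1, 5, 9, 5, 1, 0]   # same profile, displaced by breathing

def ssd(a, b):
    """Sum of squared differences over corresponding samples."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def best_shift(ref, mov, max_shift=3):
    """Return the shift s (mov displaced right by s w.r.t. ref) with lowest SSD."""
    scores = {}
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            scores[s] = ssd(ref[:len(ref) - s], mov[s:])
        else:
            scores[s] = ssd(ref[-s:], mov[:len(mov) + s])
    return min(scores, key=scores.get)

shift = best_shift(reference, moving)   # the breathing displacement, recovered
```

In practice one would search over 2-D translations or a non-rigid model, and plain SSD would be a poor choice here anyway, since the contrast uptake itself changes the intensities from frame to frame; it only works in this toy example because both profiles share the same values.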
Then you also want to segment each of these images, because you want to look at the contrast uptake in the muscle, and then they try to obtain this type of curve to quantify the parameters that you see there, these deltas in time and in intensity, and they typically try to fit some models, some typical distribution models, to this. Now, a variation of that is for the scar, so the late enhancement, sorry, to quantify scar. There you wait a much longer time, and you have MR sequences that are designed so that the scar tissue, the tissue that's really fibrotic, appears bright. If you look at the top-left images, you see all the regions that are indicated by arrows and that are very bright; they show where the scar is. This again is quite challenging data to process, but it's very interesting, because when you look at patients with ischemia, that's where you know which part of the tissue is truly damaged, and so you can confront this with what you would see in ultrasound, which is the motion. This is typically information that's very difficult to access in clinical practice: what Marta was saying yesterday is that all patients get ultrasound but very few of them get MR, and this is something we would like to be able to see in ultrasound, but currently the reference is MR. In terms of processing, what's typically done, because these images are also very noisy and don't have a very good resolution, is to complement them with another sequence, a cine loop, which is the usual sequence any patient would get when they go for a cardiac MR. There you get a first segmentation that's matched onto these late-enhancement images. So again, first you try to extract the contours, and then, once you know where to look, you try to do some thresholding; it can be based on graph cuts, there are many possible techniques, but the processing is then quite simple, because you know that you just look for the bright tissue.
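One simple thresholding rule used in the late-enhancement literature is "mean of a remote, healthy region plus n standard deviations". The sketch below uses that idea with invented intensity values; the choice n = 5 is just one common convention, not necessarily the one used in the works shown here.

```python
from statistics import mean, stdev

# Invented late-enhancement intensities inside the segmented myocardium:
# healthy muscle is dark, fibrotic scar shows up bright.
myocardium = [55, 60, 52, 58, 61, 57, 59, 180, 175, 190, 56, 54, 185]

# A remote region of the wall assumed to be healthy (hand-picked here).
remote = [55, 60, 52, 58, 61, 57, 59, 56, 54]

# Threshold: mean of the remote region plus n standard deviations.
n = 5
threshold = mean(remote) + n * stdev(remote)

scar = [v for v in myocardium if v > threshold]
scar_burden = len(scar) / len(myocardium)   # fraction of "scar" samples
```

On real images the same rule is applied voxel-wise inside the extracted contours, and a scar burden or per-segment extent is then reported; alternatives such as a full-width-at-half-maximum criterion, or the graph cuts just mentioned, exist as well.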
And you would like to discriminate the different areas in the myocardium, specifically looking at what's called the transmurality: what's important to know here is whether the infarct goes all the way from the endocardium to the epicardium, or whether it's just a small part, with still, say, half of the thickness preserved and able to function. Then, what about fibers? Fibers are very difficult to get from real images; it's very difficult to get the architecture of the fibers for one specific patient. We saw yesterday how important this is to explain a lot of things about deformation, if you remember, doing this Egyptian figure like this: the fact that you go from endocardium to epicardium with a change of orientation, and the fact that, because the fibers are mostly longitudinal at the endocardium, when things go wrong the longitudinal function is penalized first. This all means that fibers are very important information that we would like to get for patients, but the fact is that getting MR sequences to image that is very challenging in vivo. There's been a lot done on ex vivo data, but that's not directly applicable to patients. It's not too bad, because you can do statistics, take a statistical model, map it onto the patient and then do your simulation, but it means that your simulation is using, for the fibers, which are really a key component of the simulation, something that's really an approximation and not really specific to the patient. So it's important to try to develop these techniques in vivo, and that's why there is a lot of MR research on that; you can see the reference at the bottom. These are people trying to take these sequences, with really long acquisitions, and get them to work on a patient in vivo, with breathing, with the heart beating, so they have a very small temporal window in which to acquire data, and it's all about optimizing
the MR sequence for this problem, but also how to do the regularization afterwards, because you'd like to keep all the information but you know that you will get very noisy data, so you need to regularize. And more and more you see that ultrasound is a fascinating modality, because it's changing a lot over the years, and more specifically there is a revolution in ultrasound called shear wave imaging. I'm not a specialist in acquisition, so I will try to put it in simple words, and you correct me if I'm wrong, but I think it's as if you had a very tiny seismic wave in the heart, in the tissue that you are imaging: you manage to generate this shock wave by ultrasound, and then you image how this wave advances in the tissue, and if you look at the velocity of this wave, it gives information about the elasticity. More and more they are trying to extend this; of course, it's one thing to acquire this type of information on a static piece of tissue, and something else to acquire it inside the heart, so those are really two different problems. But in terms of anisotropy, to estimate the orientation of the fibers, there are now imaging techniques that consider different shear waves and try to combine them, and you see that they are already able to reproduce something we saw yesterday about this change of orientation from endo to epi; you get similar distributions, although the numbers are not exactly the same between echo and DTI. I think that's a very positive thing, because if that can later be applied to patients, it could be an alternative to the MR technique of diffusion tensor imaging, which has been there for years but is very difficult to apply to the heart. Okay, we've seen a lot about anatomy; now I would like to say a few words about motion, and we will go progressively from very global features to more local ones. So the first feature that you want to quantify from a heart, of course, is the volume over time, so you get here the
typical volume curve. If you have this volume, there is a direct index that you obtain from it, called the ejection fraction; it's one of the first indices and it's still used in clinical practice, they always look at it, at least that's my feeling from what I hear from people in Philips working with clinics, they always compute it. Now, it's not a great measure, and Bart could also talk about why it's not a great measure, but it's the first measure. So, to quantify a volume, how do we do that from an image processing point of view? Well, the way we do it in my lab at least (there are other ways) is that we have a way to represent a shape where you take what's called the long axis of the ventricle, and from this axis you can cast a lot of rays and measure the distances to the surface of the left ventricle. So you see that we combine a cylindrical representation with a spherical one, and all the points that you get in this way are nicely and neatly arranged, already indexed by something called the longitudinal and circumferential indices. And of course, if you take all these small elements of volume, you can sum them up and compute the volume. That's how we do it, and that's how we reconstruct, from a typical 3D ultrasound like the one you see there, a volume curve. Of course, if you have only two views: it's important to know that in clinics they also often estimate volumes with just 2D views, but the idea is the same, you interpolate the information where you've got it, so even if it's just in two planes, you interpolate somehow to the rest of the shape. So this was for volume; now let's talk about another measure that was also mentioned yesterday, called strain. There are many complex ways to define strain, but I'm not going to go for the complex way; I've had a lot of discussions in practice, and any time you talk about strain with mechanical engineers it's a nightmare, because they have a two-hour
discussion about which strain definition you are using. So here I'm going to use the simplest one, called engineering strain: it's the relative change of length. Here we just define strain as a one-dimensional measure where you compare the small l, the length after the deformation, to the capital L, the length before the deformation, and you see that it's normalized. So the first thing you can do when you are looking at the left ventricle is to take lengths that go over the entire extent of the shape: you see that on the top we've been drawing, let's say, strings that go from the apex to the base, and of course if you track these (it's a bit similar to what was done for the volume curve, but now let's imagine that we are not only quantifying the envelope but are also able to track points), it means that you can compute the length of each of these strings, you can compute this e value, and you can come up with a set of values that represent, on the top, the global longitudinal deformation. That's something known as GLS, which is also a single number, because, okay, you have a value for all the strings, but you can average them, or take the median value if you want. This value, the global longitudinal strain, GLS, has been used a lot, like ejection fraction, as an index to differentiate healthy subjects from pathological ones. It's not magic, and neither is the ejection fraction, so there are also a lot of limitations to it, but it's slightly better than ejection fraction, because, as we saw yesterday, longitudinal function is penalized first when there is a problem, and so it appears to be a little bit more sensitive than ejection fraction. Of course, this was for longitudinal strings, but if you take the, sort of, circumferential ones (I'm not going to manage to say it, but it doesn't matter), so you make circles also from base to apex, you can also look at these length
changes, and you will get this global circumferential strain. Now, what we've been introducing already in the previous slides is this idea that you can map the ventricle to a 2D map; this is some type of flattening, and what you see on the top, this circle, is the most usual flattening, which any cardiologist is used to reading a lot. The idea is that the center shows the apex, the outer circle represents the base, which would be this top circle here, and of course all the intermediate circles go from base to apex like that. It's usual to divide this into 17 segments; there are different definitions of segments, and this one is the most usual. What Bart will surely tell you is that this is a very artificial decomposition: they just decided to divide it into territories like this because it was easy, and it doesn't exactly match the perfusion territories of the coronaries. You can put a map of the territories on this circle, but it's not going to exactly follow the borders of the segments. Still, it's a customary representation, and what I wanted to mention is that what we saw about the global strain in the previous slides we can do at different levels here: to all these strings that are horizontal and vertical here I can apply the same computation of the strain, so I can get a strain value per segment, or even per point, although frankly we don't have the resolution to go there. So, we've been talking about tracking, but I didn't give details about it yet. This comes from a paper I participated in, where we wanted to compare different tracking techniques, and here you have the list of the techniques that were compared. I think it's good to mention it because it gives an example of the types of techniques: you see words like elastic image registration, optical flow, block matching; these are different types of tracking techniques. You also see that some of them are labeled as working on the B-mode data, which means they work on the final image, so that's the typical
processing that knows nothing about the acquisition and works with the final image, and you see also that there is one here working on the RF data, so before the image: these are actually the ultrasound signals, signals sent at very high frequency, and if you can access them, you can also do your tracking on these very high frequency signals, trying to do the matching based on the signals rather than on the image, the image being an envelope of the signal. I'm not going to go into much detail now; we will see in more detail for tagged MR the same types of tracking techniques. But what we have typically is an elastic registration: what it means is that you put what they call control points, so you take points of interest, or sometimes just a regular grid of points, you put a basis function at every point, you optimize this, and you try to maximize the fit between images to estimate the transformation. To do the tracking, you see that some people here are also doing this approach, but they do it, again, in a flattened representation of the ventricle: they take the input image and warp it to this image, which is actually the unfolding of the ventricle, so the unfolding concept we have seen before is here applied to the images themselves prior to the tracking. Then you have optical flow, which is, I think, one of the oldest image processing algorithms; you could lecture about it for hours, but the idea is that you don't really have basis functions: you try to compute what's called a dense displacement field, so for every point you look for a displacement value, three numbers, so it's a very big object that you are trying to compute, and to regularize you typically do some smoothing, and you do it iteratively. Mathematically, the first one typically optimizes a cost function, with traditional optimization techniques where you compute derivatives, and the second one is
doing the same but from the variational point of view, which is kind of another branch of optimization, but of course they are very connected. And the last one is really block matching: that's the one where you just take a small block around your point of interest and, very locally, you look for how this block moves to the next frame, again either on the B-mode or on the RF image. So that's speckle tracking, but Bart will tell you that this is rubbish, because, as we have seen, there is regularization in there; you need to put in some regularization, otherwise you just get crazy things. Another way to get strain is by looking at Doppler: Doppler gives you velocities, which means that if you do an integration you can get back to the strain rate and then to the strain. So by integrating velocities you can get the same information, and of course, because it's based on velocities, there is not so much regularization, and so this is an example where you see that some temporal patterns are not well visible on the speckle tracking but are more visible on the Doppler. So it's an alternative; you have to know that there are two different techniques and they can be complementary. Just a word about dyssynchrony: dyssynchrony is a very nice example of the many indices you can compute from images, because we have seen tracking, which means that now you can start to trace curves that represent strain or velocity, compare these curves at different locations in the ventricle, and compute a lot of indices. One of them: you know that the normal heart contracts synchronously, so all the peaks should be properly aligned, while in pathological hearts you will start to see some dispersion, so you could look at the distance between the peaks. Now, it turns out that if you just get numbers out of this, it's very difficult to get a reliable index, and I think this is an example where it actually means that the next step is
We don't just want to get a number; we want to recognize a pattern, and that is where I am setting the stage for Gemma and the machine learning part: we have to evolve from computing a single number, be it ejection fraction or GLS, to recognizing more complex things, which are trends in your data. Okay, now I will say some words about tagged MR, because I think it is very interesting from the image processing perspective. Clinically you could say the interest is a bit limited because, as we saw many times, you often only have ultrasound images, but what we are trying to do in the lab I am working with is to compare the ultrasound and the tagged MR, to validate the ultrasound. So what are these tagged images? They are images produced by imposing a pattern on the magnetization, so you manage to have bands that are much brighter, and this pattern follows the tissue, so you will see it deforming with the tissue. It is a bit of a dream, because you have a lot of markers in your image that you can track, something you do not have as well defined in an ultrasound image. Something also very interesting is that you can apply these tags in different directions, so you have different channels in your input signal. As for the tracking methods, if you look at them they are very similar to what we saw in ultrasound. The first one, elastic registration, I consider deja vu, so I am not going to talk about it more; it is exactly the same concept. This is a typical example where people take exactly the same technique and apply it as-is to another modality; there is really no change in the algorithm. But then, what is very interesting in this modality is that you have a lot of techniques that are really tailored to the specificities of this acquisition. The first one, called HARP, is actually the first processing technique that was proposed, and it is
still often taken as a gold standard for strain. I would be a bit careful with it, because what they do, to put it in very simple words (what I am doing here is a bit of a caricature, but I think it can be useful), is tracking without any regularization. So of course, when you compute strain from that and you look at what the tracking looks like on real images, you have points that go completely out of the region of interest, they go completely crazy, and your strain, which is this change in length, goes completely crazy as well. So I would say good luck if you really use this image processing tool as such, because having no regularization is really too optimistic; there is too much noise in the data to be able to do that. Then you have some other techniques, which we will now see in more detail, that really try to do an analysis in frequency, and you will see that the last one started, I think, from saying that tracking is rubbish in the sense that there is too much regularization, so they really tried to get rid of the tracking. Let's go one by one. The historic one is called HARP. The first step is a pre-processing step: they compute what they call phase images, and that is because initially tagged imaging suffered from what is called tag fading. The tags that were imposed on the tissue are not persistent, so the intensity was progressively going down and the tags were disappearing. That is very difficult for tracking, because if you look for the same intensity you will get wrong tracking results. So they needed to extract something that is not sensitive to this change of intensity, and what they did is extract the phase, which is a really common tool in image processing and is also used a lot in ultrasound. Here, because we don't have much time, let's just look at this as a way to go from the intensity to an image that is normalized between, say, −π and π.
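The flavour of this phase extraction can be sketched very compactly: isolate one spectral peak of the tag pattern with a band-pass in the Fourier domain and keep only its phase, which is insensitive to the fading amplitude. This is a 1-D toy, not the actual HARP implementation; the profile, tag frequency, and fading factor are all invented.

```python
import numpy as np

# Toy 1-D "tagged" profile: intensity modulated by a cosine tag pattern
# whose amplitude has faded over time (tag fading).
n = 256
x = np.arange(n)
k = 2 * np.pi * 8 / n                 # tag frequency: 8 tag periods across the profile
fading = 0.4                          # faded tag amplitude (vs. 1.0 at the first frame)
intensity = 100 + 50 * fading * np.cos(k * x)

# HARP idea: band-pass one harmonic of the tag pattern and keep its phase.
spec = np.fft.fft(intensity)
mask = np.zeros(n)
mask[6:11] = 1.0                      # keep only bins around the +8 cycles harmonic
harmonic = np.fft.ifft(spec * mask)   # complex image carrying one harmonic
phase = np.angle(harmonic)            # wrapped phase in (-pi, pi]

# The phase still equals k*x (mod 2*pi) regardless of the fading amplitude.
err = np.angle(np.exp(1j * (phase - k * x)))   # wrapped phase error
```

The wrapped error stays numerically zero even though the tag amplitude dropped to 40%, which is exactly why phase images are a good substrate for tracking faded tags.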
Then they do tracking; that is still the original HARP algorithm. If you have done optical flow you will recognize that this is an optical flow equation, but there is a big difference: what you are tracking is not a scalar. Rather than having one channel you actually have three channels, because each channel is one direction of tag acquisition: the vertical tags, the horizontal ones, and the perpendicular ones. Normally in optical flow you have the so-called aperture problem, which means that you cannot estimate a 2D or 3D vector field from just one channel, but by adding channels you can, and that is really what is interesting in this modality. Then, what we did was implement this technique for 3D images, and what a colleague of ours did was add some regularization, because you need to. We used basis functions, but inspired by what we saw before of not using the x-y-z space but what we call a more anatomical space: we divide the ventricle into patches, compute in each a local system of coordinates, the radial, longitudinal, and circumferential directions that we have been talking about for strain, and we express the transformation as a linear (affine) transformation in this space. Our first version used a diagonal matrix, but then we improved it and made it a full affine matrix. I hope the videos will play; these are some examples of tracking results. It might be a bit heavy to show these images in motion, but if you are interested I can show you afterwards on my computer.
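The aperture-problem argument above (one channel under-determines the displacement vector, several independent channels resolve it) can be sketched numerically. This is a deliberately idealized 2-D toy with linear, noise-free phase maps and an invented displacement; it is the geometry of the argument, not the HARP solver.

```python
import numpy as np

# Recover an in-plane displacement from two tag "phase" channels.
d_true = np.array([0.3, -0.2])        # true displacement (pixels), invented

# Two linear phase maps with independent tag directions k1, k2.
k1 = np.array([0.5, 0.0])             # "vertical tags": phase varies along x
k2 = np.array([0.0, 0.4])             # "horizontal tags": phase varies along y

# Phase constancy: grad_phi . d = -dphi/dt. For linear phases grad_phi = k,
# and the (negated) temporal phase difference is simply k . d_true.
A = np.vstack([k1, k2])               # one gradient row per channel
b = A @ d_true                        # observed constraints (noise-free)

d_est = np.linalg.lstsq(A, b, rcond=None)[0]   # two channels: fully determined

# With a single channel the system is under-determined: least squares returns
# the minimum-norm solution along k1, which is generally wrong.
d_one = np.linalg.lstsq(k1[None, :], b[:1], rcond=None)[0]
```

Two independent tag directions recover `d_true` exactly, while the single-channel solve collapses onto the tag direction (here it returns (0.3, 0)), which is the aperture problem in one line.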
Then you have another processing technique, called SinMod, and that is really interesting because it exploits a fundamental property of the Fourier transform: a shift of a signal in space gives a phase difference in the Fourier domain. So you can use this property directly: what they do is go to the frequency domain and apply low-pass and band-pass filters, and what appears in the equations (it is difficult to read here) is the conjugate of image 1 against image 2, which makes the phase difference appear. This phase difference, the argument of this cross-correlation after filtering, is normalized by a spatial frequency, which I think is a very interesting concept, because it means the notion of local spatial frequency appears in the image: how many tags there are at this point of the image. And then you have these other people, and I like all these approaches because I think they are very complementary and say a lot about typical problems in image processing. They said: in everything we have seen up to now you need regularization, and sometimes we don't like that, because if the part that is contracting abnormally is very small, we could miss it completely. So what they said is: let's go directly to the image and consider that if this was a tag before deformation and this is after deformation, I can directly measure differences in angles and differences in lengths, and this is a direct estimate of strain. Of course, the problem when you are too monolithic like this is that you miss a point, which is that for any application you want to follow one material point and see its deformation curve over time, so you will still need tracking; that is typically something they overlook. But it is very interesting, because it means that sometimes you need to combine different tools: maybe tracking is not the best way to compute strain, but it is still useful to make sure you are following the same material point in the image.
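The Fourier property SinMod builds on fits in a few lines: a spatial shift appears as a phase ramp, and dividing the measured phase by the spatial frequency (the normalization mentioned above) converts it back to a displacement. This is a minimal 1-D demonstration with an invented signal and an integer circular shift, not the SinMod algorithm itself.

```python
import numpy as np

# A shift in space shows up as a phase difference in the Fourier domain.
n = 128
x = np.arange(n)
f1 = np.sin(2 * np.pi * 5 * x / n) + 0.5 * np.sin(2 * np.pi * 9 * x / n)
shift = 3                              # circular shift, in samples
f2 = np.roll(f1, shift)                # f2[x] = f1[x - shift]

F1, F2 = np.fft.fft(f1), np.fft.fft(f2)
# F1 * conj(F2) has phase 2*pi*k*shift/n at frequency bin k.
cross = F1 * np.conj(F2)
k = 5                                  # read the phase at one populated bin
est = np.angle(cross[k]) * n / (2 * np.pi * k)   # normalize phase by frequency
```

The estimate recovers the 3-sample shift exactly; in SinMod the same reading is done locally, band by band, which is where the "how many tags at this point" notion of local spatial frequency enters.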
Then I am going to be very quick, because time is running, and say a word about validation, because a big problem in image processing, always, is this: we have seen how to get beautiful information from images, anatomy and function, but the question is, I can give you curves, but how do you know they are accurate? And specifically for tracking it is a bit of a nightmare. Already when you want to validate segmentation, what has been done a lot in the field is manual segmentation by different people, and it is a very painful task; these people really do you a favour, because they have to contour the images manually, you want to do that several times per observer, and you want different observers, to get an idea of the variability of this so-called ground truth. So it is really a nightmare to do, and that was just for segmentation. If we want to apply this to tracking it is even worse, because when you have a 3D image, even a very simple phantom image as we have here (this is a physical object, a cylinder that is deformed by a pump and imaged in the MR), you could manually track points, but it is extremely difficult and you will have a lot of noise and inaccuracies in the process. So what can we do about it? What is typically done is to combine solutions. The first one, at the top, is, as I mentioned already, to take images from a patient in different modalities; of course you will quantify them differently and get different results, but if they are somehow consistent you will be confident that this was the right value. Of course that is far from an accuracy measurement really quantified in millimetres. So then people said we need something else. Another way, still very close to the imaging modality, is to do what are called in vivo animal experiments, and so these are
these experiments where you take an animal and, for example, you block a coronary artery and look at the effect; we saw this type of image yesterday. Of course then you know, because you control it exactly: you know you have been occluding a certain coronary, so it corresponds to a certain sector of the myocardium, and you can check that your quantification indeed shows a difference between baseline and after the occlusion in that part. But again, it is very difficult to get measurements that are really detailed; you will typically get measurements at sparse points, so you don't really get accuracy measures on the entire geometry. Then you have the phantoms, which are these toy objects that somehow represent the shape; here you see something that grossly looks like a left ventricle, and you couple it to a pump, but of course you don't really get the same motion patterns, and when you put that inside an ultrasound machine or an MR you get an image that is real in the sense that this is the real modality, but it is too easy compared to a real image. And then, what is now more and more done is simulation, and that is an interesting application because we are in the context of the VPH, where you try to simulate everything: you simulate both the imaging device and the motion fields, which is actually what I will present here. Here you see different types of synthetic images, because we have been working a lot on this. On the left you see the first type of image, where you have a very simple ultrasound model and a very simple geometry; of course you can validate your segmentation and your tracking on this, but it does not look realistic, so if you quantify your accuracy on this it does not tell you a lot about the real performance of your algorithm. Then you see that we have been trying to improve them, and now what we are doing is to mix reality and simulation, which is the one on the right.
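The pay-off of simulated ground truth is that tracking can be scored point-wise instead of at a few sparse landmarks. A minimal sketch of such a scoring step, with entirely synthetic arrays standing in for the simulated displacement field and a tracker's output:

```python
import numpy as np

# When the motion comes from a simulation, the true displacement of every
# point is known, so the tracking result can be scored per point.
rng = np.random.default_rng(0)
n_pts = 500
gt = rng.normal(0.0, 2.0, size=(n_pts, 3))            # ground-truth displacements [mm]
tracked = gt + rng.normal(0.0, 0.3, size=(n_pts, 3))  # tracker output with errors

# Point-wise endpoint error: Euclidean distance between estimate and truth.
err = np.linalg.norm(tracked - gt, axis=1)
mean_err, p95_err = err.mean(), np.percentile(err, 95)
```

Reporting both a mean and a high percentile matters here: a tracker can have a good average error yet still fail badly on the small abnormal segment one actually cares about.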
In that way you get images that are hopefully more and more realistic, and we think this is a good way to verify the accuracy of algorithms before putting them into hospitals. The context for this is that doctors realized that if they take software from different vendors they get completely different values, and this is making people more and more unhappy, which I think is really understandable. That is why we need this type of benchmarking dataset, and I think modelling is a way to make this type of validation. I will skip this; it is just the general pipeline. You combine all these modelling techniques and you extract from the simulation this mesh; it is completely a modelling result, there is nothing that comes from a real image in there, but you can get all the indexes you would like to quantify: the strain, the motion, possibly the torsion we mentioned yesterday too. Then you take this simulation result and put it into another model that mimics the ultrasound device; we also do there the blending with a real image, so that things are more realistic. You quantify these synthetic images, you get your quantification results, and because you did the mechanical modelling you can compare exactly and for every point, which is something we typically cannot do with the other validation strategies we have seen. Just very briefly, a word about flow. I think this is a developing field, and again ultrasound can bring a lot of information. I have been presenting speckle tracking techniques, but you have seen that ultrasound has much more than that: there are all the Doppler traces, and more specifically at the valves you get the colour Doppler and the pulsed Doppler traces that give velocity traces, and I think they contain a lot of information too; it was mentioned by Marta. It is interesting, because I had it also in my slides, but you see that they now want to look more at
whether, from this colour image, you can really extract a gradient from these two curves, and many people want to see how it correlates with other indexes that are maybe much more complicated to compute. That is very interesting, because flow could give access to indexes that are equivalent to, or better than, others that already exist. Again, this should be a movie, but consider how this one works: what you see in this image is a standard image acquired by the EPIQ device, the last generation of Philips ultrasound, but what you start to see, and we had a doctor, Eric Salou, who came back to us extremely excited, is that he started to see things in the cavity, points that are moving. And this is really cool, because if we can track these it means we can get an image of the flow. People are already doing simulations of that: what you have on the right is this Gao paper where they try to generate, a bit like what we saw for the validation, a flow simulation, so they get all these beautiful vector fields of flow (that is the middle column here), then they generate a synthetic ultrasound, quantify it, and check whether they can recover the flow. And you see this is appearing more and more; there is something called B-flow that is also appearing in products, so it is something that I think will progressively gain interest. I think it is really nice, because we were working a lot on shape and motion, but getting the detailed flow patterns inside the cavity could reveal a lot. Of course you can get the same type of information from MR images, exactly as we have seen before, but it is very interesting to get it from ultrasound because that is something we could potentially do in many more patients, and it also means we will be able to run comparisons to check whether what we estimate is correct.
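Tracking such moving points between frames, whether tissue speckle or these flow scatterers, can be done with the block matching described at the start: take a small block around a point and search for the best match in the next frame. A toy sketch with a synthetic random texture and an invented shift (real speckle decorrelates, so this is the best case):

```python
import numpy as np

# Block matching: search for the block in frame2 that best matches (lowest
# sum of squared differences) a block taken around a point in frame1.
rng = np.random.default_rng(1)
frame1 = rng.random((64, 64))
true_shift = (2, -1)                              # (rows, cols), invented
frame2 = np.roll(frame1, true_shift, axis=(0, 1))

def match_block(f1, f2, center, half=4, search=5):
    """Return the shift minimizing SSD between a block in f1 and f2."""
    r, c = center
    block = f1[r - half:r + half + 1, c - half:c + half + 1]
    best, best_ssd = (0, 0), np.inf
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            cand = f2[r + dr - half:r + dr + half + 1,
                      c + dc - half:c + dc + half + 1]
            ssd = np.sum((block - cand) ** 2)
            if ssd < best_ssd:
                best, best_ssd = (dr, dc), ssd
    return best

est_shift = match_block(frame1, frame2, center=(32, 32))
```

The exhaustive search is what products optimize away (coarse-to-fine search, lookup tables, GPU), but the principle is exactly this.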
Then, just to finish this talk: we have been talking a lot about diagnosis, but I would like to step out of this and ask, what about interventional image processing in cardiology? I think it deserves another talk, really, because it is really another field. They work a lot on x-ray images, which are 2D images that are very difficult, because you see a projection of everything. As in the talk we saw before, where you had this Radon transform, the idea is that with the x-ray you get a projection: either you rotate your arm, which means you get a set of projections, or you leave it fixed, and in these projections you get a lot of overlay between the objects. There, what you want to do is tracking of the tools, detection, and of course you want to locate yourself: the physician would like to know exactly where he is deploying a valve, and that is where different modalities can bring different types of information. X-ray is really the modality used in interventional imaging, but now more and more they are trying to acquire ultrasound images at the same time, and the ones they go for in this case are the transesophageal images, the ones Marta mentioned yesterday, if you remember the guy swallowing the probe, these types of images. And the question is how you fuse them, because you will see things in the ultrasound and you will see things in the x-ray, but if you have to build the mental picture of what is really going on and make the integration while operating on the patient, this is just too much. So as image processing people we need to bring solutions to match these two things and present them in an integrated way. That is actually an example where you see, in the x-ray images on the left, the ultrasound probe itself, and in this image processing technique, what is actually done is
that, from the probe as seen in the x-ray, you can estimate where the probe is located. If you are able to detect the probe in the x-ray, and if you have a table of all the possible orientations of the probe so that by looking at the images you can say which is the current orientation, you can really locate the probe in space and align and turn the ultrasound image accordingly. This technology helps you understand how the devices and the anatomy are interacting: here you see how the septum bulges, where the device is coming across, what the trajectory is. You can't do that in 2D; you just can't put it together in your mind the same way. So that is a very interesting example where, as we have seen before, different modalities interact and we have to merge them. And really, I think interventional imaging is going through a lot of changes now; it is going to evolve and not be x-ray based anymore. Actually, the Philips unit that used to be called X-ray is now called Interventional, because they don't want to define themselves by the modality anymore but by the fact that they do imaging in the context of intervention, and that is really an evolution in the business. So, I have two final slides to talk about perspectives, and what I wanted to do here is to make a link between what we have seen yesterday and today and what you will see in the next presentations and in the next days. First of all, machine learning: you might know that there is a big evolution happening in the image processing field, something called deep convolutional networks, deep learning, which is really like a wave, a tsunami, smashing everything. In image processing conferences you see more and more submissions on this; Oscar was telling me the other day that last year you were seeing this type of work at the workshops, kind of the
satellite events of the conference, and now it is appearing everywhere in the main conference. So it is really a change in the way to do image processing: really based on data rather than on models. I started this talk by showing a model deformed on an image, the heart-model-based segmentation; here the idea is that we don't want to have a model, we want to infer the model, or the features, from the data, and that is why I call them "no-sweat" features. This is not only happening in the medical field; we are a bit behind, as usual. For instance, there are people who took Photoshop, took all the filters in Photoshop, and applied them to many images, so that is a huge dataset they can feed to a learning algorithm, and they tried to replicate each filter with convolutions, with this type of deep learning technique. They can replicate any of the Photoshop filters, so they can make a very cheap Photoshop. It means that you have people working on features, on classic image processing (computing phase, gradients, denoising, filter design, really the old way), and you can take all this work, feed it to a deep learning algorithm, and get the same results; it might take a week to train, but you get the same, and it will be much faster to compute. So it is a big change for the image processing community: do we have to continue designing features as we used to, or do we have to use this type of approach? What I just want to mention is this competition, where the task was ejection fraction: they gave a lot of images, as you see at the top, together with the value of ejection fraction for all these training images, and they said, use any learning approach to directly infer this number from the images. And it succeeded very well; that was this competition.
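A drastically scaled-down version of that "learn the filter from data" idea can be shown in code: given inputs and their filtered outputs, recover the unknown kernel by least squares. A deep network does this with many nonlinear layers; for a linear filter a single convolution suffices. The image, kernel, and sizes here are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.random((40, 40))
true_kernel = np.array([[0., 1., 0.],
                        [1., 4., 1.],
                        [0., 1., 0.]]) / 8.0   # an invented smoothing kernel

def conv_valid(image, k):
    """'Valid' 2-D correlation with a 3x3 kernel (no padding)."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * image[i:i + h - 2, j:j + w - 2]
    return out

target = conv_valid(img, true_kernel)          # the "filter output" to imitate

# One row per output pixel, one column per kernel tap; solve A @ k = target.
A = np.stack([img[i:i + 38, j:j + 38].ravel()
              for i in range(3) for j in range(3)], axis=1)
k_est = np.linalg.lstsq(A, target.ravel(), rcond=None)[0].reshape(3, 3)
```

With enough input/output pairs the kernel is recovered exactly, which is the essence of replacing a hand-designed filter by one fitted from data; nonlinear filters are what motivate the jump to deep networks.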
It is interesting because one of the people who came second, if you look at this link, actually explains all the steps they followed, and you see that there is a lot of pre-processing. That is the point I would like to make: let's not go too fast here. Of course this looks like magic, but if you look, the first thing they did on this type of image was detection of the heart, and of course they extracted a region of interest. And if we want to look at things for one and the same material point, we might still do tracking as a pre-processing step. This means we don't have to discard the good old image processing too quickly; what we need to think about is how to combine it as a pre-processing step, because we will always need it. For instance, there is a big thing that is always hidden under the carpet in all this, called data augmentation: you need a lot of data and you typically don't have enough, so they try to generate new samples by interpolation, and that is still something where I think much can be done to improve things, to maybe use other techniques, to make the data more specific, or to work with patches using some traditional processing. Okay, so that was the first direction, and the second is of course modelling, and that is really what you will see in the talks of Oscar and Maxine. Here, what I just want to mention is that when we take an ultrasound exam we get all these results, flow traces, advanced indexes; it is a lot of information, and it all has to be matched in the head of the physician. What they typically do is test hypotheses, but doing this integration mentally means it really has to be done by specialists. That is where we believe that models can do a lot, and it is something we are looking at in the European Cardio Function project: to see if models can be an integrator of all the data. Ultrasound is a wonderful example because you do get a lot of data of very different types: shape, motion, velocities, flow. So the question is how
can we integrate all these data and check whether things are consistent or not. For instance, if we have a volume curve of the ventricle and the flow traces at the outflow valve, these should somehow match; there is redundancy in this data, so can we use models to integrate it and detect whether it is consistent or not? That is all I had, thank you very much. Thank you very much. First, maybe a somewhat philosophical question: you showed a lot of algorithms, and when you look at challenges you always have universities competing; sometimes a company comes in with an algorithm, rarely with a commercial one. What you see is that universities come up with very complex algorithms that mostly take a long time to compute, while in reality, when a company has a product, it is mostly fast. Where does this come from? When you develop something as a company, what is important? Is realism important, do you have magical algorithms that can do it faster, or do you just use dirty tricks to make it faster? What is the philosophy behind it; how are these algorithms seen differently in a university and in a company? That is something I saw when moving to a company. In a university you don't care a lot about computation time. The difference is that, even though we are a research lab within Philips, at the very beginning of any project we look at the time the processing takes, and if it is too much we just discard that option. It means we restrict ourselves to a very reduced set of solutions in comparison with academia. Then, of course, when you have a given tool you try to optimize it, so a lot of optimization is done, and a lot of research is about how you can get things faster. This type of optical flow algorithm, for example, we make faster by not computing the entire
displacement field, but only computing it for some points of interest. You try to add a lot of these lookup tables, as we were seeing for the Hough transform, to speed things up. Really, it is also the fact that we take a tool that exists and try to optimize all the steps. And of course something quite new that is happening now is GPU cards: this is a reasonably priced piece of hardware that can speed up a lot of things. Of course it is extremely specific, but progressively you see that more and more parts of the algorithms are being handled by the GPU, and that is another way to speed up many things. So yes, it is a lot of design, taking things as they are done in academia and speeding them up, or just cutting a branch: if you see from the beginning that it will take a lot of time, you just don't start. Okay, and partially related to that, maybe a bit more controversial: what you see in reality is that companies implement these algorithms, and especially in the medical field, medical doctors use the companies' algorithms because they are often integrated in the machine, or easy to use in some way, but these are not necessarily right, not necessarily working. You sometimes see in the field that when you come with an academic solution which you know is superior to what the company does, just because the companies have distributed everything within the medical field, you are almost forced to use a kind of inferior solution, because that is de facto the standard, because it comes from the companies. So it is a little bit of a vicious circle. Do you think this is pushed by the companies in a marketing way, or is it just that the medical doctors believe too much in the companies, or is there a bit of an attitude where validation is not the most important thing but maybe the marketing is more important, and
things like that? How do you see this? So, up to now marketing has been more important, and we also try to do integrated solutions, which typically appeal a lot to the doctors because they have both the acquisition device and the processing in the same environment and the same workflow. Now, I personally think this will change, but it will change not because of the companies but because of the community of clinicians, and also the academic community, and I think strain by speckle tracking is a good example. That is a case where people just stood up and said: look, you companies are not validating things enough, because we are getting very different results, and it cannot be that for the same patient, put in different machines, we get different values. That is something coming from the community that brought all the companies together, put them around the table, and now they are trying to design these new validation strategies. Of course it is very slow, because when you put different companies around a table they don't really want to talk a lot. That is also a reason why in challenges you typically see that companies are not entering products but research versions, and you only have one company at a time: then they are not too afraid, and if it is all academics it will be all right, but if you can have Philips and GE fighting, we are not that comfortable. But it is your job to make us uncomfortable, and I think it is really the job of the community to challenge the manufacturers and to call for more open environments where you could have multi-vendor solutions. Quantification can be made better if you know a lot about the acquisition, but you can also have generic quantification tools, and this appears a lot now: actually Philips is also developing solutions that are multi-vendor, and of course you have companies that position themselves only on the processing side, and they are directly multi-vendor, like a few we know in cardiac. So I think this is a new trend, and it will develop more and more, but again, it is not the companies that are going to push for it; I trust you for that. Any other questions? Okay, if not, then we can move on to the next talk.