Okay. Welcome once again to this integrative research seminar series. As the name implies, the idea is to get together and find out what the different professors in the Department of Information and Communication Technologies are doing. Okay? So today it is my pleasure to introduce Miguel Ángel. Miguel Ángel González Ballester. He has been here since 2013, that means only three years, and he looks very young too, but he got a PhD in the year 2000. Okay? So he's over 30 by now. He got his PhD from Oxford, and then he has been moving around quite a lot. Okay? In 2001, he obtained a faculty position at INRIA, in Sophia Antipolis in France. Then in 2004, he joined the University of Bern, in Switzerland, where he led several research groups working on medical image analysis, computer-assisted surgery, mainly for orthopedics, and surgical robotics and mechatronics. Okay? He will tell us all about this, and about the work he has been doing since he came to Barcelona in 2008, not to join Pompeu Fabra as an ICREA professor yet, but to work in a company, right? Okay. So, he will tell us about all of this and what he's been doing recently, and he's interested in many things. So, my pleasure, Miguel Ángel. Okay. Thanks very much. Is it okay with the mic? Okay. Thanks for the introduction. Thanks a lot for saying that I'm young; my birthday was two days ago, so I really appreciate that. My presentation will be a description of the things that we do here, right? So we'll talk about medical imaging, image processing, simulation, or digital virtual models of patients, and interventions, so robotics or devices for doing surgery, right? The good thing about this type of talk is that you can shamelessly talk about yourself, right? So I'm going to describe a little bit some of the things that I did before joining UPF, which Héctor has already introduced briefly.
I did my PhD, well, I started it 20 years ago, at the University of Oxford, supervised by Mike Brady and Andrew Zisserman, and there I started working on medical imaging. The type of things we were developing at the time were basically image processing tools, so my background, let's say, is more in image processing, computer vision types of development. So you have these active shape models: you can see that they are initialized with an ellipsoid, and you let them evolve. You impose some internal and external forces that try to enforce constraints of smoothness, curvature, etc., while trying to fit the data, and finally, through an iterative process, they capture the shape of the brain automatically. One of the things that I was very interested in at the time, and I'm still very interested in, is uncertainty, and modeling how accurate you are in this type of segmentation process. So, for example, some of the things we were doing were imposing confidence bounds. Instead of having just one surface, you have two, a bit like a sandwich, and ensure that the real surface is somewhere in between; then you analyze what's inside this sandwich with a Bayesian framework to estimate what proportion of each tissue is inside. So you try to get an estimation of how accurate you are, and try to do error propagation. I then went to Japan for a couple of years. I bought a red sports car, traveled, learned Japanese and so on. One of the things that I did there was work closer to the machine. In Oxford I was developing methods in software, and here I was more into signal processing, into the formation of the image. There we developed a method called SPEEDER, which, if you look at the web pages of Toshiba's medical imaging division, is still one of the core components of their MRI systems, and it's advertised as such. Through this we got some patents and so on, but basically it was my first contact with industry.
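Going back to the active shape models mentioned above, the interplay of internal and external forces can be caricatured in a few lines. This is a minimal, hypothetical sketch in plain Python (a one-dimensional closed contour of scalar values, nothing like the real 3D brain models): each point is pulled toward the data (external force) and toward the average of its neighbours (internal smoothness force), and iterating converges to a fit.

```python
def evolve_contour(points, targets, alpha=0.5, beta=0.3, iters=200):
    """Toy active contour: external pull toward data, internal smoothing."""
    pts = list(points)
    n = len(pts)
    for _ in range(iters):
        pts = [
            p + alpha * (t - p)                                   # external force
            + beta * ((pts[(i - 1) % n] + pts[(i + 1) % n]) / 2 - p)  # smoothness
            for i, (p, t) in enumerate(zip(pts, targets))
        ]
    return pts
```

With enough iterations the contour settles on the target data; the `alpha`/`beta` weights play the role of the force balance described above.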
As you said, I then got a faculty position at INRIA. INRIA, as you know, is a French research institute, so it's very theoretical, very mathematical, and I was working on image analysis, segmentation, all sorts of different things that I will describe through some examples. But just to introduce some of the terminology: segmentation, as we saw, is the identification of things in the image. Registration is data fusion: when you have two different images, you align them, you try to find corresponding points, for example if you have the same patient at two different time points and you want to see where the analogous regions of the image are. Then I started playing with applications related to surgery simulation and to biophysical modeling, so for example simulating the electrical propagation in the heart, things like this. Then, when I went to the University of Bern in Switzerland, I got closer to the operating room. I think this is the conceptual jump that I made by joining that group: applying all the technologies that I had been developing until then to the context of the operating room. That means surgical planning software, that means predictive simulation of how the surgery will go, and also guidance systems: robotics, but also things like tracking and navigation systems that allow the surgeon to reproduce the plan that he made virtually beforehand. Then in Barcelona we were doing software and trying to sell it on the market, and, well, I was there for five years with reasonable success for the company. Then, about two and a half years ago, roughly, I joined UPF thanks to an ICREA position. What I will show now, in the rest of the talk, is an overview of some of the projects we do, basically the things that we do in our group. I will start with a particular project that I will describe in more detail, because I think it showcases quite well the different things that we do in the group. This is a European project called HEAR-EU.
It's a project that I coordinate on cochlear implants. For those not familiar with the term, a cochlear implant is an electronic device that is implanted in people that cannot hear or that have very severe hearing loss. The device has an electrode that provides electrical stimulation, and this electrode is implanted in the patient: it goes all the way inside the cochlea and directly stimulates the nerves that go to the brain. So basically it's an artificial ear. It bypasses completely all the mechanical processes, with the little hammer and the other things that you study in biology in high school, right? So what's challenging about the implantation of these devices? There are several things, but one of them is that it's very difficult to have a clear prediction of what will happen once you put the device inside the patient. For one thing, the type of images that we have routinely are very low resolution, so you cannot really get the detailed anatomy of the cochlea of the patient, and a lot of the implantation process goes, let's say, through guesswork, right? There are some very high resolution modalities like micro-CT, but these cannot be used on live patients, only on cadavers. So one of the ideas of this project was: let's analyze this very high resolution data in many cases, in many patients, let's say; let's build a statistical model of how cochleas are, the variability they have, how the detailed structures are, et cetera; learn a model, and then see if we can apply this model to estimate the real shape, the high resolution shape, of the patient. This was the overall idea. So the first thing we did in our group was dealing with this data. One of the good things about high resolution data is that it's high resolution. One of the bad things is that it's very heavy, so you have a lot of gigabytes and terabytes of data, right?
In this case, up to 20 gigabytes per data set. Esmeralda, from our group, developed software for segmentation of this data based on random walks, on graph-based methodologies for image segmentation, incorporating shape priors, probability and confidence maps, et cetera, and combined them with other methods such as geodesic active contours to provide tools for this tedious process of segmenting these huge data sets. Once we have the data segmented, we have many examples of cochleas and we want to analyze the shape variability. So we want to build a model that contains the average shape of the cochlea and the main modes of deformation. This we can do through something called statistical shape models, which are based mostly on principal component analysis, and give this sort of result. So this would be the first mode of variation around the mean, modulated by a scalar weight. This would be the second mode of variation: you apply a scalar weight to the second mode of variation, then the third mode of variation, et cetera. What this provides is a very compact, parametric representation that allows you to generate virtual cochleas. It's trained on real data, but it's compact and it allows you to generate many different cochleas. This is one of the uses of these models: to generate virtual patients, so you can test your prototypes through virtual-reality simulations, et cetera. The other use is exploring this shape space to find the most likely shape that explains your low resolution image of the patient. So you try to get a high resolution model from a low resolution image, which is what we wanted. And it's a patient-specific model. So this is one part: once we have this, thanks to the statistical model and its instantiation through the image of the patient, we have a high resolution model of the patient and we can do things with it.
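The PCA machinery behind such statistical shape models can be sketched in plain Python. This is a toy, assumed setup: each "shape" is a flattened vector of landmark coordinates, and a power iteration on the sample covariance stands in for the full PCA that would be trained on real segmented cochleas.

```python
import math

def mean_vector(shapes):
    """Average shape across the training set."""
    n = len(shapes)
    return [sum(s[i] for s in shapes) / n for i in range(len(shapes[0]))]

def first_mode(shapes, iters=200):
    """Mean shape and leading PCA mode, via power iteration on the covariance."""
    mu = mean_vector(shapes)
    centered = [[s[i] - mu[i] for i in range(len(mu))] for s in shapes]
    v = [1.0] * len(mu)
    for _ in range(iters):
        # Apply the covariance implicitly: w = sum_x (x . v) x
        w = [0.0] * len(mu)
        for x in centered:
            proj = sum(a * b for a, b in zip(x, v))
            for i, a in enumerate(x):
                w[i] += proj * a
        norm = math.sqrt(sum(c * c for c in w))
        v = [c / norm for c in w]
    return mu, v

def synthesize(mu, mode, weight):
    """Virtual shape = mean + scalar weight * mode of variation."""
    return [m + weight * e for m, e in zip(mu, mode)]
```

Sweeping `weight` through `synthesize` is exactly the "modulate the mode by a scalar" idea above, and sampling weights gives the virtual cochleas.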
So we can plan the intervention. This is work that Nerea and other co-workers in the group developed on simulating the insertion of the electrode through a mechanical simulation, and it can be configured to simulate different configurations: different choices of electrode, different insertion depths, different models, et cetera. The framework, furthermore, was completed, mainly by Nerea, so that it automatically generates a well-defined finite element mesh on which we can run further simulations. This was recently published in the Annals of Biomedical Engineering. Through this, we end up with a patient-specific finite element mesh in which we can simulate things. We can simulate, for example, natural hearing, the vibrations and activations of the cochlea. We can simulate the implanted cochlea as well: once we have the electrode inside the cochlea, what happens there? So the electrical stimulation, the electrical pulses that the electrode applies to the cochlea. This is work mainly by Mario. And by coupling with a nerve model, we can also study the spiking of the nerves. So through different activations we can also predict what the neural response is going to be, and therefore the sound perception of the patient. This we did in collaboration with NASA Ames in California, in a study comparing healthy subjects versus patients that have nerve degeneration. In the particular case of cochlear implants, but in general in neural processes, let's say, we are also working together with the SPECS group of Paul Verschure, with a joint PhD student, Jordi, on moving this higher up. So now we have simulations of nerve stimulation, but this goes all the way to your brain. What happens next, right? So modeling the cortex, for example cortical plasticity, how your brain adapts to the stimuli of the cochlear implant. Okay, so we have all this framework, which is really cool.
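The nerve-spiking part can be illustrated with a textbook leaky integrate-and-fire neuron. This is only a sketch: the parameters and the input current below are illustrative assumptions, not the physiological values or the actual coupled nerve model used in the project.

```python
def lif_spikes(current, dt=0.1, tau=10.0, v_rest=0.0, v_th=1.0, v_reset=0.0):
    """Spike times of a leaky integrate-and-fire neuron driven by `current`."""
    v = v_rest
    spikes = []
    for step, i_ext in enumerate(current):
        # Leaky integration: decay toward rest plus the injected current
        v += (dt / tau) * (-(v - v_rest) + i_ext)
        if v >= v_th:              # threshold crossing: emit spike, reset
            spikes.append(step * dt)
            v = v_reset
    return spikes
```

Feeding it the electrode's stimulation waveform (here just a constant drive) yields a spike train, which is the kind of neural response one would then map to sound perception.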
From an image of the patient you can create a virtual model, you can simulate different scenarios, different approaches to the intervention, and try to predict what the outcome is going to be. Now, the thing is, the patient is still there waiting, right? So we need a system to help plan the intervention, to help guide the surgeon to reproduce exactly what he planned. This was work in collaboration with the University of Bern in Switzerland: a planning system to directly target the cochlea for the insertion of the electrode, coupled with a robotic system that does the drilling very accurately, avoiding things like the facial nerve and other risk structures that neighbor the cochlea. Okay, and to finish with this project: we have seen patient-specific planning and patient-specific intervention, but another target of the project was to help implant manufacturers design better implants. Remember, we have a model that encompasses all the shape variability of the cochlea, and we can run virtual simulations on many instances of this model. So we can explore the whole variability of shapes, but also of performance of the implant. What we did here was work together with the implant manufacturer, MED-EL. They could propose designs of electrodes, and these designs could be virtually tested across the whole population, to try to optimize them. There are many ways of doing this. On the more theoretical side, Nerea is working on both non-intrusive and intrusive approaches. Non-intrusive approaches are basically sampling methods: we can do simple Monte Carlo sampling of this space, or we can do more refined things, like evolving a level set on the PCA shape space, or stochastic collocation approaches, things like this.
Intrusive approaches are stochastic finite element simulations, which means that you have to play with the partial differential equations to propagate covariance matrices and things like this. But this is more challenging, let's say, and also computationally much heavier. So what you gain in theoretical power, you kind of lose in computational cost. So that was a bit of an overview of the HEAR-EU project. The reason I focused on this project first is that you see that we work on image analysis; you see that we work on models, statistical models or models of disease, let's say; we do simulations, things like finite element simulations or agent-based simulations, as we will also see; and we do surgical planning and interventions. These are the four elements of what we do in our group, and you will see them in the different projects that I will be showing next. Just for the credit: this was a collaboration across all these seven partners. What I will do now, like I said, is go through some other examples of things that we do in our group. I will put up the photo of, let's say, the main person in charge of each of these topics. Focusing on the brain, and led by Gerard, who is a UPF Fellow, a postdoctoral researcher in our group, we study things like atlas-based segmentation. The concept of atlases, although it's a very vague word, let's say, means that we have a lot of examples of already segmented images, identifying the structure that we want to segment. So we have a lot of examples, and we want to segment one particular image that is not part of this training data set, of course. Atlas-based methods have basically two main phases. One is atlas selection: you can work with all the examples you have, or you can select some of them, the most relevant ones. The other is label fusion: once you have selected the atlases, you align them to your target data.
Then you have to combine the different guesses, or the different probabilities, you have from the different atlases. Gerard has been working on the duality, let's say, between weighted voting approaches, which try to do a linear combination of the votes or guesses derived from each of the atlases to come up with the final decision on each of the pixels, to say whether this pixel is this structure or not, and machine learning approaches, which try to learn a function that predicts a label based on the intensities of the image. This he explored in a number of ways. One of them is what he calls matrix completion, which is a method that tries to combine the best of both worlds. So if we have all the atlases here, this would be a vector with all the intensities of all the pixels of an image, and this would be a vector with all the labels corresponding to that image. Weighted voting tries to learn a combination of all the intensities that predicts the intensities of your target, and then applies the same weights to the labels. Machine learning goes the other way: from the intensities it tries to find a function that predicts the labels, and then applies it to your target image. The matrix completion method that Gerard came up with is a bit of a combination of the two, and if you want more detail, of course, we can discuss it. He has been continuing this line of work, also in collaboration with the PhD student Oualid Benkarim, on having, let's say, intelligent atlas selection that is predictive, trying to choose which examples of your training data are most relevant for the segmentation that you want to do, and also trying to estimate some confidence measures through a Bayesian approach, a maximum a posteriori estimation. This is being applied to Alzheimer's disease in collaboration with the Fundació Pasqual Maragall here in Barcelona.
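Going back to the atlas-based segmentation just described, the weighted-voting side of the duality is simple to sketch. In this hypothetical snippet the per-atlas weights are assumed to come from some atlas-to-target similarity measure; the real methods (and the matrix completion combination) are of course more involved.

```python
def weighted_vote(atlas_labels, weights):
    """Fuse per-pixel labels from several registered atlases by weighted voting.

    atlas_labels: one label list per atlas, all the same length.
    weights: one similarity-derived weight per atlas.
    """
    fused = []
    for p in range(len(atlas_labels[0])):
        scores = {}
        for labels, w in zip(atlas_labels, weights):
            scores[labels[p]] = scores.get(labels[p], 0.0) + w
        fused.append(max(scores, key=scores.get))  # highest-scoring label wins
    return fused
```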
Another application that you will see in these examples is fetal brain analysis, and fetal applications in general. This is because we have a very good collaboration with the Maternitat here in Barcelona and also with Hospital Sant Joan de Déu, the pediatric hospital, on different fetal applications, in collaboration with Professor Eduard Gratacós. One of them is to study brain development in the fetus, how the brain forms through time. So there is work on getting good, detailed MRI images of these fetuses, combining this also with ultrasound images, because you cannot acquire MRI many times as it is very disruptive for the mother, and trying to learn something about how the brain develops. In particular, the work of Veronica has been, on the theoretical side, on developing multi-level spectral image registration approaches, and on methods that are robust to anatomical differences, because from one time point to another the structures vary quite a lot, and you also have different patients at different time points, et cetera. So the goal is something that is consistent across this whole variability. The other part of her thesis, which is supervised by Gemma and me, is on building models: once we have all this data registered, try to do some learning, try to have some low-dimensional representation of how these brains evolve, through manifold learning or other approaches to dimensionality reduction. So that was for brain applications. We will now also see, briefly, some applications to cardiac modeling and cardiac or cardiovascular interventions. One of them, of course, is again segmentation. This is one of our main focuses in the group: the work, or part of the work, of Sergio Vera, a PhD student who just finished this year, co-supervised by Debora Gil from the Computer Vision Center at the Universitat Autònoma and myself.
Part of what he has done was on segmentation of the heart, segmenting the heart in CT data sets, and also automatic identification of cardiac scar, which maybe is difficult to see, but scars are the little bright spots that you see in these images. As you see, the variability of the images is quite high, and it's quite complex, let's say, to come up with a rule that always works for every data set. Finally, the best method we came up with was a combination of support vector machines, based on an array of features that we extract from the images, combined with an evolution of geodesic active contours. This was recently published in Medical Image Analysis. Okay, other cardiac things. Gemma, who as you know is the other senior person in our group, has been working for a long time on registration approaches applied to cardiac images, in particular temporal diffeomorphic free-form deformation, as a formulation for smooth deformations from one time to another in time series of the heart. So, how the heart moves, basically. This has been applied, together with some of our students in the group, to data fusion as well. Antonio, for example, was working on fusing different modalities of the heart, let's say, to have complementary information to analyze the heart motion. Nicolas, also a postdoctoral researcher in our group, who is now at INRIA, worked on learning these motion patterns: again trying to come up with a low-dimensional representation, something that you can act upon in a compact way through a small set of scalars, instead of this very high-dimensional representation of all the pixels through all the time points, and trying to learn patterns of healthy versus diseased. And this is continued by Sergio in the same direction, trying to do manifold learning and to learn features from heterogeneous data.
So, data on the motion of the heart, but also things like clinical measures, etc. Also related to cardiovascular disease in our group: as you will see in some other examples that I will show in orthopedics, there is a part of the group that is very focused on biomechanical simulation, mostly led by Jérôme, and part of this, related to cardiovascular, has been on modeling the formation of atherosclerotic plaque, which is the nasty stuff that gets into your vessels and builds up and prevents the blood from flowing, and then you get a stroke and die, basically. Through agent-based simulations, he was able to do multi-scale simulations of the different metabolic activities, let's say, that contribute to plaque, and then combine or compare these with clinical and biological findings. Finally, also related to cardiovascular, there is the work that Chong has been developing. She is also a postdoc in our group, working on surgical guidance for vascular stenting, for abdominal aortic aneurysms. These are diseases of the aorta in which the walls of the aorta deform, and they can lead to very severe disease. Normally what the surgeon does is go through a vessel in your leg, all the way up, with a very thin wire, try to reach the area that is diseased, and deploy a stent, a device to recover the original shape. This is a very challenging process because most of the guidance is based on these images here, which are 2D projections. They are acquired through X-ray, and you have very little contrast, or most of the time no contrast, of the blood. So you're kind of guessing most of the time. The work that Chong has been developing is a graph-matching-based approach to register a 3D model of the patient, acquired preoperatively, to the intraoperative situation, and to adapt this model so that it follows the intraoperative situation and helps guide the surgeon.
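Going back to the agent-based plaque modeling mentioned above, the flavor of such a simulation can be caricatured in a few lines. This is a toy with entirely hypothetical rates: LDL particles infiltrate the vessel wall, macrophages ingest them, and the resulting foam-cell count stands in for plaque burden; the real multi-scale models couple far more biology than this.

```python
import random

def simulate_plaque(steps, ldl_influx, capacity=5, seed=0):
    """Toy agent-based plaque model: returns a foam-cell count after `steps`."""
    rng = random.Random(seed)
    free_ldl = 0   # LDL particles retained in the vessel wall
    ingested = 0   # particles taken up by macrophages
    for _ in range(steps):
        if rng.random() < ldl_influx:            # a particle infiltrates the wall
            free_ldl += 1
        if free_ldl > 0 and rng.random() < 0.8:  # macrophage uptake event
            free_ldl -= 1
            ingested += 1
    # Every `capacity` ingested particles turn one macrophage into a foam cell
    return ingested // capacity
```

Even this toy shows the qualitative behavior one would then compare against clinical and biological findings: no influx, no plaque; sustained influx, growing plaque.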
So we saw applications to the brain, we saw applications to the heart, and we will now see some applications to orthopedics as well. I start with this one because it's very much related to what we just saw in the work of Chong. There is a part of 2D/3D matching. The process of 2D/3D matching involves a registration and a projection: it's a projective geometry process in which you have to estimate the best position, orientation, et cetera, of the 3D object, so that it generates the 2D projection that you are seeing, which is the ground truth that you have. In this case, which is the thesis of Mirella, an industrial PhD student working at Galgo Medical, we in fact don't have a 3D model; we don't have a 3D image of the patient beforehand, so it's much more challenging. What we are doing in this case is building a statistical shape model, a bit like the idea we applied to the cochlea. We have many examples of images of the spine, and we try to learn what a normal spine looks like and what the main modes of variation are, and through these, infer or guess the 3D shape of the patient at the same time as computing the projection geometry. To make things even a bit harder, it is not only one vertebra that we want to reconstruct; it's many vertebrae, because it's the whole spine. So it's not just the shape variability you have in one vertebra, but also the correlations, how the shape of one vertebra can influence the neighboring ones, when you're estimating the shape of the spine. You also have to impose geometric constraints, because you don't want to have bone inside bone and things like this. And to make it efficient, you want to do it in a multi-resolution approach. So one of the things we're exploring with Mirella, amongst other methods, is the use of a wavelet-based representation, in which we first have a coarse shape of the spine, and by adding more wavelet coefficients we get a more detailed representation.
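That coarse-to-fine wavelet idea can be illustrated with a one-level Haar transform on a toy 1D shape signal (an assumed stand-in for the actual spine parameterization): dropping the detail coefficients gives the coarse shape, and adding them back recovers the full detail exactly.

```python
def haar_decompose(signal):
    """One Haar level: pairwise averages (coarse) and differences (detail)."""
    coarse = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return coarse, detail

def haar_reconstruct(coarse, detail):
    """Invert one Haar level; zero detail gives the coarse approximation."""
    out = []
    for c, d in zip(coarse, detail):
        out.extend([c + d, c - d])
    return out
```

Applying `haar_decompose` recursively to the coarse part gives the multi-resolution hierarchy, so the optimization can start on a few coefficients and progressively refine.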
Okay, so I was mentioning the more biomechanics-oriented part of the group, led by Jérôme Noailly, which has been very focused on orthopedic applications, and in particular a lot of focus has been on modeling the spine. For example, the thesis of Themis has been on modeling the motion of the spine, but also how this affects the metabolism, how this affects the intervertebral disc. Carlos also developed, in his thesis, a method for modeling the intervertebral disc: the different components, and the factors that have to do with its nutrition. So not just the motion of the disc and how this leads to degeneration, but also the nutrition, and how it arrives at the nucleus of the intervertebral disc, which is an avascular structure, so it goes through diffusion. This has some quite interesting effects that he studied. Also, a very recent work, which Jérôme and I will be presenting in Korea soon, is on combining finite element simulations with agent-based simulations. In this case the simulation again targets the modeling of the intervertebral disc, but some of the effects are microscopic, if you want to simulate metabolic activity, things like nutrients, etc., and some effects are macroscopic, like the forces that act on the disc. Doing all these models with the same representation is very challenging, because either you don't have enough resolution in your finite element model to do it, or you come up with an unworkable agent-based model. So the more theoretical part of this work was on combining these two approaches so that they feed each other: the information of the agent-based model is used to instantiate the finite elements, and vice versa. And finally, also on musculoskeletal biomechanical simulation, there is the work that Simone, another postdoc in our group, is doing on simulating motion and how this affects the wear of the hip.
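Going back to the disc-nutrition point above, the fact that nutrients reach the avascular nucleus only by diffusion can be sketched with a simple explicit finite-difference scheme. The 1D geometry and parameters here are toy assumptions: the end points play the role of the vascularized boundary held at the supply concentration, and the middle is the nucleus.

```python
def diffuse_nutrient(conc, r=0.25, steps=2000):
    """Explicit 1D diffusion with fixed-concentration boundaries.

    r = D*dt/dx^2 is the diffusion number; it must stay <= 0.5 for the
    explicit scheme to be stable.
    """
    c = list(conc)
    for _ in range(steps):
        c = [c[0]] + [
            c[i] + r * (c[i - 1] - 2 * c[i] + c[i + 1])
            for i in range(1, len(c) - 1)
        ] + [c[-1]]
    return c
```

Starting from an empty interior, the nutrient profile slowly relaxes toward the boundary supply level, which is exactly why transport to the nucleus is the bottleneck the models study.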
To make things a bit more interesting, he's applying this to modeling Tai Chi. And yeah, this is Simone. He's been studying the effect of Tai Chi as a rehabilitation approach for different diseases, and modeling this in a biomechanical way. Okay, then, also related to orthopedic applications: as we saw in the case of the cochlear implants, one of the things that we can do with our methodologies is also focus on implant design. In this case, we were working together with an implant manufacturer on a certain type of implant that is put on the surface of bones, to join pieces of bone that have been broken. So if you go skiing and break your leg, you may end up with one of these types of devices that are screwed into your bones so that the pieces are fixed together. Of course, the way you design this implant has an effect, not only on the geometric fit you have on the bone surface, but also on the bone stability. If you have a certain bone shape and a certain implant that you want to test, you can do it virtually: you can do a finite element analysis. That would be patient-specific. In this particular project, what we wanted to do was explore the shape variability, in a similar way as we did with the cochlear implants, by having many patients, many people, with CT images, so images that give us the shape of their bones, but also their bone density, based on the intensities of the image. We build a model from all these data sets, compute the mean, the average shape of a femur, for example, in this case, and the average bone density inside, and then compute the modes of variation, as we saw with the cochlea: the most predominant way in which bones differ between all of us, the second most predominant, et cetera.
By acting on these modes of variation, weighting them through a linear combination, basically, you can generate virtually many bone shapes and many bone densities from your population, and you can virtually test your implant design and see whether or not it fits your requirements. This we applied, together with the industrial partner, to the design of some of these orthopedic implants, in particular to the plates you see here, by focusing on different modes of variation. Each of these axes would be the weight that you apply to each mode of variation, so each point in this box would generate a bone, right? Then we do a variational approach, a level set evolution, in order to get a parcellation of this space. Basically, in the end, what we want to know is how much of this space, which is the shape variability of the population, corresponds to bone shapes that are okay for this implant design, and through a meta-optimization, optimize the design parameters to maximize this area. This is something we tested, and it actually led to some interesting modifications of the implant, which went to market and everything. So this is the concept: to have an implant that fits most of the population. Okay, and the final application that I'm going to showcase, after brain, cardiac, and orthopedic applications, is related to endoscopic surgery. This is work that Sarah is doing in her PhD, in collaboration with people from Hospital Clínic, from a company, and from Brigham and Women's Hospital in Boston, on surgical planning and navigation. This is quite a challenging scenario, because, as we saw before with bones, the variation is important, but in orthopedic surgery the structures tend to be rigid: once you are in the operating room and you're trying to do something with them, they can move around and rotate, but they don't deform.
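Coming back to the question of how much of the shape space a given design fits: instead of the level-set parcellation described above, the simplest non-intrusive alternative is plain Monte Carlo sampling of the mode weights, as mentioned earlier for the cochlea. In this sketch the acceptance predicate is a hypothetical stand-in for the real geometric and finite-element fit test.

```python
import random

def fit_fraction(fits, n_modes=3, n_samples=20000, seed=0):
    """Monte Carlo estimate of the fraction of the virtual population
    (standard-normal weights on the shape model's modes) that a given
    implant design fits; `fits` is the acceptance predicate."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(n_samples)
        if fits([rng.gauss(0.0, 1.0) for _ in range(n_modes)])
    )
    return hits / n_samples

# Hypothetical acceptance test: the design fits any shape whose mode
# weights all stay within one standard deviation of the mean.
frac = fit_fraction(lambda w: all(abs(x) < 1.0 for x in w))
```

A meta-optimization over design parameters would then simply maximize this fraction.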
In the case of abdominal surgery, you can guess that all the organs in your abdomen are deforming all the time; they contract, they deform, etc., and we need models of how this happens. Also, the setup in the operating room is based on endoscopy. Endoscopy is a camera through a tube, and the view you have through the endoscope is a bit like the view through a keyhole: you have a very limited field of view, and you also have many artifacts related to reflections of the light, to blood, and things like that. One of the things we are very lucky with is that our collaborators at Hospital Clínic have a very nice experimental setup in which we can use phantoms, and we can also do some experiments with, well, of course not a real patient, this is a pig. There we can test different approaches to surgical planning, navigation, etc. In particular, Sarah is building the system for surgical planning and navigation. She's also exploring the use of 3D-printed models for hands-on surgical planning before the intervention; image fusion of preoperative data sets with the intraoperative view you have through the endoscope; endoscopic image analysis, so tracking the movement of the endoscope through things like SLAM-based computer vision approaches; and data fusion, basically, to combine all the different sources of information that we have during the intervention. Related to this, Marta, from the PhySense group, in collaboration with us, has developed an imaging system that can be put at the tip of the endoscope, and in particular we are exploring its use for colonoscopy. I don't have to explain to you what a colonoscopy is, but you use an endoscope, you get images of the inside of the colon, and you can find something that is cancer, right? Or that looks like it could be cancer.
So most of the time, if you find something, you can biopsy it and then do some laboratory tests, right? The idea of Marta was to build a system based on microwave imaging, on antennas that you attach to the tip of the endoscope, that can give you local images of what's around. And on top of that, these images are functional, because they are related to the dielectric properties of the tissue. So they can be used to characterize the likelihood of this tissue being cancer or not. So this emits some waves, then gets the signal back, reads it, and it can be moved around, right? We just filed a patent for this, and it's kind of a cool idea, I think. And finally, a last application that I will show, also related to endoscopy, is a project that is mainly led by Mario in collaboration, again, with our partners at the Maternitat and at Sant Joan de Déu on fetal surgery. It's a collaboration with the group of Eduard Gratacós, a new three-year project that we have been awarded, funded by a private foundation. The idea of fetoscopy is to do endoscopic surgery in utero, so in the mother before the baby is born. Obviously you don't do this unless you have to, because otherwise the baby would die, right? In the situation we see here, we have a case of twins, and this is not a coincidence, because most of these operations have to do with a disease related to twins, but not only: we also target other interventions related to abnormalities in different parts of the fetus. The setup for this is quite challenging. You have a room in which there are a lot of people, you have the endoscopes that are puncturing the mother, and this is the view you have through the camera, right? One of the goals would be to have a system to plan these interventions and to guide the surgeon in doing them, right? So what we see here is an ablation.
So what he's doing now in this intervention is burning some vessels, and this is because it's a case of twin-to-twin transfusion, so monochorionic twins, in which both of them share the same placenta and there are communications between the vessels. These communicating vessels lead to severe deregulation of the metabolism of both babies. In the end this is bad for both: normally one of them gets undernourished and the other overnourished, but this is also bad for the one that is getting too much, right? So the idea of this project is to build a system for planning and guidance in this type of intervention. Okay, so I showed a lot of things. So one of the comments could be, okay, you shoot everywhere, right? To wrap up a bit, and to see the structure across all these things that we showed: we have a group that is relatively big, and each subpart of the group has a specialty. For example, one part is based on more, let's say, traditional image processing, so segmentation methods, registration methods, data fusion, etc. Another part is focusing more on modeling, so statistical shape models or models of population; learning approaches would be within here, manifold learning, etc. Another leg would be on simulation, both finite element simulation and agent-based models, etc., and how to integrate different scales and different aspects of the patient. And finally, the last leg would be on the intervention itself, so on surgical planning and intraoperative navigation. The way we do this is through an integrative point of view. Of course we are very interested in each of these particular developments, and we publish methodological papers on them. But the idea is to always keep in mind the application. So another focus of the group is on cross-fertilization of all this.
So especially in the last year there has been a lot of progress, in the projects that we are doing in the group, in bringing people together. So for example, people that are normally doing finite element simulations, models of what happens within the human body, together with people that are doing image analysis, personalized models, and, let's say, guidance in the intervention. So as to end up with something that brings all these worlds together into a useful system that has a clinical impact, basically. So that was the overview of the things we do in our group. I didn't want to finish without hinting at some things that we could do together with other groups, right? Because this is a bit the objective of these seminars, right? There is a big cluster of groups that have to do with biomedicine, and obviously we have a lot of collaborations; we even invented a name for it, the Barcelona MedTech group, a sort of technical label for it. And it would be too long to list all the collaborations we have with all these people, okay? So one of the directions is of course collaborating with these guys and bringing things to practical use. Other intergroup collaborations that I mentioned were, for example, combined PhD students with SPECS, in the case of Jordi that I mentioned. I realize I didn't mention that the PhD of Guillermo Ruiz, who is in the company Chrysalix, is a joint PhD with Federico Sukno from CMTech; it's an industrial PhD. And just to mention a few, we are preparing joint proposals, or have submitted joint proposals, with a couple of groups, as mentioned here: the group of Leo, the group of Hector. I also listed a non-exhaustive, sort of, you know, list of possibilities for collaboration that came to my head this morning. So for example, the groups of the CBC: it is obvious that the things we do for image analysis, for the brain models, etc., could be applied to some of the research they do.
Of course we have had many meetings, explorative meetings; someday we will find a way of doing it. I mentioned that we use quite a lot of geodesic active contours or Eulerian approaches for segmentation, etc. So obviously collaborations with Coloma and Gloria would be very interesting to explore, or with Marcelo, for example, on different aspects. One would be on the endoscopy imaging, where I see a lot of potential for applying some of your methods, but also things that you do, for example, on HDR that could be related to things that I did before on imaging physics with MRI. Applications for surgical planning, obviously, with the group of Josep Blat, whether web-based or, more in general, on interfaces and interaction with the users. I mentioned also the group of Albert on information theory, because most of the similarity measures, most of the metrics that we use to fuse images, have to do with information theory. So we use a lot of MDL, we use a lot of mutual information, we use a lot of these things. And it could be interesting to chat and explore things. They work on a more theoretical ground, which I think would be very interesting to explore, and also to see if there are some potential applications that they could be interested in exploring. And finally, a lot of groups are doing machine learning today; bloody convolutional networks are everywhere. And in general there are applications that we can explore with music, with web research, artificial intelligence, et cetera. And more on the networking side, many of the projects we have have to do with ubiquitous monitoring, with health monitoring of patients, and how this can be, let's say, communicated through central repositories and things like these. So there are many possibilities for collaboration there. And of course, whatever other crazy ideas you have while drinking beer tonight or something. So that would be my presentation.
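The mutual information mentioned as a similarity measure for image fusion can be estimated from the joint intensity histogram of two images. A minimal sketch, with hypothetical random test images and an arbitrary bin count:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two equally sized images,
    estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()          # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)  # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)  # marginal of image B
    mask = pxy > 0                       # ignore empty histogram cells
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))

rng = np.random.default_rng(1)
a = rng.normal(size=(64, 64))
noise = rng.normal(size=(64, 64))
# An image shares far more information with itself than with an
# unrelated noise image, which is what registration exploits.
print(mutual_information(a, a) > mutual_information(a, noise))  # True
```

In registration, one image is repeatedly transformed and this measure is maximized; it peaks when corresponding structures overlap, even across modalities, which is why it suits multimodal fusion.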
I would like to thank all the funding sources, which I realize I didn't list, and all of you for your attention. Thanks very much. Excellent presentation. I have many things to ask, but I will only ask two. One would be: do you work on noise models for some of your problems? Is that a concern for you? Do you model anything before you do segmentation or some other problem? And the other one would be, for the applications in which you have to interact through a display: have you seen an influence of the viewing conditions on the results of the doctor using your methods? Those would be my questions. Thank you. Okay, thanks very much. I'll start with the second one. In fact, yes, I think it's important. I remember that one of the European projects I participated in was together with Barco, and they were developing displays specifically for medical applications, specifically tuned for the conditions of the radiologist or the interventionist, et cetera. So I think there is a lot of influence in how you display the information. There is a lot of work on this, with big companies developing displays specifically for it. But then, on the contrary, you see that many doctors are exploring their images on an iPad. So something somewhere is being lost. But yeah, the influence is there. About denoising: it is very important for some applications. Right now we don't have, let's say, a specific line on denoising, but it's in most of our projects. I would say that, for example, projects that are closer to image acquisition have to deal with it more, let's say, explicitly. But for most applications nowadays we use standard methods, anisotropic diffusion and similar things, for filtering the images. But yeah, it would be very interesting to explore specific noise models for specific modalities and how this could affect the results. Thank you, Miguel Angel, for the talk.
I wonder, in the simulation case, when you simulate, for example, that a certain implant will be put in a patient and the patient will evolve in such a way, how do you actually validate that afterwards, when the actual operation is done? So this, of course, is application dependent. I will focus on, I guess you refer mostly to, the cochlear application that I mentioned. For the cochlear case there is, let's say, experimental validation, where you can validate each of the steps of your simulation through controlled experiments. But to validate the final outcome is quite challenging. Some of the surrogate measures we get are related to the perception that the patient has, or to the evoked potentials, so the signal you get through the implant for a certain sound, and we try to correlate these with our simulations. And I remember Mario devised a setup in a tank for simulating and validating the electrical propagation through the tissues. But yeah, this particular application is quite challenging to validate; I fully agree. For some other applications, like the ones we showed for the spine or for bones, et cetera, there is quite a lot of literature on experimental validation measuring some of the biomechanical properties of these tissues. So you could validate whether the predicted outcomes somehow correlate with this literature. That's a bit... I always have the same question, so I'll repeat it again. Image processing is something that comes up in many of the research groups, so I never have it completely clear what the relation among them is. Can you sort of tell us how your work relates to what the other groups do in this kind of image processing? Mm-hmm. So this, in fact, I remember, was one of the comments of the advisory board of our department: that we have a lot of groups in certain areas that should talk to each other.
Yeah, so, of course, some of the methodologies we develop are very related to, and some are very much inspired by, the methodologies of other groups in the department. We organized, for example, a set of talks together with the group of Xavier Binefa to see if the applications he works on, for example tracking people for security applications and things like this, could be applied to the environment of an operating room: to track the movements of the surgeon, to track the motion of the patient, et cetera, so it's less invasive, since it can be done through cameras. And a lot of interesting ideas came up there. There are a lot of things that can be done also on endoscopic image analysis that are very much based on methodologies that are relevant to this group as well and that we are not directly developing. About integration, like you're saying: I know perfectly well that this presentation went very fast through many things. And it's always a decision to make, when I give a presentation, whether to take the application-oriented or the methodology-oriented angle. And probably the final message that people get is quite different, right? You can end up with something very methodological or with something very applied. Like I said, every time we define a new PhD project, we always think of both. So we think, of course, of a main methodological contribution and of at least two applications where it could be applied, where we have clinical collaborators, et cetera. Then, yeah, for the image analysis groups, we can discuss more in detail later. Thank you.