Okay, so now we'll start the presentations of the different research projects, and the first one is in fact the biggest, because it's the one with the most collaboration among faculty members. Basically, most of the biomedical faculty got together, proposed this unified initiative, and are now working on it. So that's a great way to start these presentations, and Bart will present it. Thank you very much.

Better now? Yeah. Okay. Good. Thank you very much for the introduction, Xavier. What I'll try to do is present the María de Maeztu project of what we now call BCN MedTech. As Xavier already mentioned, quite a lot of groups in the department working on biomedical topics, especially medical image analysis, medical image modeling and physiological modeling, have developed along a somewhat turbulent historical growth path, I would say. What we decided is that, while the different groups each work on their own projects, it's good to have an umbrella for doing things together. For that we created BCN MedTech, a unit within the department bringing together the groups that work in this area. From there we said: let's approach this María de Maeztu program so that it's complementary to what we are doing here, and so that we can leverage all the initiatives we are taking.

Just to quickly mention them, the groups involved are: the group of Óscar and myself, mainly working on cardiovascular applications; the group of Miguel Ángel and Gemma, who on the one hand do fundamental research on machine learning, registration and so on, and otherwise image analysis and simulations, especially for therapy guidance at the moment; and the group of Toni Ivorra, who builds bio-electronic devices, whether again for therapy or, for example, for rehabilitation.
Ralph is our signal processing specialist, doing fundamental work on the analysis of large sets of signals. And last but not least, Jérôme very recently joined the department, working mainly on biomechanics and mechanobiology. Although this is a very wide and diverse field with quite different approaches, the umbrella over all of it is of course the biomedical applications, so we try to leverage that and reuse as much as possible of the initiatives and tools that we have.

Coming back to the goals of the María de Maeztu program, and translating them into what matters for us: when you look at our joint publication list, it's clear that we have excellence and impact within the field. But for some reason, partly the composition of the department and the way we work, our visibility is not what it could be. So we want to use this initiative to strengthen the impact of the work we are doing: partly by leveraging the knowledge we already have and translating it to other fields, but especially by promoting it. What you see nowadays, in biology and even at clinical conferences, is that everybody says what you need now is machine learning. Machine learning is the buzzword; in medical journals you will now see something about it in every issue. Obviously it's not that we have to follow the trend, but it's important to see how we can use it and integrate it. This is something that has been brewing within the different groups, and within the department there are a lot of collaborations we could build around it.
Visibility comes from research output, of course, but it also comes from showing what you're doing and especially from teaching people: first of all your knowledge, but also your tools, and, closely related to that, sharing your data. That's why we decided to set up a bioimage analysis and biosignal analysis project that we build up internally within the department, but especially internationally and within the Barcelona context. One thing that is partially lacking in the engineering part of our university, and of many universities, is the analysis of bioimages. We're very good at the analysis of medical images, which are images from patients. But in the biology community there are many new imaging modalities being created, based on microscopy and so on, which have some things in common with medical imaging but are on the other hand also very, very different. When you start talking to biologists, the analysis of their data is their biggest challenge: there are no tools to do it, the data sets are enormous (I'll come back to that later), and there are very few experts from the engineering and analysis side who can help them. This is where we can transfer our knowledge and link the medical imaging community to the biology community. Especially in the Barcelona context, there are extremely good groups on the biology side: you see the PRBB, but there are many others, linked not only to UPF but to other universities. And on the other hand we have our department here, with a lot of medical imaging experience.
But there are also many others, for example in the multimedia, machine learning and signal analysis communities, and what we need to do is bridge this gap. The María de Maeztu project is a way to create visibility and to create units where we can translate this knowledge. As Xavier already mentioned, all of this is data driven, because the research we do is all based on data: whether in medical imaging research or especially in biology, a lot of discovery is data gathering and data analysis. Almost every day there is a new imaging technique coming on the market, or an improvement to an existing technique, giving us different data at higher resolution, and larger data sets. For example, in one data set we are soon going to acquire within the group, a single image is 15 terabytes. These are totally different ways of working with data. Everything is data driven, and we need to gather the data, work on acquisition and data manipulation, make the data available to other people (because these are sometimes very unique data sets), and then of course try to analyze it.

What I'll try to do is give you a flavor of some of the things we're working on and that we will try to make available to the community within this project. Here is one example: a heart at very high resolution. We have some ideas about the detailed structure of the heart, especially from the older literature, but a lot of that is not enough for contemporary research, for example for computational modeling at very high detail. So we have been looking for ways to get access to this detailed information.
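At these sizes (a single image of 15 terabytes), loading a volume into memory is impossible, so any analysis has to stream the data in blocks. This is only an illustrative sketch of the idea, not our actual pipeline; it assumes the volume is stored as a raw array that `numpy.memmap` can address, and the per-slab statistic stands in for whatever the real analysis step would be:

```python
import numpy as np

def process_blockwise(path, shape, dtype=np.uint16, block=256):
    """Stream a huge 3-D volume slab by slab so memory use stays bounded."""
    vol = np.memmap(path, dtype=dtype, mode="r", shape=shape)
    stats = []
    for z in range(0, shape[0], block):
        slab = vol[z:z + block]                 # only this slab is read from disk
        stats.append((z, float(slab.mean())))   # placeholder per-slab statistic
    return stats
```

In practice the block size would be tuned to the available memory, and formats like HDF5 or chunked storage would play the role of the raw file.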
These are some of the data sets we recently acquired, in this case using synchrotron X-ray imaging; you have to go to these large facilities, large particle accelerators, to be able to make such images. These are the images that make up this big amount of data, and from this data you need to extract the relevant information. In some of our projects we try to transfer technology from the medical imaging community to this more fundamental biological setting: tracking the fibers within the heart, for example, or extracting the vasculature from these data sets. The combination of the two problems, that the data sets are challengingly large and that they are very difficult to structure and analyze, means we need new technologies and new ways of working. The extraction of vessels, for instance, is based on a machine learning approach where you locally learn from the image data that is available and in this way try to distinguish different types of structures. The more classical approaches no longer work here, both because of the challenging content of the data and because of the size of the data sets.

Here's another example of what we very recently obtained. Again you see very detailed structures within the heart, but you can go from almost a whole heart down to almost cellular level, where you can extract cell information. This gives unique data sets both for developing research and especially for what we would call data-driven physiology and anatomy research. Because in the medical community nowadays, what you can do is analyze the data you have, do statistics, and prove that one thing is better than another.
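The locally-learned vessel extraction can be illustrated as a plain supervised voxel classifier: compute a few local intensity features per voxel, train on an annotated region, then predict everywhere else. This is a minimal sketch of the general technique, not the group's actual method; the features (raw intensity plus two Gaussian smoothing scales) and the random-forest choice are assumptions for illustration:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import RandomForestClassifier

def voxel_features(vol):
    """Per-voxel features: raw intensity plus two Gaussian-smoothed scales."""
    feats = [vol, gaussian_filter(vol, 1.0), gaussian_filter(vol, 2.0)]
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

def train_vessel_classifier(vol, labels):
    """labels: array of vol's shape, 1 = vessel voxel, 0 = background."""
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(voxel_features(vol), labels.ravel())
    return clf

def predict_vessels(clf, vol):
    """Classify every voxel of a (possibly different) volume."""
    return clf.predict(voxel_features(vol)).reshape(vol.shape)
```

A real vesselness pipeline would add orientation-sensitive features and run block-wise over the volume, but the structure of "learn locally, classify everywhere" is the same.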
But in a lot of cases you only get good therapies when you really understand what is happening; when you understand, for example, how the heart works. There has been cardiology research for many, many, well, almost millennia, and we still don't fully understand how it works, and we need that knowledge in order to develop new approaches. Similarly, there are structures we have in a way been ignoring because we couldn't measure or analyze them. Here's an example of the complexity of the inner side of the heart. Hearts are really complex inside, but in the clinic we don't see this because we don't have enough resolution, and yet several diseases might actually be linked to it. So we need to understand it. What we do there, in a collaboration with the University of Minnesota in the United States, is work with hearts from people who passed away and donated their hearts to science. We get high-resolution data sets and build software to match the data sets from different patients, so that you can really compare individuals and see what is different, why it is different, and how that is linked to disease. The software for this is based on some generic software that we are now using in different applications and that we will very soon make available to the community as well.

Another application we are working on is cochlear research; the cochlea is your inner ear. There are therapies for deaf people involving implants with electronic stimulation, but in clinical practice, this is the kind of data set you get.
This is the resolution you can obtain in a patient, and of course it's very difficult to learn anything or plan anything from this kind of data. So again you go back to: we need more knowledge, we need more information. You go back to very high resolution data sets, here again from people who died, from whom we get the information ex vivo in very, very high detail. Then, through image analysis (we have segmentation tools), we can build statistical atlases of what you see and what you can expect in the population. You can then match these to the patient data and obtain much more information. Some of these tools and approaches are generic and reusable across applications; on the other hand, the data sets are very unique, because we were forced to create them: they simply weren't there. And again, we hope to be able to make these data sets available to the community very soon.

Similarly here: you get clinical information, you get scanner information, and people say "this is this, that is that", but there is very little validation of some of these claims, so again data may be needed for validation. This is actually an explanted heart from a patient who underwent a heart transplant because their heart was very badly damaged. It carries a lot of information, and we have preoperative data from the regular clinical images of these patients, but we really need more detail. So, in collaboration with some people here in Spain, we also try to get really detailed information, which we then analyze and match to the clinical data. And again, these data sets are not available.
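In its simplest form, such a statistical atlas is a mean shape plus principal modes of variation over co-registered specimens. Assuming each specimen has already been registered and reduced to a fixed-length shape vector (a big assumption; the registration is the hard part), the atlas-building step can be sketched as a PCA:

```python
import numpy as np

def build_atlas(shapes, n_modes=2):
    """shapes: (n_specimens, n_points) array of co-registered shape vectors."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centered data gives the principal modes of variation
    _, svals, vt = np.linalg.svd(centered, full_matrices=False)
    modes = vt[:n_modes]                     # each row is one mode
    var = (svals ** 2) / (len(shapes) - 1)   # variance explained per mode
    return mean, modes, var[:n_modes]

def project(shape, mean, modes):
    """Express a new shape as coefficients in the atlas's mode space."""
    return modes @ (shape - mean)
```

Matching a new case to the atlas then amounts to projecting it onto the modes and checking how far its coefficients fall from the population.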
Some people may have tried this in the past, but such data are definitely not available to the research community, partly because people didn't see the need or didn't find ways to do it. For us this is an opportunity: to create these data sets, but especially to curate them and make them available to the community. You can already imagine that these are huge data sets. It's not big data in the sense of many, many data sets; it's big data in the sense that one data set can be enormous, and so unique that it really matters, but we need the facilities, the hardware and so on to handle it.

Other projects, partially related to this, are based on clinical information. Here you see clinical data, actually from a European project, where we started using machine learning techniques to extract information. But as people working in this field know, working with clinical data is very challenging: you have the problems with patient confidentiality, and on top of that, many medical doctors, knowing their data sets are unique, are not eager to share them, even after publication, with the attitude of "this is my data". One thing we still need to do is work on the perception of data sharing in this community. On the other hand, medical and clinical data is also, I would almost say, crap data. People are sometimes afraid to show it because you will never have a good data set: having good data in clinical practice is almost impossible, because there is always something missing, or the patient is moving, or something else goes wrong.
So the data is not ideal, and within this project we also have to work on that. We don't have an ideal data set to give the community as an example, but we should share whatever there is, acknowledge that this is the reality everybody works with, and then people will start working within the same framework. You can see that we developed some tools to do machine learning on this data. But when you look at the machine learning papers currently coming out in the medical or biological communities, in a lot of cases it's for decision making: a deep-learning black box that says this patient has this disease or that disease. Although that is, in a way, data-driven research to make decisions, what you get in the end is a black box making a decision. You don't know how the decision is being made, and it doesn't help you understand, or tell you which data we need next or which new approaches to try. That is why we very much like the idea of using machine learning to start learning instead of just making decisions. Here is an example where we do unsupervised machine learning to try to understand a group of patients, and what you see, again, is that reality is not clear-cut. Patients have been catalogued in a binary way by the clinician as having this disease or not having it, but a lot of people sit in the gray zone. When you start looking at the data, you see that some "non-diseased" patients actually have a lot of disease properties, while, the other way around, some "diseased" ones look almost normal, yet something is going on.
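One way to expose that gray zone is soft clustering: fit an unsupervised model to the patient features and inspect membership probabilities instead of hard labels. Here is a sketch with a two-component Gaussian mixture; the features, the two-cluster assumption, and the 0.7 confidence threshold are illustrative choices, not the actual study design:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def find_gray_zone(features, threshold=0.7, seed=0):
    """Fit a 2-component Gaussian mixture and flag patients whose
    strongest cluster membership is below `threshold` as gray-zone."""
    gmm = GaussianMixture(n_components=2, random_state=seed).fit(features)
    probs = gmm.predict_proba(features)      # soft memberships, rows sum to 1
    labels = probs.argmax(axis=1)            # the clinician-style hard label
    gray = probs.max(axis=1) < threshold     # nobody is confident about these
    return labels, gray, probs
```

The gray-zone patients are exactly the ones worth a second look: for them the data does not support the binary catalogue.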
This understanding will become much more important, and I think some of the new techniques developed in artificial intelligence and machine learning can be translated into really physiological learning: trying to understand what is in the data. This is something we really want to work on.

Some other examples: here is spine data we are working on. We have collected a lot of data from patients, from which we extract the geometry of the spine and build models for biomechanical simulations. Again, these are unique data sets that should be made available to the community, and we are working on that. Then, moving from images to signals, some of Ralph's work involves intracranial EEGs. You can imagine that acquiring intracranial data in humans, which of course has to be done intraoperatively, yields very unique data. And again, it is very, very difficult to find groups and clinicians who are able to share this data and pool it for the community, for analysis, for trying out new algorithms. Ralph has managed, partly with an extension within this María de Maeztu project, to acquire, or arrange to acquire, some of these new data sets and then really make them available to the community. Some are already available. And one thing Ralph also told me: if you take your paper and link the data to it, even if it is a lot of work to convince the clinicians and you almost have to threaten to stop working with them, once you do it, you will see that the impact of your paper is much, much higher.
He says that one of his most highly cited publications is one where, okay, the technology and the science behind it are of course good and interesting, but the fact of adding the data set and making it available made the impact so much larger. I think this is important, and here in the medical community especially we have to fight for this data, because we cannot acquire it ourselves. We always depend on medical doctors and biologists, and we have to convince them that this is the way to do it. Alongside the data sets, we need to have the tools available to extract information from them. The way to see it is: at some point there was a need for this data because we needed the information; once you have the data sets, the challenge of getting out the information you want is again quite large. So that is something we also work on within this project.

As I said, our experience is mainly in the medical community, but people are now starting to work on biological applications with microscopy, and some of the challenges there are new. For example, here you have growing tissue where you want to understand how growth takes place: which cells are growing, are they getting bigger, are there more of them, which ones are dividing? So you need, for example, long-term tracking across huge amounts of data, and that is a big challenge. Or take this example: a high-resolution electron microscopy data set of a mouse brain, which is publicly available. There is so much information in there, so many millions of brain cells, that just extracting how these cells are connected is a huge challenge.
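The tracking step can be illustrated with the simplest possible linker: match each cell centroid in one frame to its nearest neighbor in the next frame, within a maximum displacement. Real tracking pipelines must also handle divisions, appearances, and disappearances; this sketch, with made-up coordinates, only shows the core frame-to-frame association, using scipy's KD-tree:

```python
import numpy as np
from scipy.spatial import cKDTree

def link_frames(cents_t, cents_t1, max_disp=5.0):
    """Greedy nearest-neighbor linking of cell centroids between two frames.
    Returns a list of (index_in_t, index_in_t1) pairs."""
    tree = cKDTree(cents_t1)
    dists, idx = tree.query(cents_t, distance_upper_bound=max_disp)
    links, taken = [], set()
    for i in np.argsort(dists):              # closest matches claim first
        if np.isinf(dists[i]) or int(idx[i]) in taken:
            continue                         # no neighbor in range, or taken
        links.append((int(i), int(idx[i])))
        taken.add(int(idx[i]))
    return links
```

Chaining these links over thousands of frames is what turns per-frame segmentations into cell trajectories.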
In the past, microscopy was analyzed by hand, and a lot still is, but with these huge data sets containing millions of neurons, there is no way you can do that by hand. That also brings challenges: how do you validate the algorithms that you have? Both for machine learning and for any other analysis technique you need ground truth in some form. So how do you generate ground truth when it is almost impossible to create it manually? A lot of our research therefore also goes toward reliably labeling data sets without doing it all by hand. There are two sides to this. First, one individual cannot do it alone anymore, so we create initiatives: we organize taggathons, and we will do that again here quite soon, where you gather 30 or 50 people for two or three days just annotating these images to produce ground truth, or you partially crowdsource it; there are other initiatives for that too. Second, you also need algorithms not just for the analysis, but for the training of your analysis, and that is another challenging part we work on.

One other thing we realized is that when you hand an application to a biologist or a clinician, just giving them an executable mostly doesn't work. You need to create an environment. In both the medical and biological image analysis communities, a lot of tools may be available, whether open source or not.
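The annotations coming out of such a taggathon then have to be merged into a single ground-truth label map. The simplest aggregation is a per-pixel majority vote across annotators (more refined schemes, such as STAPLE, also estimate each annotator's reliability); a minimal sketch:

```python
import numpy as np

def majority_vote(annotations):
    """annotations: (n_annotators, H, W) integer label maps.
    Returns the per-pixel most frequent label and an agreement map."""
    n = annotations.shape[0]
    n_labels = int(annotations.max()) + 1
    # count votes per label at every pixel
    votes = np.stack([(annotations == k).sum(axis=0) for k in range(n_labels)])
    consensus = votes.argmax(axis=0)
    agreement = votes.max(axis=0) / n        # fraction agreeing with consensus
    return consensus, agreement
```

The agreement map is useful in itself: low-agreement pixels mark regions where even humans disagree, which should be weighted accordingly when training or validating algorithms.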
But the moment you try to compile them, you see that, ah, it only works on this version of Ubuntu, with this library, with this particular setup. It's just a pain in the ass; it just doesn't work. So we need ways of dealing with this, and when you look at recent developments in informatics, what you see is that we have to start working much more in a container-based way with our applications. We actually started developing this out of need for our clinical applications, but we now see that it translates to microscopy as well: we are creating a platform that is much more like a mediator. A mediator between applications and large data sets, which might be hosted here, but sometimes a data set is hosted somewhere else. And if a data set is 10 terabytes, it is not something you just download to your computer; when you want to process it, what you will likely have to do is put your algorithm on a machine there that has direct access to it. So instead of moving data around, we have to move applications around. Your data sets are one thing, but your applications should be built so that they can run in other places: on your local computer, or on high-performance computing facilities. For example, we collaborate a lot with the Barcelona Supercomputing Center for some of these applications. And then, on top of the applications, you need the data interpretation. Putting all these things together by hand is not of this time anymore; it doesn't work with the applications or with the developments that are going on. So we need some way to architect this.
Within the María de Maeztu project we started to think about these kinds of architectures, and we are working on them. Here is an example of a really dedicated application, in this case for the analysis of medical images. One of the most important things is reusability, of data sets, of tools, of all these components, approached in a uniform way. What we developed within this platform is a way to isolate data sets and specific applications and make sure that they are compatible, and especially that they stay compatible in the future. And we are now making sure this approach carries over: we have expertise in medical image analysis, we are translating it to bioimage analysis, and many things are actually similar while others differ. Within this project we try to identify the similarities and structure our data sets, our algorithms, and the way we share all of these, so that the same tools serve both.

I also think exploitation and outreach are very, very important for us. We are a small university, not that well known in the world at this moment, and I think we can do a lot better. One element is, of course, active collaborations, especially internationally. We are involved in a lot of European projects, so as individuals we are actually quite well known, and that is typical for the whole department: the individuals are known within their fields, but the institute as a whole is not known that much, and there we can improve.
One thing that really helps in promoting the place, the unity of the department and the university, is bringing people here for conferences, training courses, and the like. So we have started organizing specialized, research-oriented training, because there is a clear need: in the biology community and in the medical imaging community there is a big need, and we should really use that. And we have great facilities, this auditorium for example, which I think are underused, so this is something we are working on. Xavier also said that entrepreneurship is really important. The way we currently approach it, which I think is a good way, especially for young researchers and also young engineers, is getting them into partnerships, internships, or collaborations with industry. We very much promote industrial doctorates; for example, we have a couple of Marie Curie industrial doctorates and similar networks. I think this is a way to get closer to industry and to teach our students and our PhD candidates how to work with industry, and entrepreneurship, as Xavier said, not only leads you to industry but will help you in any part of your career. Among our recent initiatives, we very recently organized here a Virtual Physiological Human summer school, on modeling parts of the human body, which was actually very successful; the students especially liked it.
And as I said, especially in bioimage analysis we are starting more and more training initiatives; some of the courses have already started internationally. The postdoc working on this project in this area, Chung, was already involved in courses abroad, in Germany and elsewhere, and now we are bringing this to Barcelona: in September, for example, we will organize some of these courses, both for the local community and especially internationally. I think this is very important for building the context for the future, because these projects are intrinsically interdisciplinary. We need both the engineers and the biologists, and getting the biologists to know our institution by training them, even if it is sometimes very basic training, helps us to think about what they need and helps them discover what engineering can do. That way we can collaborate, make progress in science together, and, for example, attract funding for this science. So that is what I wanted to do: present the very start, the onset, of this project. I hope that in the coming years we can show you some more concrete results of where we got this way. Thank you very much.

Question: for me, just one comment on something that was not clear. The issue of the data, I think, is quite clear; it is the software. Concretely, do you have a clear strategy for how to reuse software? For me this is always the issue: a new student comes in, has to continue someone else's existing PhD work, and basically has to start from scratch.

This is indeed a challenge, I agree with that. The way we will partially approach it for now, I think, is by trying to make sure that the specificity of each project is isolated.
That means that access to the data, and how you present things to a clinician or a biologist, we try to do in a common way. That is why we want to work more with these kinds of containers. When someone writes an algorithm, you cannot force a student to use a certain language, Matlab or C++ or whatever, because they have to work with what they are comfortable with; that is the most efficient. But you have to make sure you can contain it, so that you can incorporate it into a bigger environment. This is something we saw the need for and are working on. I don't believe in one full common platform, because those become mastodons that you cannot use anymore, but I do believe in a common framework that can be useful. And that becomes the work of developers: visualization tools, how you access data, how you distribute data, how you set up the database; not every researcher needs to build those things. That is why we started thinking about this approach of isolating components. And even when you have an algorithm you know works, the only way to ensure it keeps working is, for example, to create a virtual machine that can be there forever: whatever version of whichever Unix it was, you don't care, you can keep on using it. So that is what we try to do. We will see if it works; in three years, we can tell you.

Any other comment? My question refers to the green slide where you had your roadmap of how you would like to see cyberinfrastructure developed to support the research. My question is: there are a lot of new tasks in there that are not in the traditional role of a researcher.
For example, platform development, the containerization you talked about, and software development in general take more and more time than they ever have traditionally, as does working with the data and so on. So my question is: how do you imagine the people who are doing this getting rewarded for it, when typically the reward is really for the publication and the traditional research output?

Yes, for this you indeed need software developers. But look at the possibilities for a software developer: one is that you go to a very big company, you get quite a good salary, but you mostly do very boring work. When you go to the university, the challenge for them is that you try to use the newest technology and you are embedded in the research. So the reward would really be an intellectual stimulus: they can learn more, they can develop more; they definitely get less money, we all know that. In a way, it is the same as in academia. You are in academia, of course, partly because of the publications, that is your reward, but your reward is normally intellectual: not "I have this paper" but "I solved this problem". And I think it is very similar here. Whenever you have software that is being used by somebody, that is a really, really big reward. A lot of software developers in big companies maybe build a small part of something and have no clue who uses it, or are not interested. Here, you are very close to the researcher or the biologist, you see them using the tool, and you hear them say: "this is great, we should have had this years ago". I think that is a great reward. That is what we have to work with; we cannot compete on salary, so...