…I'll keep the presentation a bit short today. The last part, on current and future challenges, I made very short at the very end, and we can also discuss those in the panel discussion. I focused more on showing the different streams we have been using to do pulmonary imaging at TOMCAT.

Maybe just one side note: I am a beamline scientist at TOMCAT, and we are currently also involved in building up a new line, so we will have two imaging beamlines which will be somewhat complementary. We can discuss that later; I don't have it in this presentation. So I'll briefly touch on the methods and instrumentation that we've been developing, then show some results on in vivo imaging and on some fixed samples, which were done in collaboration with some of the people here, and keep the outlook and challenges short.

I'll make this part short as well, because most of you are familiar with why we do lung imaging. The lung is a very important organ, and lung disease is among the leading causes of death and morbidity worldwide. The problem is that by the time a lung disease is diagnosed it is often already at an advanced stage, and it would of course be good to have very early screening or identification of the disease or its complications. As Lars was showing, there are many different clinical and preclinical imaging modalities available, but the main goal in any imaging modality is of course higher resolution and higher sensitivity. There are different clinical diagnostics and imaging techniques; the gold standard is HRCT. What has also been done recently is to develop computer-aided diagnosis and go the route of artificial intelligence. And there are established preclinical animal models and imaging techniques.
There is 2D and 3D imaging in vivo, and imaging of fixed samples, and our part, initially developed during my PhD and continued in a follow-up project, was to develop in vivo imaging at the micrometer scale. The setup is summarized here briefly. We image small rats, seven to 14 days old, and adult mice; these are the sample sizes we can cover post mortem and in vivo. The pixel sizes we use are between one and three micrometers, corresponding to very small fields of view. We acquire between 300 and 400 projections, with total scan times of one to two minutes. I will show it later, but this scan time is essentially dictated by the heart rate: since we have to trigger each image on the heartbeat, the scan takes much longer, but if you sum up only the actual exposure, it is below a second.

This is the end station we've been developing; here you see its different parts. We have a small-animal ventilator and ECG, all connected with the detector and the rotation-axis system, and an isoflurane flow to keep the animal anesthetized and stable. And this is how the imaging looks in the end. As Martin was already showing, and I'll really skip this because it's been shown, one of the main advantages of the synchrotron that we exploit is the better contrast from phase-contrast imaging. In fact, we are not so much interested in improving the contrast as we are in reducing the dose: high resolution at very low dose, although "low dose" is quite relative when we are speaking at the synchrotron level. These are the types of images we get with this technique.
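The timing argument above (minute-long scans, but sub-second total beam-on time) is easy to check with rough numbers. This is a back-of-the-envelope sketch; the per-projection exposure and heart rate are my assumptions, not values stated in the talk:

```python
# Back-of-the-envelope check of the gated-acquisition timing.
# Assumed, illustrative numbers: ~350 projections, ~2 ms exposure per
# projection, small-animal heart rate ~300 bpm, one projection per beat.

n_projections = 350        # projections per tomographic scan
exposure_s = 0.002         # X-ray exposure per projection (assumed)
heart_rate_bpm = 300       # heart rate (assumed)

total_exposure_s = n_projections * exposure_s        # summed beam-on time
scan_time_s = n_projections * 60 / heart_rate_bpm    # wall-clock scan time

print(f"total exposure: {total_exposure_s:.2f} s")   # well below one second
print(f"scan time: {scan_time_s / 60:.1f} min")      # on the order of minutes
```

With these assumptions the summed exposure is 0.7 s while the gated scan stretches to just over a minute, consistent with the one-to-two-minute scan times quoted in the talk.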
As I was mentioning before, the challenge was really to record the heartbeat, evaluate at which point in the cardiac cycle it is most favorable to take images, and then run an acquisition cascade where the shutter opens, we expose the animal to X-rays, and we move on to the next rotation angle. The result is that the lowest achievable dose was in the range of five to ten gray per tomographic scan for this small region of the lung, at the very high photon flux available at the synchrotron. The total exposure time, as I said, was below a second, but the whole scan took about two minutes. The achievable resolution, meaning the features we can actually see in the lung, is, like the pixel sizes I was showing, roughly below three micrometers. This was all part of an acute experiment, where we saw no immediate effects from the radiation dose, but of course the dose is very high. A few years ago we installed a new microscope at TOMCAT, so I would claim we can now bring this lowest achievable dose below one gray while keeping the experimental settings the same, and I expect we will be able to push this further in the future.

Maybe this, again, just to underline that these are in vivo images at different pressures, for instance. But I actually want to show this, which illustrates the challenge. This is a live animal which we put in the X-ray beam, with a constant pressure set in the lung, and the big movement that you saw is the lung breathing. The challenge is to get motion-free images at this resolution in a tomographic setting, so that we can reconstruct them; otherwise you get something like on the left side, and that is not useful at all.
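The acquisition cascade described above (detect the heartbeat, wait for the quiet phase, expose, rotate) is essentially a scheduling problem. Here is a minimal, purely illustrative sketch of that gating logic with made-up timing parameters; it is not the actual beamline control software:

```python
# Minimal sketch of ECG-gated exposure scheduling: one short exposure per
# heartbeat, a fixed delay after each detected R-peak, inside the
# low-motion window of the cardiac cycle. Delay, exposure time, and the
# simulated R-peak train are illustrative assumptions.

def schedule_exposures(r_peaks, n_projections, delay_s=0.05, exposure_s=0.002):
    """Return (start, end) times of one exposure per heartbeat."""
    exposures = []
    for t_peak in r_peaks[:n_projections]:
        start = t_peak + delay_s              # wait into the quiet phase
        exposures.append((start, start + exposure_s))
    return exposures

# Simulated R-peaks at 300 beats per minute (one every 0.2 s).
peaks = [i * 0.2 for i in range(400)]
plan = schedule_exposures(peaks, n_projections=350)
print(len(plan), plan[0])  # 350 exposures; the first starts 0.05 s after t=0
```

In the real setup the rotation stage would advance to the next projection angle between exposures, and the shutter would open only for each scheduled window.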
It took us a couple of years, many years so to say, to get this type of imaging working. What we established, evaluating in all different directions, is that once you set a constant pressure in the lung, you still have pressure oscillations in the lung from the heartbeat, so you need to trigger very precisely; but you also see that there really are time points in the cardiac cycle where the motion in the lung is very small. So this type of imaging is feasible; it just needs to be done in this way.

What I was talking about until now was mostly the technique we developed during my PhD, but the reason we wanted to develop it was to apply it to disease models and to applications coming from the biomedical and clinical communities. One of our collaborations, together with other groups including the group of Sam Bayat in Grenoble, who was mentioned before, was to study acute respiratory distress syndrome (ARDS). That is a very serious condition which can be triggered by different complications. The biggest problem is that when a patient is admitted to the ICU with it, the mortality rate is very high, and the treatment is of course very expensive. One of the main points is that the patient needs to be kept stable and ventilated, but the ventilation itself further injures the lung, and the question is always how to tune it. And here is a statistic I actually took from Austrian TV not so long ago: worldwide, the top cause of death is cardiovascular disease, but during COVID it was really noticeable in the national statistics how significant the complications coming from ARDS, pneumonia and so on became.
So understanding ventilator treatment and improving it is quite significant. What we developed and studied here was a mouse model where we used an aggressive ventilation protocol to induce ventilator-induced lung injury, a condition very similar to what is present in ARDS. The imaging technique was adapted to this problem but based on what I was showing before. We set a positive end-expiratory pressure, varied it, and looked at how air is or is not being recruited into the small airways and alveoli. What we could actually resolve, from the onset or at baseline, was how the lungs look when such a condition is present, and we could follow up the animal with different tests, blood tests and so on, to mimic these conditions. The other goal was to visualize at which positions in the lung there was a lack of aeration or an overdistension. I have to admit the experiments were done a while ago and we are still analyzing the data and finalizing the paper. The current hypothesis is that, due to this very heterogeneous overdistension, some sort of self-acceleration of the injury takes place while the animal is being ventilated, and one of the challenges at the moment is to quantify precisely where this is happening and by how much.

In parallel, we used a similar animal model, but in a rabbit at the ESRF, where we modified bits of the technique to enable real 4D imaging, where you can decouple the heart movement and the lung movement and study everything together. Here we could really study the displacement and strain within the lung.
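A strain field like the one mentioned can in principle be derived from the displacement field that image registration of the 4D data yields. Below is a hedged sketch using the small-strain tensor on a synthetic displacement field; the actual analysis in the study may well differ:

```python
# Sketch: small-strain tensor e_ij = (du_i/dx_j + du_j/dx_i) / 2 from a 3D
# displacement field. The displacement field here is synthetic; in practice
# it would come from registering two lung volumes (e.g. two breathing phases).
import numpy as np

def strain_tensor(u, spacing=(1.0, 1.0, 1.0)):
    """u: displacement field, shape (3, nz, ny, nx).
    Returns the symmetric strain tensor, shape (3, 3, nz, ny, nx)."""
    grads = np.array([np.gradient(u[i], *spacing) for i in range(3)])
    return 0.5 * (grads + grads.transpose(1, 0, 2, 3, 4))

# Synthetic uniform 1% stretch along z: u_z = 0.01 * z, u_y = u_x = 0.
nz = ny = nx = 8
u = np.zeros((3, nz, ny, nx))
u[0] = 0.01 * np.arange(nz, dtype=float)[:, None, None]

eps = strain_tensor(u)
vol_strain = eps[0, 0] + eps[1, 1] + eps[2, 2]  # trace ~ local volume change
print(vol_strain.mean())  # ~0.01 for a uniform 1% stretch
```

The trace of the strain tensor gives a local volume-change map, which is one way to quantify regional aeration changes between breathing phases.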
The model here was a bit different, but it was also an established model used in rabbits. The scan parameters were not as high resolution, with a 22 micrometer pixel size, but thanks to the higher flux available at synchrotrons we could do this in vivo very efficiently.

Maybe to stop here regarding the in vivo imaging: one of the challenges we now face when writing up the study is how to quantify these changes in 3D. Essentially, we want to quantify some sort of non-linear changes in the lung. As I was showing before, when a healthy lung deteriorates and has liquid inside, we are comparing the same tissue with something that does not look like the same tissue at all, and that creates challenges on the computational side as well. What we developed earlier was a thickness-map analysis, where we can analyze the thickness of these structures; but we also want to look at the surfaces between air and tissue and between air and liquid, and do quantitative analysis with those as well.

The reason I say this is that, with all the instrumentation we had, we were then in fact able to image the whole lung in a very short time. The whole acquisition here takes about 15 to 20 minutes, is composed of 63 individual volumes and scans, and yields a very big volume, roughly one terabyte in size. That is very similar to the paper Martin was showing before from the ESRF, where they image different organs. I always claim we were the first ones to do it, but the reason we tried to exploit it was that we wanted to mimic really fresh conditions.
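One standard building block for the thickness-map analysis mentioned above is the Euclidean distance transform: for each tissue voxel it gives the distance to the nearest air voxel, whose maxima relate to local wall thickness. This is a minimal sketch on a toy 2D mask, not necessarily the exact algorithm used in the talk:

```python
# Sketch of a thickness-map building block: distance from each tissue
# voxel to the nearest air voxel. The toy mask (a 3-pixel-thick wall in a
# 2D slice) is purely illustrative.
import numpy as np
from scipy import ndimage

mask = np.zeros((16, 16), dtype=bool)  # tissue = True, air = False
mask[6:9, :] = True                    # horizontal tissue band, 3 px thick

dist = ndimage.distance_transform_edt(mask)  # distance to nearest air voxel
print(dist[mask].max())  # deepest tissue voxel sits 2 px from air
```

In a full local-thickness analysis one would go further, e.g. fitting maximal inscribed spheres, but the distance transform already separates thin septa from thickened, liquid-filled regions.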
So this is not an embedded sample; this is really an animal that has been sacrificed and imaged right after there is no heartbeat anymore, so the conditions are quite close to in vivo. The reason we made the scan time so short was so that the lung does not change during the scan; still, some degradation happens, because there is no blood flow anymore. When we stitch the single volumes together, additional processing and deformation correction are necessary on the computational side.

One thing I'll just mention without showing anything: the thickness-map analysis that I was showing before, applied to such a big sample without cropping or downscaling it, would take weeks of computation, but we developed an algorithm to do it on a cluster in about 20 minutes. That is a huge improvement, but what I actually want to say here is not that we achieved it and now it's super great. I want to say that this is in essence possible for many of the analysis steps being done today. I think we should realign to the fact that in the future we will be dealing with very big datasets, and everything we do today in analyzing data should go in this direction. We should ask: I want to do this on such a large dataset, but in a foreseeable amount of time, not waiting days or weeks for it to be calculated on a cluster.

Here I'll also quickly mention the work from a collaborating group, which we started a couple of years ago; they are really returning users at our facility. It was also mentioned before: standard paraffin blocks which are being scanned. I won't give more details because we have the experts here.
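The scaling point above, that analyses should run on terabyte volumes in a foreseeable time, usually starts with out-of-core processing: streaming the volume slab by slab instead of loading it whole. Here is a minimal sketch with an illustrative raw-file layout and an assumed air/tissue intensity threshold:

```python
# Sketch of chunked, out-of-core processing for large stitched volumes.
# File layout, shape, and threshold are illustrative assumptions.
import os
import tempfile
import numpy as np

def iter_slabs(path, shape, dtype=np.uint16, slab=64):
    """Yield consecutive z-slabs of a raw volume without loading it fully."""
    vol = np.memmap(path, dtype=dtype, mode="r", shape=shape)
    for z0 in range(0, shape[0], slab):
        yield vol[z0:z0 + slab]  # memmap view, no full-volume read

def air_fraction(path, shape, threshold=100):
    """Fraction of voxels darker than a threshold (air), computed slab-wise."""
    air = total = 0
    for s in iter_slabs(path, shape):
        air += int((s < threshold).sum())
        total += s.size
    return air / total

# Demo on a tiny synthetic volume: first half "tissue", second half "air".
tmp = tempfile.NamedTemporaryFile(delete=False, suffix=".raw"); tmp.close()
data = np.zeros((128, 32, 32), dtype=np.uint16)
data[:64] = 500
data.tofile(tmp.name)
frac = air_fraction(tmp.name, data.shape)
print(frac)  # 0.5
os.unlink(tmp.name)
```

The same pattern (fixed-memory passes over slabs or chunks) is what lets an analysis move from a cropped test volume to the full stitched lung, and it parallelizes naturally across cluster nodes.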
Now I wanted to discuss this a little bit. I tried to sketch the life cycle of a typical synchrotron experiment and who actually does what in this type of collaboration. Of course we have the user group, who come with their scientific questions and an idea of how to realize an experiment, and who ideally then bring insights back to the scientific community. On the other hand, we have the beamline staff, who help the user group evaluate the technical feasibility, can help design the experiment, and are of course the main help in the data acquisition process. There are very advanced user groups, for instance with ties to a synchrotron facility, who may not need much help from the beamline staff; on the other hand, there are user groups who don't know what the synchrotron does or what it can do, and who need really significant help from the beamline staff. That is essentially also how I understood today's workshop: to re-evaluate this a bit and see how to further advance it in the future.

What is important here is for the scientific questions to be stated and really worked through. I put one point here which I just call administration, to make one thing clear: going to a synchrotron really requires personnel and effort; it is not something where one puts a sample in and gets a result out. Sometimes I see new users who, whether they don't think about it or just don't realize what the whole setting is about, only later say: oh, I have so much data now, what should I do with it? That is something to keep in mind when thinking about how such a collaboration can work. Once this is settled from the beginning, I think there is no problem.
Another thing is that we rely more and more on data analysis experts to help with the data analysis, expertise which the user group may or may not have, and which the beamline staff may or may not have. And of course the interpretation is done together with the user group. I highlight this because as the data analysis becomes more and more challenging, and more and more advanced algorithms could be utilized, it doesn't mean the beamline staff can use them, and it doesn't mean the beamline staff or the user group even knows about them.

What I mostly want to end with is to keep in mind that with synchrotrons we really have access to huge, very high-resolution data, as was shown today. This could also mean taking a different look at the data: using it not just to image these organs, but to work on modeling this data, or on simulations which were not possible before because this type of data was not available. One thing we are currently trying, and that is also my role now at our beamline, is to make the connection between data analysis and data acquisition much more prominent. The classical synchrotron experiment was: one comes, performs the experiment, takes the data home, and analyzes it. What we need to become better at is bringing the sample in, scanning it, and having real-time feedback: is this what I want to see? Where do I want to scan? Do I want higher resolution in some places? And since we will have two beamlines in the future, one can also think about multimodal imaging, where one part of the scan is done at one beamline and the other part at the other.
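The real-time feedback loop described above can be boiled down to: scan coarsely, score candidate regions with a quick metric, and queue only the interesting ones for a high-resolution rescan. Here is an illustrative toy, using local variance as a stand-in "interestingness" score; any real beamline pipeline would use a more sophisticated metric and actual reconstruction code:

```python
# Toy sketch of overview-scan triage: split a 2D overview image into tiles
# and rank them by local variance, a crude proxy for "something is here".
import numpy as np

def select_rois(overview, tile=16, top_k=2):
    """Return the top_k (row, col) tile indices with highest variance."""
    h, w = overview.shape
    scores = {}
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            scores[(r // tile, c // tile)] = overview[r:r+tile, c:c+tile].var()
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Synthetic overview: flat background with one textured patch.
rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[16:32, 32:48] = rng.normal(size=(16, 16))  # high-variance region

rois = select_rois(img)
print(rois)  # tile (1, 2), the textured patch, ranks first
```

The point is the loop structure, not the metric: with fast reconstruction and a scoring step between scans, the instrument can decide where to spend dose and beamtime instead of the user finding out weeks later.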
For this to work, the connection between beamlines and data analysis experts needs to be better pronounced. The reason I say it can, or needs to, be better is simply that, as I said, deploying new algorithms in the infrastructure we have in the lab is not straightforward today, but it could be. And on the other hand, for these experts to understand exactly what our problems are is also not straightforward nowadays, but it could be, because they have the domain expertise in dealing with data.

To conclude and underline my personal outlook: we have achieved high-resolution in vivo imaging with which we can study different preclinical models. The airway and alveolar structure was resolved down to a pixel size of three micrometers, and interdisciplinary approaches, as I was showing on the previous slide, are really necessary to further develop the techniques and the experimental design. Some of the topics I described, in a very verbal way, were to have real-time imaging with real-time visualization and post-processing at hand, and for the future to enable large field-of-view, high-resolution imaging, and to go in the direction of low-dose imaging, multi-region-of-interest imaging, multi-scale imaging, or some sort of landmark-driven imaging, where you come with a very specific question, imaging a paraffin block or whatever the sample is, or looking for features inside the sample in an automated way. That is something I think we can start working on to provide as a standard. So that's basically it.

Thank you for structuring the problem so that it smoothly opens the discussion. A first question from Boran. Yes, thank you for a fantastic presentation; it's amazing work.
I'm just wondering, as obviously we're looking forward to the MAX IV beamline: did you say that the immediate post-mortem imaging you showed is relatively close to live imaging? How fast does the organ decay kick in, for instance for the lung? You did a 20-minute scan, so did you see changes moving through your field of view?

Yeah, that's really just empirical; I don't know the exact physiological mechanism behind it, but empirically what we always experienced was about half an hour during which you can really keep it stable and do the imaging. So that worked very well.

I'm thinking of it just as an alternative, if there are no animal facilities nearby and you can have some kind of...

Yes. And one thing, maybe you can go one slide back; okay, never mind. One thing I wanted to highlight: I was showing this arrow between data analysis and data acquisition. If you are doing some sort of in vivo imaging, and then you sacrifice the animal and do such a scan at the end, I'm sure there is a way to feed that data back into your in vivo image and improve the image itself. That's something we want to tackle in the future; we have the data at hand, we just need to use it.

Hi, it was a wonderful talk. Touching on this point between data acquisition and data analysis: in your mind, do you think it's a service that the synchrotron should offer, like a prepackaged GitHub repository of the data analysis pipeline that the user could take with them? Or how?

Yes, and that's really my personal view: I think it should, and I think there is the technical possibility to do so, but I don't see it being realized at any synchrotron so far.
The way I see it, in my opinion, the synchrotron should provide clearly defined governance on how a user can bring his or her algorithm to the synchrotron, to use in parallel with the experiment; that should be provided by the synchrotron facility, though as I said, it's just my personal view. The other thing would be to provide templates for how these things can be used: documentation with a ten-minute tutorial on how to run this type of thing. I see this missing, and not just for me; I mean across the synchrotron community. That is something which would then let me say to a completely external user: look, this is the webpage, go through it, work through it, and then you will figure out what to do at the experiment. That's something I hope we will be able to provide in the future. Yeah.