So good morning, everyone. Today I'm going to talk about how to deliver information from medical images to end users. This talk is about image analysis, but not about the image processing we do ourselves: we all use FSL, SPM, FreeSurfer, and whatever your favorite software is. What I'm going to focus on is how to deliver those image analysis methods to end-user clinicians and neuroscientists, for whom image analysis is very much a black box. We do some magic with software, and what they really want is their results so that they can publish their findings. We thought very hard about that, went through multiple iterations, and ended up looking at what kind of users we wanted to address. As a general categorization, we describe users as either radiologists, clinical trial teams (often CROs), or medical researchers, where those medical researchers are the ones with a minimal background in image processing and image analytics. On the left I've put the three main sources of image analysis software that those three segments are using: the scanners themselves come with image processing, which is more and more the case; you can buy commercial software; or there is plenty of research software available, much of it free and open source. I made the distinction here, with this horizontal line, that research software is very often for research use only, and there is quite a barrier to medical use because of the requirement for regulatory approval in a clinical setup. The solution we have worked on over the past several years is a cloud delivery approach. On the left I put this box representing a cloud service: we have those image processing algorithms running on a server somewhere, and they can be interrogated. The top arrow is through a web interface.
So the end user can just upload their images and download the results. But by using an API, we think we can also interface with the three main channels of delivery for those methods: the scanners could send us the images and we could process them; we are talking to a few commercial software vendors; and some research software can also link through our API to cloud computing servers. I'm going to show one example of such a tool, and the example I picked is PET processing. At the moment we have MRI, PET, and PET plus MRI; we have an atrophy report and soon a FLAIR report. But this talk is going to be about PET imaging. One of the markers that has been rising recently is the amyloid markers. Most of you know that Alzheimer's disease is associated with the deposition of amyloid plaques in the brain. In 2006 there was this new marker, the Pittsburgh Compound B, which binds to those amyloid plaques: the redder the image, the more radioactivity and the higher the concentration of amyloid in the brain. The problem then is to quantify those scans, using methods that provide the scientists and the clinicians with numbers they can use for their studies. The first step, which pretty much everyone does: you have your MRI, you have your PET, and you align the two together. Once they are aligned, that allows you to define a reference area in the brain; in the case of Compound B it's the cerebellar cortex, the white area here. You segment this area, take the average, and normalize the image by this average. Then from the MRI you have your grey matter, you parcellate the brain into areas, and you can compute the average over those specific areas. From then on you can do your science with an estimate of amyloid deposition in the brain. The objective of our work for this particular project was to develop a similar technique but without the need for an MRI.
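The MRI-based SUVR pipeline just described can be sketched in a few lines. This is a minimal illustration, assuming the PET volume is already registered to the MRI and that binary masks for the reference region and the parcellated areas are available; the function and region names are mine, not the tool's.

```python
import numpy as np

def compute_suvr(pet, reference_mask, region_masks):
    """Normalize a registered PET volume by a reference region and
    return the mean SUVR over each parcellated region.

    pet            : 3-D array of PET intensities, registered to the MRI
    reference_mask : boolean 3-D array, e.g. the cerebellar cortex
    region_masks   : dict of name -> boolean 3-D array (parcellation)
    """
    # Normalize the whole image by the mean uptake in the reference region.
    suvr_image = pet / pet[reference_mask].mean()
    # Average the normalized signal over each region of the parcellation.
    return {name: float(suvr_image[mask].mean())
            for name, mask in region_masks.items()}

# Tiny synthetic example: reference slab at uptake 1, target slab at 2.
pet = np.ones((4, 4, 4))
pet[0] = 2.0                                            # elevated uptake in one slab
ref = np.zeros(pet.shape, dtype=bool); ref[3] = True    # stand-in reference region
roi = np.zeros(pet.shape, dtype=bool); roi[0] = True    # stand-in cortical region
regions = {"frontal": roi}
print(compute_suvr(pet, ref, regions))                  # {'frontal': 2.0}
```

The same function covers all the reference regions discussed later (cerebellar cortex, whole cerebellum, pons): only `reference_mask` changes.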
And the normalization area that I showed, the cerebellar cortex, differs between tracers, so we also wanted something robust to that. The tool we are developing can be found at this URL, milxcloud.csiro.au. One of the most popular pieces of software around for analyzing PET images has been Neurostat. It has been around for many years; it was developed by Professor Minoshima in Japan, who is now working in the US. Basically what it does is take the surface of the brain, a rough convex hull of the brain, then, perpendicular to that surface, take the maximum or an average of the first voxels going inward into the brain. This was developed for FDG, the glucose marker, and it works very well because with FDG most of the signal is in the grey matter and there is almost no signal in the white matter. So if you take a line perpendicular to the surface and take the maximum as you go inward, you can project the signal onto that surface, and this is what you see here. By using a database of normal subjects, one can compute a z-score, and this is what gets reported. Basically, we tried to do something similar but for all the markers. In the case of amyloid, this is the PiB surface projection using the MRI: take the surface of the white matter, for each point on that surface take the average of the closest grey matter, and then paint the surface with a color code representing either the actual SUVR value or the z-score coming from your database of normals. This works very well with MRI. When you don't have an MRI, this is the method I'm going to briefly describe. The tool works by uploading a new scan to the website; then a spatial normalization happens, where the scan is registered to an atlas; a surface projection computes the local PET signal; and finally the report is generated and sent by email.
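The Neurostat-style projection, sampling inward along the surface normal and keeping the maximum, can be sketched roughly like this. It is a simplified toy version with made-up depth and step parameters, not the actual 3D-SSP code, and it assumes 1 mm isotropic voxels with nearest-neighbour lookup.

```python
import numpy as np

def project_along_normal(volume, point, inward_normal, depth_mm=6.0, step_mm=1.0):
    """Sample a PET volume along the inward normal from a surface point
    and return the maximum value found (Neurostat/3D-SSP style).

    volume        : 3-D array of PET intensities
    point         : surface point in voxel coordinates (1 mm voxels assumed)
    inward_normal : vector pointing into the brain
    """
    point = np.asarray(point, dtype=float)
    normal = np.asarray(inward_normal, dtype=float)
    normal /= np.linalg.norm(normal)
    samples = []
    for d in np.arange(0.0, depth_mm + step_mm, step_mm):
        x, y, z = np.round(point + d * normal).astype(int)
        if all(0 <= c < s for c, s in zip((x, y, z), volume.shape)):
            samples.append(volume[x, y, z])  # nearest-neighbour lookup
    return max(samples) if samples else 0.0

# Synthetic cortex: a bright layer 2 voxels beneath the "surface".
vol = np.zeros((10, 10, 10))
vol[2, :, :] = 3.0                 # grey-matter-like signal
value = project_along_normal(vol, point=(0, 5, 5), inward_normal=(1, 0, 0))
print(value)  # 3.0
```

Taking the maximum along the profile is what makes the method tolerant to small registration errors for FDG, since the grey-matter peak is picked up wherever it sits along the ray.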
Quickly going through the various steps: the first problem is that your PET scan has a different appearance depending on whether the subject has a negative scan or a positive scan, which is not the case in MRI. So we had to create an atlas that varies from healthy to Alzheimer's, basically, and when you get a new subject you have to optimize the parameter that selects the atlas closest to that subject. That was one step that needed to be done. As I mentioned, various tracers require different normalization areas. This is the list of tracers that can currently be used in our system: there are five amyloid tracers, I think, plus FDG, and we have a new tau tracer that is also working but is not shown here. From the left, the normalization is the cerebellar cortex; in the middle, the whole cerebellum, which is more relevant for florbetapir; and on the right, the pons, which is the recommended normalization area for flutemetamol, the GE amyloid compound. Now, this is the tricky part of the method when you don't have an MRI. Reading the slide from the left: you get your PET scan and you normalize it to your template, which is adaptive. Because you have done that, you can also use a database of MRI and PET pairs that you have already characterized; at the moment we use 20, and we found that this works remarkably well. For each of those 20 you have a surface representation that has been simplified, which is why the surface shown here is rather smooth: it doesn't include all the foldings of the individual subjects. Basically what it does, shown for example by this red box on the right brain surface, is look at the same red box in the 20 subjects and compute a weight, which is related to mutual information, a localized mutual information.
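The adaptive atlas idea, a template that varies continuously from healthy to Alzheimer's with one parameter optimized per subject, could look something like this sketch. The sum-of-squared-differences criterion and the grid search are my assumptions for illustration; the actual similarity measure and optimizer in the tool may differ.

```python
import numpy as np

def blended_template(healthy, alzheimers, alpha):
    """Template varying continuously from healthy (alpha=0) to AD (alpha=1)."""
    return (1.0 - alpha) * healthy + alpha * alzheimers

def fit_alpha(subject, healthy, alzheimers, steps=101):
    """Pick the blend parameter whose template best matches the subject,
    here by minimizing sum-of-squared-differences over a simple grid."""
    alphas = np.linspace(0.0, 1.0, steps)
    errors = [np.sum((blended_template(healthy, alzheimers, a) - subject) ** 2)
              for a in alphas]
    return float(alphas[int(np.argmin(errors))])

# Synthetic 1-D "templates": an amyloid-positive subject should land near alpha=1.
healthy = np.array([1.0, 1.0, 1.0, 1.0])
ad      = np.array([2.0, 2.5, 2.0, 1.0])   # higher cortical uptake
subject = np.array([1.9, 2.3, 1.9, 1.0])   # mostly positive scan
print(fit_alpha(subject, healthy, ad))     # near 0.88 for this subject
```

The fitted template is then used as the registration target, so a strongly positive scan is never forced to match a healthy-looking atlas.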
And basically, for each location on the surface, we weight those templates by how similar they are to that particular subject. This is a bit involved; if you want more details I can explain, but it seems to work pretty well. These two lines show, for one subject, the method on top, looking at the PET without the MRI and displaying the PET signal on the surface, and on the bottom the same processing when you do have the MRI and you know where your grey matter and white matter are. You can see the results are quite similar. There are some differences; the table shows that the absolute error between the two methods is about four to five percent, which is within the noise of the PET signal. This is for flutemetamol, the GE compound, but it is about the same for all the compounds. So basically we can get similar results without an MRI to identify where the grey matter is. We tried it out on many subjects, and the first thing our clinicians did was throw a very difficult case at it. We earned a lot of brownie points with them because it worked on that particular case, which is probably the worst case they had in the AIBL study. You can see in the three images on the top left that the head was very tilted, and we still managed to register it properly to the template and compute a decent signal from it. As I said, it works for various tracers. This is Pittsburgh Compound B on top, for an amyloid-positive subject; those would be classified as Alzheimer's. Then the NAV4694 compound on the second line, florbetapir (that's AV-45), florbetaben, and flutemetamol on the bottom. They all give pretty much the same signal. This is in SUVR; there is also a method to normalize the units between the various tracers so that you can compare subjects who were scanned with different markers.
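The locally weighted combination over the 20 MRI-PET pairs can be illustrated as below. As a stand-in for localized mutual information I use a simple patch distance, so the weighting function here is an assumption; only the overall scheme (per-location similarity weights, then a weighted combination of the templates) follows the description above.

```python
import numpy as np

def local_weights(subject_patch, template_patches, beta=1.0):
    """Weight each template by similarity of its local patch to the
    subject's patch (toy stand-in for localized mutual information)."""
    ssd = np.array([np.sum((p - subject_patch) ** 2) for p in template_patches])
    w = np.exp(-beta * ssd)
    return w / w.sum()

def fuse(subject_patch, template_patches, template_values):
    """Combine the templates' surface values using the local weights."""
    w = local_weights(subject_patch, template_patches)
    return float(np.dot(w, template_values))

# Two toy templates: the first matches the subject patch exactly,
# so its surface value should dominate the fused estimate.
subject = np.array([1.0, 2.0, 1.0])
patches = [np.array([1.0, 2.0, 1.0]), np.array([5.0, 5.0, 5.0])]
values  = np.array([1.4, 2.0])         # per-template surface SUVR
print(fuse(subject, patches, values))  # close to 1.4
```

Repeating this at every surface location gives a subject-specific projection without ever segmenting the subject's own grey matter.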
It's very useful to detect and to look at the evolution of the signal over time. The first line is at baseline; in that case the subject was MCI and had a bit of amyloid deposition. At the middle line, 12 months, and the bottom line, 24 months later, you can clearly see an increase in the signal, which shows as more red here, with the characteristic pattern of Alzheimer's amyloid deposition. It also works for FDG PET, which is quite useful, for example for differential diagnosis. The top is a typical Alzheimer's disease pattern; below is what you would expect from a subject suffering from dementia with Lewy bodies, DLB; and corticobasal syndrome, on the bottom, is yet another pattern, which is mostly negative for amyloid. Clinicians also gave us some different pathologies, four of them here, each with a very typical pattern of amyloid deposition: behavioral-variant FTD, progressive non-fluent aphasia, which affects the language area, semantic dementia, and logopenic aphasia. So it's very useful to have a very quick read on a scan and to identify those different pathologies. As I mentioned, there is also a similar tool for brain atrophy, which computes cortical thickness. This slide shows, on the top line, one subject's FDG, which relates to glucose metabolism, matching very well the atrophy computed from MRI; and similarly on the bottom for semantic dementia, where the temporal area is affected. Another result shows the progression of atrophy over 18 months, where you can see the signal increasing. What we hope to do is use these kinds of tools to provide a spectrum of imaging biomarkers to clinicians and researchers, working with a group in France led by Gaël, by having multiple signals in the same framework.
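Several of these displays are z-score maps against the database of normal subjects mentioned earlier; that normalization itself is straightforward. This is a generic sketch with illustrative toy data, not the tool's actual normal database.

```python
import numpy as np

def zscore_map(subject_values, normal_database):
    """Z-score of a subject's surface values against a normal database.

    subject_values  : 1-D array, one SUVR value per surface vertex
    normal_database : 2-D array, shape (n_normals, n_vertices)
    """
    mean = normal_database.mean(axis=0)
    std = normal_database.std(axis=0, ddof=1)  # sample standard deviation
    return (subject_values - mean) / std

# Toy database of 3 normals over 2 vertices; the subject is well within
# the normal range at the first vertex and clearly elevated at the second.
normals = np.array([[1.0, 1.0],
                    [1.5, 2.0],
                    [2.0, 1.5]])
subject = np.array([1.5, 4.0])
print(zscore_map(subject, normals))  # [0. 5.]
```

Color-coding the surface by these z-scores is what makes the characteristic disease patterns stand out at a glance.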
So in that case I put the grey matter in the middle, FDG glucose metabolism on the right, and the amyloid plaques; different areas combined together can provide a more specific response, a more specific characterization of the pathology, in that case Alzheimer's. And these are new results using the new tau tracer that Chris Radistin is trialling. So basically the tool works like this. There is a website where you can select, from your computer, one or more PET scans; you have to select which tracer it is; access is free once you have registered; and you can upload up to two gigabytes of data sets and then it runs. At the moment we are using 10 nodes running in parallel, and we can process, I think, about 25 or 30 scans per hour. The results are sent by email, and that's what you get once the processing is done: a one-page PDF report per subject. This is the recto and verso of that page. The recto shows the main display of the signal over the surface of the brain, which is normalized, plus some numbers giving a quantitative signal for various parts of the brain, the various lobes, and some basic information to help the doctor make a decision. On the back there are the z-score maps with various normalizations: the cerebellar cortex normalization, the whole cortex, and the pons normalization. On top of that, we provide a basic CSV file with the signal for the different parts of the brain: if you have uploaded 100 scans, you get 100 lines, with as many columns as areas. There is an extra report for quality control, where you can check whether the registration was done properly and the reference area was correctly identified, and you can download, if you wish, all the data sets, that is, the actual PET scans registered into MNI space. Then, for each subject, we can also create a web report, so you can go online and check the various images and graphs for that subject. So I think that's it.
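The CSV output described, one line per uploaded scan and one column per brain area, could be produced along these lines. This is a sketch only; the region names, column order, and number format are illustrative, not the tool's actual layout.

```python
import csv
import io

def write_region_csv(results, regions):
    """Write one CSV line per scan, one column per brain region.

    results : dict of scan_id -> dict of region -> mean SUVR
    regions : ordered list of region names (the CSV columns)
    """
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["scan_id"] + regions)           # header row
    for scan_id, values in results.items():
        writer.writerow([scan_id] + [f"{values[r]:.3f}" for r in regions])
    return buf.getvalue()

# Illustrative region names and SUVR values only.
regions = ["frontal", "parietal", "temporal"]
results = {
    "subj001": {"frontal": 1.92, "parietal": 1.75, "temporal": 1.60},
    "subj002": {"frontal": 1.10, "parietal": 1.05, "temporal": 1.02},
}
print(write_region_csv(results, regions))
```

A batch of 100 uploaded scans would simply yield 100 data rows under the same header, ready for statistics in R or a spreadsheet.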
So to conclude, I hope I've convinced you that we managed to build a pretty robust tool to quantify PET scans. It works for most of the amyloid tracers and for FDG, and soon for some of the tau tracers; it gives similar performance between carbon-11 and F-18 compounds; and it's really easy to access. At the moment it's an evaluation version, and we would be very happy to get as much feedback as possible. To finish, I'd like to thank all the people involved: there are many people from the AIBL study, including the participants, and these are the people at CSIRO who really spent a lot of time and effort developing this tool. So thank you very much for your attention. Okay, just while it's switching over between projectors, we've got time for one question. You can baffle them with science, Olivier. Okay, we might keep moving to our next speaker.