Hello, that talk is in one minute. Please take your seats and we'll have Marc Ruiz and Adrià Julià presenting the project Starviewer. Please welcome the speakers.

Hi, I am Marc and this is Adrià. We are from GILab at the University of Girona and we are the developers of Starviewer. In 2004 a collaboration agreement was signed between the Graphics and Imaging Laboratory and IDI, the Institut de Diagnòstic per la Imatge. In this agreement, GILab provides the research knowledge and IDI the radiological experience, and the objective of the agreement was to develop this: Starviewer, which is a radiological viewer.

Radiological viewers are essential in the medical imaging workflow. First, a technician acquires images in a medical scanner. These images are then stored in a central server called a PACS. Then, a radiologist can retrieve these images through a radiology information system (RIS) and view them in a radiological viewer. Starviewer is the part that we developed.

All of this is governed by a standard called DICOM. DICOM is a very extensive standard which governs how medical images should be stored, retrieved, saved, communicated, etc. This standard also gives a vision of the world, which is the following. According to DICOM, in the world there are patients, and patients usually go to hospitals to have a study done. For example, when you have a CT scan, a study is being performed on you. Each study contains different series, and each series stores several DICOM objects, which may be images, graphs, encapsulated documents, etc.

The main goal of a viewer is to download these files and display them to the radiologist, but not only that. If you have a series which is a CT, which is, for example, slices of your brain, this is volumetric information. From this volumetric information you can build what's called a 3D volume.
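The patient → study → series → object hierarchy described above can be sketched as a simple data model. This is a hypothetical illustration of the DICOM information model, not Starviewer's actual classes:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical sketch of the DICOM information model:
// Patient -> Study -> Series -> DICOM objects (images, documents, ...).
struct DicomObject {
    std::string sopInstanceUid;  // unique id of one image/document
    std::string modality;        // e.g. "CT", "PT" (PET)
};

struct Series {
    std::string seriesInstanceUid;
    std::vector<DicomObject> objects;  // e.g. the slices of a CT scan
};

struct Study {
    std::string studyInstanceUid;
    std::vector<Series> series;  // a CT and a PET series may share a study
};

struct Patient {
    std::string patientId;
    std::vector<Study> studies;
};

// Count how many DICOM objects a study holds across all of its series.
std::size_t objectCount(const Study& study) {
    std::size_t n = 0;
    for (const auto& s : study.series) n += s.objects.size();
    return n;
}
```

A viewer walks this tree after downloading a study: one series gives the 2D slices, and a volumetric series (such as a CT) is additionally stacked into a 3D volume.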
A 3D volume means that you can then do reconstructions and see your brain from different viewing angles. In the case of Starviewer, if we also have, for example, a PET image, which is from nuclear medicine, you can overlap them and then, for example, show the parts which are active. We will show this in the demonstration.

Now, on the technical side, Starviewer is developed using C++11. We use several open source libraries, which make our life easier. The project is based on qmake from the Qt framework. It is multi-platform software, which runs on commodity hardware having at least OpenGL 3.2. It's multi-language, and in 2014 it was open-sourced under the GPL license.

The architecture can be separated into four main parts. At the bottom we have the external libraries, which allow us to communicate with a PACS server or to read DICOM files from the local computer. Then we have the core, in which we have a wide collection of classes, which can be divided into four categories or more. For example, we have widgets, which include basic 2D and basic 3D viewers; these allow you to see the images but don't implement any interaction by themselves. We also have layout classes and hanging protocols, which are a special kind of layout defined in DICOM. Then we have a collection of tools, which are used to implement interaction with the basic viewers, so you can combine the tools you want with the viewers. We also have the input and output classes to communicate with the PACS, read local files, or communicate with the local database. And there is the extensions engine, which allows you to create extensions, which we will explain later.

Then, the parts that the user can see are the core user interface and the extensions. The core user interface includes several windows which are central to the platform, not related to any particular extension. For example, we have the study management window, the configuration screen and several others.
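The decoupling described above, where basic viewers only draw and tools implement interaction, can be sketched like this. All class names here are hypothetical illustrations of the design, not Starviewer's actual API:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical sketch: viewers draw, tools implement interaction, and any
// set of tools can be attached to any viewer. Not Starviewer's real classes.
struct Event { int x, y; };

class Tool {
public:
    virtual ~Tool() = default;
    virtual std::string name() const = 0;
    virtual void handle(const Event& e) = 0;
};

class ZoomTool : public Tool {
public:
    std::string name() const override { return "zoom"; }
    void handle(const Event&) override { /* adjust the camera here */ }
};

class DistanceTool : public Tool {
public:
    std::string name() const override { return "distance"; }
    void handle(const Event&) override { /* accumulate measurement points */ }
};

// A basic viewer knows nothing about interaction; it just forwards events
// to whatever tools were attached to it.
class Viewer {
public:
    void attach(Tool* t) { tools_.push_back(t); }
    void dispatch(const Event& e) {
        for (Tool* t : tools_) t->handle(e);
    }
    std::size_t toolCount() const { return tools_.size(); }

private:
    std::vector<Tool*> tools_;  // non-owning, for brevity
};
```

The benefit of this split is that the same zoom or distance tool works unchanged on a 2D viewer, a 3D viewer, or a reconstruction viewer.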
Extensions are one of the most important parts of Starviewer. They are intended to support general or specific workflows or to add new features to the platform. At this moment we have five stable extensions, but new ones can be easily created. For example, here on the right we have a screenshot of an extension which was created using just over 100 lines of C++.

Extensions are currently implemented as static libraries. This has the downside that you must recompile Starviewer to add them, but it has the advantage of allowing link-time optimization. In the future, we want to support both static and dynamic libraries.

Now we will show you a few images of the extensions and later a demo. The most important one is the 2D viewer extension. It allows you to see images in two dimensions and implements image fusion. The 3D viewer extension allows you to view the image using volume ray casting or surface rendering. Both of them will be shown in the demo. The multi-planar reconstruction extension allows you to see images in two dimensions, but in a non-orthogonal plane. For example, here there is an oblique plane which is aligned with a vertebra, which allows you to see the disc of the vertebra. Then we have the DICOM print extension, which allows you to send an image to a DICOM printer to have it on film like in the old days. Finally, the PDF extension lists the encapsulated PDFs in a study and lets you open them in the default PDF viewer.

Now, Adrià will show you a demonstration.

Okay, one moment. Here is the Starviewer main window, and one of the things that radiologists usually do is what's called slicing. Slicing is changing the slice. As you can see, as I change the slice, this green line you can see here moves to show the slice you are seeing right now. I will invert the colors, because I think it will ease your task of viewing this. These are called reference lines.
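Going back to the extensions engine mentioned before the demo: a plugin mechanism like the one described, where each extension is a small library the host creates on demand, could look roughly like this. Every name here is a hypothetical sketch, not Starviewer's actual extensions API:

```cpp
#include <cstddef>
#include <functional>
#include <map>
#include <string>

// Hypothetical sketch of an extensions engine: each extension registers a
// factory under an id, and the host instantiates extensions on demand.
class Extension {
public:
    virtual ~Extension() = default;
    virtual std::string id() const = 0;
};

class ExtensionRegistry {
public:
    using Factory = std::function<Extension*()>;

    void registerExtension(const std::string& id, Factory f) {
        factories_[id] = std::move(f);
    }

    // Returns a new extension instance, or nullptr if the id is unknown.
    Extension* create(const std::string& id) const {
        auto it = factories_.find(id);
        return it != factories_.end() ? it->second() : nullptr;
    }

    std::size_t count() const { return factories_.size(); }

private:
    std::map<std::string, Factory> factories_;
};

// A tiny extension, in the spirit of the ~100-line example mentioned above.
class HelloExtension : public Extension {
public:
    std::string id() const override { return "hello"; }
};
```

With static libraries, registration happens at link time inside one binary; supporting dynamic libraries, as planned, would mean loading the factory from a shared object at runtime instead.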
What we can do is take several slices and generate one single image which summarizes them. This is called thick slab. I put, for example, 120 millimeters. As you can see, now we see the pelvis and the ribs, and we are visualizing this thick block of images. Okay, I disabled this because it's quite CPU intensive.

Then there is what we explained before, which is the hanging protocols. Hanging protocols are templates that can be customized with XML files, so that if you are a doctor, you can set up your diagnosis environment quickly. But in this case I will put a manual layout of three viewers. In these three viewers, if I right-click, I have series 2, which is the CT scan, and here I have another series, which is the PET scan. This is nuclear activity. In this third, empty viewer I will put the fusion of these two series. I will invert the colors to ease your task. Then if I move this here, we can see that there is a more active region. There is the fusion: one is the CT scan, and this is the resulting fused image. Here I can change the color of this representation, for example. Now it uses a different color scale.

What we can also do is measurements. For example, I can measure the distance from here to here, or I can calculate several statistical values with a region-growing technique, like here. It gives technical information that we won't explain now.

Now I will move to the 3D visualization module, which does 3D volume rendering. This is usually done through the CPU, but it can also be accelerated using a GPU. I will choose this series, one moment, and I will apply this transfer function. It's the way you paint it. I can rotate it, and here we have this poor man lying down. With volume rendering, this is all filled, so I can change the opacity to see occluded parts. I showed what's called 3D volumetric data, but we can also have 4D volumes.
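The thick slab shown in the demo collapses a range of consecutive slices into a single summary image. One common way to combine them is a maximum intensity projection (MIP); here is a minimal sketch of that idea, not Starviewer's actual implementation:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Minimal thick-slab sketch: each slice is a flat vector of pixel
// intensities, and a slab of consecutive slices is collapsed into one image
// with a maximum intensity projection (MIP). Illustration only.
using Slice = std::vector<int>;

Slice thickSlabMip(const std::vector<Slice>& slices,
                   std::size_t first, std::size_t count) {
    // Start from the first slice so negative intensities (e.g. CT
    // Hounsfield values) are handled correctly.
    Slice result = slices.at(first);
    for (std::size_t i = first + 1; i < first + count && i < slices.size(); ++i) {
        for (std::size_t p = 0; p < result.size(); ++p) {
            result[p] = std::max(result[p], slices[i][p]);
        }
    }
    return result;
}
```

A 120 mm slab as in the demo would simply map that physical thickness to a `count` of slices using the series' slice spacing; other projections (average, minimum) follow the same loop with a different combining operation.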
4D volumes mean that you also have time data, and this is the case of a cardiac magnetic resonance. One moment, I will open it here. Okay, this is the heart of a person. I will put two viewers, and here I will put this image. Okay, I can slice; I am changing the slice. You can see the reference lines. This is one... I will invert the colors, sorry. Okay. One is one series and the other is another series, but because series have metadata, we can know spatially how one is located with respect to the other. Okay, and you can see the heart is moving. I will press play so you can see better how the heart is moving, and this is 3D. I am slicing, I am changing the viewing plane, and this is moving.

So, well, here ends our demonstration. If you have... Ah, one moment. What is it here? You can contact us at this email address, and you can find us on GitHub. And if you have any questions... Okay. Questions.

They ask where this software is used. This software is used in research, and especially in the main hospitals in Catalonia, where there are centers that do these CT scans, et cetera. Okay, they use it there, and these are the people that paid for the development of this software. Okay, thank you very much.