We're students at the Chair for Computer-Aided Medical Procedures, and today we are going to talk about how augmented reality can be used in medicine. So, have you ever watched a Hollywood movie and had your jaw drop because of the technology on display? Did you ask yourself: can I have this at home? I do, all the time. As you can see here, in this example from the movie Iron Man, we see a new kind of technology. What we know nowadays is that we have a computer, a keyboard with buttons, and a mouse, and that's how we interact with data: we click, and we are able to open or close different kinds of data. But here we see something different. We see virtual objects that are floating in the air, and we see the actor using hand gestures to interact with this data: he uses his hands to enlarge the virtual objects and to move them to the side. And maybe you're wondering right now: do we have a technology that can do what we see in this scene? Can I have these virtual objects in my home? Can I also interact with them? I can tell you that something similar exists nowadays, and it's called augmented reality. But what exactly is augmented reality? As a concept, augmented reality means overlaying virtual objects on the environment that we see. And not only virtual objects: we can also overlay sounds, and we can overlay text. It goes beyond overlaying, because it gives us the opportunity to interact with these objects. Since we both study biomedical computing, which is the intersection between computer science, engineering, and medicine, we wanted to show you how this concept can be used in the medical field. One way we can use augmented reality in medicine is to change the way that information, for example medical images, is displayed to the doctors.
And nowadays in medicine, images are very important, because imagine what doctors would do if they had no imaging. In fact, up until about 100 years ago, doctors did not have any imaging. To find a diagnosis, they could look at the patient from the outside and take samples, for example of blood or urine. But to really see what was happening inside the body of the patient, they had to cut the patient open and look inside. This changed a lot over the last century, because many new imaging technologies have been developed, and now we have a variety of different imaging available all the time. So the question is: how do we display these images to the doctors? Here are examples of how medical images are displayed in today's practice. You can see, for example, scenes from surgery, and you can see the multiple monitors that the surgeon uses to access the different images and the different information. Now imagine you are a surgeon performing an operation: you have to focus on the operating field, on the area of the operation, and at the same time you have to look up at these monitors to get the imaging information that you need. As you might imagine, that can be a very challenging task. So we asked ourselves: is there a technology that can help us improve this situation? And yes, there is. Here you can see that augmented reality can help us reduce the number of monitors in the operating room, display the images directly in front of the surgeon, and show the images directly on the patient, where we need them. Also, since we have a lot of different images and a lot of data available at each moment, we can reduce this to only the essential information that is needed at each point in time. Finally, augmented reality gives us a very intuitive and sterile way of interacting with the data that we gather during surgery.
And as you can see here, the doctors are wearing special equipment all the time: special glasses. These glasses are right now the best-known devices for augmented reality, and in fact the most famous ones, shown here, are called the Microsoft HoloLens. But what exactly do they have to provide in order to give us this new technology? If you're wondering where the magic happens, the magic happens inside the glasses themselves: they have the power to display virtual objects. For example, right now I see the audience, but with the HoloLens I could also see a virtual person standing in the audience. The HoloLens can also detect hand gestures: for example, if I perform the air-tap gesture right now, the HoloLens knows I am selecting a button from the menu and carries out the action of tapping. It can also detect the environment: for example, the HoloLens knows where that bench is, or where the audience is, so it knows exactly where to position the virtual objects. Also, if our application uses markers and we want a virtual object attached to a marker, the HoloLens can detect that marker and show our virtual object at exactly that position. It is also able to track my position. For example, when I'm moving, I usually don't want my virtual objects to follow me, unless the application is designed that way. What we usually want is to place the virtual object as a 3D model in the real world, and then walk around it and see it from different angles and different sides. So we want our object to stay in its exact position. And last but not least, the HoloLens can also recognize speech. For example, instead of using the tapping gesture, you can use voice commands: say "tap", and the HoloLens will activate the button that is currently selected. So how does this correspond to the technology behind it? The HoloLens itself contains four cameras.
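The behaviour just described, a hologram staying fixed in the room while the wearer walks around, comes down to a change of coordinate frames: the object's pose is anchored once in world coordinates and then re-expressed relative to the headset's current pose at render time. Here is a minimal numpy sketch of that idea; all poses are invented example numbers, not HoloLens API calls.

```python
import numpy as np

def pose(R=None, t=(0.0, 0.0, 0.0)):
    """Build a 4x4 homogeneous transform from a rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = np.eye(3) if R is None else R
    T[:3, 3] = t
    return T

# Pose of a detected marker in the headset's camera frame, and the headset's
# own pose in the world frame (from its inertial/visual tracking).
T_world_camera = pose(t=(0.0, 1.6, 0.0))    # wearer at the origin, eye height
T_camera_marker = pose(t=(0.0, -0.3, 1.0))  # marker 1 m ahead, slightly below

# Anchor the hologram once: its world pose is fixed at the marker's location.
T_world_hologram = T_world_camera @ T_camera_marker

# Later the wearer walks around, so the headset pose changes...
T_world_camera_new = pose(t=(0.5, 1.6, 0.4))

# ...but the hologram is re-rendered relative to the *new* camera pose,
# so it appears to stay put in the room.
T_camera_hologram = np.linalg.inv(T_world_camera_new) @ T_world_hologram
print(T_camera_hologram[:3, 3])  # hologram position as seen from the new viewpoint
```

The key point is that `T_world_hologram` is computed once and never updated as the wearer moves; only the camera-relative pose changes every frame.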
We have four cameras here, plus a depth camera. Those cameras are used, as I said, to detect the environment and also to detect the gestures. Then, as you can see here, we have this glass visor, through which we see the environment around us, the real world. Behind it there is a second pair of lenses, which are actually used to display our virtual objects. Then we have the inertial measurement unit, which works as a kind of GPS for the HoloLens: whenever I change my position, the HoloLens can detect that. And last but not least, we have a microphone that can detect our voice commands, and speakers through which the HoloLens itself can give us audio feedback if necessary. So we showed you, in a nutshell, the technology behind the HoloLens. Now I want to show you the project that we did during the first semester of our studies. For that, we used two different kinds of imaging. The first kind of imaging was used to create a 3D model of the patient. Traditionally, doctors use X-ray; X-ray was one of the oldest imaging technologies used to create images of the patient. Nowadays, we have computed tomography, which combines X-rays with computational power. As you can see, we have an X-ray generator on one side and a detector on the other side. The rays go through the patient, and then, using computational power, we are actually able to take the patient's body and put it in the computer: we can reconstruct it either as a 2D image or as a 3D model, as you can see here in the image, and use it for our application. The second kind of imaging was the typical X-ray, and this is the information that the surgeon needs during the surgery: we want to know what is happening in the patient at a certain point in time. As you can see on the left side, we have this machine, a mobile X-ray machine called a C-arm, which generates the X-rays that pass through the patient.
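The reconstruction idea mentioned above can be shown in a toy form: each X-ray measurement is a sum of densities along a line through the body, and combining projections from several angles lets the computer localize structures inside. Below is a deliberately tiny, unfiltered back-projection sketch with a made-up 5×5 "patient" and only two angles; real CT uses filtered back-projection over many angles.

```python
import numpy as np

# A toy "patient": one dense structure hidden inside an otherwise empty body.
phantom = np.zeros((5, 5))
phantom[1, 3] = 1.0

# Detector readings: rays along rows (0 degrees) and along columns (90 degrees).
proj_rows = phantom.sum(axis=1)
proj_cols = phantom.sum(axis=0)

# Back-projection: smear each projection back across the image and add them up.
recon = proj_rows[:, None] + proj_cols[None, :]

# The brightest reconstructed pixel coincides with the hidden dense structure.
print(np.unravel_index(recon.argmax(), recon.shape))
```

With only two angles the reconstruction is blurry (whole rows and columns light up), which is exactly why a real scanner rotates the source and detector around the patient to collect many projection directions.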
And the surgeon needs to position the C-arm in such a way that he can see the information he needs. In the traditional procedure, the surgeon and the technician communicate with each other, using a specific vocabulary to describe how the C-arm should be moved in order to reach the exact position. We think that there might be, or must be, a better solution, one that gives the surgeon a completely sterile but intuitive way of interaction. So now we want to show you the pipeline we developed and, based on that, our solution. For our project, the pipeline is the following. After the surgery starts, the X-ray machine takes an image at the initial position. Then, at a certain point during the operation, the surgeon might decide to change the position of the X-ray machine. What he does is select a position on a 3D model of the patient; afterwards, the X-ray machine adjusts to exactly this position and generates a new X-ray. In the end, this X-ray is displayed to the surgeon again, so he can see the updated image. But let's have a look. Here you can see the virtual model of the X-ray machine and the virtual patient. Of course, in the real scenario, the X-ray machine will be a real C-arm, and the patient will of course also be a real patient. Then we have the X-ray on the left side that is displayed for the doctor; that is what he actually wants to see. And on the right side we have the 3D model of the patient and a blue cone, which basically tells us from which direction the X-ray is currently shot through the patient and the image is taken. So now we want to change this X-ray; we want to change the angle from which we shoot the rays through the patient. We take our hands, and, remember this X-ray, because now we're going to change it, we basically rotate this model with our hands. And once we're satisfied with the position, we click the apply button.
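The pipeline just described (initial image, then select a direction on the virtual patient, apply, re-image, repeated as often as needed) can be sketched as a simple loop. All class and method names here (`CArm`, `move_to`, `acquire_xray`) are hypothetical stand-ins for illustration, not a real device API.

```python
class CArm:
    """Stand-in for a motorized C-arm X-ray device."""

    def __init__(self):
        self.orientation = (0.0, 0.0)  # (tilt, rotation) in degrees

    def move_to(self, orientation):
        """Drive the device to the orientation the surgeon selected."""
        self.orientation = orientation

    def acquire_xray(self):
        """A real device would return image data; we return a label for the shot."""
        return f"x-ray at tilt={self.orientation[0]}, rot={self.orientation[1]}"

def surgery_loop(carm, selected_poses):
    """Take an initial image, then re-image each time the surgeon confirms
    a new direction on the virtual patient model with the apply button."""
    images = [carm.acquire_xray()]          # initial shot at the start of surgery
    for pose in selected_poses:             # each 'apply' click from the headset
        carm.move_to(pose)                  # device adjusts to the selected angle
        images.append(carm.acquire_xray())  # new image displayed to the surgeon
    return images

# Example: two re-positionings during one procedure.
shots = surgery_loop(CArm(), [(30.0, 0.0), (0.0, 45.0)])
```

The design point is that the surgeon only ever interacts with the virtual patient model; the translation into device motion happens behind the scenes, which is what keeps the interaction sterile.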
And as we can see, the X-ray device has changed its position, and the X-ray that is displayed is now a different one, actually taken from this new position. We can repeat this whole procedure as often as we want. Again we use one or two hands to rotate the patient model, and again, once we're satisfied with the position, we click the apply button. And again the X-ray has changed, and the device has also changed its position. We can repeat this as often as we want, and once we're finished, once we have seen everything we want to see, we can just stop the application and continue our surgery. So this was one example of the different kinds of applications and of how augmented reality can be integrated into the medical workflow. But maybe you're asking: are these solutions available nowadays? I can tell you that these solutions are still in research, and in my opinion, the reason they are still in research is that we still face some challenges. The first challenge is that they shouldn't make the workflow more complicated than it actually is; they should keep it as simple as possible. They should also be easy to use: we don't want doctors or nurses to have to spend extra time learning how to use these medical applications or the HoloLens itself. And last, which is the most important thing, because we're talking about medical applications, they should benefit the outcome of the surgery. When we're talking about benefiting the outcome of the surgery, we mean a reduced operation time, so we want shorter operations; we want less burden on the patient; and we want better results for the patient. When we achieve these goals, I would say that augmented reality will definitely move from research into production.
So if we go back to the first clip that we saw, we cannot say that we have recreated 100% of what we see in this scene from the movie. But we do have the interaction. The one thing that is different is that we need special equipment. So we can only ask ourselves: what will the future bring? Thank you very much.