This presentation is dedicated to the Swedish national research infrastructure for data visualization, called InfraVis. My name is Emmanuel, and I am connected to various institutions and platforms of the university; I will explain the links briefly as I go. InfraVis has nine Swedish partner universities and is financed by the Swedish Research Council. The goal of InfraVis is basically to help users coming from Swedish academia initially, and then, as a next step, to see if we can also extend the help to industry. But already at this stage it is open to all Swedish universities, even those which are not partners within InfraVis.

The idea is to help users to visualize their data. This can involve, let's say, virtual reality, mixed reality, machine learning on the data in order to then arrive at a visualization, or working with text data or 2D and 3D data sets. There is also a set of visualization centres connected to some of these universities, for example the University of Gothenburg, where they have these big visualization domes.

This is basically a small map of all the experts, administrators and coordinators working in InfraVis. In total there are around 50 visualization experts. There is also a steering committee; one member is here on the spot in the audience today, while most of you are online. And there is a scientific advisory board, with members from MAX IV, the University of Bergen and the German climate computing centre. On top of these experts we also have a set of eight or nine node coordinators, and we basically try to work as a compact team, each of us knowing our own university.

At the university we have a set of connected platforms. There is the centre for image analysis, and we have the virtual reality laboratory, which is located in the basement of the science centre.
They also teach virtual reality to students there. Then there is LUNARC, the computing cluster, with the new COSMOS cluster. There is also an initiative which is a virtual platform between Denmark, MAX IV and the university. Then we have the Humanities Lab, and I will show the platforms in more detail with examples later. There are also experts with shared positions across these platforms.

You might wonder what InfraVis can do for you, so here are the key points. Basically, we can try to increase your research impact. We can help you write research grants and add the visualization part, which might help you construct a better application; it could also involve outreach to society as a third mission, which exists at all universities. And hopefully you get some further insights into your data through various data analyses and tailor-made visualization solutions that you might not have thought about before. When these are presented by the experts, you might think: okay, this could really help me in my research.

Basically, we can work across the whole spectrum: from data collection and creation, which I will speak about with a small example soon, to machine learning and AI on your data sets, to performing statistical data analysis, data visualization of course, and also interactive and collaborative analysis.

If we go back to data collection and creation, and think of, let's say, X-ray imaging and tomography: I need to point out that it is not that we spend five days with a user at a beamline at MAX IV collecting data. But it could, for example, involve us coming to visit during the initial days to make sure you have good contrast
in your images, and perhaps to give advice on the exposure time or the number of projections in tomography. If we can interact when you are collecting the data, and this does not only apply to X-ray images, it can also be an ordinary optical microscope, it can help our further work down the line. Because if there is not enough contrast, we might need machine learning to segment something; but if there is a lot of contrast, high resolution and well-chosen acquisition parameters, perhaps normal grey-scale thresholding is sufficient, so we can skip certain steps. This is also something that many facilities around the world try to achieve: a better interaction between the ones that analyse the data and the ones that acquire the data.

That is the X-ray side, but if you think of scraping words from, let's say, databases or applications, as in one InfraVis example where certain words were collected from social media and plotted, that is another type of data collection.

As an example of this kind of work, I want to present this project, which is not yet InfraVis-related but ran under a theme on food. There was funding from a Swedish funding agency. These users, Alexi and Mat from another university, had access to seeds from a seed bank. Usually they would do light microscopy or fluorescence microscopy, and these are 2D methods. They wanted to have a 3D representation of the seeds, and especially of a certain cell structure.
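As an aside from the transcript: the grey-scale thresholding mentioned above can be sketched in a few lines. This is an editorial illustration, not code from the talk; Otsu's method is chosen here as one standard way to pick the threshold automatically, implemented with plain NumPy on a synthetic well-contrasted image.

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Return the grey level that maximises between-class variance (Otsu)."""
    hist, edges = np.histogram(image.ravel(), bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2
    weights = hist / hist.sum()
    w0 = np.cumsum(weights)          # cumulative probability of "background"
    w1 = 1.0 - w0                    # cumulative probability of "foreground"
    mu = np.cumsum(weights * centers)
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_total * w0 - mu) ** 2 / (w0 * w1)
    between[~np.isfinite(between)] = 0.0   # ignore empty classes at the ends
    return centers[np.argmax(between)]

# synthetic high-contrast scan: dark background, one bright feature
rng = np.random.default_rng(0)
img = rng.normal(50, 5, (64, 64))
img[16:48, 16:48] = rng.normal(200, 5, (32, 32))
t = otsu_threshold(img)
mask = img > t                        # simple grey-scale segmentation
```

With good contrast, a single global threshold like this already separates the feature cleanly, which is exactly the situation where heavier machine-learning segmentation can be avoided.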
So we did a lot of scans with lab-based micro-tomography, and we realized that the contrast was not sufficient. We had scanned, let's say, the entire seed; we iterated this process many times and cut the seeds to make them smaller, the idea being to make higher-resolution scans. And we realized that we had to get more contrast. So we borrowed something from the biology world: we stained the seeds with PTA, which stands for phosphotungstic acid. Then we could see much better the structure in the cell layer that they wanted to see, because this is where they would find the so-called aleurone. We scanned a lot of samples, probably more than 50 or 70 different scans. So we ended up with a lot of data, and since we had stained it, it was difficult to apply classical image processing and segmentation. So we looked more into machine-learning-based methods.

This is Alexandros, who is also part of InfraVis, and he took a look at this data. We also hired two summer students who sat and annotated hundreds of these images, which were then provided to Alexandros so that he could refine the machine-learning-based segmentation methods.

Now, this example involved links around the university and other affiliations of ours, but in a parallel world, being able to work across here and suggest image acquisition parameters, or how to handle your data, is also something that InfraVis could do. Imagine that you had come with the data shown in the middle upper part; then we would have had to do a lot more work. Now there is quite good contrast and so on.
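The annotate-then-train workflow described above can be illustrated with a deliberately tiny stand-in: a nearest-centroid pixel classifier trained on a handful of hand-labelled grey values. The real project used a more capable machine-learning model; all names and values below are hypothetical.

```python
import numpy as np

def train_centroids(values, labels):
    """Learn one mean grey value per annotated class."""
    classes = np.unique(labels)
    centroids = np.array([values[labels == c].mean() for c in classes])
    return classes, centroids

def classify(values, classes, centroids):
    """Assign each pixel to the class with the nearest centroid."""
    dist = np.abs(values[..., None] - centroids)
    return classes[np.argmin(dist, axis=-1)]

# annotated training pixels: 0 = background, 1 = stained cell layer
train_vals = np.array([10.0, 12.0, 11.0, 200.0, 205.0, 198.0])
train_lab = np.array([0, 0, 0, 1, 1, 1])
classes, cents = train_centroids(train_vals, train_lab)

image = np.array([[9.0, 210.0],
                  [15.0, 199.0]])
seg = classify(image, classes, cents)   # per-pixel class labels
```

The point of the sketch is the division of labour: annotators provide `train_vals`/`train_lab`, and the model generalizes those labels to unseen pixels.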
Perhaps InfraVis will be involved if there is some further quantification, but right now the focus is on extracting numbers from these pores, for example quantitatively. Then they try to feed this information back to the industry so they can get a feel for how to grind down the seeds, let's say. So this is one example; there should be a publication about this in the newsletter quite soon as well.

Okay. This pyramid here shows you the type of support that you can get from InfraVis. On the first step, you can contact InfraVis via the homepage and send in a support request to the so-called helpdesk. This could be, for example, that you just ask a few questions about which software might be suitable; it might involve a few hours of support via email, let's say. In the middle tier, you can request free-of-charge support of up to 80 hours to get help visualizing your data and so on. If you want more than 80 hours of support, then you might need to apply through the larger call, the next step, which we have created and which should open sometime next spring. There you can apply for a larger project, basically in the same way that you send in an application to, let's say, a synchrotron or a neutron source. This could involve you getting hundreds of hours, maybe 300 hours; we will see.

These are some pilot projects that were done prior to this open call, and you can see across which disciplines InfraVis works: looking into climate change; helping users visualize online activities and actors, which is what I mentioned before about scraping information from social media; or visualizing models of airflow, let's say related to how cars drive or airplanes fly.
Other ones, which I will show in a little more detail, involve doing VR applications in the cloud without the need for a really strong stationary PC, and visualizing patients, let's say creating a digital twin of yourself and making modifications in terms of training and nutrition.

Here is an example, a pilot project that was supported, where they created a digital twin of this person. They first made a face scan, then they mapped the face onto the 3D model, and then they created this digital twin. This is a short video where they can actually adjust, let's say, how much this person is training and what the person is eating, and see how this affects the health in the digital world. Then you can try to understand how it would affect you in the real world if you followed these workout and eating suggestions.

If we switch to the X-ray world: I already presented one example where I gave some advice on the data acquisition. This user, for example, is from the 3D lab at the department of evolutionary biology. He has a setup for photogrammetry, which is basically a setup where you spin the sample and take high-resolution photographs of it, so that in the end you obtain a 3D surface scan of your sample. These types of scans take many hours, eight or more. The other week we made a small test at the Department of Biology, where they have a micro-CT scanner, and we scanned a bug of the kind he would usually scan with the photogrammetry machine. This scan took roughly five to seven minutes, and then we visualized it; it was quite quick, let's say. The goal is now to see whether this could work for them, because in some scenarios they might just want the 3D surface.
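The idea of keeping only the outer surface of a quick CT scan can be sketched as follows. This is an editorial illustration, not code from the project: it marks the surface voxels of a filled binary volume as those solid voxels with at least one empty face-neighbour (the volume is assumed to have an empty border so the wrap-around of `np.roll` is harmless).

```python
import numpy as np

def surface_voxels(volume):
    """Mask of solid voxels that touch empty space on at least one face."""
    interior = volume.copy()
    # a voxel is interior only if all 6 face-neighbours are also solid
    for axis in range(3):
        for shift in (1, -1):
            interior &= np.roll(volume, shift, axis=axis)
    return volume & ~interior

# a filled 6x6x6 cube inside a 10x10x10 volume, like a solid CT scan
vol = np.zeros((10, 10, 10), dtype=bool)
vol[2:8, 2:8, 2:8] = True
shell = surface_voxels(vol)   # only the outer shell remains
```

From a shell like this one could go on to build a triangle mesh (e.g. with a marching-cubes routine) and export it for the online viewing platforms mentioned in the talk.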
Seeing the inside might not be needed; and if doing it with photogrammetry takes so many hours while you can do it so quickly with X-rays, you can just fill the void inside, extract the 3D surface, and then he would like to upload it to the online cloud platforms that he uses, spin it around and show it for evolutionary studies.

Going back to the more in-depth support projects: there was a call that ended in June, and 45 applications were received. Some of the universities are InfraVis partners, some are not, but this does not matter; it is open to all Swedish universities. You can see that the applications span from humanities to natural science, social science, biology, engineering science and medicine.

We will take a look at one of these applications, because one of them is actually related to X-ray tomography. Here it is Lena from the university, together with colleagues, and they sent in an application, I think five pages, describing what they need help with. They have extracted these so-called microfossils from cores drilled hundreds of metres down in the sea between Denmark and Sweden. They can extract these microfossils from the sediment, and then they have scanned them: hundreds of scans, both at synchrotrons in Europe and at SPring-8 in Japan. These microfossils have small pores, and these pores are related to how much oxygen and what pH these animals experienced during their lifetime. So you can actually relate the shape and the structures of these microfossils to the climate, and see how they developed over long time spans, so you can go back in time. I can also mention that Lena is part of another theme called Environment and Climate, where she is a core group member of working group one. And we have now started this project.
Five InfraVis experts from across the universities have been assigned. The data will be uploaded in a few weeks, and we will see whether or not we need machine learning to separate these samples from each other and also to refine certain segmentations of these microfossils. We also have some ideas on how to further visualize this, and perhaps even build a nice interactive virtual experience going from the sampling site to the lab and onwards.

Then, something we have worked with both internally and externally is to better explain the skills of all the experts in InfraVis. Up to now we have had a so-called competence map. Here, on the Y-axis, you have the list of all the InfraVis experts, and they have checked which skills they have with respect to certain programs, machine learning, visualization tools and so on. This is quite difficult to get an overview of; you might need to sum up the numbers and so on.

So we have now designed a resource called skill cards, where you can see what skills a certain InfraVis expert has, which example projects they have worked on, and so on. For my part, I like to work with tomography, where I can scan certain samples for educational purposes so that people can train in tomography and also in reconstruction. In some applied projects I have quantified, let's say, defects in steel for Swedish companies, so that we can extract numbers on this with respect to all the samples.
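Tallying up a competence map like the one described, experts on one axis and skills on the other, is the kind of summing the speaker mentions. Here is a small hypothetical sketch (the expert names and skill list are invented) of how those overview numbers fall out of the checked-box matrix:

```python
import numpy as np

# Hypothetical competence map: rows = experts, columns = skills,
# 1 = the expert has checked that skill.
skills = ["VR", "machine learning", "tomography", "GUI design"]
experts = ["A", "B", "C"]
comp = np.array([
    [1, 0, 1, 0],
    [0, 1, 1, 0],
    [1, 1, 0, 1],
])

per_skill = comp.sum(axis=0)    # how many experts cover each skill
per_expert = comp.sum(axis=1)   # how broad each expert's profile is
coverage = dict(zip(skills, per_skill))
```

A skill card is essentially one row of this matrix presented on its own, together with example projects, which is why it is easier to read than the full map.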
The other person is Alexandros, whom I mentioned before; this is his skill card, and he has a background in mathematics. In the example here on the left side, also from the university, a set of annotated samples was provided, and he managed to run them through the machine learning network that he set up and to segment the features quite nicely. I can also mention that Alexandros is now quite overbooked, let's say, with similar data sets coming from the synchrotron world, because there you have such high resolution and the contrast mechanisms might be more tricky, so you really see that machine learning can make a difference.

This is Carl, also from the university, and he is very skilled in creating graphical user interfaces, especially when it comes to spectroscopic imaging. He has been involved in many different projects and is now also gaining further experience in X-ray spectroscopic imaging.

And this is Jonas, based at BMC. A few years ago I gave him this segmented sample, shown on the left side; he took that, made an STL file, and then created a virtual model of a future beamline at MAX IV. Basically, the user can walk to the beamline and try to understand scanning, reconstruction and visualization, and if you press a button, the scanned sample appears at full scale in the room and you can go in and explore it.

In another example, he got data from the group of Sebastian at the bioimaging centre, which is a confocal image; it looks a little flat, but it is actually a 3D image. He managed to convert that into a cinematic rendering and actually made a movie of it. So this is, let's say, the user exploring the blood vessels in the virtual world.
So this is another way of working: you get some real data from one modality, and then you create a new model in the virtual reality world. Then you can explore it for as long as you want, exploring the different blood vessels and channels and so on, and perhaps you will see some effect on a blood vessel, maybe some defect or something that you would otherwise never have noticed.

From the LUNARC computing cluster at the university, we have one colleague who works with modelling and also with reconstruction of buildings, and he has, for instance, been involved in this project, which I mentioned briefly before. There is a user from the university who has developed a virtual reality program and platform. They would usually run it on a stationary PC, and it requires a lot of computational resources. What they did under this pilot was to go from the upper part here, where you have a stationary PC connected to a headset, to a situation where the virtual reality world is rendered on the cluster and then streamed directly to the VR goggles. So you personally do not need a really strong stationary PC. Gunther from the design lab was also involved in this one, as you can see in the lower right corner; he also works with, let's say, applications for educating pilots and drone pilots.

I thought we should take a look at this example, which comes directly from the user and their virtual reality application. This could be demonstrated on their stationary PC or, let's say, in the cloud on the LUNARC cluster; we do not know which type of streaming is used here, but at least this shows you how they can interact with data. They are actually exploring various cell parts, and as you can see, it is quite amazing.
Then there is a comparison from the virtual reality laboratory at the university. As you can see here, they have a set of users in the virtual reality lab exploring various scenarios, and you can see the level of detailed interaction that you can build into an application if you need it.

Colleagues from the same lab worked on some data. In the lower right corner, these are micro-tomography data sets of a butterfly from users at the university; the butterflies were scanned before being handed over. They are mostly interested in the body of the butterfly, so they are not so interested in, let's say, the wing parts and so on, and they need to clean those up. What the colleagues did was to design a spherical eraser tool so that, with the help of the VR controller, the user can go in and actually segment away the parts that they do not need.

We have a small video of that as well. Here comes the virtual controller with the spherical feature, the red part; now I think we can see it a little bit. I will switch to the next view. There you see how you erase away the parts that you do not want. You can think of it like this: for certain applications, you might realize that if you have hundreds of data sets, you can use machine learning to remove certain things automatically. If you have fewer samples, perhaps a tool like this one could be user-friendly enough to let the user segment away the parts they do not need, keeping, say, more of the head part.

Another area of expertise is the Humanities Lab, where they explore interaction with humans and the human body.
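The spherical eraser described above can be approximated in a few lines: given the controller position as a voxel coordinate, zero out every voxel within a chosen radius. This is a hedged sketch of the idea, not the lab's actual implementation; the function name and the coordinate convention are assumptions.

```python
import numpy as np

def erase_sphere(volume, center, radius):
    """Zero out all voxels within `radius` of `center` (controller position)."""
    zz, yy, xx = np.indices(volume.shape)
    dist2 = ((zz - center[0]) ** 2 +
             (yy - center[1]) ** 2 +
             (xx - center[2]) ** 2)
    out = volume.copy()
    out[dist2 <= radius ** 2] = 0    # erase everything inside the sphere
    return out

# toy volume standing in for the butterfly scan
vol = np.ones((20, 20, 20))
cleaned = erase_sphere(vol, center=(10, 10, 10), radius=3)
```

In the VR tool, the same masking would run repeatedly as the user sweeps the controller, so unwanted regions like the wings are carved away interactively.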
They look at how the muscles or legs might be moving, let's say. Now, over to the Uppsala node: they were actually the first node to make a video about their node, and we will take a look at that one. It is about a five-minute video; let's just be silent while it is playing.

[Video] This is the Ångström Laboratory, one of the largest campuses at Uppsala University, and home to thousands of researchers and students. We are going to talk to the node coordinator at Uppsala University for the national research infrastructure InfraVis. Welcome to InfraVis Uppsala.

Scientific research in a growing number of fields relies more and more on the analysis of large amounts of data. Modern visualization techniques can provide a greater level of understanding of the science behind the data, revealing features and behaviours that would otherwise be difficult to see. Researchers receive support through a national helpdesk, where an application expert is assigned to their project. The support typically includes analyzing the data, selecting suitable software, and scripting and tools. In my role as application expert, I can help visualize research data in various scientific domains, for example automatic visual inspection in manufacturing, healthcare, and life science. I provide guidance, technical support and training in visualizing and understanding large volumes of data, as well as assessing the quality of data and extracting insight from data analysis.

At the English Park campus, behind the stunning humanities theatre, is the Centre for Digital Humanities. There are a lot of scientific visualization tools for the arts, humanities and social sciences: we have graphs, maps, 3D models and so on. But in the humanities it is often the case that the visualization is not the end product; the purpose is to think with images.
For example, a 2000-year-old narrative, visualized, might help us reflect on migration, transformation and environmental change of place and space.

An important mission for InfraVis is to provide user training. We offer introductory workshops in different software and tools, as well as more advanced program sessions. On the entrance floor of the Ångström Laboratory there is a venue specially prepared for visualization training. All scientific disciplines can take advantage of modern visualization to transform large, complex data sets into visual forms that enhance human interpretation. It is used either as a general analytical tool throughout the research process or to present results. Here are three examples of projects that have really benefited from visualization.

Combining visualization and surgical planning reduces time spent in the operating room. A system developed by the Centre for Image Analysis provides surgeons with virtual 3D environments of patient-specific anatomical models in order to plan surgical procedures. We have used the system for visualizing and planning reconstructive jaw surgeries, for soft-tissue reconstruction and for complex trauma cases. Virtual surgical planning is present in all reconstructive cases we do today.

The university library contains thousands of old handwritten documents that are gradually being digitized, thus making large collections of handwritten material easily accessible and searchable for everybody. The aim of our research is to do automatic transcription of handwritten documents, and we train our algorithms by using the images of the documents and man-made transcriptions. By visualizing every part of this process, we can also improve the different algorithms that we are using.

Life scientists use modern techniques to sequence RNA and DNA directly in the original biological tissue. This allows for a precise link between the genetic information and its cellular location.
A research group in Uppsala has developed the TissUUmaps tool for interactive visualization and exploration of millions of data points, overlaid on high-resolution images of the tissue samples. InfraVis experts provide state-of-the-art visualization for any scientific domain, as well as equipment, support and training. We can help you explore your research data. [End of video]

Yes, that was a very cool video from the Uppsala node; you can also find it on YouTube and on the InfraVis homepage. We are now basically coming to the end of the talk. This is the current page you reach if you go to infravis.se right now. You can sign up to receive the InfraVis information emails about what is going on, and you can keep yourself updated on various events and calls for applications, let's say the large ones where you might need hundreds of hours of work; that call should be published sometime next spring. You can also, right now, fill in your needs: if you have visualization needs, you can receive up to 80 hours of support for free. When you sign up, it is also nice if your project can, at the end, be showcased on the InfraVis website, so that it becomes PR both for the user and for InfraVis, and the Swedish Research Council, which supports this, can later see how much InfraVis has helped Swedish researchers. So it is a win-win for everyone.

In approximately two weeks or so, a new InfraVis website will be launched. So if you go in a few weeks, you will see the difference: there will be a lot more information, with details on the experts, upcoming events and news and so on, and you can read project stories in detail; some of them I showed already, but there will be more. So you can either type in the address or scan this QR code. And with that, I thank you for listening.
As a last word, I always like to show this image in the upper right corner, with its message: it basically shows that if you give us, speaking from my position in image analysis, a little bit more time with an image, we can actually create quite some magic from the data. Thank you for listening.