Alright, so the next lightning talk, and the last one before we break for lunch, is SKA. An amazing scientific experiment, and we'll be hearing a lot about it, so, yeah.

Hi, hello. I am Igor Yulmaz, and today I'll be talking about clouds. Wait, that's not right. I'll be talking about space, in particular the SKA Observatory, where we are doing next-generation radio astronomy and very big data. And I know that mine is the talk keeping you from lunch, so I'll do my best to be quick about it.

So, we are trying to build one observatory, two telescopes, one in South Africa and one in Western Australia, on three continents. We are an intergovernmental organization, and you can see the member countries on the map. Our mission is to build and operate cutting-edge radio telescopes to transform our understanding of the universe.

So, what do I mean by that? Our first idea was to investigate neutral hydrogen in space, which is only visible at radio frequencies. For example, in these pictures you can see three apparently isolated galaxies, but if you look at them in hydrogen, you can see that they are interacting with each other. They are connected, maybe part of a larger structure, and that's what we wanted to investigate science-wise. So we are trying to look at hydrogen all the way from the present day, on the right-hand side, right back to very soon after the beginning of the universe on the left, with two telescopes because of the differences in radio frequency. And we need an incredibly large collecting area to gather this much data, which is where our name, SKA, the Square Kilometre Array, comes from. We are very good at naming things.

And with great collecting area comes great computing power needs, which is what I want to focus on today. Going a little more into the details of our data processing needs: we will be producing huge amounts of data. We are talking about petabytes or terabytes per second of raw data coming into our first signal processor facility, which then outputs nine terabytes per second of data to be processed in our science data processors. That then has to be shared with the actual scientists in the member countries over the internet, and we are aiming to saturate that link all the way.

If we want to talk numbers, that's more in line with the HPC scale. I have selected a couple of high-priority science objectives here; we have a wealth of them in total. If you focus on the first one, which I mentioned at the beginning, we are talking about over 40 petaflops of compute power just to process one of our science goals, which also means we will be generating a lot of data. A disclaimer here: I'm told by our scientists that all the numbers in this table are estimates with large error bars, but the truth is we will need a supercomputer or two to process this data in real time.

I did want to focus on these large numbers to highlight the end game, so you know that we are aiming for the stars. In reality, we are grounded, because we are ground-based radio telescopes, pun intended. So how do we do that? How are we developing our systems? We are doing it with cloud-native and open-source principles in mind, as you can see on the right-hand side.
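[Editor's note: to put the quoted rates in perspective, here is a minimal back-of-the-envelope sketch using the figures mentioned in the talk (9 TB/s into the science data processors, >40 PFLOP/s for one science goal); these are the speaker's estimates, not confirmed design numbers.]

```python
# Back-of-the-envelope scale check using the figures quoted in the talk.
# Assumes a sustained 9 TB/s into the science data processors; the talk
# stresses that these numbers are estimates with large error bars.

TB = 1e12                          # bytes in a terabyte (decimal)
sdp_ingest_rate = 9 * TB           # bytes per second into the science data processors

seconds_per_day = 86_400
bytes_per_day = sdp_ingest_rate * seconds_per_day
print(f"~{bytes_per_day / 1e15:.0f} PB per day at a sustained rate")   # ~778 PB/day

# Compute side: one high-priority science case alone needs >40 PFLOP/s,
# i.e. 4e16 floating-point operations every second, in (near) real time.
required_flops = 40e15
print(f"{required_flops:.1e} FLOP/s needed for a single science goal")
```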
You can see that we have a layered platform approach based on OCI images, to standardize all of our environments across the globe, because we will have these environments in each member country. We are focusing on web-based control and monitoring, which will also help with installation for hardware vendors because, as I said, we will need supercomputers, and we still don't know which ones. Another point is that we wanted a simplified API with a clear separation between layers, so that many different teams from many different cultural backgrounds can work on them together. And we want to resist the urge to do anything special, because the observatory will be operational for more than 20 years, so we will have to maintain the system for more than 20 years. We want to keep it simple.

Currently, the numbers are not that impressive. We have two main clusters for the main software development work. One is the CI/CD cluster serving 25 teams around the world; it also hosts our runners, BinderHub notebooks, the different integration and staging environments, cloud development environments, and our logging and monitoring stack. This is also partly replicated on an AWS cluster. And we have a data processing cluster with GPUs, FPGAs, and other specialized hardware, mostly used with very large nodes so that we can do exploratory and prototype science.

I need help with resource management, job queuing, multi-tenancy, all the topics that have been discussed here. So come find me, or I'll try to find you. So thank you. You can see my contact details, and if you want to hear more, Rohini will be giving a slightly more detailed talk on Thursday at 11, so come and join us there as well. Thank you.
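[Editor's note: as an illustration of the kind of multi-tenancy and resource-management question raised at the end, here is a minimal sketch, assuming a Kubernetes-based cluster and the official `kubernetes` Python client, of putting a hard resource quota on one team's namespace. The namespace name, limits, and GPU resource key are hypothetical examples, not SKAO configuration.]

```python
# Minimal sketch: per-team resource quotas on a shared cluster, assuming
# Kubernetes and the official `kubernetes` Python client. Namespace name,
# limits, and the GPU resource key are illustrative, not SKAO values.
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config() inside a pod
core = client.CoreV1Api()

quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="team-a-quota"),
    spec=client.V1ResourceQuotaSpec(
        hard={
            "requests.cpu": "64",            # total CPU the team may request
            "requests.memory": "512Gi",      # total memory the team may request
            "requests.nvidia.com/gpu": "4",  # cap on GPU requests (hypothetical key)
        }
    ),
)

# Apply the quota to the team's namespace.
core.create_namespaced_resource_quota(namespace="team-a", body=quota)
```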