Yeah, thanks, Tim. So a few weeks ago, I had the opportunity to visit the University of Cambridge and not only met with their infrastructure team, who are working on OpenStack, but got to meet with some of their researchers who are using the OpenStack clouds. So Tim and CERN are working on one of the largest research projects in history right now, but we're about to hear from someone who might top that. From the University of Cambridge, please welcome Dr. Rosie Bolton. Hi, everyone. It's really good to be here to talk with you today. So I think it's fair to say that the LHC that Tim was talking about is one of the most exciting things that have happened in physics recently. I'm going to tell you about the most exciting thing in physics that hasn't happened yet, but it is going to happen. So the Square Kilometre Array project is a billion-dollar-plus project to build a radio observatory with a 50-year lifetime. I'm going to talk about the first phase of that. So the first phase of SKA will be finished in 2023 and will consist of two separate instruments, so two instruments, one observatory. These instruments will be in very remote desert sites. The first one will be an array of antennas spread out in clusters in the Western Australian desert, on baselines of up to 80 kilometers. Each of the little white specks is an antenna, an aerial, about the size of me. So that's in Australia. All these receivers work together as an interferometer. And the second instrument in the first phase will be an array of 197 dishes collecting radio signals, spread out across the South African Karoo Desert. Again, another interferometer. Now, SKA will be much more sensitive than any radio telescope that we have today, and that gives us access to lots of different science. So I want to whet your appetite with two examples of the science we can do with SKA. I have very little time to tell you more today, but do find me afterwards, look me up.
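Those long baselines are what set an interferometer's angular resolution. As a rough back-of-envelope sketch (the observing frequency here is my own illustrative choice, not an official SKA specification), the standard wavelength-over-baseline estimate looks like this:

```python
# Back-of-envelope: an interferometer's angular resolution is roughly
# lambda / B (observing wavelength over longest baseline).
# The 150 MHz figure below is illustrative, not an SKA specification.

C = 299_792_458.0  # speed of light, m/s

def resolution_arcsec(freq_hz: float, baseline_m: float) -> float:
    """Approximate synthesized-beam resolution in arcseconds."""
    wavelength_m = C / freq_hz
    theta_rad = wavelength_m / baseline_m
    return theta_rad * (180.0 / 3.141592653589793) * 3600.0

# A low-frequency observation on the 80 km maximum baseline:
print(round(resolution_arcsec(150e6, 80_000), 1))  # ~5.2 arcsec
```

So even at metre-ish wavelengths, spreading the antennas over 80 kilometers buys arcsecond-scale imaging, which a single dish of any practical size could not match at those frequencies.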
So if you have sensitivity, you can look deep into space. We can use the fact that if we can collect photons from very far away, we can see far back into the beginning of the universe. So this is a graphic showing how the universe is expanding. So down to the left-hand side, we're looking at the very early universe, and then it's expanding along. We can see lots of structure in the universe here. SKA will be able to probe right back to the time when the first stars were switching on. And we will use the fact that SKA is going to be designed with 65,000 frequency channels to distinguish different emission from different parts of the universe very clearly. So an overall goal is to build up a survey of around a billion objects in three dimensions, and we can look at how the structure in the universe has evolved over time and compare it to our cosmological models to see if it's behaving as we would have expected it to do, and if not, to amend our models. So that's looking deep. But also, if you have sensitivity, you can look quickly as well. And a nice tool for doing that is to look at pulsars. I hope many of you will have heard of pulsars, but essentially, a pulsar is a dead star spinning very, very rapidly with the same mass as the sun, spinning round at a very high angular rate, and often with a magnetic field misaligned to the spin axis, which means that it can send out a beam of radio emission rather like a lighthouse beam. If it happens to line up with us in Earth, then we see this rotating very clearly like a clock ticking along. Now, we already know about pulsars. We found several thousand of them. So the yellow points on this graphic show the pulsars that we've currently found in our own galaxy. You can see that they're actually clustered locally to our own location, which is the red circle at the top. With SKA, we predict we will find every single pulsar in our galaxy that is pointing towards us. 
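To illustrate why so many frequency channels matter for that three-dimensional survey: neutral hydrogen emits at a rest frequency of about 1420.4 MHz, and cosmic expansion redshifts it, so each observed frequency channel corresponds to a different epoch of the universe. A minimal sketch (the channel frequencies below are illustrative, not actual SKA channels):

```python
# Sketch: mapping observed 21 cm line frequency to redshift.
# Each frequency channel samples hydrogen at a different cosmic epoch,
# which is what turns a frequency axis into a distance (lookback) axis.
F_HI = 1420.405751  # MHz, rest frequency of the 21 cm hydrogen line

def redshift(f_obs_mhz: float) -> float:
    """Redshift z at which the 21 cm line is observed at f_obs."""
    return F_HI / f_obs_mhz - 1.0

for f in (710.2, 355.1, 142.0):  # illustrative channel frequencies
    print(f"{f} MHz -> z = {redshift(f):.2f}")
```

Fine channels mean fine slices in redshift, which is how a billion objects get placed in three dimensions rather than just on the plane of the sky.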
That gives us access to a whole array of surveys that we can do using the pulsars themselves as tools. Now, there are many different types of science here. I'm going to talk about just my favorite one, which is represented by this graphic here. So if we have a nice survey of pulsars, we can choose a few tens of the best ones, chosen so that they're spread out across the galaxy, and we can keep monitoring how their timing pips come in regularly. If a gravitational wave works its way across the fabric of our galaxy, for some of the pulsars on one side, the metric of spacetime between us and them will be squashed, and on the other side, it will be stretched. That means there will be an offset in the arrival times of the pips from one side relative to the other: we'll see one half of the sky coming in early whilst the other half is coming in late, and from that we'll be able to infer a gravitational wave rippling through the galaxy. I think that's pretty nice science. So that's all the science I have time for. Let's talk about why it's actually difficult to do. Now, as I said, the two sites are very remote; they're in the desert in South Africa and in Australia. But the processing centers are not in the desert. They are down in Cape Town and in Perth, so we don't have to run HPC out in the desert. The science data processor for SKA, which I'm working on, is the thinking brain of the SKA. It takes the signals that have been combined and then analyzes full data sets to ask, what is the model of the sky and the telescope that best fits these data? After the science data processor, we then have to pass the data to SKA regional centers, which will be globally distributed and will give scientists access to the data products to ask, was my experiment successful? How does it compare with the theory?
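The early-on-one-side, late-on-the-other picture described above can be caricatured in a few lines. This is a deliberately simplified toy, with made-up amplitudes, noise levels, and pulsar count; real pulsar-timing-array analyses fit the full quadrupolar (Hellings-Downs) correlation pattern rather than a simple sky split:

```python
import math
import random

# Toy model of the idea: a passing gravitational wave makes pulses from
# one half of the sky arrive slightly early and from the other half
# slightly late. All numbers here are illustrative.
random.seed(42)

A = 100e-9      # injected timing offset, 100 ns (made up)
NOISE = 30e-9   # per-pulsar timing noise, 30 ns (made up)

# A few tens of pulsars at random sky azimuths; the sign of the offset
# depends on which half of the sky each one sits in.
pulsars = [random.uniform(0, 2 * math.pi) for _ in range(40)]
residuals = [
    (A if math.cos(az) > 0 else -A) + random.gauss(0, NOISE)
    for az in pulsars
]

# Correlate the residuals against the expected early/late pattern:
# the signed average recovers the injected offset above the noise floor,
# because the noise averages down while the signal adds coherently.
signed_avg = sum(r * (1 if math.cos(az) > 0 else -1)
                 for r, az in zip(residuals, pulsars)) / len(pulsars)
print(f"recovered offset ~ {signed_avg * 1e9:.0f} ns")
```

The point of the toy is the averaging: no single pulsar is precise enough, but tens of them monitored together turn the galaxy into one big gravitational-wave detector.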
So to make an analogy, the SDP, the science data processor, is like the conscious mind of the telescope, and we then have to pass the products on to the regional centers, where the kind of cultural aspects and community comparisons can be made. Okay, so let me give you some numbers. I'm going to skip this slide; it's a bit too wordy. We have many challenges for the SDP, the science data processor. The first is complexity. We have multi-axis data sets. We have iterative, convergent pipelines that need to run, and we have to be able to predict how much time they will take to run. We need about half an exaflop of compute to do this. That's quite big. And we have to orchestrate the ingest, the processing, the control, the preservation and delivery of these data products, and we have to keep up with the incoming data. We are, of course, cost constrained. This is a publicly funded science project. The first phase has a budget of 650 million euros, and the science data processor will be around 10% of that in total, so we don't have lots of money to throw around. And we're also going to be power constrained for the same reasons. We can't afford to switch on all of the compute that we might need, so we need to make things much more efficient than our current assumption of 25% efficiency. We need to find ways of making things scale better. But we also have to design a system that allows for software and hardware refreshes over the 50-year lifetime. And when we think about the regional centers and the delivery of the products to the scientists, we have to consider which facilities might be available in national infrastructure projects as well, and how we build a federated system for that. So this is my summary slide. We have 400 gigabytes per second of data to ingest into the science data processor. Each graph of tasks for a six-hour observation will have around 400 million tasks in it. And we require around half an exaflop of peak compute in total.
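As a quick sanity check on those headline figures, here is my own arithmetic using only the numbers quoted in the talk (400 GB/s of ingest, six-hour observations, around 400 million tasks per observation graph):

```python
# Back-of-envelope checks on the summary numbers quoted above.
INGEST_GBPS = 400        # ingest rate into the science data processor, GB/s
OBS_SECONDS = 6 * 3600   # one six-hour observation
TASKS = 400e6            # tasks in one observation's task graph

ingested_pb = INGEST_GBPS * OBS_SECONDS / 1e6   # GB -> PB
tasks_per_sec = TASKS / OBS_SECONDS             # sustained scheduling rate

print(f"data ingested per observation: {ingested_pb:.2f} PB")  # 8.64 PB
print(f"tasks to schedule per second:  {tasks_per_sec:,.0f}")
```

So each six-hour observation ingests on the order of 8.6 petabytes of raw data, and the orchestration layer has to dispatch tens of thousands of tasks every second just to keep up, which is why the scheduling and efficiency challenges loom so large.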
This is my favorite: we need 1.3 zettabytes of intermediate data products for each six-hour data set. These are data that get created and then destroyed every six hours. And in terms of final products, we have a petabyte a day of science data products to deliver to the rest of the world. So hopefully, that sounds interesting. Thank you.