So, Lauren, do you remember who the first Superuser Award winner was? Well, of course, it was the CERN team. It's hard to believe it's been two years since they won it in Paris. Wow, okay, so I think that's Tim Bell. I've been talking about him lately, but you know, you could say that OpenStack is synonymous with science, and science and OpenStack go together. OpenStack was founded by Rackspace and NASA, so you could say that scientific research is really part of the DNA of the community. Amazing. Well, without further ado, I will introduce you to Tim Bell from CERN to tell us what they're up to these days.

Hello, and thank you for the chance to come along and give you an update on where we are with OpenStack at CERN. If you were driving between Geneva and the Jura Mountains, you might go past this strange globe. This is a CERN conference center, but behind it are the ATLAS experiment control buildings. These are the surface buildings that sit a hundred meters above the largest machine on Earth: the Large Hadron Collider, 27 kilometers around, with four experiments. We fire beams of protons around it in opposite directions and then collide them at the experiments. Of the four experiments, this one is CMS, which stands for Compact Muon Solenoid. It's a bit of a strange term, given that it weighs 14,000 tons, to call it compact. So when we fire these beams of protons round, what do we get?
We get around one billion collisions every second. Each beam has bunches of around a hundred billion protons. They pass through each other at the experiments, and out of that we get simultaneous collisions occurring inside the experiments. This is one of the things driving the computing needs: we have to be able to handle all those collisions and then separate them out into distinct collisions.

But CERN isn't just the Large Hadron Collider. I have the honor of having an antimatter factory just down the road from my office. There, we take antiprotons and positrons (anti-electrons), slow them down, put them into orbit around each other, and create antihydrogen. This allows us to study questions like: does antimatter go up or down under gravity? We also host at CERN the control center for the AMS experiment, which is mounted on the outside of the International Space Station, looking at the solar wind and particles from space without the problem of them having to come through the atmosphere.

2016 has been a great year for the LHC. We've had extremely good performance; the beam has been very successful in staying in for extended periods of time, which leads to more collisions. We've got about half a petabyte a day coming in at the moment, and with this we're accumulating more: the data stores currently hold about 160 petabytes in total. But looking out, when we consider how we're going to be disentangling these collisions from each other, we're looking at about sixty times larger compute capacity required by 2023. Moore's law will leave us about a factor of five short of that, even if we manage to keep it going. So how are we looking to address this need for scalability? We started production with OpenStack in 2013.
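The capacity gap quoted above can be checked with some back-of-the-envelope arithmetic. This is a minimal sketch, assuming a 2016 baseline year and a two-year Moore's-law doubling period (neither figure is stated explicitly in the talk):

```python
# Back-of-the-envelope check of the compute-capacity gap:
# ~60x more compute needed by 2023 vs. what Moore's law alone delivers.
# Assumptions (not from the talk): baseline year 2016, doubling every 2 years.

def moores_law_growth(years: float, doubling_period: float = 2.0) -> float:
    """Capacity growth factor from Moore's-law doubling alone."""
    return 2.0 ** (years / doubling_period)

needed = 60.0                               # ~60x more compute needed by 2023
growth = moores_law_growth(2023 - 2016)     # ~11x from seven years of doubling
shortfall = needed / growth                 # the gap left to close by other means

print(f"Moore's law gives ~{growth:.0f}x, leaving a ~{shortfall:.0f}x shortfall")
```

Under these assumptions, seven years of doubling every two years yields roughly an 11x improvement, leaving a shortfall of roughly 5x, which matches the "factor of five" quoted in the talk.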
At the Paris summit in 2014 we had 70,000 cores; we're now at 190,000, which is roughly 90% of the compute capacity at CERN running on top of OpenStack. We do migration of long-running service VMs, around 5,000 this year, and we're currently putting in place the process to add around another hundred thousand in the next six months. So with this, we have to have a platform that is scalable and that allows us to grow.

But at the same time, the users are looking for more functionality, not just more capacity. So we've been looking at containers, and the users have been very enthusiastic about reworking their applications for microservices. We've also had a number of collaborations, with Rackspace and with the European Union's INDIGO-DataCloud project, to try and work out how best to apply containers to science. We've used the OpenStack Magnum project. This is attractive for us because we can use the existing OpenStack infrastructure: our security arrangements, our capacity planning, our accounting. I just add Magnum as additional functionality, rather than having to do the same thing separately with Mesos, Kubernetes, and other technologies.

But at the same time, we have to look at how we can grow, and we've been looking at public clouds here. For a couple of years we've been running Large Hadron Collider workloads on public clouds. We've tried around ten in total, and the vast majority of these are OpenStack-based. What this allows us to do is take the in-house tooling that we've been using for the on-premise cloud and use the same tooling for running on the public clouds.

So thank you very much for all of your help. With communities like this, working groups like the Scientific Working Group, and the Large Deployment Team, we're going to be able to take on the computing challenges of CERN's experiments going forward. Thank you.