Gentlemen, welcome. How are you doing? Good. Introduce yourselves, tell us who you are and where you're from. Stig, take it away.

Hi, I'm Stig Telfer. I'm working with Cambridge University; I'm an OpenStack HPC contributor there. My background is in HPC, going back through supercomputing and scientific research, but also a little bit in how we can do that with OpenStack. So that's what brings me here.

Very cool. Blair?

I'm Blair Bethwaite. I'm an HPC consultant at Monash University. We run our HPC on OpenStack at Monash, and that's why I'm here.

Well, cool. We're definitely going to talk a little bit today about high-performance computing, why it's cool, and what you can do with it on top of OpenStack. But before we get into it, let's go into an example. Stig, you were talking a little bit about how you first got involved with HPC: about potatoes, of all things, and the beauty of potatoes. Why are potatoes relevant to high-performance computing?

It's a good example because it shows that you don't really have to put science and scientific compute into a box. I was working on a machine that was a deep learning neural network machine, and it was using image processing to identify beautiful potatoes from ugly potatoes, so that a supermarket could take them and pack them as premium or economy. It was HPC. It had 28 processors in it. I don't know how many gigaflops it took just to process those images, but it was an HPC application that didn't involve physics or science to any great degree.

Absolutely brilliant. So you've already hit on the core principle here. You said it took 28 instances, right? So high-performance computing: what are we talking about? Give it to me in layman's terms. How would you explain to your mum or your dad what HPC actually is?

I think HPC is really anything that you can't do with your workstation or your laptop: you need a bigger machine. That's sort of the base definition. Other people branch off into many specialized areas and divide it up into high-throughput computing, high-performance computing where tightly coupled networks are required, that sort of thing. But really, you can make it quite a broad church, I think. There's definitely an intensive component to high-performance computing in whichever form it takes, so I guess it is computing that is stretching the limits of the system you're running on to get the most out of it.

Very cool. So talk to me: how big are we talking with HPC? We've got Monash and Cambridge University here. Obviously this is something that scientists really care about being able to do their work on, whether it's beautiful potatoes or, I know, you guys have chatted with NASA recently. What's happening at Monash and Cambridge is cool. But first, give me the size: how many servers are we talking about when you talk about your HPC setup on OpenStack?

It's hard to say at Cambridge, actually. We have a number of systems, and I've never actually rolled them all into one ball, but I think that's what we're going to be doing fairly soon.

Are we talking hundreds or thousands of computers here, to the average person?

Well, one of the systems we have is called Wilkes. It was the number two system on the Green500 list of the world's most energy-efficient supercomputers. So it's a big system, and it gets there through using a lot of GPU power.
That machine is, I forget how many nodes, but it's a couple of aisles in the data centre. So it's a big machine. We also do a lot of work with high-performance data analytics, and those systems are equally big. I mean, we have million-pound machines at a time.

So lots of money and very serious stuff, because obviously science is based on it. Blair, what about you? What are we looking at at Monash?

So, maybe a little bit smaller, but managed by OpenStack at the moment we have about 7,000 cores in total. The biggest single logical entity there is a new HPC cluster that we're just at the tail end of commissioning at the moment, called MASSIVE M3. That machine is maybe a little bit interesting because it also has a whole pile of GPGPU in it, and that actually makes up probably the bulk of the computational muscle in that machine.

So walk me through a little bit of the differences between, and I'm going to say this wrongly, the old HPC way of doing things, the grid-based systems, which have a very long history going all the way back to physics and the way they do their experiments, and now this new HPC-on-the-cloud thing. What's different? What's new about HPC on the cloud?

Flexibility is one of the key things. When you're supporting a diverse population of users, OpenStack enables you to bring together a lot of different requirements into a single system. For research computing, if we were buying individual machines on a rack-by-rack basis for one project and then another project, and not really sharing the resource, we lose out. We don't get the utilization that we could if we were to build it all into a private cloud and then manage it using the software-defined infrastructure that OpenStack provides. That's what it does for us.

Love it. Flexibility. Fantastic. Anything else?

Yeah, I think the other thing it does, if you're going to actually run a managed HPC service atop OpenStack, is lift your HPC operations up into the new world of DevOps and continuous integration and deployment. That's a really powerful thing, and it's very useful in the HPC community, which has previously run with quite static environments and very little change, but can no longer go on doing that. Particularly over the last couple of years we've seen a whole pile of big security issues, and it's no longer viable to just stay on the old kernel because that's what we run on this system.

HPC has this very technologically advanced image about it, but the system management is really very static, as you say. It hasn't really evolved in a massive way since whenever.

Yeah, in HPC, traditionally, the system management and configuration is pretty much 95% done at deployment of the node.

So cool. What I think is exciting about this, since we're on Superuser TV, is that it really boils down to how you're helping the researchers and scientists do their research better. So, as a bit of a game and a bit of fun, I'm hoping each of you can ask some questions of one another about your users and what they're doing. So take a minute, we can always do a cut here and then come back to it, but ask each other a couple of questions on these topics. Okay, oh well. Shake it up.
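Blair's point about software-defined infrastructure and DevOps-style operations is easiest to see in code. As a minimal sketch (not something shown in the interview), this is roughly what booting a compute node programmatically looks like with the openstacksdk Python library; the cloud name, image, flavor, network, and key pair names are all hypothetical placeholders:

```python
# Minimal sketch: boot an HPC compute node programmatically with openstacksdk.
# The cloud name "monash-cloud" and the image/flavor/network/key-pair names
# are hypothetical placeholders; adjust them to your own OpenStack deployment.
import openstack

# Credentials are read from clouds.yaml or OS_* environment variables.
conn = openstack.connect(cloud="monash-cloud")

# Look up the image, flavor, and network by name.
image = conn.compute.find_image("CentOS-7-HPC")
flavor = conn.compute.find_flavor("m1.xlarge")
network = conn.network.find_network("cluster-net")

# Create the server and wait until it reaches ACTIVE.
server = conn.compute.create_server(
    name="compute-node-001",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
    key_name="hpc-admin",
)
server = conn.compute.wait_for_server(server)
print(f"{server.name} is {server.status}")
```

Because a call like this can be driven from a CI pipeline, rebuilding a node on a patched image becomes a code change rather than a manual intervention, which is the shift away from static environments that Blair describes.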
We have everything, I guess, because we're sort of a typical campus HPC resource. On the one hand you've got users with some MATLAB code that is taking a long time to run on their desktop, or suddenly they want to run it 10,000 times for a parameter sweep or something like this, so they then need to scale that out on a cluster, or even just on other distributed resources. That's actually the most common use case. And then there are other people doing MPI or OpenMP type parallel programming, others doing visualization, instrument data processing. We've got the lot, really; the list goes on and on. [A minimal sketch of the parameter-sweep pattern appears at the end of this transcript.]

Quite a few users on your cloud as well. How many users are there, on average, on the Australian national research cloud?

Well, we run our infrastructure as part of the Nectar Research Cloud in Australia, which is a bit of a pioneering project. It started up in 2012 running OpenStack.

Almost ten data centres, right?

Yeah, there are eight nodes, actually.

Eight nodes, got it, yeah.

And in any given month, on the Monash node of the Nectar Research Cloud, we have about 350 active projects. And at the moment, broadly speaking, there are about 1,200 active projects across the whole research cloud.

Fantastic.

We have a very interesting project in bioinformatics, which I was just thinking about, looking at how we can bring that into an OpenStack environment in a way that is as performant as possible. How we can run a genomics workload using the latest, or the best, of the modern tools is of huge interest within our group in Cambridge, so that's something I'm going to be focusing on with the team there.

I'm really looking forward to that talk tomorrow, actually. I think that's something we're very interested in.

Brilliant. And for the people who aren't here, what's the best way to get involved, especially with HPC and the scientific community?

Well, within the foundation we've just created the Scientific Working Group. The working group is not necessarily a direct point of contact, but it's an advocacy for how we can help scientists and researchers who are looking at using OpenStack and bringing OpenStack into their installations. It's really about how we can share information, bring people together, and exchange ideas about the best ways of doing scientific compute, in all its varied forms, in an OpenStack environment. We're trying to give some structure to the hallway conversations that we invariably have at every summit, with more and more people becoming very interested in moving their scientific compute or HPC onto OpenStack, or somehow leveraging OpenStack alongside those other resources.

Fantastic. So it's as easy as going to Google and typing in scientific-wg for the Scientific Working Group; throw an OpenStack in there if you want to find it. Thank you both so much for coming to the conference, and thank you for putting forth your time to make this community initiative happen. Obviously OpenStack cannot do these things unless good people like yourselves bring people together under a common umbrella and try to change the world for the better.
That brings me to an interesting final point, perhaps, in that Nectar was quite pioneering back in 2012, going out and deploying OpenStack as a community cloud for the research sector in Australia, and it was the community model around OpenStack that really made us make that choice. We had a small group of people evaluating the options at the time. I was fortunate to be one of those people, and it really wasn't about any particular technology; it was about the community and the open-source model around it.

Absolutely. Fantastic. And it was a really great choice. Thank you both so much. We'll see you around the conference showroom. Excellent.
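Earlier, Blair described the most common use case on their systems: users who need to run the same code thousands of times for a parameter sweep, scaled out across a cluster. As a minimal sketch of that pattern (not code from the interview), here is a sweep distributed across MPI ranks with the mpi4py Python bindings; the simulate() function and the parameter grid are hypothetical stand-ins for a user's real workload:

```python
# Minimal parameter-sweep sketch with mpi4py. Launch with, for example:
#   mpirun -n 64 python sweep.py
# simulate() and the parameter grid are hypothetical placeholders.
from mpi4py import MPI


def simulate(param: float) -> float:
    # Stand-in for an expensive model run (a MATLAB job, a solver, etc.).
    return param ** 2


comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's index
size = comm.Get_size()   # total number of processes

# A 10,000-point parameter sweep, split round-robin across ranks.
params = [i * 0.01 for i in range(10_000)]
my_results = [(p, simulate(p)) for p in params[rank::size]]

# Gather every rank's partial results on rank 0.
all_results = comm.gather(my_results, root=0)

if rank == 0:
    merged = [item for chunk in all_results for item in chunk]
    print(f"collected {len(merged)} results from {size} ranks")
```

The same embarrassingly parallel pattern also works without MPI, for example as a scheduler job array, which is why Blair lists it separately from the tightly coupled MPI and OpenMP work.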