That's okay. Good afternoon. My name is Travis Broughton. I work for Intel Corporation in the Open Source Technology Center, and today we've got a few users of OSIC, the OpenStack Innovation Center that we've opened and have been showing in our booth over there. We have a couple of the users who have actually been submitting jobs and have launched work on the cluster, and what we thought we would do this afternoon is have a little conversation about what types of jobs they're running and, basically, what sorts of things OSIC has enabled them to do. So with that, maybe you can introduce yourselves first.

I'm Joe Talerico. I'm an engineer at Red Hat doing OpenStack performance and scale work.

Hi, I'm Stig Telfer. I'm an OpenStack HPC contributor with Cambridge University.

I'm James Thorne, and I'm a cloud solution architect at Rackspace.

All right. So Joe, maybe you could tell us a little bit about what Red Hat has been doing in the OSIC cluster.

So we had the 132-node cluster, and we were using TripleO to do a deployment, to determine the scalability of TripleO and the issues we could run into with that. Once we had TripleO up, we have a project we're working on called Browbeat to do performance testing, monitoring, and validation of OpenStack at scale.

And Stig?

My allocation is a 66-node bare metal allocation, and I actually have it right now, but I can't use it for obvious reasons; I'll be starting on it next week when I get home. Our group comes from a scientific computing background, and we're looking at how we can use the OSIC cluster to test, at scale, an HPC-style formulation of OpenStack, use it to do performance telemetry of HPC applications, and see whether we can create an infrastructure that will really scale for gathering performance data.

Okay, great. And James, maybe you can tell us a little bit about how Rackspace and Intel are using OSIC ourselves to advance OpenStack.

Yeah, for sure. I'm definitely the oddball of this group, because I don't actually use the cluster myself; RPC at Rackspace built the Dallas cluster, and that's what these folks are using to run their tests. Between Rackspace and Intel, we're using the OSIC hardware to do things such as accelerating Ironic development, because that's something we want to incorporate into what we call Cloud1. For any of you who have requested access to an OpenStack project in the environment, you've probably gone to cloud1.osic.org; that's the part of the environment where we want to add Ironic in the future, so we can provision bare metal instances through an OpenStack project rather than the pure bare metal deployments these guys are using, which require a VPN connection and a much more manual process instead of allocating bare metal via Ironic.

And actually, that raises a good point. You guys are using bare metal, and we're also seeing some quota-based allocations within the OpenStack cloud. Can you talk a little bit about why you chose bare metal?

Yeah. First, to use TripleO we needed a bare metal allocation, and no one can really replace bare metal for us, especially with networking nuances: not having PortFast enabled and trying to have Ironic deploy onto that is just an absolute mess. So there's really no substitute for bare metal in this case.

The projects that I'm interested in, to do with scientific computing, are really about how we can get the benefits of software-defined infrastructure, OpenStack being the obvious example, and how we can get all that flexibility and all that capability without paying any overhead.
With a bare metal option we're really working at the infrastructure level, so we're creating the OpenStack deployment formulated to the application's needs, and then we're looking at how we can bring monitoring infrastructure back into that from the bare metal environment as well.

Okay. And do you have any insight into what some of the other users have been doing inside the cloud? What differentiates those use cases from bare metal?

Yeah, for sure. We have quite a few people using an OpenStack project right now. The largest user is the OpenStack infra team. The infra team has a bunch of different providers, whether that's Amazon, Rackspace public cloud, DigitalOcean, or their own gear, and there's a public-facing Grafana dashboard somewhere that shows all these different environments they use, with graphs detailing API response times, the time it takes to build instances, the number of instances they have. They came to us when they learned the OSIC environment was being built and asked for a large quota for an OpenStack project, so they're the largest user of that environment right now, and they're constantly spinning things up and down to gate various projects that need unit testing and that sort of thing. Next up, there's the OSA engineering team, for OpenStack-Ansible, which was founded within Rackspace; they use it to do all-in-one installs and tests as they push updates to the repos. The OSIC engineering team themselves also use the environment for all-in-one installs of OSA or whatever they may be working on. They only need a VM to do their testing with, not bare metal, so it's much easier for us to just create an OpenStack project for them, set some quotas, give them the environment, and let them go do their work. And then there's a handful of individuals doing very small-scale testing that don't require large quotas.
Great. And in terms of some of the goals of OSIC, we really want to influence and benefit the community and push things upstream. Can you talk about what you've encountered and what sort of upstream changes you're seeing and hoping to influence?

Sure. Some of the things we've driven with TripleO are Heat template changes, and even the tooling we use, like Rally. We launched 24,000 instances, but Rally only deleted 20 at a time; while it took less than an hour to deploy 20,000 machines, it took seven hours to delete all 20,000. So we had to patch Rally to make it configurable to delete more than 20 machines at a time. A lot of the work that we're doing is driven upstream.

Okay. And Stig, I know you haven't begun yet, but what are your plans in terms of upstreaming? I know you have a somewhat specialized use case; what are your plans for introducing this?

As much as possible. It's interesting what you're saying about OSA, because one of the things I'm definitely interested in evaluating is using OSA with Ironic, and there's a new playbook just in the final stages of formulation. I'm very interested in using that at scale, trying it out, and working through it as a byproduct of what we're really interested in doing, the end goal. I really want to try out all of these other new technologies that we just can't try on our test environment, because it's a little bit too small.

Okay. And what about community sharing? I believe when we spoke last week you mentioned a working group for HPC. What are your plans there in terms of sharing and publicizing these things?

We had a meeting today, and there's huge enthusiasm for how the different scientific researchers from across the world can gather together and really share their experiences: what's gone well, what's gone badly, how to do it right, that kind of thing. Everyone is really enthusiastic about how we do that, so the working group itself is going to be creating a lot of product in terms of knowledge rather than in terms of software or capability, and I'm really excited about that. I'm really pleased that we can bring people together and share this information. Of course, there's going to be a lot going upstream as well as we find it; coming from Cambridge University, we publish everything we can that's relevant. So we're very excited to be contributing.

Great. And for folks who might be interested in participating in OSIC, can you talk a little bit about the application process, and any tips you might have for getting something accepted and getting access to the cluster's capacity?

Yeah. When I first signed up, I think it was a Google doc; I think that's a more formal process now. But I just put in there the tools I was using and what I intended to use OSIC for, and then I think Andrew, or somebody from Rackspace, reached out to me and started the conversation.

Great.

Yeah, very simple: fill in a form, and within about a month I had the allocation, so it worked very smoothly. Very easy to do.

Great. Any other thoughts from what you've seen on the Rackspace side?
Yeah. If you're looking to get accepted, definitely have very clear and concise descriptions of what you're trying to do, what you want to accomplish, and how you're going to publish that; anything you can do to make that easier and clearer will help the approval process on your end. And if for some reason you get denied, it's no big deal: just apply again and it'll go through the process the next time around. Have more detailed descriptions of what you're trying to do, maybe examples of something you've done in the past; that helps the voting process that happens every two weeks.

Great. Well, I think we're going to throw it over to questions in a minute here. Do you guys have any final thoughts on your experience thus far, or any future plans you might want to pursue as follow-up work?

Yeah. At Red Hat and Rackspace, we do have intentions to do another large deployment with TripleO and try to scale it out even further, probably in the Newton time frame.
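The deletion bottleneck Joe described earlier comes down to simple arithmetic. A toy model of it, using the numbers from the panel (the per-delete latency here is back-solved from those numbers, not measured, and this is illustrative only, not Rally's actual code):

```python
import math

def wall_time_hours(instances, concurrency, secs_per_delete):
    """Wall-clock hours to delete `instances` servers when deletes run
    in fixed-size concurrent batches of `concurrency`, each delete call
    taking roughly `secs_per_delete` seconds."""
    batches = math.ceil(instances / concurrency)
    return batches * secs_per_delete / 3600.0

# With deletes capped at 20 concurrent calls, 20,000 instances at
# ~25 s per call comes out near the seven hours reported:
print(round(wall_time_hours(20_000, 20, 25.2), 1))   # 7.0

# Raising the cap to 200 concurrent deletes (assuming the control
# plane keeps up) would cut that to well under an hour:
print(round(wall_time_hours(20_000, 200, 25.2), 1))  # 0.7
```

This is why making the delete concurrency configurable, as Joe's team did in their Rally patch, matters so much at this scale: wall time falls roughly linearly with the concurrency cap until the API itself becomes the bottleneck.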
I think the experience so far has been so positive, it's gone so smoothly, that I can't imagine us not having another go. I can think of all kinds of projects where it could be a really useful resource to have, where we have this scale, which is many times bigger than what we can gather together in our test lab. So it's a really exciting thing to have.

Sure, great. Any closing thoughts?

Ironic is the biggest thing we're working on right now at Rackspace to get deployed into OSA, and we're of course leveraging the OSIC environment to do that, because we have had requests come in that only want maybe one to ten bare metal nodes, and right now, with the way we manage the clusters, it's not easy to just serve that up: we have to reconfigure the networking to make sure things are isolated from all the other environments in the cluster. Once we can get Ironic into the environment, into what we call Cloud1, we'll be able to meet those requests and have more people using the environment.

Thanks a lot, guys. Do you have any questions for the panel? Anyone? Right there: a question about training materials.

So if you go to osic.org, there is some material there on what's available. I don't think we have any processes for deploying... do we have processes for deploying? We do, yes. That's something we at Rackspace will provide to you: ways you can get deployed more quickly. Some people already have provisioning processes and don't need what we have, but for those that don't, I have personally provided support to various people to get their clusters up and running. You get a good, detailed inventory of your nodes and a network diagram, and that's usually enough to get started. In terms of osic.org, we're in the process of building the site, so some of that will be appearing there as well.

Other questions? All right. Thank you, everyone. Thanks, everybody; appreciate it.