Live from Vancouver, Canada, it's theCUBE. Covering OpenStack Summit North America 2018, brought to you by Red Hat, the OpenStack Foundation, and its ecosystem partners.

Welcome back, I'm Stu Miniman with co-host John Troyer, and you're watching theCUBE, worldwide leader in tech coverage, and this is OpenStack Summit 2018 in Vancouver. Happy to welcome to the program first-time guest Melvin Hillman, who's a governance board member of OpenLab, which we got to hear about in the keynote on Monday. Thanks so much for joining us.

Thanks for having me.

All right, so Melvin, start us off with a little bit about your background, what brought you to the OpenStack community, and we'll go from there.

Sure, yeah. My background is in Linux system administration, and my getting involved in OpenStack was more or less seeing the writing on the wall as it relates to virtualization, and wanting to get an early start in understanding how things would pan out over the course of some years. So I probably started with OpenStack three or four years ago; I was later to the party than I wanted to be, but through that process I started working at Rackspace, and that's how I really got more involved in OpenStack in particular.

Yeah, you made a comment, the writing on the wall for virtualization, explain that for a sec.

So for me, I was at a shared hosting company, and we weren't virtualizing anything, right? We were using traditional, dedicated servers, installing hundreds of customers on those servers. At one point, what we started doing was taking a dedicated server and creating a virtual machine on it that used most of the resources of that dedicated server. What that allowed the shared hosting to do was tear stuff down and recreate it, but it was a very manual process.
And so of course, for infrastructure-as-a-service and the orchestration around it, OpenStack was becoming the de facto standard way of doing it. I didn't want to try to learn it on my own or hack something up internally; I wanted to go where OpenStack was being developed, where a lot of people were working on it in their day-to-day jobs, which is why I went to Rackspace.

Okay, one of the things we look at, this is a community here, so it takes people from lots of different backgrounds, and some of them do it in their spare time, some of them are paid by larger companies to participate. So tell us about OpenLab itself and how your company participates there.

Sure. Like I said, I'm at Huawei now, but I started at Rackspace, and that's kind of how I got more involved in the community. There I started working on testing things above the OpenStack ecosystem, right? Things that people want to build on top of OpenStack. During that process, Huawei reached out to me and was like, hey, you're doing a great job here, and I was like, yeah, I would love to come and explore how we can increase this activity in the community at large. And so OpenLab was essentially born out of that. The OpenStack community delivers the OpenStack APIs and kind of stops there; everything above that, you do on your own, more or less. And as chair of the user committee, being more concerned about the people who are using this stuff, OpenLab was a way to facilitate my having access to hardware, and access to people who are using things outside of OpenStack, use cases, et cetera, where we want to test more integrated tools working with OpenStack and different versions of OpenStack. That's essentially what OpenLab enables. In OpenLab, projects come together and, basically, it's an interop lab; the networking world has had Interop and plugfests for a long time.
But in essence, projects come together, you invite them in, and they integrate and start to test. Starting with, for this release, Terraform and Kubernetes.

Kubernetes, yeah. A lot of people want to use Kubernetes, right? And as an OpenStack operator, you don't necessarily want to go and learn all the bits of Kubernetes, but you want to use Kubernetes, you want it to work seamlessly with OpenStack, and you want to use the APIs you're used to using with OpenStack. So we work very heavily on the external cloud provider for OpenStack, enabling Cinder v3 for containers that you're spinning up in Kubernetes so that they have seamless integration. You don't have to try to attach your volumes; they are automatically attached. You don't have to figure out what your load balancing is going to look like; you use Octavia, the load balancing service for OpenStack, which is very tightly integrated, and as you spin things up, they work as you would expect. And so all the legacy applications and all the things you're used to doing with OpenStack, you bring onto Kubernetes, and you essentially do things the way you've been doing them before, with just an additional layer.

Yeah, Melvin, I wonder if you can talk a little bit about the providers and the users. How do they get engaged? Give us a little flavor around those.

Yeah, so you get engaged by going to openlabtesting.org, and there are two options. One is you can test out your applications and tools by clicking Get Started and filling that out. What's great about OpenLab is that we actually reach out and talk with you, we consult with you, per se, because we have a lot of variation in the hardware that's available to us.
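The Kubernetes-on-OpenStack integration Melvin describes can be sketched with a pair of Kubernetes manifests. This is a minimal illustration, not OpenLab's actual test configuration: the `csi-cinder` StorageClass name and the `demo` labels are hypothetical placeholders, and the exact class name depends on how the Cinder CSI driver is deployed in a given cluster.

```yaml
# A PersistentVolumeClaim backed by an OpenStack Cinder volume.
# The volume is created and attached automatically by the Cinder CSI driver.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: csi-cinder   # assumed name; depends on the CSI deployment
  resources:
    requests:
      storage: 10Gi
---
# A Service of type LoadBalancer; with the OpenStack cloud provider,
# this request is fulfilled by an Octavia load balancer.
apiVersion: v1
kind: Service
metadata:
  name: demo-svc
spec:
  type: LoadBalancer
  selector:
    app: demo
  ports:
    - port: 80
      targetPort: 8080
```

From the operator's point of view this is the "additional layer" Melvin mentions: the manifests are plain Kubernetes, while volumes and load balancers land on the OpenStack services underneath.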
And so we want to figure out the right hardware you need in order to run the tests you want, so that we can get the output, as it relates to that integration, that will educate and inform the community at large about whether or not it's working and has been validated. And if you're a person, or a provider for example, who wants to support OpenLab, you click on the Support OpenLab link, fill out a form, and tell us: do you want to provide more infrastructure? Do you want to talk with us about how clouds are being architected, how integrations are being architected, things you're seeing in open source use cases that may not be getting the testing they need, where you're willing to work with engineers from other companies?

So individual testers, and then companies who may bring a number of testers together around a particular use case. You're starting to publish some of the results of interop testing and things like that. How does OpenLab produce its results? Is it eventually going to be producing white papers and things like that, or dashboards, or what's your vision there?

Yeah, so we're producing dashboards. We have a very basic dashboard right now, but we're working with the CNCF; if you go to cncf.ci, they have a very nice dashboard that shows a number of projects and whether or not they work together. And it's open source. So what we want to do is work with that team to figure out how we change the logos and the Git repos that are driving those red and green success or failure icons, so that they're relevant to the tests we're doing in OpenLab. So yes, we definitely want a dashboard where it's very easy to decipher which tests are failing or passing.

Looking forward, what kinds of projects are you most interested in getting involved?

Right now, very much Kubernetes, of course. We're really focusing on multi-architecture.
Again, as a result of our work with Kubernetes and driving full conformance and multi-architecture, that's kind of the wheelhouse at this time. We're open for folks to give us a lot of different use cases. We're starting to look at some edge stuff: how can we participate there? We're starting to look at FPGAs and GPUs. We don't have full integration in a lot of different areas just yet, but we are having those conversations.

So actually, I spent a bunch of years, when I worked on the vendor side, living in an interop lab. And the most valuable things were not figuring out what worked, but what broke. So as you're working through this, what learnings do you share back with the community, both the providers and users? Big stumbling blocks where you can give people a red flag, or say, you know, avoid these types of things?

Yeah, exactly what you just said. What's good is that some of our infrastructure is geographically dispersed, so we can start to talk about what the latency looks like. Within the few square miles where you're operating, things may work great, but when I'm sending something across the water, is your product still moving quickly, or is the latency so bad that I can't create a container over here because it takes too long? That's one example of watching something fail. As well, we're talking with the Octavia folks to see: if I spin up a lot of containers, am I therefore going to create a lot of load balancers? And if I create a lot of load balancers, am I creating a lot of VMs, or am I creating a lot of containers? Are things breaking apart? So we need to dig a little further to understand what is and is not working with the integrations we're currently working on. And then again, we're exploring; vGPUs just landed, more or less, that was part of the keynote as well. And so now we're talking about, well, let's do some of that testing. The software, the code, is there, but is it usable?
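The cross-region latency testing described above could be sketched with a simple timing harness. This is a hypothetical illustration, not OpenLab's tooling: `provision` and `teardown` are placeholder callables standing in for real cloud operations, such as creating and deleting a server via openstacksdk or a container via the Kubernetes client.

```python
import statistics
import time


def measure_latency(provision, teardown, runs=5):
    """Time repeated provision/teardown cycles and summarize latency.

    `provision` creates a resource and returns a handle; `teardown`
    destroys it. Both are placeholders for real cloud API calls.
    Returns per-run timings in seconds plus the median and worst case,
    which is what you'd compare across geographically dispersed regions.
    """
    timings = []
    for _ in range(runs):
        start = time.monotonic()
        resource = provision()
        timings.append(time.monotonic() - start)  # only the create is timed
        teardown(resource)
    return {
        "runs": timings,
        "median": statistics.median(timings),
        "worst": max(timings),
    }
```

Running the same harness against a local region and a remote one makes the "across the water" comparison concrete: if the remote median or worst case blows past an acceptable threshold, that integration gets flagged.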
And so that's one area we want to start playing around with.

Okay, one of the other things the keynotes mentioned was Zuul, the CI/CD tool. How's that fitting into OpenLab?

Yeah, we use Zuul as our gating. What's great about Zuul is that you can interact with projects from different SCMs. We have some projects that live in GitHub, some that utilize Gerrit, some that utilize GitLab, and Zuul has pluggability that lets it talk across these different SCMs. So a patch in one project in one SCM can depend on a patch in another project in a different SCM, and what's great about Zuul is that you can say, hey, I'm depending on that, so before this patch lands, check to make sure the stuff works over there. If it succeeds there and it's a dependency, you've basically confirmed it succeeds there, and then I can run the test here and it passes here as well, so you know that you can use both of those projects together, again, as an integration. Does it make sense? Hopefully I'm making clear the power there with the cross-SCM integration.

Yeah. Melvin, you've had a busy week here at the show. Any interesting things you learned this week, or something you heard from a customer where you thought, oh boy, we've got to get this into our lab or road map?

The Arm story, the multi-architecture; I feel like that's really taking off. We've had discussions with quite a few folks around that. So yeah, for me, that's the next thing I think we're really going to concentrate a little bit harder on: figuring out if there are problems, because mostly it's been just x86. We need to start exploring what breaks as we add multi-architecture.

Well, Melvin, no shortage of new things to test and play with, and every customer always brings some unique spins on things.
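The cross-SCM dependency Melvin describes is expressed in Zuul with a `Depends-On:` footer in the commit message (or pull request description). The project names and URL below are hypothetical examples, not real changes:

```
Add Octavia load balancer integration test

This change exercises the new load balancer hooks and can only
pass once the cloud-provider fix referenced below has merged.

Depends-On: https://github.com/example-org/cloud-provider-openstack/pull/42
```

Zuul recognizes the URL of the depended-on change, whether it lives in Gerrit, GitHub, or another connected system, fetches that change's code, and tests the two changes together before either one lands.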
So appreciate you giving us the update on Open Lab. Thanks so much for joining us. You're welcome, thanks for having me. For John Troyer, I'm Stu Miniman. Thanks so much for watching theCUBE.