Okay, want to get started? Let's get started. All right, we're going to get started. I'm Brad Topol. This is Rocky Grober. Hiding somewhere is Catherine Diep. Tong Li is also hiding. We're here to talk about RefStack and beyond: the Interop Challenge. And we're going to start with a brief overview of RefStack. Go ahead. So, RefStack. A lot of people, when we talk about interoperability for OpenStack, associate it with DefCore and RefStack. So we want to clearly define what RefStack and DefCore are, what the difference is, and why we go beyond them to the Interop Challenge. RefStack itself is a tool set for OpenStack interoperability testing. The testing criteria today are based on what we call a DefCore guideline. The test results are checked against that guideline, which gives a pass or fail status. So pretty much the definition of interoperability as tested by RefStack today is defined by DefCore. And the focus of RefStack is mostly on API tests at this moment. The API is the foundation layer for whatever tool you use on top of it: Ansible, Shade, Cloud Foundry, whatever. So it makes sure that the foundation APIs defined by DefCore are enforced by whatever vendor cloud you are using. And all the results from RefStack tests get sent up to a public place; I have the link here. If you go to the link today, to the community results section, you will see a lot of results that have been sent up there anonymously. So that is what RefStack is testing: the base APIs that DefCore thinks, at this moment, are important for interop. And going beyond that is the Interop Challenge, which Rocky will go through.
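To make the pass/fail idea concrete, here is a minimal sketch of checking a cloud's test results against a guideline's required tests. This is not RefStack's actual implementation, and the test IDs below are illustrative stand-ins, not a real DefCore guideline.

```python
# Hypothetical sketch of guideline checking: a DefCore guideline names a set
# of required capability tests; a cloud passes only if every one succeeded.

REQUIRED_TESTS = {  # illustrative stand-ins, not a real guideline's test IDs
    "tempest.api.compute.servers.test_create_server",
    "tempest.api.identity.v3.test_tokens",
    "tempest.api.object_storage.test_container_services",
}

def check_guideline(passed_tests, required=REQUIRED_TESTS):
    """Return ('pass' or 'fail', sorted list of missing test IDs)."""
    missing = sorted(required - set(passed_tests))
    return ("pass" if not missing else "fail", missing)

# A cloud that skipped the object storage test fails the guideline:
status, missing = check_guideline({
    "tempest.api.compute.servers.test_create_server",
    "tempest.api.identity.v3.test_tokens",
})
print(status)  # → fail
```

The real tooling runs the tests with Tempest and uploads JSON results to the public RefStack site; the comparison step is conceptually this simple, though.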
With the Interop Challenge, IBM actually challenged the community of vendors to demonstrate that all these different vendor clouds could interoperate with a single application: everyone could load up this application and it would just work. And we saw the results on Tuesday. We saw 16 of the 18 clouds that actually made it in time for Barcelona. It is a more holistic approach, more of a scenario, much closer to what the end user actually goes through when they are trying to use the clouds. And by demonstrating that these clouds can do this, you have a template, an example, a way for folks to start from something working and modify it for their needs, instead of having to start from scratch. These are enterprise apps, so it's what users want for their enterprise, not what operators want. This is definitely end-user focused. So far, this first phase was "let's get something working." But now that it's been so successful, we plan on moving forward and rolling more apps in. We already have plans for Docker Swarm; some folks have already passed that, and we're working through the issues there. We're working on NFV. Actually, Tong Li is the guy who's doing most of the heavy lifting here. And we will work with the Foundation to incentivize vendors to participate and keep the apps up to date, working on their clouds and tested on their clouds, by having these apps from the Interop Challenge actually be in the app catalog. Every vendor who passes these apps without modification will be identified with those particular apps, so users can go and say, this is the app I want, here are the vendors that pass it, and assemble their collection of vendors knowing that their multi-cloud functionality will work the way they designed it. We also plan on identifying new apps as the user surveys point them out.
I think just 20 minutes ago we identified Cloud Foundry as a critical app. And we will have a growing collection, which will make this much more usable for everybody. As an overview: we've got Docker Swarm. We've got amazing participation. I think that after Barcelona we will actually expand our participation, because getting the word out was a challenge and we were getting folks in at the very last minute. In the last week, we actually had people join the challenge and get things working in just one week. So huge participation, but it's going to get bigger. The workloads are public already in the OSOps tools repository. We're going to have the changes reviewed and whatnot. We'll come up with a process to let the vendors know that things need to be retested when changes are made. And DefCore is collecting the results, so this will also help towards putting this information on the OpenStack website. And this is just a collection from the first phase. All the people who made this happen... actually, this is all the people who made this happen after these guys kicked our butts. But you can see we've got a fine start, and this will only grow as it becomes more popular and more customers ask for it. And this is how we got to where we are today. Back in Austin, IBM threw down the gauntlet. Then Brad got the team together, and Tong got the team working. And the last slide is where we were. Where we are. So if you want more information, we have a nice wiki page on wiki.openstack.org for the Interop Challenge. It's got all the details: when we meet, who's been involved, our results. It's a great place to start, and from there you can get more information, find our meeting logs, et cetera. Looking at our workload testing, we did more than was just demoed yesterday, and we continue to do more. We had a couple of different workloads. We had the one that everyone saw yesterday.
It was a LAMP stack workload with Ansible as the automation. One that we didn't show, which a lot of folks have been working on and which had slightly different characteristics, was a Docker Swarm app. It gets a swarm of nodes up and running on OpenStack, and it was using a different automation tool called Terraform. We wanted to test more than one tool; we wanted to see how the different tools did, what issues they ran into. Paul Czarkowski's not here. I know Terraform is his beloved tool, but Terraform had some trouble working on some clouds. I'm sure they're easily fixed; you can easily tweak them. But for our results, Ansible is what got us to the 18 different clouds. And sometimes you have to dance with the one that brung you, and that's the one that brung us. Well, that went really well. Tong Li, who's here in the audience staying awake, was kind enough to help. One of the things the analysts had said is: we want to see OpenStack run an enterprise app. So Tong went out and built an enterprise app with a load balancer, with multiple WordPress nodes, with the database and attached storage, with the security groups. You can see here on the right all the different things that were happening. And it was more than that. If you go look at those scripts, you start with a plain volume; you've got to put NFS on it, you've got to put the database on it, you've got to get the WordPress database onto it. So there was a tremendous amount of software installation and configuration in addition to the infrastructure: the networking, the security group rules, the VM provisioning, the attached volume storage. An enormous amount of stuff was happening in this enterprise workload. So thanks, Tong, you put together a very good one.
It kicked a lot of tires on those APIs, the DefCore APIs, the RefStack APIs, and really gave you that enterprise feel of an app you could be proud of and call an enterprise workload. Here's one of the very pretty graphics charts. I think Joanna Kester's here; she made this chart, so thanks, Joanna. Well done. This is just a nice graphical view. A key thing here is that a lot of work was done so that even after this is all set up, the only public IP you can reach is the one on the load balancer. If you go look in the script, you'll see that the other nodes are set up so they don't have public IPs. So I'd like to point out that this is an example of a good architecture for an enterprise app. If you start from this, a lot of your concerns are already taken care of; as you said, the load balancer is the one point of entry from the outside, so this is something that would work well, and you wouldn't have to do a lot of extra work to secure it. And here's the same very nice picture that Joanna made, just demonstrating that. We saw this yesterday on stage: running on-prem, dedicated, and public clouds, pushing the same workloads, same Ansible modules, slightly different config files. And if you want to see our work, you can see what this looks like in the repository. We keep putting the different workloads into the repository, and you can go check all those out. Another neat thing we did as part of this challenge is that people were able to submit results, and this is where DefCore was huge. They took it upon themselves to be sort of the judge, if you will, the traffic cop: send us your results, send them to the DefCore mailing list, we'll keep track of them, we'll record them. So people were able to record how things were going.
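The "one public entry point" rule can be captured in a few lines. This is a hypothetical sketch of the decision, not the actual playbook logic, and the node names are made up: only the load balancer role gets a floating IP, and every other node stays on the tenant network.

```python
# Hypothetical sketch: decide which nodes get a public (floating) IP.
# In the architecture described above, only the load balancer is exposed;
# web and database nodes are reachable solely on the tenant network.

def nodes_needing_public_ip(nodes):
    """nodes: list of dicts with 'name' and 'role'. Return exposed names."""
    return [n["name"] for n in nodes if n["role"] == "loadbalancer"]

stack = [
    {"name": "wp_loadbalancer", "role": "loadbalancer"},
    {"name": "wp_web_1", "role": "web"},
    {"name": "wp_web_2", "role": "web"},
    {"name": "wp_database", "role": "database"},
]
print(nodes_needing_public_ip(stack))  # → ['wp_loadbalancer']
```

In the real scripts this shows up as simply never allocating floating IPs to the web and database nodes, which is what keeps the attack surface small.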
We had folks recording RefStack results, how things were going with workload one, which was the LAMP stack workload, how things were going with workload two, which was the Docker Swarm with Terraform, along with your name and what version of OpenStack. We actually had folks using different versions of OpenStack as well; that was a nice variation. So we covered a couple of different versions, and you're testing backward compatibility there too. So thank you to DefCore for being the responsible adult in the room and gathering and keeping track of these. Very helpful. So, lessons learned, based on our experiences. Other people have different opinions, and we're all entitled to our own opinions, but the facts are the facts. What we were able to get running on 19 clouds was Ansible and Shade. Shade does a really nice job of handling the different characteristics of the different OpenStack clouds and mitigating them, making everything very smooth. And it also plugs into Ansible as an Ansible module. Another nice thing about Ansible is that a lot of folks already do their software config with Ansible. So you're using the same tool not just for the automation of the infrastructure, but in many cases for putting the software down on the nodes as well. And that was a big thing. We had some issues with Terraform not being able to handle certain clouds that exposed versioned endpoints. They gave you two different versions of the Nova service, and when Terraform saw that, it ran for the hills. It screamed; it actually gave an exception. So there are some issues with Terraform, and as far as we could tell, they weren't being actively addressed. There's a little concern on our part that perhaps that community wasn't addressing things as quickly as, say, the Ansible module community was.
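The versioned-endpoint failure is easy to picture. Here is a sketch of the tolerant behavior (illustrative only; this is neither Terraform's nor Shade's real code, and the catalog shape is simplified): when a catalog advertises two versions of the compute service, pick the newest one you support instead of raising an exception.

```python
# Illustrative sketch: when a service catalog lists multiple versioned
# endpoints for one service, a tolerant client selects the highest version
# it supports rather than erroring out the way Terraform did.

def pick_endpoint(catalog, service, supported=("v2.1", "v2.0")):
    """Return the preferred endpoint URL for `service`, or None."""
    candidates = [e for e in catalog if e["service"] == service]
    for version in supported:          # prefer the newest supported version
        for e in candidates:
            if e["version"] == version:
                return e["url"]
    return None

catalog = [
    {"service": "compute", "version": "v2.0",
     "url": "https://cloud.example/compute/v2.0"},
    {"service": "compute", "version": "v2.1",
     "url": "https://cloud.example/compute/v2.1"},
]
print(pick_endpoint(catalog, "compute"))  # → https://cloud.example/compute/v2.1
```

Shade does this kind of smoothing internally, which is a big part of why the Ansible-plus-Shade combination handled all the clouds.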
You can go look in Stackalytics for the Ansible group and see the variety of vendors there; the big deployers, the Walmarts, the Comcasts, are actively working on the Ansible scripts. So you put all that together and, hey, these other tools may get there, but they're not there yet, in our opinion. Another thing that folks like Tong ran into is that not all the clouds provide tenant networks by default. You start out with an easy one, and life is good: it gives you the tenant network, ready to go. Then you hit the next cloud, and that one doesn't give that to you. So, right, Tong, we ran into a couple of bumps there; we had to understand that maybe you don't always get those, and it's hard to configure your own. Another fun detail we hit was that when you attach volume storage, the default device label is typically /dev/vdb, but not always. So that was one of the few things we had to make a parameter, so that you could change it. That was important. The networking always gets really interesting. A couple of different things can happen: you could have a private network, the tenant network, or you could have the provider network, or maybe the project has multiple tenant networks. Tong actually worked best practices into the script so it can auto-detect whether you have a provider network or a single tenant network. In either of those two cases, you can look in the scripts, and it handles them just fine. The case where you need to pass in a parameter is if you have multiple tenant networks; then you've got to at least tell it, hey, this is the tenant network you need to use. And that should make sense, right? We can only do so much, and the scripts aren't psychic. So if you only have one, it'll figure it out; otherwise you've got to give it a parameter to help.
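The auto-detection logic just described can be sketched roughly like this (illustrative Python, not the actual Ansible tasks or the script's real function names): use the single tenant network if there is one, fall back to a lone provider network, and demand a parameter only when the choice is ambiguous.

```python
# Rough sketch of the network auto-detect described above (not the real
# script): Neutron flags external/provider networks with 'router:external'.

def pick_network(networks, requested=None):
    """networks: dicts with 'name' and 'router:external'. Return a name."""
    if requested:  # multiple tenant networks: the user must tell us which
        names = [n["name"] for n in networks]
        if requested not in names:
            raise ValueError("requested network %r not found" % requested)
        return requested
    tenant = [n for n in networks if not n["router:external"]]
    if len(tenant) == 1:        # the easy case: exactly one tenant network
        return tenant[0]["name"]
    if not tenant:              # provider-network-only cloud
        external = [n for n in networks if n["router:external"]]
        if len(external) == 1:
            return external[0]["name"]
    raise ValueError("ambiguous networks; pass the tenant network explicitly")

networks = [
    {"name": "private", "router:external": False},
    {"name": "public",  "router:external": True},
]
print(pick_network(networks))  # → private
```

The parameter only becomes mandatory in the multiple-tenant-network case, which matches how the challenge scripts kept the per-cloud configuration surface small.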
Again, with all this effort, we did pretty well running on 18 different clouds, and like I said, you can go look at all the scripts and see the very small number of parameters that were used to handle the minor differences between the clouds. So at this point, I'm going to hand back over to Rocky and let her talk about our deliverables. One thing this first bullet points out is that DefCore is changing its name. It will be the Interop Working Group or something close to that. And for the challenge, step zero for any company that wants to participate and get certified is that they have to pass one of the DefCore guidelines that's currently active. Forgive me, not enough coffee this morning. The deliverables are vetted enterprise reference apps that you can deploy in a test environment, explore, and modify for your own uses. This shortens the cycle to get up and running and be able to deploy in a production environment. The applications all go into a repository. As Brad pointed out, we're going to see if we can combine the repositories for the app ecosystem group and this one so everyone can find all the different apps; we want some discoverability. We need to make discoverability as easy as possible for our end users and for our vendors, and we're going to focus on that a bit more. The bits and pieces of the current enterprise app that we demonstrated include a load balancer, multiple web application nodes, a database, security groups, and block storage. Going forward, we should maybe add some form of monitoring, to test that the apps remain up, and reporting or notification along those lines. But this is a great start. And we're getting feedback from the OpenStack community. Networking is the pain point at the moment. Shade has to be there because there are too many ways to do the same thing in OpenStack; we give you too much choice, and Shade helps to fix that.
But Shade needs to be expanded to support more of the NFV functions, more of the advanced networking functions, and the software-defined networking aspects. Ansible is great, but it doesn't deallocate floating IPs; we're going to work with the Ansible OpenStack community to deal with that. And then we've got collateral: recommendations and best practices. We need to actually write this up beyond just our PowerPoint slides and put it out there on the wiki. Now that we've got the work done, we can formalize it a bit more so you can all find the information that's in these slides, either on the slides or on the wiki. Come on. The next steps are more apps. We're going to get the Docker Swarm app to the point where all the vendors can install and run it; right now some can and some can't. It's a matter of doing to those scripts the same thing that was done for the WordPress LAMP stack app: finding out where they break across vendors and putting in those checks and variables and branch statements and whatnot. So we've got Docker Swarm coming up. We have an NFV app that's already working on a couple of vendors; that's another one we'll be focusing on. I'm going to work with the Enterprise Working Group, and we will get their workload reference architectures included in the Interop Challenge. They have Heat-based apps and Murano-based apps, and folks who use those will be able to do that. We're going to formalize this process a bit more so other people can propose apps, and we'll get a nice collection for everybody. We're going to reach out to more vendors, and I believe vendors will be reaching out to us. And we're going to look at other deployment tools. Juju is a big one. Heat certainly is important, and looking at the user survey, there are a lot of folks using that too.
We're going to work more on network virtualization and getting some good, solid reference design architectures into apps for that, so that our community can move forward on that line too. This one, come on up. If you participated in the Interop Challenge, please stand up. Some of you are here. There you go. Let's do it. Thank you. We do want to allow time for questions, so does anybody have any questions or suggestions? This is where we're going to take suggestions for the next cycle. Yes, Tom? This one? Yes, Cloud Foundry. That'd be interesting. That was a good suggestion from the previous session. We actually got them to put the link in the Etherpad, so we'll be able to contact those people again. Good. So hopefully we'll see a Cloud Foundry reference workload for Boston. Any other questions or suggestions? Come on, guys. Is there any app that your customers are clamoring for? How many of you actually are vendors? How many are operators? Are there apps, or even suggestions of other vendors you're working with that aren't part of the Interop Challenge yet, that we should reach out to? Let me go back to that one. Here we go. We've got the three major Linux distributions. Who else do we need to reach out to to get on here? Who's missing from our list that you would like to see? Nobody, huh? I think that's mine. Is it too early in the morning? Did you guys not have enough coffee? Well, I think we got a lot of coverage there. It's hard to find one that's not there. That's true. So you're happy with Cloud Foundry and Docker Swarm for the next round? And NFV, yes. And we're going to have our weekly meetings. We'll have more discussion there, because we're obviously going to want broader input from beyond this crowd; we're going to want it from the whole community. So we will have those discussions, right? Yep, and the weekly meetings are at 1400 UTC. Is that doable for you guys? Good deal.
And the first one after this? We'll send out an email; we'll let everybody recover. Oh, also on that: the mail goes out on the DefCore mailing list, so if you're not subscribed to the DefCore mailing list, please subscribe so that you can get the mail. The name of that may change, but if you're subscribed, you'll get migrated to the mailing list under the new name. Join the DefCore mailing list. Join the DefCore mailing list. DefCore committee. The DefCore committee mailing list, yeah. You go to lists.openstack.org, and they have a full list of all the committees; look for DefCore and click through on that and subscribe. I think we're pretty much out of time. Cool. Thank you.