Thank you for coming to the second day of the Ensuring Software Quality track at DevConf.US. The next talk is OpenShift on OpenStack by Eric and Emilio, and I'll hand it over to them.

Thanks. My name is Eric. And my name is Emilio. This summer we worked in Boston on the OpenShift on OpenStack team, and this is our presentation on overcoming development challenges.

So, show of hands, who here is familiar with OpenShift? Cool. All right, for those of you who aren't, OpenShift is an application platform that runs Kubernetes containers. It can be used to dynamically scale applications, as well as update your back end on the fly, so it's a really cool tool. And another show of hands: who's familiar with OpenStack? Cool. OpenStack is a cloud infrastructure management platform, and it's used to manage and lease resources and hardware.

All right, cool. So this summer we worked on the OpenShift on OpenStack team. OpenShift can be run on OpenStack, and that's something we're trying to accomplish. But why, you might be wondering, do we want to do this? Basically, when you run OpenShift on OpenStack, it becomes a lot easier to deploy, manage, and scale your OpenShift cluster, because you can take advantage of the compute, networking, and storage services in OpenStack.

Now, developing in an environment that sits between two really large technology projects can be pretty complicated, and this is something we've been taking on for a little while. But there are working versions of this. For example, if any of you are familiar with the MOC, the Mass Open Cloud, that is an OpenShift on OpenStack environment. Anyway, our team works on code related to deploying OpenShift on OpenStack, and we're trying to standardize and optimize that process. Throughout the course of this summer, we've faced several challenges, the first one being adjusting to a new development workflow.
That's something most developers will face when starting a new project. The one more specific to an integration project such as OpenShift on OpenStack was making two technologies meet in the middle. We're going to dive into each of these one at a time and use our summer projects as examples for each.

All right. So first off, we're going to talk about getting adjusted to a new development workflow, and how the tooling and the technology you get exposed to come into play here. The first thing you have to do before you get started on any project is, obviously, communicate with your team. On the OpenShift on OpenStack team that can be a little more challenging than on others, because our team is actually distributed across the world. So in order to get started, you're going to have to use a tool called IRC. How many of you are familiar with IRC? Wow, that's shocking. Well, good for you.

The second fun part is that, since everyone's all over the world, they're not on your nine-to-five schedule, so everything you want to do has to be built around time zones. And that becomes increasingly challenging as your problems get harder to solve. As you move along, you're eventually going to hit a point where you have to ask for help, and the first thing you have to learn about IRC is that you can't be afraid of the public channel. I think that's something everybody inherently feels at first; I know for a fact I was terrified to post in the channel and look stupid for a long time. But eventually you reach a point where you realize there's going to be someone who wants to help you, and if you post in the public channel, you're more likely to get that help. So do that. Second of all, people will always be willing to help you with one problem, and that's fine. But don't just ask for help; ask for resources.
A lot of the time, people will be able to link you to documentation or things you didn't know existed, to help you solve your problems on your own. That can make the development process easier for everybody.

Then there comes DevStack, and this is another part of developing for OpenShift on OpenStack. If you develop on OpenStack, you're probably going to work with DevStack at some point in your life. Basically, what DevStack does is build a little miniature version of OpenStack in your own local environment. It's kind of neat, because you can build from your own source code and deploy it however you like. But as with anything software related, there are obviously some complications that come along with that, which Eric is going to go over.

So, have any of you ever set up DevStack before? All right, for those of you who haven't, buckle in. The first thing you need to do to set up DevStack is stack it. Basically, you just run a script; it takes about half an hour, it works fine, it's great. But if you want to include extra services, such as Octavia, which is OpenStack's load balancer, you're going to have to configure that and then stack again. That's going to take another half hour, so you already have over an hour spent just setting it up. So you add these services, you re-stack, and it should be fine. Now, something's not going to work. It's a familiar experience. I don't want to exaggerate, but we spent six or seven years setting up DevStack. So we spent a lot of time debugging, then finally re-stacking again, and this process continued and continued and continued. But we got it working, and DevStack is a great tool.
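As a rough sketch of what that configuration step looks like: DevStack is driven by a `local.conf` file, and extra services like Octavia are pulled in with `enable_plugin` lines before you run `stack.sh`. The exact passwords and plugin branch here are illustrative, so check the DevStack docs for your release:

```ini
[[local|localrc]]
# Passwords for the demo services (pick your own)
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD

# Pull in Octavia, OpenStack's load balancer, as a DevStack plugin.
# Adding or changing plugins means re-running ./stack.sh (another ~30 min).
enable_plugin octavia https://opendev.org/openstack/octavia
```

Every edit to this file means another full re-stack, which is where the "over an hour" above comes from.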
All right, so now we're going to get into the subproject within OpenShift on OpenStack that both of our projects were related to, and that is Kuryr. First and foremost, the most important part of this slide is this guy: the Kuryr platypus. He's unofficially named Carlton. Don't remember that. I didn't tell you that.

So what does Kuryr do? Kuryr enables OpenShift to use the networking packages and services that are already in OpenStack in place of its own software-defined network. And why do we want this? Basically, both OpenShift and OpenStack have their own software-defined networking solutions, and the problem is that both of them are going to inject their own checksums and their own headers into their network packets, which makes them very slow to decapsulate. By going around that, you can really speed up the networking for OpenShift on OpenStack. Now, Kuryr as a project can be configured to run either in VMs or in Kubernetes, and that keeps it somewhat isolated from your OpenStack environment; not isolated from the software itself, just physically isolated. And there's quite a bit of a tech stack that you're going to have to get used to in order to work with it.

My project this summer had to do with scaling out the Kuryr controller. The Kuryr controller is the component of Kuryr that is responsible for listening for Kubernetes API events in OpenShift and translating them into the appropriate OpenStack API calls, in order to service networking events that need resources to be created, modified, or destroyed. And what that entails is the following. Basically, what we have right now is a high-availability mode, and it's an active-passive mode. For anyone not familiar with what that means: one node is active, and a series of nodes are passive, or idle, waiting for that active node to fail, at which point one of them takes its place. And this is good; it's highly available. But the issue is that it doesn't scale out.
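That controller translation step, in spirit, looks something like this. The mapping and the handler names are invented for illustration; this is not real kuryr-kubernetes code:

```python
# Invented sketch of the Kuryr controller's job: watch Kubernetes
# events and translate them into OpenStack (Neutron/Octavia) actions.
# The mapping and action names here are illustrative only.

K8S_TO_OPENSTACK = {
    ("Pod", "ADDED"): "create_port",
    ("Pod", "DELETED"): "delete_port",
    ("Service", "ADDED"): "create_loadbalancer",
    ("Service", "DELETED"): "delete_loadbalancer",
}


def translate(event):
    """Map one Kubernetes watch event to an OpenStack action name."""
    key = (event["kind"], event["type"])
    # Events the controller does not care about are simply ignored.
    return K8S_TO_OPENSTACK.get(key)


print(translate({"kind": "Service", "type": "ADDED"}))    # -> create_loadbalancer
print(translate({"kind": "ConfigMap", "type": "ADDED"}))  # -> None
```

The real controller obviously does much more (it issues the actual API calls and tracks resource state), but the watch-and-translate loop is the core of it.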
And when you start having bigger data centers, it's going to get stressed under load, and we don't want that. So we're working instead to develop an active-active high-availability solution, in which all of the components are able to handle jobs and pick up the slack of any servers that might have died. The overall benefit is that not only is it highly available, it also scales: its performance grows as you scale horizontally, which is obviously better.

And in order to work with it... actually, let me go back real quick. You see these guys are in little boxes. That's because in order to run it in high-availability mode, you actually have to run it in Kubernetes, with each Kuryr controller in its own pod composed of two different containers. And that's where you start running into fun stuff with Kubernetes. Because there are two containers running in your pod, debugging becomes an interesting situation. First of all, when you build a container (I don't know if you've built containers before) it's not the fastest process in the world. So don't make the mistake I made at first: be rigorous and test your code before you build, because otherwise it will slow down your development cycle a ton. So first of all, read your code. Second of all, take advantage of unit tests. The third thing I want to mention, which maybe isn't for everybody because I know some people don't use Docker: people don't always realize that you can use the Docker tools as well as the Kubernetes tools. If you're having a hard time with Kubernetes, remember that Docker has a pretty rich tool set that can help you out a lot. But anyway, back to the unit-test slide, because that's pretty important. If you work in OpenStack, then you're going to have to work with tox.
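tox, OpenStack's test runner, mostly just runs ordinary Python unit tests. Here's a minimal sketch of the kind of test it would run; the `ip_to_int` helper is a made-up example, not actual Kuryr code:

```python
import unittest


def ip_to_int(address):
    """Convert a dotted-quad IPv4 address to an integer.

    A toy helper standing in for the kind of small function
    you would want to test before baking it into a container image.
    """
    parts = [int(p) for p in address.split(".")]
    if len(parts) != 4 or not all(0 <= p <= 255 for p in parts):
        raise ValueError("not a valid IPv4 address: %s" % address)
    result = 0
    for part in parts:
        result = (result << 8) | part
    return result


class TestIpToInt(unittest.TestCase):
    def test_converts_localhost(self):
        self.assertEqual(ip_to_int("127.0.0.1"), 0x7F000001)

    def test_rejects_out_of_range_octet(self):
        with self.assertRaises(ValueError):
            ip_to_int("256.0.0.1")

# run with: python -m unittest <this file>
```

The point is that this runs in milliseconds on your laptop, with no container build and no cluster, which is exactly why it pays to catch mistakes here first.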
I'm sure everyone's at least somewhat familiar with unit tests, but just to go over them at a high level: unit tests are really good for isolating different parts of your code and making sure they work the way you expect them to. And what's nice about unit tests is that you don't need to build your containers, or even your final product, to use them to test your code. So if you take advantage of them and build unit tests along the way, it's going to make your development process a lot faster.

There is, however, one part of tox and unit testing in OpenStack that I think a lot of people won't be familiar with, and that's PEP 8. PEP 8 is this fun little test that checks your style. It basically tells you whether or not you suck at programming. And news flash: the first time you run it, you're going to suck at programming. And I'm not talking about this many errors; I'm talking about that many errors. So what does PEP 8 do? It enforces a really strict programming style, and it's used to keep everyone in check. It makes sure that all of the code within a project is uniform to some degree, and it tries to make sure it's all readable. At the end of the day, it's going to make your code a lot easier to understand, whether you like it or not. It's kind of a necessary evil.

Now, how many of you are familiar with GitHub? All right. How many of you are familiar with Gerrit? All right, same. So you're also going to have to learn Gerrit if you want to work on OpenShift on OpenStack. The fun part is that you'd think, oh, Gerrit is a Git tool, how different can it be? Actually, it's really different. In Gerrit, first of all, you don't follow the standard open-source workflow like you do on GitHub. You don't fork the repository, you don't make your own branch, you don't open pull requests. You actually just clone the main project.
Then you use something else, called git-review, and you push your code up to Gerrit. This can be a little tricky to learn the first time, and there are a lot of little things going on underneath the hood that you don't really notice. One thing you might not know at first is that when you submit your code to Gerrit and you don't want a code review yet, you actually have to go in and give yourself a minus-one review to tell everybody: don't review this. I didn't know that. I got code reviewed. It didn't go well.

The upside of Gerrit is that it's a really neat way to manage a lot of people working on one project at a time. Personally, in comparison to GitHub, I like it better as a way to see the various stages of your code: the reviews that people left, what changed, and ultimately just what's going on across projects in general. Let's take a look here. This is what a standard commit looks like in Gerrit. You can see your history, and these are the changes you've made along the way; each of those is expandable, which is pretty nice. And when you look at the project view like this, you can see all of the changes being worked on under one large project, and all of their progress, which I think is a lot more organized than GitHub. Either way, though, I can't speak for the OpenStack team; I don't know why they chose Gerrit over GitHub. But whether you like it or not, you're going to have to use it. So deal.

So anyway, the grand takeaways here: don't be afraid to ask for help, and if you do ask for help, also remember to ask for documentation every now and then. Always try to take advantage of all the tools you have at your disposal.
And ultimately, you're going to bump into some stuff you might not like and definitely won't be used to. But there's probably a reason it's there, and it's usually there to help you. So, you know, get used to it.

So now I want to talk to you about making technologies meet in the middle. Being on the OpenShift on OpenStack team, the obvious two are OpenShift and OpenStack, but there are also services within them that we have to adjust to make meet. This summer, my project was called Watch Endpoints as a Service, or WIS for short. Basically, it's a service to speed up Kuryr by listening for networking events, and I'm going to talk about what that means in a second.

So why do we want this tool? This is a diagram of the interaction between a bunch of services. You only really need to look at that part right there, between the Kuryr controller and Neutron, which is OpenStack's networking service. Basically, you can see that the Kuryr controller creates a load balancer and then continuously polls Neutron through the OpenStack API, asking for the load balancer's status until it becomes active. The way I've been describing this is through an analogy: imagine you're a chef in a kitchen and you want to bake a cake, so you have to preheat the oven. The current system is that the chef goes back to that oven every three seconds until it has preheated. What my tool does is have the oven alert the chef once it's ready. You can see that's a lot less work, and a lot more efficient.

So how do I do this? Basically, there's a messaging queue between Neutron and Octavia, which is the load balancer service, and RabbitMQ keeps a queue of all the messages, as it is a messaging queue. So when the Kuryr controller creates a load balancer, eventually, when it becomes active, Octavia will send a message to Neutron saying that it has become active.
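The chef-and-oven difference can be sketched in a few lines of Python. This is just the pattern (poll in a loop versus block on a notification), not the actual Kuryr or WIS code, and the intervals are made up:

```python
import threading
import time


def wait_by_polling(is_active, interval=0.01):
    """The current approach: keep asking until the resource is active."""
    checks = 0
    while not is_active():
        checks += 1
        time.sleep(interval)  # the chef walks back to the oven again
    return checks


def wait_for_event(ready_event):
    """The WIS approach: block until the 'oven' announces it is ready."""
    ready_event.wait()  # woken exactly once; no repeated status calls


if __name__ == "__main__":
    ready = threading.Event()
    # Simulate Octavia taking a while to bring the load balancer up.
    threading.Timer(0.1, ready.set).start()

    checks = wait_by_polling(ready.is_set)
    print("polling made %d extra status checks" % checks)

    wait_for_event(ready)  # returns immediately once the event has fired
    print("event listener woke up once, with zero status checks")
```

Each of those "extra status checks" is a real API round trip against Neutron in the diagram above, which is the work the listener avoids.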
My tool sits and listens on the messaging queue and waits for that event. Once it sees the event happen, it tells the Kuryr controller.

So this makes too much sense; this is the obvious solution. Why isn't it already in place? Basically, the Kuryr guys asked the Neutron guys to add this API, and they said no. They said: thou shalt not take events that came not from RabbitMQ, for that would be duplication, and we are running short on maintainers. So, no. Then what's the end game? If they already rejected this idea, why am I doing this? Basically, we're testing it with Kuryr-Kubernetes, and if it behaves as we expect it to, being faster and more efficient, we're going to propose it to OpenStack as a Kuryr subproject. And if that goes well, it will be integrated.

So this brings me to the challenge of making two technologies meet in the middle. My listener would not hear any Octavia load balancer events. It would hear the port events fine from Neutron, but it wouldn't hear anything from Octavia. Naturally, because Neutron was working and Octavia wasn't, I figured the problem must be within my listener. So, my train of thought: first of all, I assumed that Neutron and Octavia events behaved the same way, because they're both OpenStack services, so why would they not? Second, when I ran an OpenStack load balancer command, it would generate events, but they appeared to be port events: the event type in the returned JSON would be port-dot-something. At the time, I assumed these were Octavia events, because they were being generated by a load balancer operation. Based on those assumptions, there was no way to distinguish these apparent Octavia events from the actual port events. And that was an issue, because if the tool can't sort events in that manner, it kind of defeats its purpose: you'd be sending hundreds upon hundreds of messages back to your Kuryr controller.
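The sorting problem looks like this in miniature. The notification dicts below are invented stand-ins for what comes off the queue, not real Neutron message bodies:

```python
# A sketch of the filtering WIS needs to do on the notification stream.
# The payloads below are invented stand-ins for queue messages.

INTERESTING_PREFIX = "loadbalancer."


def filter_events(notifications, prefix=INTERESTING_PREFIX):
    """Keep only the events the Kuryr controller actually cares about."""
    return [n for n in notifications
            if n.get("event_type", "").startswith(prefix)]


notifications = [
    {"event_type": "port.create.end", "payload": {"id": "p1"}},
    {"event_type": "port.update.end", "payload": {"id": "p1"}},
    {"event_type": "loadbalancer.create.end", "payload": {"id": "lb1"}},
]

relevant = filter_events(notifications)
print([n["event_type"] for n in relevant])  # -> ['loadbalancer.create.end']

# If every load-balancer action only ever shows up as port.* events,
# this filter matches nothing, and the listener has no way to tell
# Kuryr's load balancer apart from any other port activity.
```

That last comment is exactly the situation I was in: the prefix I was filtering on never appeared, because Octavia wasn't emitting anything at all.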
The explanation for all of this, which we figured out after talking to the load balancer guys, is that Octavia doesn't actually emit events the way Neutron does. So why did we assume this? Basically, on the Kuryr side of things, where I was, we just assumed that most OpenStack services emit events, so why wouldn't Octavia? On the Octavia team, they thought: why make it emit events? Ironic, which is OpenStack's bare-metal provisioning tool, didn't do it, and it was never a requirement, so it was never enabled. Which makes sense. Both sides had reasonable expectations; there was just this miscommunication.

So, meeting in the middle: how did we fix this issue temporarily? Over here, you have Kuryr's expectations; over there, Octavia's capabilities. In the middle, there's Load Balancer as a Service v2 (LBaaS v2). Now, if this behaves how we want it to, why isn't it the final solution? Why aren't we using it? For two reasons: the current OpenStack deployment uses Octavia, and LBaaS v2 is deprecated. So right now we're in talks with the Octavia team to get them to reconfigure Octavia to emit events, but for now we're using LBaaS v2 for test purposes.

And this isn't the first time the shift-on-stack team has had to meet in the middle. In the past: OpenShift requires that nodes (a node being a server) can talk to each other through their hostnames, as opposed to their IP addresses, whereas OpenStack doesn't natively support this. The solution they found is called Neutron DNS. It's actually a tool that has been built into OpenStack for a while now; nobody really knew about it, though. It just took digging deeper to find it. So that's the thing: sometimes you have to dig deep to find common ground.

Throughout this summer, we dealt with a lot of challenges, but we really learned that teamwork and collaboration between teams is crucial.
There are so many different teams and so many different resources out there, especially on IRC, that it really pays to make use of them. Take advantage of your resources. And Red Hat, obviously, is all about open source, and open source means being part of the community. So you really want to take advantage of the whole community, because you do have that support. Thank you.

[Audience question about Gerrit.]

Yeah: when you say Gerrit, it sounds like it's a replacement for GitHub. But I think when I use Gerrit, I still need to clone the repo, and then Gerrit is like an extra tool for Git that I use on my computer. So I'm just confused; could you talk more about that? I was confused about Gerrit as well.

Oh yeah. I think Gerrit actually works alongside GitHub; you can still host your code up on GitHub, and I think Gerrit is really more for managing commits and code review. So instead of the traditional workflow, where for most open-source projects you fork the GitHub repo, work within your own fork, make a branch, and then open a pull request against the upstream, with Gerrit you don't do that. Basically, you just clone the upstream and make whatever changes you want. When you submit them, you use a tool called git-review. I don't know all the details, but essentially it tags your work with something called a Change-Id, and then all of the changes you make under that Change-Id get submitted together as one change for review.
And then eventually, if it's reviewed and accepted, those changes can be merged back to master on the upstream.

We use Gerrit a lot at Red Hat, so to explain it in the simplest of terms: when you use GitHub and you commit something and changes are requested, you send a v2 with a new commit ID. So in your history you have a commit ID, then a v2 commit ID, then a v3 commit ID. Gerrit, with Git, allows you to keep editing a single change, because it uses something called a Change-Id. Say you commit something; Gerrit tags it with a Change-Id. Then your team requests changes: hey, you made a mistake. You don't create a new change; you keep the same Change-Id and push a new revision of the same change. So when you look back, you have a cleaner history, and in one change you can see all the comments on v1, v2, v3 of the same patch. Gerrit allows you to do that, whereas GitHub is more of a keep-sending-commits sort of thing. So it's similar, but not the same; it gives you more granular control.

Yeah, that's very true. Although it's fair to say that GitHub is starting to develop more Gerrit-like workflow tools, because they're recognizing that the workflow component is actually really important.

[Audience question.] I had a question; I apologize, I missed the first few minutes. Did you talk about, well, two things. The rationale for the work in the first place, like what problem you're trying to solve? If you talked about that, then ignore me; I just want to make sure everybody else heard it. The other thing I'm curious about is what you see coming in the next few months for the shift-on-stack work: where you expect it to land, the next version, or whatever that is.
In terms of the first part of the question: we did go over a little bit of what the shift-on-stack team is trying to accomplish and how Kuryr is part of that goal. That being said, if you want to add to that, feel free to, and if anyone has any questions about it, please ask. In terms of the future, though, for my project at least, we're going to containerize it for TripleO (OpenStack on OpenStack), and I'm not sure what the next steps after that will be.

Yeah, I can... oh, sorry. I'm pretty sure my project is not ready to be pushed; it was a lot more research focused. That being said, I think some of the changes that were made are going to be kept, and I think some of the changes serve as an example of what we should not be doing in the future, which is fine; that's part of the process. Also, I do know for a fact that, in terms of the active-active work, the OpenShift on OpenStack team has decided to put that aside for right now. It's going to be coming; I don't know if it's going to be in the next release, but it's on the menu. It's just not a high priority. Thank you.

It might be worth saying that, since we have an OpenShift on OpenStack project, there are a lot of directions the team is going in. One side is just supporting new features. Another part is standardizing the process, so that other people can deploy OpenShift on OpenStack easily, without having to do a lot of customization. The third part, which I think is what you two concentrated on, is optimization: making sure that bits of the code are optimized, in this case networking. So that's what you were concentrating on, but we're going in multiple directions with OpenShift on OpenStack. If anyone's curious, talk to me afterwards. Right.
And also, just to give you an idea of why we're interested in developing OpenShift on OpenStack: think of it as our hybrid cloud offering. Right now, OpenShift can run on almost any cloud service provider, including OpenStack, and we want customers who are thinking of building clouds, either on premises or in the public cloud, to be thinking about using OpenStack as their undercloud solution. So, yeah.

[Audience question.] Another question: I know there's a container module for OpenStack; I forget the name. Is it Zun? It's like a container module for OpenStack, but I just can't remember the name. What's the difference, if you know what I'm talking about? Between using OpenShift and that one?

Okay, so this is very ancient history. There used to be a project called Project Solum, S-O-L-U-M, which was essentially about using OpenStack as a container scheduling system. That's long since been made obsolete by Kubernetes. There is another sort of interesting effort in OpenStack land around what are called Kata Containers. Kata Containers are actually a fully free and open-source implementation of Intel's Clear Containers, which are really VMs that you can spin up very, very quickly: super minified VMs that boot almost as fast as a container. And there's some movement within OpenStack to make Kata Containers a first-class citizen. But at this point, there's no competing with Kubernetes' momentum in the container space. We don't think so, anyway. I hope that answers your question. Any other questions? All right. Well, thanks, guys.