So, good afternoon, everyone. My name is Randy Perriman. I'm with Dell, and thank you for coming to our Three's Company: Are You Jack, Janet, or Chrissy? session. This was written up as a birds of a feather interactive talk, so we expect people in the audience to participate. So first, let's do some introductions. All right. My name is Mike Arazi. I'm an engineering manager at Red Hat, and I've been at Red Hat for 10 years. The interesting thing about that is, most places I would be the guy that's got his lawn and yells at all the kids to get off the lawn. Red Hat is a little bit different in that regard. I have a lawn, kind of, but it's a shared lawn. And what I get to do as an engineering manager is encourage people to go out and play on the lawn and join in a game of soccer or baseball or football. I get to interact with communities and help build communities. It's a very fun job, much better than the typical fist-shaking manager role. This slide is a little bit about our OpenStack platform. It speaks to the rich infrastructure of plug-ins that we've developed through the community method, and that's kind of the crossover there. I have links in there about what plug-ins are presently available. Whoop. Yeah. Our clicker's a little touchy here. I wanted to highlight a few things, the biggest being that on the very bottom you will see Red Hat Enterprise Linux. The reason that is there is because the OS matters, and we want to highlight that all the time. We're able to provide world-class support on a level that's above and beyond what else is out there. So thank you. So as I said, I'm Randy Perriman. I'm the solution architect at Dell who designs the Dell Red Hat OpenStack solution. These are some of the things that Dell has done over the previous years in regards to open source and cloud solutions. We've been part of the OpenStack community since day one. 
And I've been basically working with this community since that same time period. I'm not going to go too much more into what we have here; everyone's probably familiar with Dell and what we have. And my name's Cynthia Thomas. I'm a systems engineer at Midokura. I'm basically involved in education about what MidoNet is, working with customers, and helping design and architect how to use and deploy MidoNet. MidoNet is actually open source. So for those who aren't familiar with Midokura, the company was founded about six years ago. Our CEO and our CTO came out of the likes of Cornell, under Werner Vogels, the Amazon CTO. They worked with him in academia and in industry at Amazon, so their expertise is in distributed systems, basically. They applied that knowledge to building a network product without any bottlenecks: a distributed system that had the ability to scale. So the focus of Midokura initially was a network virtualization overlay. It's a software agent that resides on servers at the edge, and all the network intelligence is at the edge. This is an extremely flexible API. Basically, we map the Neutron API to the MidoNet API, but we also go above and beyond to provide some additional functionality and visibility as well. It's quite simple to deploy, as we'll talk about a little bit further on. This diagram is basically depicting logical topologies, how they can be created through OpenStack, for example via the Neutron API, and how you might deploy on physical infrastructure, Dell servers, which is the underlay. And it was beneficial for us to work with Red Hat and Dell, Red Hat having obviously a strong presence in the OpenStack and open source world. All along we've always had compatibility working with the RHEL OS. And we've also had a strong partnership, probably at least two years now, with Dell. 
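The "mapping the Neutron API to the MidoNet API" idea above is the classic Neutron plugin pattern: OpenStack tenants speak the Neutron abstractions, and a plugin translates each call into operations on the overlay controller. A minimal, illustrative sketch of that shape follows; every class and method name here is hypothetical, invented for this example, and is not MidoNet's actual API.

```python
# Toy sketch of the Neutron-plugin pattern described above. All names are
# hypothetical; a real plugin would call the overlay controller's REST API.

class OverlayBackend:
    """Stands in for an overlay controller client (e.g. a REST client)."""
    def __init__(self):
        self.bridges = {}

    def create_bridge(self, name):
        # In the overlay model, a tenant L2 network becomes a virtual bridge.
        self.bridges[name] = {"ports": []}
        return name

    def delete_bridge(self, name):
        self.bridges.pop(name, None)


class ToyNeutronPlugin:
    """Translates Neutron-style network calls into backend operations."""
    def __init__(self, backend):
        self.backend = backend

    def create_network(self, net):
        return self.backend.create_bridge(net["name"])

    def delete_network(self, name):
        self.backend.delete_bridge(name)


backend = OverlayBackend()
plugin = ToyNeutronPlugin(backend)
plugin.create_network({"name": "tenant-a"})
```

The point of the pattern is that the tenant-facing API stays stable Neutron, while the backend is free to implement the same abstractions in its own, possibly richer, terms.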
So Dell has the ability to provide services for MidoNet through its sales channels, for example, and bundled solutions, which we'll talk about today. So why did we partner together? Dell and Red Hat have been working together for the last few years producing our reference architecture and putting together a solution. And one of the things we at Dell recognized was that we wanted to extend our reference architecture to include other products and solutions. One of the items on our shortlist was the ability for our network to become highly resilient, to put overlays and such on. We had already been talking with Midokura on other projects, and so we brought them on board. And then we put the engineering teams together; we basically facilitated getting our engineering teams to work together. Yeah, as I previously mentioned, too. So yeah, we have a strong relationship with Dell from early on, working with the cloud solutions team there. They were basically pretty pivotal for us to work with Red Hat. Red Hat we've always worked with, really. We've installed our software on RHEL OSes, so we've always been compatible with that. Midokura, or sorry, MidoNet can be installed with RHEL OSP, with RDO Manager, but it was quite a manually intensive process. So working to provide a cleaner way to deploy was always something that was top of mind for us, so we were definitely on board for a joint solution. And on the Red Hat side, our interest has been in developing that cleaner way to deploy. We bring to the table a flexible architecture and the community to help back that, and we help curate the community. So what does the solution look like? This is the overall solution taxonomy. We start off with Dell hardware and networking at the base layer. On top of that, once again, is Red Hat Enterprise Linux, then OpenStack using KVM, and then building on top of that with Midokura as the middle layer between that and OpenStack, using the Neutron plugins. 
This solution is very resilient. One of the things I really like about it is the fact that we use BGP upstream, which means that as you bring on new tenants, their networks are learned upstream automatically across your network. And as they are destroyed or removed, they are automatically withdrawn. It makes it very clean and easy to manage and update. Yeah, that being said, Dell already had a reference architecture with Red Hat. But some of the advantages that Midokura brings are things like this distributed gateway functionality that Randy's talking about: having that ability to advertise your floating IP space, or whatever subnet space you want to advertise upstream, for reachability into the cloud. So having those kinds of capabilities. Our agent basically achieves things in a much simpler fashion. It's a single agent on servers, removing the need for things like iptables for security groups or NAT. We achieve this all through our software, along with higher-level services; I mentioned NAT, load balancing, firewall as a service. All of this is achieved. So this is more of an alternative solution to what was already available through the reference architecture from Dell and Red Hat. Now, this is not very easy to do, because we're talking about three different companies. Even putting this talk together was interesting, because I'm up in New Hampshire, Mike is down in North Carolina, and Cynthia is over in California. In fact, we did not get a chance to sit down and talk face-to-face until about an hour and a half ago. Thus the reason for the handoffs and such; you'll have to bear with us. But that goes back to one of the things, which is the tooling and development. In order to do this, we had to bring all three companies together, and we all have our own ideas of what is right and what is wrong, and none of us are wrong. That's one of the things we had to learn: we each have our way to do it, it is a way to do it, and we have to be flexible. 
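The BGP behavior described above, where tenant prefixes appear upstream when created and disappear when removed, can be sketched as simple event-driven bookkeeping. This is a toy model only, with hypothetical class names; a real gateway would exchange actual BGP UPDATE and withdraw messages with its physical peers rather than mutating a set.

```python
# Illustrative sketch (all names hypothetical, not MidoNet's API): tenant
# floating-IP prefixes are announced upstream when added and withdrawn when
# removed, so no manual route maintenance is needed on the fabric.

class ToyBgpSpeaker:
    """Tracks which prefixes are currently advertised to upstream peers."""
    def __init__(self):
        self.advertised = set()

    def announce(self, prefix):
        self.advertised.add(prefix)

    def withdraw(self, prefix):
        self.advertised.discard(prefix)


class ToyGateway:
    """Ties tenant network lifecycle events to route advertisement."""
    def __init__(self, speaker):
        self.speaker = speaker

    def on_floating_ip_range_added(self, prefix):
        self.speaker.announce(prefix)    # learned upstream automatically

    def on_floating_ip_range_removed(self, prefix):
        self.speaker.withdraw(prefix)    # cleaned up automatically


speaker = ToyBgpSpeaker()
gw = ToyGateway(speaker)
gw.on_floating_ip_range_added("203.0.113.0/24")
```

The operational win the panel is pointing at is exactly this symmetry: route state tracks tenant state, so tearing down a tenant can never leave stale routes behind.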
And then the other piece is the testing. We have our own ideas of what we wanna test. For Dell, we're interested in testing the solution as a whole. We want to know that when that solution comes out, the customer's gonna have a great experience every time and that every piece of it is gonna work together. Yeah, so along those lines, even from the Midokura perspective, there was learning a new platform, how to integrate with it compared to some of the other platforms that we've been working with. So there's a bit of a learning curve on how to get going with a different platform. For example, we had Puppet modules, but how do those get incorporated into the new development that we've done here? So that was some of the work just getting off to the start. And then, yeah, as Randy mentioned, testing in general: things from a networking perspective are different from the entire cloud solution in general. So things that we might be focusing on would be different than what a general solution might be looking at. And our focus has been on creating a flexible solution, lots of plug-in points. The big challenge that we face is, how do you onboard people onto this infinitely flexible solution? And what we've found is, you don't get this for free. You do have to have the let's-have-a-talk, let's-help-you-understand sessions. But what we've found, and we've been very pleasantly surprised by, is how quickly we were able to take off the training wheels and go from having private meetings to doing development in the community, and make sure that everybody's on a level playing field and has an opportunity to be involved. And that being said, all of us being global companies, how do you tie in people from across the globe? These are logistics that just needed to be figured out, challenges that were overcome during those early onboarding sessions, for example. 
But in the end, the turn to community, the IRC channel, for example, those were excellent resources for getting going and continuing on. So, one thing: this is supposed to be an interactive session, folks. Does anyone have any questions, or any experiences themselves in any of this that they want to bring up? And we're not afraid to run out to you with a mic, so, yeah. So you might cover this later on, but I was wondering about some of the specific methods that you guys used to kind of come together. Just this presentation would be a good example: how did you talk to each other? And I think that also might pertain to working in general, as three disparate companies. Take that first? You want me to go first? Go for it. All right. Yeah, so I think initially, as Mike said, it started as more of a hands-on engagement for onboarding with Red Hat. As even Randy mentioned earlier, Dell did a lot of the facilitating for the initial introduction. So that was just one start. They knew of Midokura as an alternative solution that provided these higher-layer services and the value-add that comes with Midokura. So the introduction to Red Hat happened, and then a lot of it was just a lot of hand-holding initially, right? So we had weekly calls; Mike hosted weekly calls with the engineering folks. And then the alignment of time zones was kind of an issue at first, so you have a limited window when the prime people that are involved can be on a call. For example, we have developers in Barcelona and Tokyo, and Mike's on the East Coast of North America. So that was, I guess, a lot of the initial onboarding. In terms of the tooling, a lot of it is fairly commonplace stuff at this point. We're using Hangouts or Skype or BlueJeans for teleconferencing to do calls, and mailing lists for communications that we want everyone to be able to see. 
We make heavy use of IRC, particularly for things where we know there's going to be a lot of back and forth and I may need to paste logs and have somebody look at them: what happened here, why did this not work? Using IRC, pastebins, all of that good stuff. So from the Dell side, for me, my biggest interaction has been with the Midokura team on Slack. That has actually been very interesting, because by using that, if somebody has suggested I do something, I can go back and use that, essentially, as my history, and I don't have to keep track of everything. There's teleconferencing, and the other big one is sitting down and forcing yourselves to do a meeting on a weekly basis: set touch points, set goals, and then follow up and make sure those goals are happening. And over time, we started off with very much a company-to-company meeting, and we really saw that evolve a lot in terms of participation in the upstream TripleO meetings, which are all IRC-based. So I think, again, going back to that theme: we started with something pretty structured, and then we took the training wheels off and allowed the community to develop around that. Anyone else have anything else, or any other ways that they have had success? So this next one is just as important as our tooling and QA cycles: who do you bring to the table? For me, I wanted to get my product marketing involved, because they need to define what we are going to put together and sell. Our engineering teams, our quality assurance teams, and most importantly, our support teams, because even if we put it out there, if support is not there, our customers are gonna have a bad experience at the end of the day, and we don't want that. Toss it to you. To me? Yeah. So, Midokura is not a huge company. Well, I guess we're 50, 55 globally, so there were definitely a lot of people involved, and they continue to be involved as our cloud solution is deployed. 
Yeah, everywhere from, we were talking about, the alliances team, the engineering side, our engineering manager, our developers. I was involved initially as a systems engineer for the introduction. Yeah, just across the board, basically. For us, we're a smaller company, obviously, so lots of involvement. And I think we've touched on the key points here. We place a particular emphasis on engineering, quality assurance, and really support, because a lot of the cost of any technology solution is, how do you keep it going? How do you deal with the ongoing care and feeding of a thing? So we spend a lot of time there, prepping for how are we gonna get this out the door and how are we gonna support it when people install it. So at the end here, we gave a list of the key individuals from each company and some attributes that we felt were required. They need to be results-oriented. They need to have flexibility, because they have to understand the fact that each company is going to be different and we're not gonna be able to do it my way. And then finally, they're gonna have to adapt to the community, because at the end of the day, this entire project and this entire exercise is going open upstream and is being published. In fact, most of the work has already been done upstream, in TripleO and Ironic. By the way, we do have candy for those who ask questions, which I owe you some of. So, where do we go from here? Want to take that while I handle this? Sure. I have priorities. Yeah, fair enough. Yeah, in terms of where we go from here, a lot of this is how do you get buzz generated, how do you get people interested in the community, how do you identify markets that you can get into? I have tended to focus more on the community side, prepping what we have available in RDO for use for learning purposes and things of that nature. And in addition to that, there's just a ton of ongoing work: how do we identify the markets? How do we get into the correct markets and do that? 
And Cynthia, you can probably speak to that considerably better than I can. Well, in terms of, yeah, basically deployment. Our solution is intended to help end users deploy Red Hat, Dell, and Midokura within an OpenStack cloud solution in an easy fashion. So, for more details, you guys can get involved as well. That's what we would love, to continue the community momentum, through MidoNet being an open source solution, TripleO being the other part of the cloud solution, and then of course RDO as well. So yeah, these are the key components that help enable these cloud solutions to be deployed more easily. So, other things that we have to consider about where we're gonna go when the solution is done: it's never done. This is an ongoing process. We have a new release of OpenStack every six months. We have new releases of RHEL, we have new releases of MidoNet, and we have to put all of that together and ensure that it is working at all times. So even though we think it's done, it's done today; tomorrow is a new day. Right, yeah, so the initial work was done. The downstream versions were basically RHEL OSP 8, Midokura Enterprise MidoNet version 1.9, and the Liberty release. So yeah, as Randy mentioned, obviously this is gonna continue with every OpenStack release, so that involves continual development and work to be done. So we always welcome more contributors, too. So, any other questions, comments? Oh, we shouldn't give him the mic. Give him the mic or it won't go. I would go to my product manager and ask him: excuse me, is there a three-way roadmap? I haven't seen one. So, he's my product manager. But he asked a question. But yeah, so what also makes this interesting: with RHEL OSP, some of the key important things are upgrades between OpenStack releases. So how do you do that? We have to ensure those things for the next go-around, that it's a smooth transition from the initial release to the next release. 
So these are things to keep in mind not only from the OpenStack perspective; there's MidoNet's versioning as well, so multiple things to keep in mind. Question over there. Where's the question? You have to speak into the mic, though. Hi, yes, my question is around MidoNet and how it relates to Nicira, specifically NSX, and what that means in the wake of the EMC acquisition. Do you see a move away from Midokura and towards a different type of software-defined networking, whether it be VMware or otherwise? Well, I might be really biased, given I work for Midokura. So in terms of, well, some things did come out of Nicira, like the work with Open vSwitch, and MidoNet leverages the Open vSwitch kernel module. But other than that, it's a complete replacement of those alternative solutions. Well, I'll let the panel also help answer that question. I personally don't see a drive away towards the aforementioned solutions; it's a complete swap-out of those solutions with MidoNet, so there's no dependency there. So it's really more, what's Dell's perspective, what's Red Hat's perspective on that acquisition? I can toss that one to you guys now. In all honesty, I'm not able to even comment on it, because I haven't been involved in any of that. If you are very interested, come see me and I'll give you my card, and we can get you hooked up with somebody who might be able to help you there. I'll chime in again from the community perspective. It's largely about what can happen upstream, and we can be adaptable and flexible to a variety of installations. It really is largely about when the need exists and we start scratching that itch; that's how the magic happens. Any other questions? Well, I guess to carry on from that, too: the intention was that it's a flexible architecture. That's even Red Hat's motto, right? So the intention is that you have that flexibility for choice. You get candy. So, not really a question. 
It's more a comment about the things being upstream and things being flexible. So at least on the TripleO side of the work, I'm an engineer with Red Hat and I work on the TripleO project. And I worked a little bit with Joan Devesa, who's one of the devs from the Midokura engineering team. And Joan did a lot of the work, at least on the TripleO side, for the integration with the TripleO Heat templates, et cetera. And that implementation is there and it's all upstream. It was all done with code reviews, et cetera. But now, because our TripleO templates are changing slightly, we're trying to make them more flexible by making our services composable. So for example, the Midokura implementation, at least the way it's implemented in the TripleO Heat templates, is now being moved under the puppet-tripleo project, at least the Puppet side of it, as part of making those network role services composable. And that's been picked up by somebody else who is doing that. So, just to kind of reaffirm the fact that it is all upstream and it is quite flexible, and it's already evolving, when it was just recently more or less completed as a solution anyway. So, just a comment. Yeah, thank you. And yeah, I think that does speak very well to the fact that, as with all things open source, it evolves over time. It depends on people contributing and people being interested in being part of that community. That's what drives the innovation that we do in the community. Ian? Since it's a three-way, right? So I'm just curious about the support perspective. Let's say some customers implement this already. Does it first go to Red Hat, and then if you need to debug in MidoNet, then how does it flow? So there are processes in place. This is more on the product side. 
We have processes in place that do involve that, where each company is actually able to take L1 support, communicate on a shared case, and pass the notes across companies where appropriate, so that we're not acting as isolated individuals on any particular support issue. Most of the support is done with the axiom I've heard described as one throat to choke. It's all run through Dell, so customers can call up Dell, and our level-one support will triage and pass it over to the appropriate company, and there are handoffs put in place to handle that. I think you're in a similar situation; you're capable of handling L1 issues. Yeah, so I think for this specific solution, Dell is the level-one support, and as it gets escalated for networking, it would go to Midokura. But yeah, Midokura on its own does handle networking support, basically. That's part of our subscription model for our paid support. So those are alternative ways to get support. Any others? Okay. We still have a lot of candy, so we need more comments at least. Okay, thank you all very much. Have a good day. And seriously, we have candy. Come get candy.