Cool, great. So we're probably going to kick off now. I know some people are still going to be trickling in from the general session, but I figure it's better to start now and at least get the story underway. We're going to be talking to you today about building a true hybrid cloud with OpenStack and AWS, and we're going to be using Sky TV as an example. They built this platform originally on AWS and decided to extend it out into OpenStack, and we'll be going into why Sky TV made that decision and some of the challenges and learnings from this journey.

Just in terms of introductions, my name is Nigel Wright. I work for a company called Dimension Data in New Zealand, where I look after the cloud technologies and DevOps stream. I've come to them from an HPE background, which is a bit of a dirty word sometimes, but that's my background: OpenStack, cloud automation and software development. I'll hand over to JP now.

Thanks, man. Yeah, like you say, it's JP, Jean-Pierre Sienekal. I'm an infrastructure architect at Sky TV, New Zealand. I'm afraid my brain is still working on New Zealand time, so please bear with me. We're just going to kick off with a bit of background on why we decided to change, why we actually had to change, how we changed, and then we'll delve a bit deeper into the lessons-learned category.

I'm not going to spend too much time on these slides, because we've all been bored to death with stuff like this, but connected personal devices have been growing at almost an exponential rate. We're currently in the billions, and I think the projected numbers are into the trillions by 2030. What does that mean for a broadcaster like us? Well, it changes the whole landscape. People are no longer focused on just one device to consume content.
It's all about having multiple devices, so the way that I consume content has changed. If you then couple that with the fact that we will be disrupted, aka Netflix, we've pretty much got a good case for having to change.

So let's start off: what did we do about it? We knew we had to change. What did we do? First of all, we changed our technology vision. Now, there are a lot of buzzwords in there: broadcast quality, with agility and an online speed of change. Basically, what that refers to is: whatever you deliver, deliver it as a quality product within a short time frame, and keep the momentum going by always evolving the product. And this is in reference to our whole technology stack. It's not just about applications, and it's not just about platforms or infrastructure; it references the whole stack.

We also had to define a few principles. Again, I'm not going through all of these, because we'll touch on them as we go through the presentation, but some of these phrases were bandied about in meetings all over the place. It had to be done, because we had to get people to start understanding what they mean. You know, what is cattle versus pets? What does it mean if I say there are no humans in the cloud? We'll touch on that later.

So we had to start building a capability. Being an enterprise company in New Zealand that's about 25 years old, we used to be very project focused, with a waterfall delivery model. We had to change that and become more product focused, with an agile delivery model, and always keep the user experience within our sights. I don't like using the term customer centric, it just sounds weird, but yeah: keep the value and the user experience in our sights. And as we started gaining velocity through this transition, we were able to increase our capability models.

So that brings us to OpenStack, and why are we here?
We had the vision, and we needed the mission, and that mission came in the form of the Olympic Games in Rio in 2016. We were tasked with providing an OVP, an online video player, for Olympic content, and straight off the bat we knew that our legacy system, which is called Sky Go, was just not going to cut the mustard. So the decision was made, with the backing of our directors and our vision, to provision the OVP platform in the public cloud, because we had to leverage its performance and scaling capabilities, which we just didn't have on prem.

We did that in a well-architected manner as well, because we had to get buy-in from various stakeholders right from the start, and that included security and being compliant. But with that, and by securing it through automation, we were able to deliver a product that was really performant, and during the four weeks of the Olympic Games we were able to make quite a few changes without any interruption to the user experience. As you can see on the slide, we did 135 code changes, 40 prod deployments and 176 pre-live redeployments.

That might not sound like huge numbers, right? But when you compare them to the other lines there, which show delivery to the existing legacy model, those are huge. There you've only got one pre-live deployment, a handful of production deployments and only a tiny number of code and config changes. So the legacy environment simply wasn't able to handle the amount of change that needed to be made. I really love that slide, because it shows you that with OpenStack and AWS together... you know, everyone talks about hybrid cloud, but actually making it work and making it deliver something valuable is harder to prove.
So this really proves it. Yeah, I think the total number of changes for that legacy system was about seven in four weeks. So we thought that was quite successful, and we wanted to get the same capabilities that we had in the public cloud on our private cloud as well. And like someone said in the keynote this morning, there's no cookie cutter, no one-size-fits-all. That's where OpenStack came in, because it bridges a lot of the gaps for us.

OpenStack is located on our private cloud, running on our infrastructure. Do I have a pointer for this or not? No? Thanks, I won't point it at your eyes trying to figure it out. So anyway, on the right of the slide are our legacy systems: applications running on VMware. We decreased that footprint by redeploying it on hyperconverged infrastructure, and then increased our footprint on the OpenStack platform. To the left we've got public clouds. Now, we've got different use cases for public cloud integration, so we had to be agnostic in our approach. We couldn't just get sucked into one provider; we had to be able to deploy the same stuff on multiple different clouds.

This is just an idea of how that legacy Sky Go application has changed and evolved. It's actually a true hybrid application, with a front end sitting in public cloud on AWS, an authorization stack which bridges public and private, and it actually talks back to IDM and OED services, which are all on legacy systems, internal, on physical hardware.

Cool. So that's the background of an application that Sky TV have transformed to work in a hybrid model. What I'm going to talk about now is some of the challenges that have popped up in this journey.
It certainly hasn't been completely seamless. It wasn't a case of, you know, click next, next, next and you've got OpenStack and AWS talking together and everyone's happy. Definitely not the case. And I should go forward instead of backwards... there we go.

So the developers already had an existing pipeline. They already had processes in place and a way to deploy their code in AWS. The problem with that is that there was no real architecture in place around those models. There was no planning done with the wider business; the developers had a need and they met that need, and the platform just increased and increased and increased, with no regulation and no real planning or architecture. We like to refer to it as the Wild West. And I mean, that's great: they were able to do their jobs, they were able to develop code, applications and products for Sky TV. But we really needed to start thinking about how to manage that, because it needed to be managed.

So we brought OpenStack on board and used that as an opportunity to start planning these things and defining an architecture that could be used for both private cloud and public cloud, which was important. We didn't want one process up in AWS, a different process in OpenStack, and then someone using a different process in VMware down here. Definitely not the case. We wanted to make sure those were standardized, and we used the deployment of OpenStack to do that and force that through the business, which worked quite well.

The other key thing I want to touch on here is that we needed to minimize the amount of time taken to get OpenStack into production, and there are two parts to that: there's deploying OpenStack, and there's also making sure it's ready and fit for purpose for the workloads that the developers want to put on it. So we handled that in two different ways. The deployment was done with a vendor; we worked alongside HPE to actually get OpenStack in place, and we did that in a sprint-based approach, which was quite different for both HPE and for Sky's infrastructure team. We were actually able to get OpenStack up and running within a couple of weeks, and fit for purpose for development workloads within about four weeks or so, which was great.

That's fine for actually getting it stood up, but then we needed to figure out, okay, how do the developers that use the platform want to use it? What are they going to get out of it? What do they need to get out of it? These are the things that still had to be defined, and we had to do that very quickly. So the very first part of the whole implementation was around standing up the infrastructure and making sure it actually works, and then we had to go: okay, how do we make it work for developers? What do we need to do here?
And it was really important at that stage to have those conversations with the developers and the development team leads, and to bring them on board as early as possible, because I think it's pretty clear that without that buy-in, no one's going to use the platform. I've seen it many times before, when an enterprise comes in and goes: hey, we've got this amazing platform, and hey, it's developer first and it's got an API, so please use it. Please, please. It doesn't really work, right? You're trying to force people into a different model of working. So what we had to do was make sure it was fit for purpose for those developers, and as part of that, we didn't want to force tools or processes between the two clouds. I shouldn't hit my microphone, otherwise the sound guys are going to spank me. We wanted to make sure we were using the right tools for the right job.

The key things we needed were to make it repeatable, easy to support and fast, and the easy-to-support bit is kind of the most important bit. That's why we'd gone down the path of a vendor: we had one throat to choke, and it was a lot easier to get that done. So, let's figure out how to use the pointer... there we go.

This is the existing pipeline that the developers had in AWS. There are a few different components there. They're using the Atlassian suite, with Bitbucket and Bamboo. Basically, the developers go through the normal process: they develop code, they commit it and merge it, pull requests are issued, and the Bamboo system builds the artifacts, in this case a Java WAR file, and stores it in Artifactory, and Artifactory is backed by Amazon S3 buckets.
Then Bamboo runs a deployment plan and talks to CodeDeploy; CodeDeploy grabs the binary and deploys it to the deployment groups specified, all auto scaling groups, load balanced, etc. Everybody's happy: you've got a highly available application in the cloud, and that worked really well. One day I'll figure this out... there we go.

Now OpenStack. You see it looks pretty much the same, not exactly, but pretty much, and that's the end goal. We wanted to make sure that what we've done with OpenStack didn't differ wildly from AWS, because otherwise it's going to be a real pain for developers to get their heads around changing processes, and why would they use it? They've already got a process that works in AWS.

The thing that's really different here is that we've introduced an orchestration layer on top of everything. So they check the code in, they commit it, a pull request goes through, and Bamboo will still build the artifact as normal. But then the deployment plan in Bamboo will actually talk to the overall orchestration layer that sits across the top of everything. What that does is go out and check that the artifact exists locally, and that's where Swift storage comes in, because the key parts here, Bitbucket, Bamboo and Artifactory, were all up in AWS, right?
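As a rough sketch, that check-and-cache step might look like the following. The helper functions here are made-up stand-ins for the real Swift object-store and Artifactory calls, which aren't shown in the talk; only the shape of the logic is taken from the description.

```python
# Illustrative sketch of the orchestration layer's check-and-cache step.
# swift_has() and pull_from_artifactory() are invented stand-ins for the
# real Swift and Artifactory API calls.

def swift_has(store: dict, container: str, name: str) -> bool:
    """Stand-in for a lookup against the local Swift object store."""
    return name in store.get(container, {})

def pull_from_artifactory(store: dict, container: str, name: str) -> None:
    """Stand-in for copying the build out of Artifactory (S3-backed,
    up in AWS) into the private cloud's Swift store."""
    store.setdefault(container, {})[name] = b"<war bytes>"

def ensure_artifact_local(store: dict, project: str, artifact: str) -> str:
    """Only cross the WAN when Swift doesn't already hold the artifact."""
    if swift_has(store, project, artifact):
        return "cached"  # redeploys reuse the local copy
    pull_from_artifactory(store, project, artifact)
    return "fetched"
```

The point of the design is in the return values: the first deployment of a build pays the WAN cost once, and every redeploy after that hits the local copy.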
So what we don't want to do is pull the artifact down from AWS and install it locally every time we do a code build, because that's just stupid. What we've done instead is check to see if the artifact exists in Swift for that particular project; if it doesn't exist, we pull it down and cache it, so it's now in the local private cloud environment. Then the orchestration layer goes out and talks to Heat, which actually stands up the instances required and deploys the software with software deployments and software config, and again, that's all auto-scaled, load balanced, the whole lot. So it's a similar process, slightly different in that we have the overarching orchestration layer.

Then we have the final deployment pipeline, and it kind of makes sense: it's just a combination of the two, right? I've just joined them together. Effectively, Bamboo is the decision-maker here, so we can decide whether an application is going to be deployed to AWS, whether it's going to be deployed to OpenStack, or whether it's going to be deployed to both. The Sky Go product architecture we showed you a couple of slides back, with the authentication stack in OpenStack and the front end in AWS, is an example of that. So we can choose where it's going to go.

That moves us on to some more of the challenges that we encountered, and the first one there is terminology. It sounds like a really minor challenge, but it turned into an absolute nightmare, and I'm sure JP can attest to this. At the beginning of the project we were having conversations with project management and different parts of the organization, talking about OpenStack, talking about applications, products, projects. Okay, great. So we talk about an OpenStack project, and we stand up an OpenStack project, and then the project management office is going: oh, okay, how do we get involved in that? What does that mean for us?
So, okay, it's not that type of project. We had to figure out how to change that language and make a common model for a product, an application, deployment models, etc. There were many, many meetings discussing what we were going to call things and how we were going to refer to them. In the end we defined a product as the highest level, and a product corresponds to an OpenStack project. That's how we got away from the issue of the same terminology meaning different things in different parts of the business: we had to map it out and model it out. It's something we didn't expect to be an issue, but it turned into quite a big one.

That flows on to the next point, which is architecting OpenStack for applications in a way that's understood by the business. That's where the terminology fit in: we had to make sure the business understood what we were trying to do, what an application looked like, and all the different components. Those two were tied quite closely together, and the key bit around that was communication. We had to make sure we were talking to the business on a regular basis. We couldn't be an isolated group of engineers and project resources going: okay, we've done this, we've done this, we've done this, and then take it to the business, who then say it's completely unworkable. We needed to make sure they were involved very early and often.

Which leads us to the other issue that we had: DNS. This shouldn't be an issue, right? But it is, unfortunately. Sky TV have a unique set of DNS requirements.
There's no internal DNS server within the Sky TV environment; they're using public DNS with Route 53. And we hadn't really thought about how a hybrid application living in OpenStack and AWS is going to talk: how services are going to discover each other, and how to make sure that the front-end services are talking to the right load-balanced instances in OpenStack. We kind of took the approach of "she'll be right", which basically means we'll sort it out later and everything will be fine. Not fine. Definitely not fine. We ended up having to do things like host-file hacks when we're doing deployments, and running cloud-init scripts so that we're hard coding entries. It's just an absolute nightmare, and this issue is still ongoing. I don't want to paint a picture that's completely rosy and everything's working 100%. No, it's still a problem that we have to address, so we're still working through that challenge.

That brings us to some of the other challenges, in other areas of OpenStack: networking, which was quite a hairy piece of the puzzle. What we wanted to solve was this: we've got a lot of different projects being stood up and torn down at a high rate, and we wanted to make sure the networking wasn't held up by any of that. So JP is probably best placed to talk about how they addressed some of those networking challenges.
Yeah. Well, you could actually say that adoption of something new was really slow from the network team, and they were pushing back quite hard. Especially if you said the word SDN; they were just like, no, we're not having that. Originally there was a bit of headbutting, and shouting over the table and all that, but then we thought of this analogy: about 10 years ago, when computer and server virtualization started gaining ground, the infrastructure and server support guys were going through exactly the same thing. Some of them were saying, man, this is awesome, this is going to change everything, it's so cool. And then you had another group that said, no, this is witchcraft, it's never going to take, we're not even going to look at it. So at least now we were in a position where we could look back at that experience and go: hey guys, you know what? This is probably not going anywhere.
We're going to use it anyway, so better get on board. Someone once made the reference that you can't be half pregnant: you're either pregnant or you're not, and that's kind of the stance we took.

So that was one of the big issues we had around that. The other bit, and I touched on it briefly before, is how to handle the number of networks we were creating on the fly. We talked to the network guys about this and they were absolutely freaking out. The decision was made by one of the architects to use a /16 for external networks, which caused the network guys to have kittens. And security... security were not very happy about any of this either, because they didn't really understand the model we were trying to build. The next slide's actually going to describe that. Not that one... come on... there we go.

What we've done there is make them aware that yes, we've got a /16 network, and yes, we have a lot of dynamic networks being stood up and provisioned on the fly, but they're all controlled: they're all controlled by the template, they're all controlled by the methodology. We had security groups in place, and we had role-based access for Neutron in place, to make sure that only the instances that need to talk to each other can do so, and only on the required ports, one port ideally. So they were made to feel a bit better about the whole situation, because they could see that security was baked into the template, into the model, into the infrastructure, and they got away from the "okay, you know, a /16, we'll deal with that" conversation.
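In spirit, the rules baked into those templates reduce to a default-deny flow table: only the flows you explicitly list are allowed, and everything else is denied. A minimal illustration, with tier and port values that are invented for the example rather than taken from Sky TV's templates:

```python
# Purely illustrative model of the default-deny idea behind the security
# group templates: an instance pair may only talk on a port an explicit
# rule allows. Tier names and port numbers here are assumptions.

ALLOWED = {
    ("web", "logic"): {8080},   # front end -> application layer
    ("logic", "data"): {5432},  # application layer -> database
}

def flow_permitted(src: str, dst: str, port: int) -> bool:
    """Anything not explicitly allowed is denied."""
    return port in ALLOWED.get((src, dst), set())
```

That default-deny stance is what let the security team stop worrying about the size of the /16: the address space is big, but the permitted traffic is tiny and enumerable.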
Yeah, I can just touch on that. In an environment you've got a dev stack, a test stack, a pre-live stack and a prod stack, and there can be multiple stacks as well. So what we ended up doing was using NACLs to separate the stacks, so that dev can't see test, test can't see pre-live, and pre-live can see prod. We also had security groups for this, because we just used a normal three-tier application architecture, so we were using security groups to allow communication from the web to the logic to the data layers. Yeah, and that's how we got past that issue with the network team. Not really an issue, more a slight challenge. A concern, yeah.

Cool. So that led us to another of the challenges: SDN. SDN's easy, right? Piece of cake, next, next, and you've got virtual networking. Definitely not the case. We had a pretty unique problem at Sky TV, where there was a requirement to have software-defined networking across the entire environment, virtual and physical. So they decided to go with Cisco ACI to do that, which is a great SDN engine for that capability. They'd gone through, at roughly the same time, an implementation of ACI, learning how to use software-defined networking, and then we come along going: hey, OpenStack, Neutron, woo. And suddenly they had two SDN engines to learn. There was some pushback around that: why do we have two engines? What's going to drive what? Are there any integration points between the two?
So we were running in a pretty strange environment, with two SDN engines being installed at the same time, and a network team who suddenly had to learn the differences between the two, which one to use when, and what the integration points were. That was hard, that was very, very difficult. And a problem that we have, and it's still a problem now, is that the integration between ACI and Neutron still isn't done, and we're now moving very, very close to production. That becomes harder and harder as you get workloads in place, because you're working backwards, untangling a mess and re-architecting. Yeah, that was very difficult, and I think the key takeaway from that is: maybe don't do that.

And education was really key. Yeah, absolutely, education is key. The networking team, to be fair, were left out of some of the conversations we were having at a very early stage, and left out of some education sessions that they should have been involved with. That's something we took away as a lesson learned. Yep, if we could do it again, they would be in the door from day one, and maybe we'd space out the two SDN engines, and we'd be one happy family. Exactly, yeah, that's right.

So I'm just going to move relatively quickly through these, which brings us on to operational challenges, because we've solved the technology and now we need to figure out how to actually run it in production, or close to production. We've talked about the code pipeline and developers, developers, developers, and that's great, but there's a whole other part of the business, right?
And they do have a requirement to use some of the capabilities of OpenStack and of hybrid cloud. So we had to figure out a way to deploy this hybrid cloud and make it easy to use both for developers and for the rest of the business. What we had to do was identify how the rest of the business would use the platform and consume workloads, and we decided on a self-service portal layer, so that they can log into a website and spin up instances for testing, or for general hacking around: if they want to see how a new flavor of Linux works, or they want to install Windows, or what have you. That's us taking care of the rest of the business. We've gone: okay, you can use this platform as well. It's not just a shiny thing we've put in for developers while the rest of the business has to go down to the coal mines and use VMware. Definitely not the case.

The other part of that is the developers, and we've catered to them pretty well. We've integrated it into the code pipeline, and we've made sure everything's API first, API friendly, because that's the way things should be consumed. So that's how we handled those two requirements. Now I'll let JP talk about some of the other operational challenges, around building a team. I'm quite aware of the time.
I'm going to fly through this: building the team. It was quite clear right from the start that we would not be getting any headcount, and we would not be able to stand up another team. In hindsight that's quite a good thing, because at the end of it, one of the benefits of what we were doing was that we were breaking down silos, and we didn't even realize it; I'll get to that now. So because we couldn't do that, we had to revert to using virtual teams. To get a virtual team up and running, we had to look at how you pick people to become part of a team, and that's where attitude was chosen over aptitude. We didn't necessarily take the best people skill-wise and certification-wise; we took the people that had the attitude, that drank the Kool-Aid pretty much, that had the buy-in and believed in what we were trying to do right from the start.

Because we were doing this with an agile delivery method, we pretty much had two-week sprints, so, with a lot of goodwill from the managers as well, we could take a resource from a team and run them through a sprint. At the end of the day it kind of looked like we had this, I don't know, cloud-ninja cloud ops team, because it was made up of resources from different teams: infrastructure, network, security, developers, product people. People would come in and go out, not always the same people, but this turned out pretty well, and it's actually now our operational model for support as well. And like I said, one of the key benefits and takeaways from this was that we were able to break down silos between teams, because people were kind of forced to work together.

Cool. So we've built a team and figured out the different personas involved, and now we need to figure out how people actually use the platform. There's a ton of different ways you can interact with OpenStack, obviously: CLI, API, you can log into Horizon, you can do what
you need to do. But we needed to make sure that the way people were accessing it was the right way for their particular persona. We didn't want everyone just logging into Horizon, spinning up instances and going for it, nor did we want unrestricted access to the API where people could do whatever they want. It goes back to the principles JP outlined earlier: no humans in the cloud, right? We didn't want people logging in, doing tweaks, standing things up; you know, "oh, this instance doesn't work quite right, so let's log into it and make a little change here". That's not recorded anywhere.

So what we decided was that access to Horizon was only allowed for the cloud ops team, a very small virtual team, and it was strongly recommended to them that they use it as a last resort. If there was something they absolutely had to do in Horizon, sure, log in and do that, but generally we were trying to move them away from that and into an automated world. They could run playbooks.
They could use Terraform; they could do anything they needed to do to stand up and tear down instances. We wanted to make sure that workload creation was only done through orchestration and through the API. Obviously there are some exceptions to that, because not everything could be done that way. When you're standing up a new environment, maybe you need to check how to lay out the networking, how to stand up servers, how to configure them, so maybe you'd log into Horizon to do that and build a template, and that's it: you wouldn't get back in again.

We also wanted to make sure every instance that's created has meaningful metadata attached to it, so we know exactly why it's been created and what purpose it's serving, because it's a private cloud, the OpenStack part of it, and we need to know what's using resources and why. So every time instances were created, they were tagged with metadata: the scaling group, the application being deployed, the project, all useful information, so you can look at an instance and go, this is exactly what it's used for. On the flip side, it was easy to go: okay, let's have a look at all the instances that have been created without metadata, and we can point the dev killer at them and destroy those instances if we need to. It's just a script; it only sounds more impressive when you call it the dev killer. But it's easy to pull that data out and go: okay, there are some instances here that shouldn't be here, and we can get rid of them and free up those resources.

The outcome of that is that the platform is actually being used as intended. Developers are using it, hitting the APIs; hardly anyone's logging into Horizon; everything's done in an automated fashion through the self-service and orchestration layers, and that works for applications. So I'm just going to touch really quickly on the last sort of challenge that we hit, which
was automation, and that's a very, very common problem. There are so many tools out there, for so many different purposes, and when we talked to a lot of people about automation, they'd spit out the automation prayer, or sentence, or whatever: Ansible, Chef, Puppet, Salt. So, is that one product? Is that five products? We needed to break down all the different automation tools into their roles and go: okay, what do we want to do? We want to stand up instances; what should we use to do that? We want to ensure configuration is consistent across all the instances; what should we use to make sure that happens? It's all about choosing the right tool for the right job. In doing that, sure, you might have three or four different automation technologies in place, but by having an orchestration layer across the top of that, essentially acting as an API gateway, you can control all of those automation technologies in quite a granular fashion: if you want to spin up an instance, maybe use Terraform; if you want to deploy some software, you could use Ansible, etc. We tried to make that as logical and rational as possible by layering that orchestration engine over the top of everything else, so that we can add new tools if we need to, because it's easy to extend the orchestration layer out. That's the automation challenge we had: the right tool for the right job. And I'll just finish up with: what does success mean?
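The routing at the heart of that right-tool-for-the-right-job layer can be sketched in a few lines. This is an illustration only: the tool names follow the talk, but the task names and the mapping are invented, not Sky TV's actual orchestration code.

```python
# Sketch of right-tool-for-the-right-job routing: the orchestration layer
# maps a task type to exactly one automation tool. The dispatch table is
# an invented example, not Sky TV's real configuration.

TOOL_FOR_TASK = {
    "provision": "terraform",  # stand up instances
    "configure": "ansible",    # keep configuration consistent
    "deploy":    "heat",       # stacks, software deployments
}

def pick_tool(task: str) -> str:
    """Route a task to its registered tool; refuse anything unregistered."""
    try:
        return TOOL_FOR_TASK[task]
    except KeyError:
        raise ValueError(f"no tool registered for task {task!r}") from None
```

The value of keeping this table in one place is exactly what the talk describes: adding a fifth tool later is a one-line change to the orchestration layer, not a new process for every team.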
Did we succeed with this project? This slide asks: what does fast look like? And that doesn't look really fast, but the key takeaway is that this is one of the projects we had — the internal part of the hybrid application, the Sky Go application. In the past, to stand up this environment in AWS, it had to go through security, had to go through change control, had to go through design meetings, had to be signed off by the business, and then actually had to be stood up and verified. That process took about six weeks. What we've done, and what we demonstrated to the Sky team, is deploying that project in three minutes and 20 seconds. Everything's in the template: security's in the template, change management's in the template. You do the template once, you get it approved and signed off, and you don't really need to go through change control again.

So that was a huge game changer for Sky TV. I know everyone talks about big numbers — big number here, small number here — but this is something that actually took six weeks to do and now only takes three minutes, and that's a pretty great outcome, I think, for a hybrid application. Yeah, like, how do you change the direction of the oil tanker? It's not going to happen fast, but this was definitely something that helped. Yeah. So that's really all we had to talk to you about: success.
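For a sense of scale, the quoted numbers work out to roughly an eighteen-thousand-fold reduction in elapsed time:

```python
# Rough arithmetic on the numbers quoted in the talk.
old = 6 * 7 * 24 * 60 * 60   # six weeks, in seconds
new = 3 * 60 + 20            # 3 minutes 20 seconds
print(f"~{old // new}x faster")  # ~18144x faster
```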
Yes, questions? I don't think we've got time for many, but yeah. I think it's a bit of both. We always tried not to change the way the developers were used to working, but we also had the fact that we've got some DevOps guys who were trying to figure out where they actually fit into this — because who's going to spin up the servers? Is it going to be the infrastructure guys? Who's doing the templates? So there was a bit of to and fro, but after four sprints there's a much better understanding of what needs to happen and who should be looking after what. So I don't think the way they work has changed all that much.

And just to add to that, the way we approached it was a little bit clever as well, because we got a developer involved — one of the developers from the team — and he saw the value of it and was working really closely with us. Then we got his team leader involved, demonstrated this to him, and then he could see it, right? He understood how this worked, and that goodwill means it's easier now to get more resources from his team. So, back to the concept of building the team: make sure people are drinking the Kool-Aid. You want those people who are going to be talking about it, telling everyone they meet, "this is great", and then they can tell their manager, and the manager gets information and sees more results from that team. So it worked out quite nicely. And it spread as well: you're walking down the hallway and people would come up to you and just say, hey man, what's cloud ops about? I want to know more about this and I want to get involved. I think we started with a group of about four or five people initially, and now in our weekly meetings it's up to about 20 or 25 people.
So it's good in that respect. Yeah. Or they were. So yeah, it was HP Helion OpenStack. Sort of — it was Helion OpenStack, and it was the Operations Orchestration layer for the automation. We just used those two pieces really, and the reason that was done was support, right? It was a lot easier for us to get support from a vendor like HP than it was to figure out who was going to solve the issues upstream. Yeah — how to sell OpenStack to an enterprise: just get the backing.

Just a quick technical reminder: could you repeat the questions that people are asking, for the people listening remotely? Sorry, what was that? Just repeat the questions the audience are asking so people can hear. Yeah. Microphone.

So you mentioned there are lots of challenges in making ACI and Neutron work together, and your suggestion is probably not to do it? Oh, no, no, no — that's definitely not the suggestion; that's not the answer. What I was trying to say about the integration points between Cisco ACI and Neutron is that we needed to know what those points were, and we didn't know what they were. We sort of installed Cisco ACI first, made sure the networking team knew what they were doing with SDN — which they do now, which is great — and then when we introduced OpenStack into the mix, made sure they understood what Neutron was for, and then understood how the two talk together and work together. So it definitely wasn't "don't ever integrate the two", because I think it's a fantastic integration. It was just: how do we do that, and which points do we use? Yeah.
Cool. Did you guys think about the front end for all the integration between AWS and OpenStack, and what it means for the rest of the company — not the DevOps people who are going to use the system? Yeah, so the front end: that was a part we didn't really show in the slides, but we had a self-service platform in place for that, and it was all catalogue-driven. People could log in with their normal AD credentials to a shopping-cart kind of experience and go: I want to stand up a server, or I want to deploy an application, or I want to deploy a service. We made it super simple for them to do — not that the rest of the business are idiots, because that's definitely not the case, but you want a nice shiny UI at the end of the day to start spinning these things up and make it easy for people; otherwise they're not going to use it. So that's how we handled that rest-of-the-business kind of problem. Cool, thanks. All right, thanks so much for your time, guys. Cheers.