Okay guys, I guess we'll get started and kick it off. You're in the "Building a Secure Multi-Tenant Cloud for SaaS Applications" session. My name is Jennifer Lynn and I'll be moderating our panelists today, and before we kick off I'm going to ask each of the folks to briefly introduce themselves. Actually, we'll go in a slightly different order here. Edgar, why don't you start first?

Yes, thank you. Thank you, Jennifer. My name is Edgar McKenna. I'm a cloud operations architect at Workday. I've been involved in the community since 2011, and I'm still a core developer for Neutron.

Lachlan Evenson, I'm a lead operations engineer with Lithium Technologies, based out of San Francisco. I've been with Lithium about a year and a half, and I've been an OpenStack user and contributor for about the same amount of time.

My turn. Steve Hallett with Symantec. I head up cloud engineering, and as of December last year I'm now part of the OpenStack board.

Okay, so thanks everyone for being here. We wanted to focus this panel specifically on delivering SaaS applications, and I think, as we'll find through this session, this group of users has very specific needs. The way we're going to run this is essentially: we've got some topics broken into four major areas, and we're going to use them to guide the discussion more than anything. But first I'm going to ask each person to give a little bit of background about their environment and the use cases they're driving with OpenStack. So I guess to start again.

So Workday is a SaaS company — we produce software as a service. We have human resources and finance applications, we extend the applications to other areas, and obviously we manage very critical information for our customers, so you can imagine that security
is our top priority.

Lithium Technologies is a social platform where companies can engage with their consumers and users and form relationships in a hosted community.

So about a year and a half ago Symantec started down the path of building our own virtual private clouds, on top of which we would be building next-generation security and infrastructure-management-related applications. We went into hardcore development mode about a year ago and went into production last December. In that respect, we've been able to abstract away some of the complexity of Neutron and just look at Neutron as an abstraction layer rather than as its own implementation, and that's really allowed us some flexibility as we've built and deployed our cloud in a production capacity.

Okay, so in terms of specific customer segments and target use cases — Steve, you started to go through the fact that you're looking at VPC environments, but maybe you can expand a little bit on the OpenStack use cases and some of the key criteria.

Well, for us it was critical to be able to marry OpenStack with bare metal at significant scale: running Hadoop, Storm, Kafka stream processing, ingesting hundreds of terabytes an hour and processing billions of events a day. So we needed to be able not only to scale the network, but to scale OpenStack to support both our highly virtualized environment and bare metal in the same network. And that's just a snapshot of what we're doing right now. Lachy?

Yeah, similar story to Steve's. We have a bare metal environment and an OpenStack environment, and also, prior to having OpenStack,
We were already running several workloads in AWS, so we were familiar with VPCs and segregating networks, and we'd actually built a lot of our applications around the fact that the data and the app were completely segregated at a network level. So what we wanted to provide in OpenStack was a consistent experience with that of a VPC, and to provide that same experience on bare metal, so that the experience at the networking layer across all those environments was consistent to the application developers.

So in our case we already have hybrid solutions. We have applications running on bare metal, and we also have some elastic cloud solutions. You can imagine — when we're talking about human resources, finance, payroll, these kinds of applications get spikes, and obviously running on the same fixed infrastructure will not scale up. So we already have an elastic solution that we want to make more agile, more dynamic, even faster. We want to be able to scale up; we want to be able to provide even better SLAs for our customers.
So OpenStack is providing us the best cloud management system, and obviously we need to adapt it to our needs: keeping the flexibility that we have right now, keeping the security levels that we have right now — and even increasing them if possible — and also providing even more agile technology for our customers.

I think this has to be among the more demanding sets of users, given that these are internal application development teams that are really looking for an environment to roll out their applications faster and faster. So I can imagine it's not like some of the other segments that may be using OpenStack for an external audience — this is definitely a more demanding constituency.

And since we're past proof of concept, our customer and our target audience is not the person or the team that's getting up to speed on OpenStack, or that is looking to become a committer to OpenStack. Our customers are product developers who don't even want to know about OpenStack. They want it that stable, that scalable. So hiding that complexity from them is important, so that we can actually build and deliver revenue-generating applications.
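That idea — product developers consuming a small internal API with OpenStack hidden entirely underneath — can be sketched roughly. Everything here is hypothetical (`PipelineAPI`, `FakeCloud`, the two-server environment shape are illustrative names, not anything from the panel); a real version would sit in front of the Nova and Neutron clients rather than an in-memory fake:

```python
class FakeCloud:
    """Stand-in for an OpenStack client; records created resources in memory."""
    def __init__(self):
        self.servers = {}
        self._next_id = 0

    def create_server(self, name, flavor, image):
        self._next_id += 1
        server_id = f"srv-{self._next_id}"
        self.servers[server_id] = {"name": name, "flavor": flavor, "image": image}
        return server_id

    def delete_server(self, server_id):
        self.servers.pop(server_id, None)


class PipelineAPI:
    """Developer-facing API: request a disposable environment, dispose it later.
    Developers never see the cloud client underneath."""
    def __init__(self, cloud):
        self.cloud = cloud
        self.environments = {}

    def request_environment(self, env_name, size="small"):
        flavor = {"small": "m1.small", "large": "m1.large"}[size]
        # e.g. one app node and one db node per environment (arbitrary shape)
        ids = [self.cloud.create_server(f"{env_name}-{i}", flavor, "base-image")
               for i in range(2)]
        self.environments[env_name] = ids
        return ids

    def dispose_environment(self, env_name):
        for server_id in self.environments.pop(env_name, []):
            self.cloud.delete_server(server_id)
```

The point of the pattern is that the wrapper, not the developer, owns the OpenStack details, so the underlying cloud can change without the product teams noticing.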
Yeah, and I think that, at least in the OpenStack community, there's been a lot of discussion about the development community versus the user and operator community. In this case you have highly skilled developers as users, who may expect a much higher level of capability because they are development teams.

Okay, so that was a little bit of build-up. In the next section we wanted to talk a little bit more about some of the unique requirements, and since we're talking in all three cases about a SaaS environment with virtual private cloud expectations, maybe we can expand a little bit more on issues like security, and how you're measuring success in that environment, for the virtual private cloud.

Yeah, maybe I can start. So our use case is a specific private cloud; we run everything on premises. Obviously we are a distributed system, of course. Some of the requirements we have already mentioned a little bit: we have very strong requirements in terms of security. Adding a layer of virtualization, we want to be sure that we keep covering those security requirements. We actually want to find any possible gaps before even trying to send these OpenStack-based deployments to our internal environments.

There is another very important use case — not just for our tenants, but also for our internal teams. We have a bunch of application developers.
They're exploring enhancements to the current applications, but also trying to create new applications to fulfill the requirements of our customers, and they need a very flexible, dynamic, scalable environment. OpenStack is giving us that management system, and we are extending its capacity to make it 100% agnostic to the developers. They just have a set of APIs — and not even the OpenStack APIs, but APIs they already designed in their own pipelines — that actually get converted into OpenStack APIs, and they end up having a new environment up and ready, one that can be disposed of at any time. So they are dynamically creating these workloads and removing them, without having concerns about resources, because that is the whole idea behind it.

So on our front, it was not only providing a consistent experience, but one that also enabled us to have the security guys sign off only once on a specific architecture at a network level, so we didn't have to iterate back through the whole certification process again internally. We're segregating our traffic out into different functional units and entities and environments, basically down at the hypervisor level. We're splitting out VMs into different networks that, at the IP overlay network level, are functionally different — in different VRFs — and the traffic is securely segregated.

And automation: providing VPC in OpenStack was crucial, because our orchestration tool, which actually deploys our applications out to given environments, likes to pool the resources, and VPC gave us a very clean way to round up the resources in a way that was consumable by our orchestration tool. So it wasn't horribly difficult to point it at OpenStack and have the orchestration tool understand it, given what we'd already done internally in MPLS and out in VPCs in AWS.

Steve? I'm wrestling with how to answer this. As most of us know, when you start your implementation, you typically go down the path of, well,
how do I stand this up private? Or you've got a partner that already has public space and you go down that path. We started private, a year and a half ago, and quickly discovered that the nature of our workloads and the location of the data we needed to support meant that we could no longer think just about private versus public, but about this notion of hybrid. Everybody has a different definition of hybrid, but our customers — customers who were coming to us to learn, well, what are you guys doing with OpenStack (not to buy OpenStack, because we don't sell OpenStack) — what are you doing? And we started talking to some of our biggest customers about what we're doing, and they say, well, how can you support me? Because, oh by the way, Symantec, or Veritas, guess what: we're moving hundreds of terabytes and petabytes to this cloud and that cloud and this cloud and that cloud.

That hybrid cloud — I don't know what it means anymore. To us, it's cloud. It's not a location anymore; it truly is a capability. It's a mindset, a way of doing things. It's not a location. But then that means: where's the edge? Where's the edge of the virtual private cloud, and how can we extend the edge so that we can consume resources? Because we have to — because our customers are saying: I'm running in a private cloud, I'm running on premise, I'm running in Amazon, I'm running in Rackspace, I'm running in Helion, I'm running in SoftLayer — simultaneously, in addition to my own private cloud. I haven't solved this. We're trying to figure out how we do that, and that's where we think the thinking is moving with the community here: where is the edge? The edge has already moved. Where is the edge, and can we get there in time to be able to support our customers?
And yeah, I think you could say that at the application layer, part of the reason there's so much excitement around containers is that notion of write once, run anywhere — app portability. From a network perspective — and obviously that's our day job with Contrail at Juniper — this sort of mediation layer between the application and the physical infrastructure is what we spend a lot of our time thinking about. How do we essentially create federated domains that interconnect, whether it's public cloud or private cloud, or SoftLayer versus AWS versus Google Compute Engine? In the network, I think part of where the network has been successful in federating heterogeneous domains is in recognizing that there is heterogeneity in those environments. If you can solve the application portability problem, and you can pull the compute and storage along underneath, we can solve that network problem. But yes, it's a journey, and I think some of the hard pieces are just now bubbling up.

So, I mean, everybody heard some of the great stuff coming out with federated Keystone, right? Let's look at how that performs in production now. We're really excited, at least as OpenStack users — many of the people that have done multi-region have done, let's say, one Keystone server and then multiple regions. But if we can federate that identity environment as well, it starts to get interesting. Not that it hasn't been done for other mobility technologies — I think there are some lessons learned that are coming into OpenStack in that environment as well, in terms of: how do we do access control and enable mobility across federated domains without having, let's say, an identity server in every different site?
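The "one Keystone server, multiple regions" pattern mentioned above can be illustrated with a toy model: a single identity service issues a token, and a service catalog maps each region to its own endpoints, so a client authenticates once and can then address any region. This is only an illustration of the flow, not Keystone's actual token or catalog format, and all names and endpoints below are made up:

```python
import uuid

class IdentityService:
    """Toy single shared identity service for a multi-region deployment."""
    def __init__(self, catalog):
        self.catalog = catalog          # region -> {service name: endpoint URL}
        self._tokens = set()

    def issue_token(self, user, password):
        # Credential checking is elided in this sketch; any user gets a token.
        token = uuid.uuid4().hex
        self._tokens.add(token)
        return token

    def validate(self, token):
        return token in self._tokens

    def endpoint_for(self, token, region, service):
        """Resolve a regional endpoint, but only for a valid token."""
        if not self.validate(token):
            raise PermissionError("invalid token")
        return self.catalog[region][service]


# Hypothetical two-region catalog
catalog = {
    "us-west": {"compute": "https://nova.us-west.example/v2.1"},
    "eu-central": {"compute": "https://nova.eu-central.example/v2.1"},
}
```

The federated-identity question in the discussion is essentially: can this one logical service be several cooperating ones, so no single site has to host the identity server for everyone?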
Okay, so — any other comments about success metrics? I mean, how do you measure the success of these implementations, which obviously are ongoing — you constantly have to do upgrades. The exit criteria for an OpenStack implementation are, I think, hard to define. As the vendor community, we're starting to see more RFPs for cloud implementations, but I think that exit criteria question is obviously very specific to your application environment and to the deployment that you're trying to do. How do you measure success?

So in our case we have two or three areas. I will start with the one that's supposed to be the simplest, but is actually very hard, which is to install it right. It was very funny — there was a session earlier today about how we failed with OpenStack, because from the start the goal was for it to be simple to deploy, and actually a whole ecosystem has been created around installation, support, maintenance, etc., because it's not easy. So that was the very first one. We have a long list of requirements: obviously it has to be stable, it has to be reliable, it has to be idempotent — because we're not just creating one cloud. That's kind of the difference between the private and the public, right? You have one public cloud, and you may create some federations, things like that, but in the private cloud you create multiple clouds, and you want them to be equal. You don't want your operations team or your infra team to set up their systems differently for every single cloud; you want it to be as homogeneous as possible. So we need to build a system that is repeatable, idempotent, etc. That's kind of the first metric you need to be successful on. So how do you measure that? It's also based on the feedback from your infra team and your operations team.
How happy they are, how frustrated they are. If they are not frustrated, you are doing a good job; if they are frustrated, it's because every time they log into the system they are trying to find out what's going on — there's something wrong.

The second metric that we evaluate a lot is obviously performance. As you can imagine, when an application that runs directly on the environment is moved into virtual machines — before even going to containers — there is some performance degradation. It depends on the application; it could be 10, 15, 20 percent. You don't want to increase that number, so another metric is just to keep your performance degradation as minimal as possible. And I'm mentioning this because we were talking about the networking layer, and the one I work on — it is a key topic. If you start using open-source technologies that actually increase the degradation of performance at the networking level, you will end up having a performance problem and also a scalability problem. So that is another key factor for us when we decide whether we go to production or not. No compromises.

Yeah, our requirements are very similar to Edgar's. At the business level, though, it is: how quickly can the developers iterate? When we peel it all back, down to the cloud ops team, it's exactly the same, but what the business wants is: what are you providing, how stable is it, and how quickly can my developers iterate? So our key success criterion is what we've done to make iterations quicker. That's it.

So: app cycle time, and time to revenue for the actual business line that's supported by this cloud. Yes. I love it — great comments, both of you.
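The performance criterion Edgar describes — keep the virtualization overhead within a budget before green-lighting production — reduces to a simple calculation. A minimal sketch, where the 10% budget is an arbitrary example rather than a figure from the panel:

```python
def degradation_pct(bare_metal, virtualized):
    """Percent of performance lost moving a workload from bare metal to VMs.
    Both arguments are 'bigger is better' numbers, e.g. requests/sec or Gbit/s."""
    return (bare_metal - virtualized) / bare_metal * 100.0

def production_ready(benchmarks, budget_pct=10.0):
    """Go/no-go gate: every benchmark must stay within the degradation budget.
    `benchmarks` maps a benchmark name to a (bare_metal, virtualized) pair."""
    worst = max(degradation_pct(bm, vm) for bm, vm in benchmarks.values())
    return worst <= budget_pct
```

For example, a networking benchmark dropping from 10.0 to 8.0 Gbit/s is a 20% degradation and would fail a 10% budget, which matches the "no compromises" stance above.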
I couldn't agree more. One other thing I think helps us is the notion, with open source and a community, of: to what extent can my team actually commit back, contribute back? They have to be invested in the success of the technology; they have to be skilled enough. You create significant stickiness when your team becomes part of the contribution cycle, the life cycle, right? And I think that's also an important success factor, because I want to make sure that the team is invested in the technologies that we're adopting, and that they're doing the things necessary to become recognized by the community — to have their blueprints, their designs, their code accepted.

And I think that's just part of the maturity of DevOps. I mean, many of the folks that are trying to adopt OpenStack do not have this level of understanding and convergence yet between the development and operations teams, and you're taking it one step further to say that the actual success of the technology that's underlying the app is owned by the app owners themselves. We'd love to see more of that. I think there are definitely some segments that have not yet converged in a DevOps way, and certainly we're not seeing the success of the infrastructure being something that they help own. Maybe, like you said, open source kind of drives that mentality.

Well, it does, but — if you're not shipping product every 12 to 18 months, like some of our business still is, right, because we're still in that shrink-wrap business in some segments — if you're not shipping code every 12 to 18 months but every two weeks, now you can have a different relationship with your infrastructure. Exactly. Yeah.

And so, in terms of the cycle times — typically, how many releases, let's say per month, for the application environments that we're talking about, in your environments?

Well, in our case it's very specific.
We actually release every Friday, every week. We have feature releases and enhancement releases, and after a few iterations we have a feature release. So it's very, very dynamic.

For us it really depends on the application. We have an SOA architecture — everything is functionally broken down — but for some of the larger apps it's monthly, and we only allow our customers to be one release behind current. So yeah, it's a monthly cycle.

Well, this goes back to the notion of continuous delivery, or deployment — what does CI/CD mean to you? We ship — continuous delivery — every two weeks, at the end of every sprint. If it's a critical patch, of course, then it's much more frequent. But at the end of the day we have to be able to deliver such that the product teams that are building and shipping product on top of our infrastructure can ship at their leisure. We don't want to stand in the way of them shipping product and deploying that product in one region, two regions, and then globally. So it's very, very important that we've been able to prove and demonstrate the ability to upgrade our control plane with zero downtime, and upgrade our data plane with — I think it was 10 minutes of downtime the last time. It took a lot of time to figure out how to do that, to prove it, to be able to roll back, to prove it over and over and over again, so that the team had the confidence to pull the trigger and show that we had earned the trust to be able to do those rolling upgrades.

Yeah, so for those that haven't seen it, Symantec has shared quite a bit of information about how they've done the upgrades of various components within OpenStack — I think the first one you published was Havana to Icehouse — really nice work from the team there. And then there's the work around CI/CD, and Edgar, I know you've been busy this week sharing some of the best practices in terms of continuous integration and continuous delivery in your OpenStack environment.
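The rolling-upgrade discipline described above — small batches, verify, keep the ability to roll back — can be sketched in outline. This is not Symantec's actual procedure, just the generic shape of it; `drain`, `upgrade`, and `healthy` stand in for whatever those steps are in a real deployment:

```python
def rolling_upgrade(nodes, drain, upgrade, healthy, batch_size=1):
    """Upgrade nodes in batches of `batch_size`, verifying health after each
    batch. Returns the list of successfully upgraded nodes, and stops early
    (so the operator can roll back) the moment a batch fails its health check."""
    done = []
    for i in range(0, len(nodes), batch_size):
        batch = nodes[i:i + batch_size]
        for node in batch:
            drain(node)      # move workloads off the node first
            upgrade(node)    # then apply the new version
        if not all(healthy(node) for node in batch):
            return done      # halt here; everything in `done` is upgraded
        done.extend(batch)
    return done
```

Capping `batch_size` is what keeps the service available throughout: at most that many nodes are ever out of rotation, which is the property that makes a zero-downtime control-plane upgrade believable to the product teams consuming it.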
It's critical for us — as I said before, the stability and the repeatability of the system. So we actually started from that: from having a very strong CI/CD system that is doable even on a laptop, and then we can move it to some virtual environments, and then we migrated to bare metal. And you know, we are on the borderline of saying: do we really need to go to bare metal? We ended up testing more in virtual environments than on bare metal, so what is the point of going to bare metal? So we're still on that line.

I gotta tell you, Edgar demoed something really cool yesterday — we had a little private session, right, and he demoed the developer experience, all laptop-based, and there were a number of people that got up afterwards saying, hey, how can I get a hold of that? So, I mean, putting pressure on each of us, putting pressure on each other to share what we're learning — that's what the community is about, right? We're standing on each other's shoulders. We need to continue to give back what we've learned, and there are some pretty cool things going on.

No, thanks, that's excellent. So, maybe some of the things that are a little tougher: can you talk a little bit about some of the specific challenges that you've seen, and maybe some of the lessons learned, and how you might mitigate some of these challenges moving forward — just so we can share with others who may be earlier in the journey, to help them avoid some of those pitfalls. Maybe, Steve, I'll start on that side.

Okay, so I won't mention vRouter. You know, we're growing, right — with growing pains — and all of our partners are having growing pains in different parts of products and product releases. I think one of the things is that we bit off more than we could chew in terms of what our requirement was about a year ago.
We wanted to do load balancing as a service, firewall as a service, DNS as a service — we wanted to do everything at the same time. Instead: pick one, focus, deliver, execute. I think that's something we learned, and it humbled us, right? Making some mistakes, losing a whole two-week sprint and throwing it away. But it also taught our teams that it's okay — you can go down a two-week effort and you can throw it away without any impact to your credibility, but you have to recover from it quickly, quickly, quickly.

And I think one of the things we're still struggling with — and this may be a different topic, I don't know — is the notion of underlay versus overlay. Underlay, in my definition, is the traditional network engineering team that we need and know and trust; they're the folks that put their arms around racks of network gear and physical connectivity and circuits and the like. The overlay requires a different skill set, and we're constantly having this conversation. Well, just because the S in SDN is software doesn't necessarily mean that the IaaS team — the infrastructure-as-a-service team — runs it, and we have this debate, right? Is it the S or is it the SD — is it software-defined? And then we have to recognize that our network engineering team is struggling. It's psychologically traumatic for them right now, because there are new skills that they need to be successful in this world. Can you code in Python? — as an example, not that it is required. No? Do you want to? No? Well, then how are you going to be successful with software-defined networking when you need to be able to get into code? We haven't solved that one either, and that's the people dynamic. We're still solving the people part of it, and I think that's going to take another year or two to work its way out.
We're trying to give our people incentives to learn the new skills, but it's hard. There's a lot of fear there, right? And I'm just putting it out there because it is part of the journey. Makes for a good blog topic. Oh my gosh.

Yeah — often the challenges are not just technical, for sure. Go ahead, Lachy.

Absolutely, I agree with everything you said. I mean, much the same at Lithium. It's been as much a cultural battle as it is technical. The saying internally is, you know, how do you eat an elephant? One piece at a time. Technically, as engineers, we want to challenge ourselves, but we really had to peel it back and say: what services are we actually trying to deliver here, and what does the business need? Coming out of public clouds, where they have very sticky features that some application developers have already gone and adopted, it's hard to wind that back and say: what do we actually need to deliver internally in our OpenStack offering? Do we need to match everything that the public cloud provides, or is it just a subset of that which actually matters to the business right now? So we try to boil it down to exactly what we would like to deliver, and make sure that that is a stable, functional environment.

And then, on the other hand, with the culture: one of our challenges is to be able to shrink-wrap what we've delivered to the business and hand it to a wider audience of network engineers that have never touched an SDN. They're scared. And I don't blame them — it's perfectly natural. So we need to actually go on that journey with them, where they haven't been involved in the past, and say: this is what we're doing, and it's our responsibility to the company to hand it off as something that's actually supportable and maintainable going forward.

Well, we are still on this journey.
So we're still facing those challenges and risks every day. I would say the most critical for us was this: we created an amazing team of software engineers, and for the first couple of months — or even a little bit longer — they ended up being system engineers, right? Because they were trying to deploy OpenStack, doing integration testing and that kind of thing. So there came a point of: when are we going to start really doing Python? And you have to control that emotion, because there's something inside OpenStack that gets you addicted to trying to fix a patch — or, just while testing, you start thinking, hmm, this function could be written differently, and that could be where you spend your life. It's also amazing, actually, because we had one intern fixing code in Keystone because — well, anyway. Providing the right direction means helping them understand that this is just the beginning; then will come all these beautiful contributions upstream.

It's very important that OpenStack — and here I'm going to contradict myself: I said before that it was very complicated to deploy, but it's very easy to start playing with. You can just get DevStack in a virtual machine and start playing, start changing code; you can do a lot of things, right? But from that to production is a whole other story. These amazing groups of engineers need to understand that, and that was the biggest challenge that we are still facing every day.

On the risk side, there is a transformation between what your operations team understands of how to operate a physical server, a switch, a router, versus a virtual machine, a container, a virtual device, a bridge, a tap interface, a bond device — for some of them, all these concepts are just truly new, right?
So something that I like to play with a little bit in the team is trying to move some of the networking guys to be software engineers, and some of the software engineers to be systems and infra guys, and it's been a very, very nice experience. It ended up that one of the network engineers started writing Python code to do API calls remotely, simplifying his own job. So it's part of trying to blend those roles — that's kind of part of the journey, and it's going to bring some satisfaction at the end of the day.

So in your environments, your core business is actually software. Some of what we've seen, let's say in some of the large banks that are dipping a toe in the OpenStack water: we saw a large bank who formed a tiger team of 17 people, and those 17 people jointly owned the success of the OpenStack environment. But of those 17 people, I think two were network engineers, four were storage folks, three were sysadmins, and a couple were from the application teams — obviously a diverse skill set. Their bonus, their MBO, was tied to that joint outcome, and there was a lot of cross-training in those six months. That was sort of an interesting way to do it.

All right, so on to the next topic, and Steve, I think you started to hit this a little bit, but can you comment on your approach to open source, and how that has helped (or not) with some of the challenges that you've had? And then specifically — beyond the transparency and the agility benefits — comment a little bit on how things like OpenStack, which happens to be the largest open-source project in the industry, change some of the thinking around interoperability and ecosystems and that kind of thing. Edgar?

So we're using a bunch of open-source tools.
Some of them come standard with OpenStack, obviously, and we also started using, for our CI/CD, things like Jenkins, Docker containers, Vagrant, VirtualBox, etc. On the networking part, we are using OpenContrail. We're trying to understand the technology, right — many of the people on the team didn't even know what OpenStack was a couple of years ago. So it's about trying to learn and trying to educate ourselves on these open-source technologies. We try to contribute upstream as much as possible. Our configuration management is based on Chef, so we are helping the Chef community to grow, and recently, in this cycle, we've been trying to formalize Chef as one of the core projects for OpenStack — one of our engineers has already become a core member of that team, which is great. I'm trying to contribute specifically in the networking part, and we're trying to contribute also on the Glance and Keystone parts, because they are very important for us. So we're very, very engaged with the community, trying to grow it.

For us, I guess, as a user and a consumer of OpenStack — and not only that, a contributor — we're kind of in the trenches day in, day out, and I think the value that we can add is in the stories and the lessons learned, not only the pull requests.
So actually saying: this is how we're operating our stacks, and this is what we've learned — and trying to feed that back to the community, to help them not make the same mistakes, or to learn together. I really feel like that is the power of open source: the driving community behind it.

Yeah. So I think all of us here on the panel are wearing the user hat or the operator hat, not so much the OpenStack developer hat. We depend upon the development community to be able to build an infrastructure and operate it to support our customers. So as a user, I'm absolutely thrilled that the notion of DefCore has been accepted much more broadly and much more quickly than I was hoping, and we're now transitioning to this interoperability standard. I think interoperability is important for those of us that use OpenStack, because we need to know that those partners and vendors we depend upon to deliver a whole solution are putting some skin in the game, right? So I think that's an important step in the maturity. But we also have to recognize that we have to be moving a little bit faster than this every-six-months kind of mindset, and it's important for us as users of OpenStack to remind the rest of the community that we release a little bit faster than every six months, and therefore we need the innovation spark to continue to burn very, very bright within this community — and we need to combine our voices as users, just to let everybody know that there is a strong user voice here.

Yeah, I think for many of the folks that are moving towards a more pervasive open-source strategy, it's really about agility, right?
And obviously, networking was one of the last major components to actually move towards open source en masse. It was also surprising to see, you know, some of the carriers, for instance, starting to make open source technologies a mandatory requirement, to make sure that they're pushing hard on innovation and recruiting the types of software development engineers that are pushing faster and faster.

All right, so I think we've probably hit lessons learned. I wanted to make sure we leave a few minutes for some questions, unless anyone has anything that wasn't hit in terms of specific lessons learned. We can also save some of the look ahead for maybe the Q&A, unless anyone wants to hit something very specific.

I just want to say something very specific on lessons learned. Understand your use cases. Don't go wild in trying to build an OpenStack; just understand your use case. I've ended up talking to many other operators who don't even understand their own use cases, and when they build their cloud, they end up providing services that are not useful for their use cases.
That's just my advice. Go deep: talk to the infra people, talk to the application developers, talk to the people who cannot change the infrastructure if you have some specific requirements. Understand those requirements, put them on paper, understand everything. Talk to other architects, talk to other companies. I, as an operator, have to say I would love to keep sharing all these best practices. I would love to see... like, we already went through that pain; it's not necessary for you to go through the pain. We can share as much as possible. So that's kind of my lesson learned.

Yeah, I mean, for us it's very similar. You know, do your homework before you go into it, and make sure you understand the scale you wish to operate at, because it's very difficult to go back after the fact, after you've made key decisions on the projects you're investing in, and change that. So yeah, that was one of our challenges. Just get out there and talk to us; we're happy to talk to you about what we've learned and seen. But really understand each component, you know, boiling them down: compute, storage, network. How are you addressing each of those? How are they going to scale? What kind of redundancy do you need? Do you need live migrations? All these kinds of things, you know? Because once you make those decisions, it's very hard to back them out at a later stage.

Okay. All right, so Steve will save his pearls of wisdom for the Q&A.

I'm already tapped out.

All right. Yeah, we've got three minutes, so let's open it up, and I think we've got a question.

Yeah. Great discussion.
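Before the Q&A, the panel's "do your homework on scale" advice can be put on paper as a back-of-the-envelope sizing calculation. The sketch below shows one such calculation for CPU only; the fleet numbers and the 4:1 overcommit ratio are purely illustrative assumptions, not recommendations (for reference, OpenStack Nova's default `cpu_allocation_ratio` is 16.0).

```python
import math

def hypervisors_needed(vm_count: int, vcpus_per_vm: int,
                       cores_per_node: int, cpu_overcommit: float) -> int:
    """Rough count of compute nodes for a target VM fleet.

    cpu_overcommit is the vCPU:pCPU ratio. Nova defaults to 16.0, but
    latency-sensitive SaaS workloads often run far lower ratios.
    """
    vcpus_per_node = cores_per_node * cpu_overcommit
    return math.ceil((vm_count * vcpus_per_vm) / vcpus_per_node)

# Illustrative fleet: 2,000 VMs x 4 vCPUs, 32-core nodes, 4:1 overcommit.
print(hypervisors_needed(2000, 4, 32, 4.0))  # -> 63
```

The same treatment applies to memory, storage, and network bandwidth; the panelists' point is to write these numbers down before committing to decisions that are hard to back out later.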
Thank you so much, panel members, and Jennifer, phenomenal moderation. I have a two-part question. For the underwriters of Contrail as a company, for example: do you think, going forward, your executive leadership will stay true to the vision of being vendor agnostic in the long run? And the second part of the question: going forward, what do you see as the bigger challenge? Technical challenges in interoperability, so that OpenContrail works with, say, Cisco gear or Juniper gear or XYZ gear? Or will it be more political inertia?

So, okay. Thank you for the question. In terms of, you know, moving forward, and whether we think the position of the executive leadership will change: Contrail started as a startup, not associated with a single vendor. When it was acquired by Juniper Networks, there was an explicit decision to maintain the strategy of multi-vendor support for the underlay, driven essentially by the original IETF work that was done, which ensured protocol-level interoperability. And I'd like to believe that a lot of the adoption that we've seen is because of that. So we talk, from a control plane perspective and from a forwarding perspective, to the other network devices just like they talk to each other, which doesn't say, "I, as a router, am only going to talk to you if you're a Juniper router," right?

And we test that with them all the time, right? And so, we've got a few different data centers, and if you walk into one data center, I mean, there was a certain generation of folks that built it, and you look at the ToRs.
They're all Cisco. You go into another one, and the ToRs are all Arista. You go into another one, and so on; all these different flavors, right? And interoperability has been demonstrated, and that's key to us. And I think, you know, the other thing is that we've tried to be maniacally focused on listening to customers. And I think that is what the customers need, because there's a lot of complexity now in opening up the network infrastructure to the application layer, where the application teams want to be able to see latency between one container and another. The network guys traditionally had more information than what was accessible, and now, when you're exposing a lot of those analytics through REST APIs directly to your application teams or your cloud tenants, you can't discriminate by whose implementation it is. So I think a lot of what's exciting from a software perspective in the networking industry is that some of these good software development practices are now coming back and, you know, I guess exposing some of the challenges of proprietary operating systems for network gear that are very vendor specific. This is where, if we create a mediation layer in terms of software-defined networking, or whatever you call it, it's really about enabling faster agility in the application layer. And that's why we really enjoy the SaaS segment, because I think that's where, you know, these folks need to roll out clouds very quickly, and they don't have time to, kind of, you know, make sure that it's tested for one vendor and then rewrite it again for another vendor.

I just want to say something about the first part of your question, which is great. I also had that question, and the way I answered it, to myself and to the team, was: we need to own it. We need to own OpenContrail, because it's open.
It's open source; we see the code. We need to understand it. We need to get help whenever we need it. But if we own it, and we distribute it, and we create a bigger community, well, we don't have to worry about what happens.

And in full disclosure, in case I didn't do the justice of mentioning it before: you know, these three distinguished panelists are all part of the OpenContrail advisory board. They're all, you know, in production in their own environments with OpenContrail. And I think we've been doing a lot of listening and learning together. And I think part of what has been important in an early market is, you know, this notion that there will be issues along the way, and what matters is how quickly we can kind of band together and move a step forward. Change is hard, and there are going to be, you know, some bumps along the way. So we've definitely, I think, learned together. Any other... oh, sure, good.

Yeah, just quickly. I mean, we had the same concerns going into it, and the fact that it was open was great, but was it really open? You know, and what did it look like? And yesterday we sat in a user group, and it was just amazing to see how many people had actually contributed usable code to the community, to the project, right? So, you know, seeing the room get more full each six months, it really is coming to pass. So that gives us confidence. And it is our dream as well, you know, as users, that we want to make sure that it remains open, because that's most of the reason that we chose it. So it's, you know, it's on us to keep it that way. And the commitment to deliver a single code base, which was delivered a year ago, and now we're living on a single code base, that was a huge step in the right direction, right? And that was great for all of us.
There's only one code base. Yes, so, just to make it very clear: our master code repository on GitHub is the source for both OpenContrail as well as Juniper Contrail, which is supported by Juniper Networks, but it is the same source code.

Any other questions? We're over time, so maybe I'm stealing a moment. All right. Thanks very much, folks. Thanks, guys.