Check, check, okay. I think we're going to get started. Thank you all for coming; we're standing between you and your afternoon cocktail, so we'll try to make this lively and engaging. My name is Gregg Holzrichter. I'm the CMO of Big Switch Networks, and I'm joined by a great panel of both operators and service providers. We're going to talk about Neutron and the experience of working with Neutron in the real world. I know that's a very hot topic, and one where people are exploring: how do you get scalability? How do you work through some of the challenges and opportunities of Neutron? Without further ado, we're going to jump right in and introduce the panel. Christian Sarenson is the CEO of Cleansafe Cloud; he'll talk a little more about what they do. Dallas Thornton is a deputy CIO of Clemson University. Dimitar Ivanov is in the office of the CTO at TELUS. And we're joined by Wang Wei, an SDN architect from UnitedStack, based in China. So why don't we have the panel introduce themselves and say a little about their environments to kick things off.

Hi, good afternoon everyone. I'm Christian Sarenson. I spent something like 19 years working as a software developer in the finance industry, where, for my sins, I wrote and architected things like low-latency trading systems for tier-one investment banks. Last year, or perhaps about 18 months ago, I had to relocate from London, UK, where I was based, back to Switzerland, which is my home country, and the question was: what next? It turns out that a certain Mr. Snowden has actually gifted Switzerland with a great new USP, as it were: the fact that Switzerland, despite being in the middle of Europe, is outside of NSA, GCHQ, and whatever-else jurisdiction. So for people who care about privacy and so on,
it's quite an interesting place to put your data, and this is how Cleansafe Cloud was born.

Hi, I'm Dallas Thornton. I'm a deputy CIO at Clemson University. I've been there about two years, and in that time one of the big things we've been working on is designing our new pod and hosting infrastructure. We do the typical university mission work, but we also support a lot of research users, as well as hosting for a lot of government agencies in the state, so we've been doing a lot of work with that. The big use case that started us off here, aside from the pod design, was virtual desktop usage. We wanted to build something, especially on the network side of things, that would support VDI, which in a university environment is very spiky in terms of usage. That led us to working with these guys on developing that out and building on top of OpenStack.

And hi everybody, I'm Dimitar Ivanov. I work for TELUS. TELUS is a telecommunications provider in Canada, headquartered in Vancouver, British Columbia. We have about 8 million wireless subscribers, a couple of million high-speed internet subscribers, and about a million TV subscribers, so that gives you an idea of the size of the company. I work for the office of the CTO; I've been a cloud architect there for the last two years, focused on developing cloud architecture for a private cloud environment for the internal business units at TELUS. There are many drivers, but one of the more sizable ones is our TV service. We're looking at a pretty sizable cloud. I can't disclose the size of it, but it is a pretty sizable environment that we're building.

Afternoon everybody, I'm Wang Wei from UnitedStack.
I'm the SDN architect. We provide both public OpenStack cloud and private OpenStack cloud in China. We have nearly 100 customers for whom we provide professional OpenStack services, and the biggest customer runs about 500 hypervisors purely on OpenStack. As for me, I have contributed to Neutron for two years, and through those years we have seen Neutron grow fast, but also hit some problems. I will share those experiences, problems, and tricks with you. Thank you.

Great. Well, picking up from that, why don't we go through the experience of starting off working with OpenStack and Neutron. Christian, what was the reason you looked at OpenStack, and what were the issues and problems you were trying to resolve?

We had a great luxury in as much as we basically built everything from scratch, which I realize is certainly not everybody's luck, but it was ours. So, starting out in early 2014, the Icehouse time frame, we were able to select an architecture and an infrastructure that we hoped would be as much open source as possible, with no vendor lock-in and with flexibility as an absolute top goal, because we were just starting out. We were talking to some customers, but by and large it was very hard to anticipate at the time what we were going to have to deliver for those customers even six months down the line. So an architecture based on everything software-defined, and that obviously includes software-defined networking, was extremely appealing for us, and that's how we came to OpenStack and hence Neutron. In the Icehouse time frame, which was when we did our first proof of concept, Neutron was already pretty well established, and it was pretty clear that it was going to be the only game in town a few months down the line.

That's great. Dallas,
you might talk a little bit about the background in your environment.

Sure. I mentioned our VDI use case, and that's really what initially drove us. Our issue had a lot to do with licensing: with 20,000 students out there and another 6,000 faculty and staff, the cost on a per-user basis for a lot of the VDI solutions out there was prohibitive for our use cases. So we were looking at how to do this in an open way and build out a VDI environment that lets us deal with a lot of the custom software that's out there in academia, in a way we could manage and scale out as needed. OpenStack really allowed us to do that.

That's great. So you had a VMware environment, and you were looking at moving into OpenStack?

Correct. A lot of our legacy hosting environment was, and still is, VMware. So this was about how we could roll out a new service, VDI, on a new platform, using OpenStack for that.

Fantastic. And Dimitar, in terms of the background for TELUS, what was the reason for looking at OpenStack?

Yeah, much the same reasons the guys already mentioned. With us being a service provider, as most of you probably know, service providers have been adopting OpenStack for NFV and SDN for the last few years, and we're no different.
We follow that same trend. So selecting OpenStack was pretty much a no-brainer from that point of view, because we obviously don't want to develop two different platforms for NFV and cloud. But that's not the only thing. What was just mentioned, about being as much open source as possible and avoiding vendor lock-in as much as possible, applies from the perspective of the underlying infrastructure but also of the actual cloud control layer. That is the real driver, and the reason service providers are adopting OpenStack in the first place. If you want to develop software-defined everything, OpenStack is really pretty much the only game in town, because Amazon or Microsoft or all the public providers that have invested in proprietary solutions haven't productized them. So it is great that we have OpenStack, and we should be grateful for it.

That's great. So, Wang, you have a unique perspective, being in China and being with an organization that really is at the forefront of delivering OpenStack solutions. Can you talk a little more about the evolution of Neutron, and particularly its adoption in China, and some of the challenges and opportunities?

Okay. As we know, networking in OpenStack started with nova-network,
which is pretty stable but too small; its feature set is too limited. So the community pushed the Neutron project from about Icehouse or Juno. Neutron is pretty complex: the reference implementation is OVS-based, and it has many complex components and many kinds of network devices. The community has since turned Neutron into a framework, a platform, and there are many new projects around it, like Dragonflow, OVN, and OpenDaylight. Now the community seems more and more active and fruitful, and you can use Neutron as a framework and a platform, not just as the SDN solution itself. It's a very exciting trend, and we think it makes SDN in OpenStack more and more healthy.

That's great. Going back to you, Dimitar: in terms of your experience with OpenStack, and with Neutron in particular, what were some of the key findings you discovered along your journey through the PoC? Was there something you didn't expect to find?

There wasn't much that we didn't expect to find; there weren't any big surprises. The challenges of deploying networking solutions for virtualized networking in general, and specifically with Neutron, are quite well documented.
So we tried to avoid those surprises through research, making sure we learned from other people's challenges. From that perspective, I can't really think of anything that happened that we didn't anticipate. But it was good to find out that there are solutions out there that can help with overcoming these challenges. If you're building virtualized solutions, one of the things you have to do is have a network to run on, and managing that network is something you have to do regardless of what solution you have in place. Having a solution that is manageable, that helps you scale, and that at the same time reduces the complexity of your deployment and its ongoing management, is extremely important. That's the one message I found for ourselves: there are many solutions that will work, but some of them require more involvement, and some of them are more manageable and, from that perspective, easier to deploy.

Got it. Great. So, Dallas, in terms of your experience deploying Neutron and OpenStack, did you come up against scaling problems, and were there ways you got around those challenges?

Yeah. I mean, again, our big use case is VDI.
We were looking at how to get away from all the encapsulation problems we had initially, when we were looking at doing this with traditional fabrics. Rolling that out along with Big Switch and the SDN paradigm really allowed us to overcome some of those issues. Honestly, though, I think the biggest thing we learned, and are still learning, is the human-factor side of it. We had a network team that had always done things a certain way, with tickets that went from one person to another, and for us it's really been about how we automate the whole process and get a lot of the bottlenecks out of the system. That's not as much a technical problem as it is a process problem, and it's something I think we're making progress on. People are learning new skill sets; again, it's a human learning thing.

That's interesting. In the keynote on Monday there was a conversation about 10 percent technology, 90 percent organization. Could you comment a little more on the before and after in terms of your org? What did it look like before, and what does it look like now?

Sure, and we're definitely still evolving. But we've gone from a process where IP assignment was manual and DNS assignment was manual, and it took three different groups to do that, to now having people who can automate it. We're big Salt users; we Salt everything, and basically the whole provisioning process can now be automated. For us that's a big paradigm change. It's changing people's world a bit, but at the end of the day I think it puts us in a much better state.

That's great.
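The encapsulation problems Dallas mentions are partly an MTU tax: VXLAN, the usual Neutron overlay, wraps every tenant packet in extra headers, so tenant MTUs shrink unless the fabric runs jumbo frames. A quick sketch of the arithmetic (these are the standard VXLAN-over-IPv4 figures, assuming no outer VLAN tag):

```python
# Per-packet VXLAN encapsulation cost, byte by byte.
OUTER_ETHERNET = 14  # outer MAC header
OUTER_IP = 20        # outer IPv4 header
OUTER_UDP = 8        # outer UDP header
VXLAN_HEADER = 8     # VXLAN header carrying the 24-bit VNI

overhead = OUTER_ETHERNET + OUTER_IP + OUTER_UDP + VXLAN_HEADER

def inner_mtu(physical_mtu: int) -> int:
    """Largest inner-packet size that still fits the physical MTU once encapsulated."""
    return physical_mtu - overhead

print(overhead)         # 50 bytes of overhead per packet
print(inner_mtu(1500))  # 1450: tenant MTU on a plain 1500-byte fabric
print(inner_mtu(9000))  # 8950: jumbo frames leave plenty of headroom
```

A fabric that is SDN-managed at the physical layer avoids this tax entirely for intra-pod traffic, which is one reason the panel keeps contrasting underlay virtualization with overlays.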
And Christian, could you maybe build on some of the existing challenges? And then, what are you working on as your next milestone, your next goal?

Sure. Well, in terms of challenges, what we found out, and I guess it doesn't come as a surprise, is that at the end of the day you get what you pay for. In this instance you don't pay much, so you shouldn't expect too much; although in essence you actually get an awful lot, and I think we're all extremely thankful to all the people and all the corporations that put this extremely hard work into making this stuff work so well out of the box. In our case, as I mentioned earlier, I was a software developer, so I had very little experience of what I call the dark side of IT, which is the infrastructure stuff, and close to zero experience in networking. Well, lo and behold, it's amazing how far you can get. To be honest, I didn't even know what one of these 10-gig Ethernet cables looked like; I'd never touched one in my life when I had to patch them into racks. That's the beauty of startups: you get to do a bit of everything, and you just learn all the time. But there are going to be rough edges. For example, one thing we found out with Neutron is that we wanted to use the Neutron metering agent to bill for bandwidth, as providers do, and I certainly couldn't find a way to get it to filter out certain IP ranges. For example, if the customer was going internally to the object storage network, I obviously didn't want to charge them for external bandwidth.
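One workaround for the kind of filtering Christian describes is to subtract internal traffic at the billing layer rather than in the metering agent itself: classify each flow record by whether its destination lands in an internal range, and bill only the remainder. A minimal sketch; the record format, helper name, and CIDRs here are hypothetical illustrations, not Cleansafe's actual code:

```python
import ipaddress

# Hypothetical internal ranges that should never be billed as external
# bandwidth, e.g. an object storage network.
INTERNAL_CIDRS = [ipaddress.ip_network(c) for c in ("10.20.0.0/16", "192.168.0.0/16")]

def billable_bytes(records):
    """Sum bytes for flows whose destination lies outside all internal ranges.

    `records` is an iterable of (dst_ip, byte_count) pairs, a stand-in for
    whatever per-flow accounting data the deployment actually collects.
    """
    total = 0
    for dst_ip, nbytes in records:
        addr = ipaddress.ip_address(dst_ip)
        if not any(addr in net for net in INTERNAL_CIDRS):
            total += nbytes
    return total

records = [
    ("10.20.1.5", 5_000_000),  # internal: object storage traffic, not billed
    ("8.8.8.8", 1_200_000),    # external: billed
]
print(billable_bytes(records))  # 1200000
```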
So Yeah, you ask around a little bit you You get a little bit of help. I think this is one of the area where where perhaps things can always be improved I mean the ask open stack thing is is not particularly Vibrant I find compared to to other ask communities out there Um, and you know, you figure out a work around you document it and hopefully Going to your point, you know, what we hope to be doing somewhere down the line really and particularly with my background as a developer Is to start contributing back to the community. So, uh, so, you know, we we get to pay back a little bit All of this amazing tech that we get for pretty much free That's great and sort of you know kind of building on the concept of contributing So when maybe you can talk a little bit about some of the Feedback that you're getting from some of your customers in terms of asks and how you're contributing into the the Neutron project okay, and We know Neutron is awesome Since they're they have a very good community. They have active developers. They have the industry sport and The Neutron is a poorly so to be already find no wonder locking. There's all features that What our company or our customer like? But as you know, Neutron has some disadvantage as taste decides like it's lack of the physical manage From the previous previous release Neutron has a new Uh feature named Hirachi port binding. 
It binds top-of-rack information into the port, but it's not enough. What we need is to manage both the physical and the virtual networking; that would keep the networking clean and more efficient. Right now Neutron takes the virtual role, and we just assume that the physical side always works. But in the real world the physical network is not always well, and we need people, an infrastructure team or a networking team, to operate the switches, cables, ports, and NICs. That is not acceptable to our customers, and I think it is Neutron's biggest pain point currently.

Great. And Dimitar, in terms of some of the thoughts we were talking about the other day around overlays and underlays, and building on Wang's point about the physical network, could you comment a little on some of the discoveries you made along your journey?

Yeah, that is definitely an interesting topic. The overlay-versus-underlay discussion has been going on for a while, with people even arguing about which one is better. The answer, from my perspective, is that they both have their uses and their place: they're good for some things, and they may not be good for some other use cases. What I found out is that Big Switch has one of the few solutions on the marketplace that actually virtualizes the underlay, and there are two aspects of that.
First of all, you obviously have a software-defined network right down into the actual fabric, which at that level eliminates the need for an overlay. With a virtualized underlay, and this is where the connection with Neutron comes in, you have Neutron basically talking directly to your SDN controller, which manages your fabric. At the same time, if you want to interconnect a number of pods, in the same location or in different locations, that's when you use one of the overlays. The other aspect is what I said previously about manageability. Even a relatively modest deployment, say a four-rack pod in a spine-and-leaf architecture, still has about ten switches; not about, exactly ten, with two spines and a leaf pair in each of the four racks. So you have two options: you either build that fabric, layer 2 or layer 3, manually, and manage it one switch at a time, or you have a solution that manages it for you. This is what I see as a big advantage of the Big Switch solution: it really reduces the complexity of managing that fabric to pretty much the complexity of managing a single switch. To me that's a big advantage, because now someone like myself can run it; I don't consider myself a network admin.
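The "exactly ten switches" figure follows from the pod's topology. A small sketch of the arithmetic, assuming, as the panel's example implies, two shared spines per pod and a redundant leaf pair per rack (treat those ratios as this deployment's convention, not a universal rule):

```python
def pod_switch_count(racks: int, spines: int = 2, leaves_per_rack: int = 2) -> int:
    """Switches in one spine-and-leaf pod: shared spines plus a leaf pair per rack."""
    return spines + racks * leaves_per_rack

print(pod_switch_count(4))   # 10: the four-rack pod discussed on the panel
print(pod_switch_count(16))  # 34: the count grows linearly with rack count
```

Linear growth in switch count is exactly why managing the fabric "one switch at a time" stops scaling, and why collapsing it to a single management point matters.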
I'm more of a generalist. So it could be a challenge for me, not so much to build, I could probably build it, but to manage and maintain a relatively large fabric. But I know I can manage a single switch. So if, with some help from a solution that takes that complexity away, managing the fabric is reduced to managing a single switch, I know I can do that. And that helps with the adoption of the technology, because now I can have a relatively small team that manages the entire solution, as opposed to the siloed organizations we have in the legacy environment.

If I can add a little to this from my side: we've got really very different perspectives, I suppose, me being a startup and you looking after a major telco operation, but at the same time it's funny to see that we have very much the same kinds of concerns, which I suppose are the concerns a lot of us are going to have. In our case, being a startup, what matters a lot is capex and opex. Neutron is all well and good, but at the end of the day it's still software, and somewhere you're going to have to have some hardware to run the network on. Being able to do it all through a single pane of glass, and thereby reduce complexity really significantly, from a startup's perspective, where you've got about 10,000 things to do at every single point in time, is fantastic.
I have to say, there's also the ability to see that this is a solution that will accommodate future growth. Again, from a startup's perspective, being able to start reasonably small, while knowing you're not locking yourself into an infrastructure you'll have to throw away and rip out completely because you outgrow it over time, I think that also is very compelling.

And if I can just add one thing: the other point, and it's probably clear to most of you, is that Big Switch is basically software. It's not an actual switch, and it supports a variety of bare-metal switches, so you have a lot of choice in selecting your own hardware. To Christian's point, that helps reduce the capex significantly, in that bare-metal switches are significantly less expensive than the branded equipment most of us are familiar with.

Yeah, that's great. So, Dallas, talking about Neutron and your existing environment, maybe you could talk a little about where you're taking this. What's your growth plan?

Sure. Our initial work was done in a pilot mode with several classes that had specialized software. What we're looking at next is taking this into our HPC environment.
So we've got 3,000 to 4,000 nodes of compute infrastructure that we'd like to be able to utilize for our VDI use case. Again, when we have these kinds of spiky workloads, it's nice to be able to throw them into a pool that can absorb them, and a lot of the research workloads can be preempted. So the idea is that we would deploy OpenStack into the HPC environment, and then, using the same technologies we've been talking about, be able to scale the workload into there.

That's great. And Wang, in terms of the feedback you're getting from conversations with customers, where do you think the trends for adoption around OpenStack and Neutron in China are heading?

As I mentioned before, Neutron is trying to become more and more open, to become a framework or a platform, so customers can get more and more choice. The community may even change the default reference implementation from OVS to Linux bridge. This means that what you can choose is more than Linux bridge or Open vSwitch: you can choose something like Big Switch, you can choose OVN or another software-based SDN solution, and you can even choose a hardware-based SDN solution, say from Cisco or from Juniper. What customers need is stability, performance, and scalability. They may prefer software or hardware, but the final requirement is, as I mentioned, those three words: scalability, performance, and reliability.

Yeah, that's great. And Dimitar, going back to futures and where your project is going: we've heard a fair amount about the Verizon, Red Hat, and Big Switch NFV use case. Maybe you could talk a little about scalability and what you're building out.

Yeah, sure.
So what we're looking at is a typical cloud environment with a couple of geographically distributed data center regions and a number of availability zones in each region. In our case an availability zone equates to a pod: a pod has a pod interconnect and a number of compute and storage resources connected to that interconnect, and that is one building block of the cloud, if you will. We're subscribing to the core-and-pod architecture, where you minimize the size of your fault domain to the size of the pod, and that is directly driven by some of the requirements we have in terms of the availability and resiliency of the solutions we have to provide. So that's pretty much it from that perspective.

Great. Why don't we do a quick lightning round: if you could do it all over again, what lessons did you learn, and what would you do differently? Christian?
Then we'll go through the panelists and open it up for Q&A.

Well, it's still very early days for us, but one of the things I've certainly learned, and I've had lots of input from the summit here, is that upgradability is a major concern. How you initially deploy your cloud is going to constrain this very much, Neutron being one of the more complex pieces to upgrade, particularly without downtime. I think we would take a different approach to deploying the cloud; we would do something a little less monolithic, perhaps based on Ansible, OpenStack-Ansible for example. And I'm kind of thinking we're going to have to sort this out somehow anyway.

I guess I won't speak to the VDI use case, but for our general OpenStack hosting environment, I think one of the big things I would have done first is inventory the pets. We've got a lot of applications that aren't really designed to work well in an OpenStack environment today, and we're going through and looking at those now. Our pods are in hybrid mode right now: a lot of VMware and a little bit of OpenStack. So really, it's doing that inventory up front and starting to re-architect a lot of those things to work in a more fault-tolerant way.
So From from my perspective one one of the things that you know, I would change and that would be would be you know, difficult to change would be the kind of the Governance process that we have to go through, you know, in our own organization on their Another thing that is more kind of within within my control is to To spend we spent quite a bit of effort, but I would have spent more in terms of You know, bringing, you know, other teams In specifically in operations Earlier and kind of educate them and And make them kind of more enthusiastic about You know these these paradigm change. So it's it's it's a big it's a big cultural change And getting everybody involved, you know, early in the processes is critical. So that's one thing for sure that I would have approached differently That's great. Wang any parting comments on things that you have seen some of your clients You wish that they had have done differently or what was the top sort of takeaway for Issues to to avoid There is a there is a thing that's need to say that Many people say that ASTN there will there will be a revolution. It will Cure all network engineers, but the reality is that now we use the ASTN technology in open stack mostly the software based is still needs the hardware operation and it will not make any result to the Network the traditional network operation. So If if the ASTN solution should be more productive should could In fact the virtual and physical both way and then the The result and the operation or develops of the open stack and networking will be Big change Fantastic. So we're going to open it up now to q&a. We have a nice geographic representation A variety of different operators and contributors And I see that we have a question. So go ahead Um, yeah, my question has to do with the Throughput of open v-switch. 
There are many Technologies for bypassing the v-switch like dpdk or s r i o v or pc i pass through Do any of your applications need More throughput than open v-switch, and are you going to these bypass technologies? Yeah, I can probably address that quickly so With the big switch it it's solution. It's actually not using open v-switch. They there is a It's specifically if it's a switch light virtual is the Is the virtual switch that is basically replacing open v-switch is still using by the way the the kernel The kernel kind of portion of open v-switch, so it doesn't require You know additional changes to the kernel But at the same time it does replace the user space, which you know, which is where the bottleneck is in the first place So it doesn't by definition use the open v-switch And I get this right Yeah, that's right and then but in time But I was going to ask in terms of building on his question the the types of applications that you have in your environment that that require that type of throughput Oh, the the types of applications they're you know in a You know a private cloud environment, there'll be you know the full variety of You know enterprise class applications that you can imagine and you know some of some of them are more demanding for networking than others, but uh You know we we know that they are you know multiple that are really demanding to be you know for networking performance, so Not to translate the question Is 10 gig into the vm enough if it's not enough You may need to go to the bypass technologies Do you need to have more than 10 gig coming into the vm? If we need to have more than 10 gig into the into the vm, uh, realistically Um You know I have to be if You cannot get You know 10 gig to the vm Unless you go to something like s rov or you go to dpdk, right? 
That's what it really takes, exactly. And you absolutely need that if that's what you want to do. I've been talking to Big Switch quite extensively on that, and my understanding is that the next release of Big Switch will be integrating DPDK. It's extremely important. There was a presentation with Verizon just before this one where they actually talked about this, because in their case they are deploying it for NFV, which is actual networking workloads, and for that it is absolutely necessary. I'll have to defer to somebody from Big Switch as to when the DPDK integration is going to be complete. My understanding is with the next release, and I see Ken nodding, so he confirms that.

That's correct. Any other comments on the panel on that question? All right, great. Next question.

Hi, I'd like to hear from each of you on this one. In the real world, what do you do when things go wrong, and how are you monitoring both the underlay and the overlay? What tools do you go to: tcpdump, Ganglia? What's your sense of the state of the art, and how would you improve it? Thank you.

I mean, there are lots of different approaches to this. We use traditional stuff such as Zabbix for monitoring and the ELK stack to collate logs, et cetera. And I know Big Switch internally also provides some great analytics and visibility tools, and again that goes back to the single pane of glass. Having a physical-plus-virtual layer that addresses the entire spectrum gives you great visibility across the boundaries; essentially the boundaries just mesh into one, which, again, goes back to the simplicity argument.
It's fantastic, it's kind of next-gen stuff really, and it's one of those things where, once you've experienced it, you wonder how you did without it. But yeah, a variety of things and strategies will work, I think.

Well, I haven't found anything that works well yet. I mean, it's this hodgepodge of stuff, right? If someone has the answer, please stand up; I would love to hear it. Other than going out and spending millions with certain vendors, who shall remain nameless, on tools that say they'll do the job. In the absence of that, we're pulling that data together as much as we can. We're using Splunk to basically suck in a lot of the logging data, and then writing things on top of that to look for the patterns. When we initially deployed our pods we had all sorts of fun stuff that you find, right? It's how do you figure out what it is and iron it out. So that's been tremendously useful for us.

I'll go back to the overlay-versus-underlay topic that we talked about. Typically, in any overlay/underlay situation, if you're using an overlay, what happens is that in essence you have two separate networks. You have a physical network that you might be managing, or, very likely, somebody else is managing, and then you have your overlay on top of that. So basically you have a virtual network and a physical network, and in many cases the overlay doesn't have any visibility into the underlay other than, say, "it doesn't work," and then you start troubleshooting. In this case, and Christian touched on that as well, in the case of Big Switch you have one solution.
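The approach of aggregating logs and writing pattern-matchers on top of them (Splunk in Dallas's case, but any log pipeline works the same way) might look like this minimal sketch; the sample log lines and the patterns are invented for illustration:

```python
import re
from collections import Counter

# Invented sample of aggregated OpenStack networking log lines.
LOGS = [
    "2015-05-20 12:01:07 ERROR neutron.agent port abc123 binding failed",
    "2015-05-20 12:01:09 WARNING ovs-agent flow table overflow on br-int",
    "2015-05-20 12:01:11 ERROR neutron.agent port def456 binding failed",
    "2015-05-20 12:02:30 INFO nova.compute instance spawned",
]

# Named failure signatures -- the kind of thing you codify after a
# painful incident so the next occurrence is caught automatically.
PATTERNS = {
    "port_binding_failure": re.compile(r"port (\w+) binding failed"),
    "flow_table_overflow": re.compile(r"flow table overflow on (\S+)"),
}

def scan(lines):
    """Count hits per named failure pattern across aggregated logs."""
    hits = Counter()
    for line in lines:
        for name, pat in PATTERNS.items():
            if pat.search(line):
                hits[name] += 1
    return hits

print(scan(LOGS))
```

The real work is accumulating the `PATTERNS` dictionary over time: each outage investigated by hand becomes a signature the pipeline watches for from then on.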
Basically, the virtual switch we talked about, running on the hypervisor, is in fact part of the fabric; it's like a fabric switch. In the P+V fabric, as they call it, the two-tier spine-leaf becomes in effect a three-tier spine-leaf, but the virtual switch is really part of the fabric, and you have visibility of the entire fabric, both physical and virtual, because it's one fabric under a single SDN controller. So you get visibility into the entire solution, both physical and virtual, from one place. And there are certain tools that they provide, like Test Path, I believe it's called, where you can generate traffic from one point and see how it goes to another point, through both the virtual and the physical environment, and see exactly where it stops, if it does, when you're troubleshooting a problem. Yeah, that's what I can add to that question.

Wang, any final comments on that?

Okay, I can add some comments. In my opinion, so far there is no perfect tool for overlay network visualization. You may need to combine, as you say, tcpdump or iproute2 or netlink or other tools that you're familiar with. Yes, it is pretty complex; there is no perfect tool for monitoring the overlay networking.

Great. We're a little bit over, but I think we have time for one last question.

Yeah, there was some mention of migrating existing virtual networks from VMware to OpenStack. I'm just wondering, what are the steps you go through? Do you have to, once again, create networks, subnets, interfaces, things like that? Or is there a way to just move the metadata, have it point to the existing network, and take over its management under OpenStack? How do you go about doing the migration?

Yeah, I mean, right now I haven't found a simple way to do it. As I said, a lot of it gets back to: how do we untangle, as my CTO says, the spaghetti monster that is a lot of these applications, and really look at how we build them for fault tolerance and resiliency? Doing that, I think, means straightening out some of the networking that's there now. We've got a lot of big flat subnets, and we're trying to segment that out. So what we've done, even VMware to VMware, from our legacy hosting environment into the pods, is a temporary span to get things over, and then we start doing the rearchitecture in the new environment. The idea being that the span goes away and you actually start managing those networks through, in our case, Big Switch.

Great, fantastic. Well, we're at the end of our time. Thank you all for staying, and I want to thank the panelists for their insight and for sharing the lessons learned along the journey of Neutron networking. We're going to wrap. Thank you so much.