I'd like to thank Joe Fernandez for joining us again today. He's the head of product management for OpenShift. We've done a number of talks on OpenShift v3, but this time, rather than a deep dive into what's under the hood and the components, we're going to focus on the features of v3, because that's the other side of it and the reason you're all using it. With that, I'll let Joe Fernandez introduce himself and take it away.

Thanks, Diane. My name is Joe Fernandez and, as mentioned, I run product management for OpenShift. As many of you know, we've been working over the past year-plus on OpenShift 3, the next major evolution of the OpenShift platform. We've covered the core architecture and specific areas like networking and, most recently, storage, and we've talked a lot about what we're doing with Docker and Kubernetes. Today I want to give more of an end-to-end view, starting with why we decided to build what we're building. We'll continue doing these types of briefings through the Commons initiative over the next several months, so you can get deeper into different aspects of OpenShift.

You've probably seen this slide before, but OpenShift 3 is a brand-new stack from top to bottom. We rebuilt our container layer and its API around Docker. Containers have always been the core of OpenShift, and with the emergence of Docker as a de facto standard for containers in the industry,
we really felt it was the right choice to build our new platform around. We've also rebuilt our orchestration engine around the Google Kubernetes project and built it on top of a new container-optimized OS based on RHEL 7. As a result of these decisions, we've been able to expand the choice of frameworks and services we can provide, tapping into a much larger community of available services, adding certification on top of that, and then building a whole set of developer and operator capabilities on top, which I'll also get into today.

One of the biggest drivers was the emergence of all these new standards. We're bullish on Linux containers, and we have been from the start; we feel they're really the best way to deploy applications and manage them at scale. But in the past, each PaaS vendor had its own bespoke implementation: OpenShift had gears and cartridges, Heroku had dynos, and Google had its own implementation. They all used similar primitives, like Linux control groups and kernel namespaces, but differed in implementation. With Docker, we now have a standard packaging format for how we package content to run in containers, and we're working on a common API for how we instantiate those containers. That allows us to bring this capability into OpenShift while working upstream with companies like Docker Inc., Google, IBM, and others, and I think it's that standard that's going to continue to drive container usage across the industry.

We chose Kubernetes as our orchestration and container-management engine, and there we collaborate with Google. More and more communities are emerging around these standards; our own Project Atomic is one, and it seems like something new pops up every week. So it's really an exciting time in the container space
specifically, and in open source in general.

We also get asked a lot about different trends in enterprise software when we're out talking to customers. Customers are interested in how they can evolve their development practices toward more of a DevOps model and how different solutions can help with that. Folks are considering new application architectures and moving toward microservices-based deployment models, and we get asked how our platform can support that. And then there's the emergence of containers as a deployment model, something that enables deployment across hybrid cloud architectures given their portability, so we get asked about that as well.

We've also learned a number of lessons over the last five years: lessons about what developers demand, what they expect from our service, what they need for their applications, and how they want to integrate OpenShift into their existing dev tools and processes. On the infrastructure side, when we deploy on premise, how do we integrate into a customer's compute and networking infrastructure, integrate with their existing application services, and enable operations folks to manage the PaaS platform and manage containers at scale? There have been bumps in the road along the way, but all of this experience has led to some of the decisions we've made here. So let's go through the stack now.

Starting at the bottom: as you all know, OpenShift is based on the latest Enterprise Linux, RHEL 7, which was released last June.
RHEL 7 is where Red Hat first introduced full support for Docker as a container standard and format, and last week we came out with RHEL 7.1. We're really trying to bring the enterprise-grade security, stability, and reliability of RHEL to the container space and to Docker. Last week, in addition to RHEL 7.1, we announced a new variant called RHEL Atomic Host. This is a new model for Linux: a minimal-footprint, container-optimized OS. I know we had a big session on that today, so some of you may have sat in on it. Our goal for OpenShift is to let operators choose: run your OpenShift deployment either on a full, traditional RHEL 7 installation, or start to leverage RHEL Atomic Host as a minimal, atomically updated host OS for a more optimized footprint.

We've also talked a lot about Docker, but this really is the core of OpenShift; it's the deployment model. There are really two things we get from it.
One is the API, the engine for setting up the container sandbox: the cgroups, the namespaces, SELinux, and so forth. The other is the packaging format, and that's what drew us to Docker the most: the packaging format and the ecosystem built around it. These days you can get almost any stack from Docker Hub's vast community ecosystem, on top of which we can certify enterprise-ready images and work with customers on building out the stacks they need for their applications.

All of this moves OpenShift fully into an immutable, image-based deployment model. That certainly has implications, which I'll talk about later because it drives some of the features, but we think it's the right model for managing applications. It accelerates deployments, and it also makes things like rollbacks much easier than our prior model, which was more build-based.

The other component we've talked about quite a bit is Kubernetes, but to go a little deeper: three things are important here from our perspective. For those familiar with OpenShift, Kubernetes largely takes over the functions of what you know as the OpenShift broker; it's like the broker tier in v3.

First, it helps us orchestrate multi-container services. I've talked about this in numerous presentations, but most applications, on OpenShift or in general, aren't going to run in a single container. Something needs to wire those containers together, create these multi-container services, and then wire those services to other services. You may have a service that's your web tier or web front end, running something like Tomcat in a cluster of four or five containers, whatever
the size of that cluster may be. Perhaps that talks to another service tier, say MySQL, which you may run in a clustered setup as well, and that may talk to yet another service, maybe an in-memory cache. What's neat about v3 is that all of these services become first-class citizens. It's no longer the case that the app framework is the primary service and everything else gets added around it. You can come into OpenShift and just deploy a database service, or just deploy a messaging service, or what have you. All services are treated equally, and we wire them together to create the topology you need for your application.

The other interesting thing about Kubernetes is that it handles not only multi-container deployments but deployments across multiple hosts. The scheduler function determines placement, and we've built features on top of that, like regions and zones, to allow administrators to control it. The scheduler determines, when you deploy these containers, where they should actually run. Regions let you specify a set of hosts for affinity, so your workload runs only within that select set of hosts, while zones enable anti-affinity: within a region you can specify zones and have OpenShift automatically spread your application instances across those zones equally.

Finally, Kubernetes handles container management, and we talk about a declarative model for that. What that actually means is that you define how many instances make up your service when you deploy it, and then Kubernetes works to maintain that state. If the Tomcat service was deployed with four instances, for example, you declare that up front, and OpenShift will
maintain that state. If at any point an instance fails and Kubernetes detects that fewer than four instances are running, it will automatically restart one. That builds on functionality we had in v2 called watchman, but it goes much further. We also tie things like our scaler into this: whether you do manual or automated scaling, when you want to scale up additional instances you're essentially just updating the declaration. Instead of four, you might say you now need five instances; Kubernetes detects that and automatically starts a fifth instance, and likewise when you scale down. All of this is tied to the Kubernetes model for managing how many instances are running.

I put a little diagram here. It's very high level, not a technical architecture at all, more of a marketecture if you will, but it shows some of the concepts we're referring to. Just like in v2, there are really two types of instances: the nodes, where your applications run, and the master, which is what we used to refer to as the broker (master is the Kubernetes terminology). The diagram shows a single instance, but you can actually make the master highly available. The master runs components like all of the APIs developers use to access the system; they're all RESTful APIs secured by OAuth, and you log in through your console, CLI, or IDE. etcd is our distributed registry.
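As an editorial aside: the declarative model just described can be written down concretely. Below is a minimal sketch in the Kubernetes v1 manifest syntax that these concepts later stabilized into (the beta-era API at the time of this talk used different field names); the image, label, and region names are hypothetical.

```yaml
# Declare the desired state: four replicas of a Tomcat front end.
# Kubernetes reconciles actual state toward this declaration; if an
# instance fails, a replacement is started automatically.
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-frontend
spec:
  replicas: 4           # scaling up is just an edit: change 4 to 5
  selector:
    app: web-frontend   # manage every pod carrying this label
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      nodeSelector:
        region: primary                  # placement control in the spirit of regions
      containers:
      - name: tomcat
        image: example/tomcat-app:latest # hypothetical image
        ports:
        - containerPort: 8080
```

Scaling, manual or automated, then amounts to updating `replicas` rather than imperatively starting or stopping instances.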
etcd maintains the state of the system: information on where all of these containers are running, what their IP addresses are, and so forth. Then there's the replication controller, which works with the kubelet on each of the nodes to do the health management I was referring to earlier, maintaining the state you declared for how many instances you want. And then there's the scheduler, which handles placement.

The Kubernetes scheduler is also extensible. If you look in the open source community, you'll see folks have worked on integrating the Mesos scheduler with Kubernetes, and the YARN scheduler. So if you want to go beyond the scheduling capabilities Kubernetes provides, there are extensibility options. But as I said, we'll be shipping with a feature set that I think handles many of the tasks customers need for doing placement according to their policies.

On the right-hand side, the Kubernetes deployment unit is something called a pod. Inside those pods you run one or more containers; in Kubernetes terminology, a pod is an atomic unit in which you can have a single container or multiple containers running. If you have more than one container, all the containers in the pod still share an IP address, they share storage volume mounts, and they always run on the same machine. An example might be a container that does reads and another that does writes, which you always want to deploy together; or Postgres and pgAdmin, as another example.

Then there are really two layers (and I should probably change the way this looks). The service layer is basically how all of these services know about each other.
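To make the pod and service concepts concrete, here is a sketch, again in the later Kubernetes v1 syntax rather than the beta-era API, with all names invented for illustration. The two containers in the pod share one IP address and one volume and always land on the same host; the service gives the pods a stable, known address found via label selection.

```yaml
# A pod: the atomic unit. Both containers share the pod's IP address
# and the same volume mount, and are always scheduled together.
apiVersion: v1
kind: Pod
metadata:
  name: store
  labels:
    app: datastore
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: writer
    image: example/writer:latest   # hypothetical
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: example/reader:latest   # hypothetical
    volumeMounts:
    - name: shared-data
      mountPath: /data
---
# A service: a stable address in front of every pod whose labels match
# the selector, so callers never need the individual pod IPs.
apiVersion: v1
kind: Service
metadata:
  name: datastore
spec:
  selector:
    app: datastore
  ports:
  - port: 5432
    targetPort: 5432
```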
When you want to connect one service to another, there's a service proxy, so one tier can call the other at a known address without having to know the IP addresses of each of the pods, plus a set of labels that identifies where those pods live so the proxy can connect to them. The routing layer handles routing requests from the outside; I'll talk about that a little later, but it's how web requests from your web or mobile clients route to the actual instances themselves.

All right, so in terms of services, our plan is to ship with the same set of language runtimes and frameworks we supported in v2, except that instead of being packaged as cartridges, these will be packaged as images. We're going to start using the standard terminology, containers and images, versus gears and cartridges, which is our OpenShift-specific terminology, because again, we are adopting the standard. They're the same containers and the same images whether you use them on OpenShift or run them outside of OpenShift. We're excited about the large and growing community ecosystem of images out there, because the biggest leverage point for any platform is what you can run on it. This will give customers lots of choices.
We're also doing a certification program so that you can get trusted images from Red Hat or our ISV partners. We'll be certifying things like the content of the container, and also certifying that the container has been tested and certified to work with a known host. Our xPaaS program, which is how we're bringing more JBoss middleware to the platform, is also built on this, so those JBoss services will come in as Docker-containerized images as well.

Then all of this gets abstracted. I actually had an internal meeting today where I saw the latest designs for our web console. Essentially, as an OpenShift user, all of the stuff we just talked about will be largely abstracted away. You'll come in either through our web front end, the web console, through our command-line interface, or through your IDE; we're working with the JBoss Developer Studio team on an Eclipse plugin for OpenShift 3, and hopefully there will soon be other plugins as well. That's how users access OpenShift, and if they don't really care that Docker and Kubernetes are running underneath, they shouldn't have to know about that stuff. They're focused on deploying their applications and managing them. Maybe in another session we'll actually go through some of those screens, and if you're in the beta, you'll see the latest of this in the beta 2 drop, which is coming out this week, actually today or tomorrow.

We also enable multi-user collaboration. What does that mean? Users will work on projects, and projects will be isolated from other projects.

(If everybody could just mute their lines, or Diane, if you could mute folks. Diane: Yeah, I'll mute everybody.)
So if you look at some of the container services out there, whether it's Amazon's new container service or Google's, what you're really doing is spinning up your own set of VMs and then running containers on those VMs, and you actually get charged for the VMs. OpenShift is a shared environment: you spin up a set of VMs or bare-metal instances, and then everybody works on that shared set of instances, so your applications could run on the same instances somebody else's do. The project is basically the equivalent of what we refer to as domains today in OpenShift 2. A user runs applications in their own project, just like they run them in their own domain today, and multiple users can get access to that project if you want to collaborate and share. If you don't have access, you'd be restricted, and you'd create separate projects for different users and teams. All of those access controls still tie back to your enterprise authentication system; we'll still have the SAML-based plugins, LDAP, Kerberos-based systems, and so forth. We're not going to manage authentication on OpenShift itself; we'll get the user's identity from wherever customers store it.

I talked earlier about immutable infrastructure and its implications. The implication is that everything you deploy is based on an image, and when you scale up additional instances, you're scaling up additional copies of that image. That's great from the perspective of consistency and being able to manage rollbacks, and it also speeds up deployments. The challenge is that when you want to change something, you really want to change the image, not the instance.
If you change just an instance of that application, then when you restart it or scale up another one, the new one will start from the image, not from the local change you made. So we've integrated a full build-automation capability inside of OpenShift. When you want to make a change, we'll run a build automatically, create a new image, and then deploy that image. If there's an issue, we'll automatically roll back to the prior instance, or you can request a rollback, and so forth. This build automation is built in, and we'll support it from either a Docker build or a source-code build; we have a new capability called source-to-image, which I'll get into on the next slide. There are different build types, and we'll also still support binary deployments: if you decide to build your images outside of OpenShift, in your own tool chain, we'll just act as a deployment and management platform for those images. Customers may already have build or CI infrastructure producing image binaries but want a place to run and manage them; at that point, we're not doing any building on the platform at all, we're just deploying what you've built.

We also introduced configurable deployment patterns. When you define a deployment, you can define its type, which lets you specify things like a rolling deployment, where you deploy one instance at a time, versus taking down all the instances at once, which you obviously wouldn't want in a production environment.

This slide shows some of the build options. If you want to do a Docker build,
the input is essentially a Dockerfile; we'll run a standard Docker build, produce the image, and deploy it. This is the equivalent of running docker build on your local machine, except we're doing it at scale, because there may be hundreds or thousands of images being built every week or every day, depending on how big your platform is. So this is a scalable way to do these builds at the platform level, where OpenShift runs them for you.

The source-to-image build is interesting in that it allows us to take just a source-code change and build it as a layer on top of an existing image. It's faster than building the entire image, and it's useful whether you're a developer who just wants to push some code to the platform and have us take care of layering it in, or an operator who wants to push a patch. We'll take whatever source you provide as input, match it up to the application instances, identify those images, run the build, add a layer, and then redeploy the new instance.

Underneath, we're working on a feature called app-gen that will allow us to inspect any arbitrary image and generate the metadata required to deploy it in OpenShift and turn it into a builder image like you see below. So you can run arbitrary images, or arbitrary Dockerfiles if you're using the Docker-build approach, and if you want to turn an image into a source-to-image builder, we're working to make that automatic as well: you just provide us with the image, and we'll generate the appropriate deployable image for you.

The next area is container networking.
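Before the talk moves on to networking, the build types just described can be sketched as configuration. This sketch is modeled on the BuildConfig object that shipped with OpenShift 3 at GA, after this talk; beta-era field names differed, and the Git URL and image names are hypothetical.

```yaml
# A source-to-image build: pull source from Git, layer it onto a
# builder image, and push the result to the internal registry.
# A Docker build would instead use "type: Docker" with a Dockerfile
# at the root of the source repository.
apiVersion: v1
kind: BuildConfig
metadata:
  name: myapp-build
spec:
  source:
    git:
      uri: https://example.com/myteam/myapp.git   # hypothetical repo
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: tomcat-builder:latest               # hypothetical builder image
  output:
    to:
      kind: ImageStreamTag
      name: myapp:latest                          # image the build produces
```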
Networking changes quite a bit in v3 relative to v2, in a couple of ways. First, in v2 the containers you deployed, the gears, weren't directly routable. They did not have IP addresses; you had a single IP address on each node and then essentially an Apache router that would route to the gears, which were all running on local loopback addresses (127.x.x.x). The nice part about containers through Kubernetes is that each container does have a directly routable IP address, so the container starts to look more like a VM from that perspective, and a whole lot of challenges go away in terms of how you're able to deploy stuff within those containers. The challenge is that's a lot of IP addresses to manage, so we're moving to a software-based networking implementation. The current one we're planning on is based on Open vSwitch; we're going through some performance benchmarking on it now. But this will be in the box, not something you'd have to manage yourself: we'll deploy a networking configuration that runs wherever RHEL runs, and wherever OpenShift runs as well, so what's available in the box just works. In addition, we're working upstream in Kubernetes to plug this into a networking API, such that if you have other SDN solutions, you can leverage those as well. In that respect, customers who have already made an investment in SDN, maybe through OpenStack Neutron, maybe through partners like Nuage or Cisco or others, can leverage it with OpenShift as an alternative to what we provide out of the box.

The other thing that changes in OpenShift, as I mentioned, is routing.
Again, we don't have the Apache-based routers on each node anymore, and we're no longer putting separate HAProxy routers in front of each scaled application. We've taken the router tier out and made it a platform-wide routing tier that can route to any instances on any of the hosts. By default it gets deployed as a redundant router, two instances for availability, and the routes get updated by the routing tier being connected through etcd, so it's aware when new instances are deployed and automatically updates the routing table. Traffic coming into your applications, again from web or mobile clients, will automatically be routed to the appropriate instances by our routing tier.

The other thing this facilitates is replacing that routing tier with your own routing infrastructure. A common request from our on-premise customers is that they've already made a big investment in, say, hardware-based load balancing or highly scalable routing, and they want to use that instead of the software-based mechanism we provide. This design facilitates that type of swap-out as well; we want this component, and in fact many of our components, to be easily swappable or able to integrate with what customers may want to use as an alternative. There was a Commons briefing a couple of weeks back where we showed a demonstration of the routing and the Open vSwitch-based networking capability, and I encourage folks to watch that. And a couple of weeks ago we did a session on storage.

Diane: Joe, before you move on, there's a question.
It's from Jason Ford at BlackMesh, in regards to...

Joe: Yeah, let me get through a couple more slides, and I'll come back and take all the questions at the end.

Diane: Perfect. That might be easier.

Joe: So, storage. We talked about this last session, and again, it's a great briefing for folks who haven't watched it. We have a goal in OpenShift to ultimately allow folks to run whatever they want on the platform: not just stateless services but stateful services as well. As is the case today, you don't have to run everything on the platform; if you want instances on OpenShift to connect to services running outside of OpenShift, either in VMs, elsewhere in your infrastructure, or coming from a cloud service, you can certainly do that. But if you do want to start running more stateful services on the platform, things like databases and messaging queues, we want to make that easier to manage. That's where we're making use of shared storage through Kubernetes and the ability to map that storage to the pods running your containers. You'll see a more detailed view of that in the briefing I mentioned, but the idea is to make this extensible, so that we'd have plugins for things like NFS, iSCSI, and different cloud storage options. Essentially there are two user personas: the person who's actually setting up the storage pools, and an end-user developer who's making a request for a volume to be mapped to his application. That user is requesting storage and requesting that it be provisioned in the same way they request that we provision their applications, and we essentially map one to the other.

Internally, OpenShift will have a registry component. We've actually
integrated a Docker v2-based registry, and that's where we store all the images that we run and the images we're updating and managing for users as they deploy updates to their applications. This is all part of the platform; you'll notice it's one of the components. Actually, one of the cool things is that when you install OpenShift, it's a very lightweight binary, like 40 or 50 megs, and that's the entire thing; some additional components are installed as Docker images, the registry being one and the router another. So the installation consists of the main binary and then a handful of container images that get pulled down and deployed. We do that partly for dogfooding, but also because we think it's a great way to run these components: if we're telling you to run your applications this way, we should be able to run our own applications this way.

Authentication and access controls will be applied to those images, so you can control which users have access to what. In addition, you can integrate with external registries. For example, at Red Hat, our Satellite product team is adding enterprise container registry capabilities to that product line.
So if you're using Satellite as your content manager for RPMs, you'll soon be able to use it as a content manager for image-based content as well, and then you can pull those images into OpenShift; we'll have workflows that describe that. Obviously, you can also pull in content from Docker Hub or other third-party enterprise registries. Wherever you ultimately want to manage your content, you can continue to do that, and when you run it on OpenShift, we'll just pull it in, deploy it, and then pull in updates as new content becomes available.

Lastly, on the administration side, we really wanted to improve things here as well. The first thing we want to do is make it as simple to install OpenShift as it is to install applications that run on top of OpenShift. For folks who have been with us since the 1.0 days, it wasn't always that simple to get an OpenShift environment stood up. I think folks going through the beta will see a vast improvement in that area: it's a very lightweight binary, it installs very quickly, and it should be a lot easier to get up and running. We're doing some interesting work here: underneath the covers, our default installer uses Ansible as its deployment mechanism, so that's actually what's handling the deployment for the default installer.
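To give a flavor of the Ansible-driven install, here is a purely illustrative inventory sketch in Ansible's YAML inventory format. It is not the actual openshift-ansible inventory, which defines its own group names and variables; the hostnames are made up.

```yaml
# Hypothetical inventory: one master and two nodes. The real installer
# ships its own inventory layout; this only illustrates the idea that
# the hosts to deploy to are declared up front and Ansible does the rest.
all:
  children:
    masters:
      hosts:
        master1.example.com:
    nodes:
      hosts:
        node1.example.com:
        node2.example.com:
```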
It's part of the package, but then we'll also be working on Puppet modules as an alternative deployment mechanism. So whatever you want to use to deploy, you'll be able to; we don't impose anything on that. As I mentioned, the separate routing tier should greatly simplify integration into DNS and external routing infrastructure, and you'll still be able, as an administrator, to integrate with your enterprise auth systems.

In terms of the administration console: first, we'll have a full set of admin APIs, which I think are a much richer set than what you saw in v2, so you can manage at the API level. And then, we had sort of a very basic admin console in v2. We're now working internally with another product team at Red Hat, our ManageIQ and CloudForms team, and they're actually building out a richer management framework for both containers and OpenShift itself, which will be embedded as part of the solution. So there are some new things coming in terms of administration tools that we'll be able to provide as part of the package as well, and hopefully that will help on the operations and administration side.

A couple more slides here. From the release schedule perspective, I would encourage folks, if you're thinking about trying it out: beta 2 actually came out this week, in fact I believe today, so you can try that out. What we're doing is we decided to do a drop each month between now and when we go GA. We did beta 1 in February, and the early betas really have more of the low-level functionality, so you don't have all of the nice GUIs or the fully blown command-line interface and so forth. But for folks who are already working with beta 1, you can see a lot of this stuff in action. Beta 2 comes out today.
There'll be another beta drop in April, and there's likely to be one in May as well; we're still trying to confirm that. With each beta drop you get closer to the finished product. We're targeting a mid-year release; the end of June is the current projection for being fully GA. We'll GA first with OpenShift Enterprise, and then we'll also have a preview environment online, a hosted environment that we're going to be expanding upon, and then ultimately we'll convert the current OpenShift Online environment over. So there'll be more on that post-June.

So hopefully you've seen how we're bringing it all together. Again, this is more than just a bag of components; this is more than just Docker or Kubernetes. What we're really trying to do is build a fully integrated platform, so developers and administrators can just do their jobs: take advantage of all of these cool new technologies that are coming into the space, but not be burdened with knowing the inner workings of the various bits and pieces. We're also making sure that we can run this all at scale, so that it's not just one developer running containers on his one machine, but really hundreds of developers running thousands of containers across hundreds of host instances and so forth.
So we really want to build a scalable platform that will grow, and leverage the best of what's out there, from Docker to Kubernetes to all of the low-level pieces.

So with that: if you want to learn more, you can come to our community site, openshift.org. In addition to participating in the enterprise betas, anybody can grab the latest sources from OpenShift Origin; we update that pretty regularly. That's the trunk, but if you want a more controlled experience, then I would suggest taking one of the official beta drops, where we have actually documented workflows and it's been tested against those flows. So I'll switch to the console now and look at questions. Thanks.

All right, thank you very much. There was one question outstanding early on from Jason. Okay, let me get back to it.

I guess I can just say it instead of you having to read it. What we kind of see is a double virtualization penalty: if we're running Open vSwitch underneath on a bare-metal hypervisor, and then we spin up KVM instances on top of that, and then run OVS inside of those, you pay that double virtualization penalty. How are you overcoming that? That's the first question, and the second question is, how's that working out?

Yeah, so there's a lot of work going on in the container networking area right now. In addition to the stuff we're doing here, there are solutions like flannel and Weave and others out there, and we've been sort of benchmarking all of them.
So you're right: the performance results that you'd get with OVS on bare metal will exceed what you get running it on top of VMs or on top of containers; we've certainly seen that. What we're trying to do is get to a model where we can achieve greater scale than some of the alternative solutions out there. We have customers that want to run, as I mentioned, hundreds of instances and tens of thousands of containers. Our initial target, what we're testing up to, is I believe 200 instances and, I want to say, 50 containers per instance, so that math comes out to about 10,000 containers. That's the scale we're testing to.

I'm not a networking guy, so I can't really comment on the specifics of the implementation. There was a prior session that I'll point you to to check that out, and then we can maybe hook you up with some of our folks working on that to get their perspective. But the goal here, too, is to make it pluggable. We're going to plug our default in through Kubernetes, and we've been doing work upstream there to enable other alternatives to be plugged in just the same. So we want to provide something that scales well out of the box, but also allow for that flexibility. I believe in our tests right now the OVS implementation has been scaling better than some of the alternatives out there, like Weave for example, but I think there's still more work to be done in terms of benchmarking and tuning before we get to June. So stay tuned for more on that.

Let's see, other questions? Chris has been doing a good job answering. I don't know, Chris, are there any questions that you want to highlight here?
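The scale target quoted there is easy to sanity-check; a tiny script, using the approximate figures from the talk, makes the arithmetic explicit:

```python
# Back-of-the-envelope scale math using the rough figures quoted in
# the talk: about 200 host instances, each running about 50 containers.
instances = 200
containers_per_instance = 50

total_containers = instances * containers_per_instance
print(total_containers)  # 10000
```

So the stated test target works out to roughly ten thousand containers, while the customer ask of "tens of thousands" is the direction the scale testing is headed.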
Unless someone wasn't happy with my answers. Okay. For folks who aren't looking through the chat, there's quite a rich set of Q&A there. Just trying to find anything that hasn't been answered.

Another quick question: in the previous version of OpenShift, there was at least some support for Windows apps, and actually for running OpenShift on Windows. Any plans for that?

Yeah, I can talk to that a little bit. Upstream, I think what you're referring to is work on Origin by one of our partners to basically take Windows instances and connect them through our broker tier, so that you could actually deploy .NET apps or other Windows services the same way you deploy services on our Linux nodes, and that was really cool. We never officially productized that; there were some challenges in getting it to a fully supported state, not necessarily technical challenges. Where we're going now is actually interesting: there are essentially two paths. One of the things that we were excited to see is Microsoft announcing that they too are supporting Docker and that they're going to essentially bring containerization to Windows, mapped to the Docker API. By the way, I've actually spoken to them about that as well; it's just part of working with others in the community. We think that's pretty exciting, because since we've adopted that API, it'll sort of open the door for us to essentially run Windows containers. I think they've actually delayed the next release of Windows Server, hopefully with the goal of getting this out. So that's certainly something that we're watching: containers on Windows, through the same Docker API that we're using, because that would bring it in from that respect.
The other thing also is that they've open-sourced the .NET runtime, so .NET on Linux is now a possibility, and that will allow you to run .NET services just on the RHEL nodes and so forth. I think it's still early days, but the benefit is that both of these are now being driven by Microsoft themselves, so you have the chance for something that's going to be more supportable long-term, because it's coming from the source, and we're heavily involved with that. So we'll see where things turn up over the course of the year. I don't know if you can run Exchange on it, though.

Other questions? Feel free to either unmute your mic or put anything into the chat. We're actually almost at the end of the hour here, and I really want to thank Joe again, and Chris for answering questions in the chat. It's great to have you here. We'll be doing this again, hopefully next week, and we'll do a lot more. If there are other topics that you guys would like to hear about, please let me know and I'll try to arrange guest speakers. And if you have a topic that's near and dear to your heart that you'd like to talk about, let me know as well and we'll give you a forum to pontificate about it to your fellow participants. So again, thanks, Joe, and thanks, Chris, for doing this again today.

All right, thanks everybody.