It's great to be back at ONS, and thanks, Arpit, for the introduction. I'm going to be talking about container networking, and really representing the imagination and work of hundreds of amazing engineers on the Azure networking team; it's their imagination and their work that created these services. There are two things the first segment of my talk will emphasize. Number one is the title, essentially: we have a container networking service that I'm going to announce here, and the thing to remember about it is that it's very flexible and very simple. You can provision a sea of compute, a sea of storage, and a sea of networking in a click, and it will leverage the SDN stack that we have in Azure; or you can use a couple of Python commands, whatever is your style. The other is that it's open: it's Linux first class, it's open source, and I'll provide pointers to the open source in the talk. So first, about Azure. Most of the folks on our team are constantly building this; we're building data centers and fiber as far as the eye can see. The momentum is amazing: we get about a hundred and twenty thousand new subscriptions every month, and we just keep scaling and packing in more and more work. Here's a kind of NASCAR view of the different Fortune 500 companies on the Microsoft cloud. One thing you've probably seen about the new Microsoft is that we joined the Linux Foundation; here I am speaking at a Linux Foundation sponsored event, and we build Linux first class. It's something we consider from the very beginning, and we're very serious about the momentum around the Linux community that we've developed. One out of three of the VMs provisioned on Azure, for example, are now Linux VMs, so you'll see more and more of this from us. What about containers? Containers really are a great technology.
Why? They came out of the Linux world, and they're a very convenient way to package, ship, and update your services or microservices. In a networking world you want to be able to ship services without any downtime, and you want to be able to make changes to little microservices instead of to large monoliths; containers are a nice technology for that, and we use them internally. For example, at ONS 2016 I first started talking about our open source switch stack: firmware built on containers. It's really quite magical; you can update BGP and so on without any downtime, and it's open source, built on Linux containers. We have embraced the container movement big time. For example, we announced Kubernetes as a GA service on Azure, in the form of Azure Container Service, just a month or so ago. And last week, at OCP nearby, maybe literally here, we talked about the containerization work that we've done for SONiC. I'll talk a little more about SONiC at the end, but think of it this way: not only are we producing containers for people building services on the cloud to consume, we're also using them to build our own software. The other thing I want to talk about in terms of momentum is the virtualization momentum in Azure, our SDN momentum. You can see that in 2013 we started with GA of virtual networks, and then we just keep adding features at an accelerated pace; every month or two we'll announce another feature. Now the question is: if I'm doing all this SDN work, and I've done it for VMs, will it pertain to containers or not? The answer is yes, and that's our announcement here. We're announcing Azure VNet for containers: one SDN solution. Everything that works for VMs, the same code, will work for containers, and not only that, it's comprehensive.
You can pick your orchestrator, from the Kubernetes family, DC/OS, or Docker, and hook it up in Azure with one virtual network. It's in public preview now; there will be a blog post out, I think it may already be out, and you can search for it on Azure. It allows you to deploy and connect your containers to the Azure virtual network. There's a symbol of the cubes and the containers, and you can program with the same connectivity rules, routing, security, and so forth that you get for VMs, and it's available in Azure Container Service. What does it look like? At the bottom is Azure Container Service, which lets you deploy the infrastructure to manage large container clusters. Then come the orchestrators; as I said before, we offer several popular orchestrators. After you've deployed that, you can deploy your containers into it. Now it seems like there's something missing here in the middle, and in some of the sessions this morning I heard talk about it. It's a little bit of toil right now to hook up the networking, because you have networking for your VMs, which is in the infrastructure, and networking for the containers as an extra step. So can you make it seamless, one step, same virtual network? That's what we've done: through the Azure Container Service Engine, also open source, you can deploy, and we'll show you how, an Azure VNet that then networks your containers, with a single click. So how does this differ from what's commonly out there? Well, there's an approach you can already use today; we support it and will continue to support it. If you look at the symbols here, there's a node, or VM, with two containers in it on the left, and two containers in the node on the right.
You can network them together through bridging. If you look closely, there's an IP address on the source on the left-hand side, and if it wants to talk over to the right-hand side, the IP address gets remapped. So you can't do direct IP-to-IP connectivity, but you can get it done through NAT: the 10.0.0.5 becomes 148.23.2.34. There are always complications with that, but it can be made to work. Or you can do a double overlay: the virtual network itself is an overlay, and you overlay within that, and get this done without losing IP-to-IP connectivity. But what happens, if any of you were around when we were debating different encap formats, is that you tend to lose the offloads when you do this. You lose the TCP offloads, which means you lose a lot of perf when you do double encap. That long rectangle at the bottom shows the layering of encaps, the torture you put the packet header through. What we do instead is very simple and very flexible, and just leverages natively what we already have. And what do you get from that? With Azure VNet for containers, today, you can connect your entire network: container to container, container to VM, container to on-prem, any combination of those three, with one SDN stack. You get native support, meaning that all the offloads, large send offload and everything else that makes TCP go fast, continue to work. And you get one unified network policy: the same policy for containers as for VMs. You get more as well: it works with the ecosystem. We have both Azure and Azure Stack, so you can run in the cloud or on-prem. It works with Linux containers and Windows containers, and it works with Kubernetes and DC/OS style containers as well as Docker.
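To make the plug-in hookup discussed next a little more concrete, here is a minimal sketch of a CNI network configuration for an Azure VNet plug-in. The plug-in and IPAM driver names (`azure-vnet`, `azure-vnet-ipam`) follow the open source azure-container-networking repository, but the exact fields here are illustrative, not authoritative.

```shell
# Hypothetical CNI network config for the azure-vnet plug-in.
# Names follow the open source azure-container-networking repo;
# treat the exact fields as illustrative only.
cat > 10-azure.conf <<'EOF'
{
  "cniVersion": "0.3.0",
  "name": "azure",
  "type": "azure-vnet",
  "mode": "bridge",
  "ipam": {
    "type": "azure-vnet-ipam"
  }
}
EOF

# On a node the kubelet would pick this up from /etc/cni/net.d/;
# here we just check that the file is well-formed JSON.
python3 -m json.tool 10-azure.conf > /dev/null && echo "CNI config OK"
```

On a real node, the plug-in binary and this config sit on each host, and the orchestrator invokes them per container; the IPAM driver is what hands each container an address straight out of the VNet subnet.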
It uses either the Container Network Interface or the Container Network Model, depending on which orchestrator you pick; we take care of the plug-in between the orchestrator and the cloud network. In essence, we have a cloud network interface plug-in that makes it all happen seamlessly. One thing we'll be talking about later in the week, and I encourage you to go to that session, is accelerated networking for Linux. It's one example, and there are so many, of the benefit you get when you go with one SDN stack. We built it for VMs, and it just works for containers. So we get great speed, 25 gigabits per second, and 10x lower latency for Linux VMs, containers, and network appliances as a result. And we've released the code upstream into the Linux kernel, so a shout-out for the session tomorrow. Takeaways from the talk so far: one SDN stack, for containers and VMs, on-premises and cloud, Linux and Windows. It uses code that we've developed since roughly 2011, when we started this journey: battle tested, designed to scale, the same code. It's high performance, as I just showed you, and integrated; that's really the beauty of the container story, it's simple to deploy your apps with a click. The other thing I want to come back to is that it's open, it's Linux first class. You can go to GitHub and pull it down, and you can add value: we have, for example, a driver for IP address management that you can change, you can look at how we plug into different orchestrators, and you can add to all of these things. With this, I'll bring up my colleague Deepak. Good afternoon. I'm going to show you today how easy it is to create a Kubernetes cluster in Azure and deploy it in an Azure virtual network, with as few as two commands and one simple configuration file. You can get your Kubernetes cluster up and running in an Azure virtual network.
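As a sketch, the "one simple configuration file" for a one-master, two-agent cluster on an Azure VNet might look roughly like this. Field names follow the open source acs-engine apimodel (`orchestratorProfile`, `networkPolicy: "azure"`, and so on), but the exact schema varies by version, so treat every name and value here as illustrative rather than authoritative.

```shell
# Hypothetical acs-engine apimodel: 1 master, 2 agents, Azure VNet networking.
# Field names follow the open source acs-engine project; illustrative only.
cat > kubernetes.json <<'EOF'
{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "kubernetesConfig": { "networkPolicy": "azure" }
    },
    "masterProfile": {
      "count": 1,
      "dnsPrefix": "k8s-demo",
      "vmSize": "Standard_D2_v2"
    },
    "agentPoolProfiles": [
      { "name": "agentpool1", "count": 2, "vmSize": "Standard_D2_v2" }
    ]
  }
}
EOF

# The two deployment commands would then be roughly:
#   acs-engine generate kubernetes.json     (emit the ARM templates)
#   az group deployment create --template-file ...   (launch the deployment)
# Here we only validate the file locally.
python3 -m json.tool kubernetes.json > /dev/null && echo "apimodel OK"
```

A real apimodel also carries credentials (SSH key, service principal) that are omitted here; the point is just that one small JSON file plus two commands describes the whole cluster, with `"networkPolicy": "azure"` selecting the Azure virtual network integration.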
In Kubernetes, the containers are called pods, and as Albert mentioned, pods will be first-class citizens in the Azure virtual network. They will get an IP address from the Azure virtual network, and you can specify all your policies: load balancing, on-premises connectivity, ACLs, you name it. Everything is available for the containers that I'm going to show today. The demo setup that I'm going to use, can we go back to the previous slide, yeah, consists of a Kubernetes cluster that I'm going to deploy: one master node and two agent nodes. The two agent nodes will be running three pods, and each of those pods will be running nginx, which is a popular open source web server. In addition, I'll have two VMs also running in this virtual network, and I'll have an ACL defined which blocks connectivity from the pods to one of the VMs, whereas connectivity works to the other VM. So as you can see, the whole power of Azure VNet is available to VMs and pods alike. Can we switch to the other PC, please? So here's the simple JSON file that you need to define to create this Kubernetes cluster. In it, there is the profile that you specify for the Kubernetes cluster, and you also define your network policy as azure, which basically means that you want to use the Azure virtual network. Now let's get the deployment started. You pass this JSON file to the Azure Container Service Engine, which generates a bunch of templates: the templates for deploying your resources in Azure. Here we have generated the templates. Now you pass these templates to the command to generate the Azure deployment, and here we have launched it. Let's go to the Azure portal and see the resources getting created. So here is the Azure portal.
It shows all the resources that exist in my subscription, and you can see these new resources getting created because of the deployment I just launched. If I click on this resource group, you will see various kinds of resources getting created underneath it: a virtual network, the network security group that corresponds to the ACL, and the IP addresses and load balancers that the deployment creates. Now let's go to one of the Kubernetes nodes. Here I'm on the master node, and as you can see there are three nodes deployed under this master. Right now there are no pods. Now I'm going to create three pods running nginx. So now I've created the three pods; let's see them. Here are the three pods. Now let's go inside one of the pods and look at its IP address. As you can see, this pod got an IP address from the virtual network, from the 10.240 subnet. From this pod I can ping the master node as well as the other VMs. Here is the SQL VM that I showed earlier in the setup, which I'm able to ping from this pod. Now let's try to ping the HR VM that I also have deployed in the same VNet. And you can see I cannot ping it, because I have a network ACL defined blocking access to that VM. So using one SDN and tight integration with various container orchestrators like Kubernetes and Swarm, we're able to bring all the capabilities of SDN to the container world. We're able to offer one SDN that spans VMs and containers, so you don't have to manage and specify your policies in two different ways, one for VMs and one for containers. Your workload can seamlessly span containers and VMs. Thank you. Okay, I want to switch gears and then wrap up.
So another completely beautiful and surprising part of the SDN movement is the incredible ASICs that are now becoming available, the choice we have there in a vibrant industry, fantastic optics, fantastic ASICs, and the question of how we take advantage of them so that we can introduce innovation quickly. We've been working on two efforts. One is the Switch Abstraction Interface (SAI), so I can accommodate a wide variety of ASICs with the same interface. It's an abstraction interface, so my software investment is protected as I change the ASIC and take advantage of the latest from the industry. That's one key part. The other is almost the same as the container story: serviceability. How do I get a carrier-grade, amazingly reliable stack, which is the biggest deal, running on the switch, and how do I update that functionality? That's what we have in the SONiC project. And with that, I want to announce that we have a new partner in the SONiC community: Alibaba. So I'm going to bring up Yi-Chun Kai to speak with us. Thank you, Albert. Thank you, everybody. It is a great pleasure to be here, to share the podium with Albert, whom I regard not only as a former boss, but also as a mentor and a wonderful teacher. And thank you all very much for giving me the opportunity to share with you briefly the journey that we're taking at Alibaba to scale our network to support the growth of our company. Most of you probably know Alibaba only as a successful e-commerce company, and yes, we did make a lot of strides in that area. Last year, I'm sure everybody heard of the Singles' Day shopping event: within the 24 hours of November 11th alone, the shopping volume carried on our platform totaled 17.8 billion US dollars, with the first billion US dollars' worth of orders placed within the first five minutes.
What that translates into is a platform capable of supporting over 100,000 transactions per second, for both e-commerce and payment, and the delivery of over 650 million goods and services within the first 24 hours. This scale requires two things: massive investment in hardware infrastructure, and innovation in technologies across hardware, software, and many other areas. Alibaba's networking infrastructure today spans data center networking, regional networks, and our fiber networks, and we have global reach. We provide the scale, flexibility, and availability needed to support the growth of our company. The variety of technologies that we have, and the need to integrate them, poses one of the greatest technology challenges we face. Let me give you an example. In China, a great percentage of the population lives within a few hundred miles of the south and east coastlines, and that's how our data centers are mapped out. This gives us the advantage of being very close to our customers, but it also poses a great challenge, because our fibers are frequently cut by all the construction happening in those areas. What our networking engineers and software engineers work hard and diligently to ensure is that whenever there's a fiber cut, whether in our network or in a provider's network, a potential incident is turned into only a small glitch that our network engineers notice but our end customers do not. This is a lofty goal. To achieve it, we need technologies that are not only advanced, but that also allow us to be agile. That means we want to be able to learn, to experiment, to develop, to deploy, to manage, and, as often happens, to change directions quickly.
We call it the new triple-A principle: to be able to quickly adopt, absorb, and advance. And this is not just demanded by our desire as engineers, but also by the growth of the technology itself. So today we are very excited to join the SONiC community to help drive the momentum of the network OS. It gives us a wonderful opportunity to partner with industry leaders and to be active contributing members of the open source communities. We eagerly look forward to working with Microsoft and many others to advance NOS technologies. We believe this is a great endeavor that brings tremendous value to the network industry and everybody in the ecosystem: cloud operators, chip vendors, system vendors, and software engineers at large. So thank you very much for giving me this opportunity, and let's work together to make great things happen. Thank you. So I'm going to wrap up quickly. First, will the Alibaba folks and the Microsoft folks who created this great tech please stand up, so you know who to approach. There are some in that pocket; there they are, over there. Our whole point here, what we're trying to accomplish, is to invite the community to work with us and contribute to these efforts. It's open, we really mean it: the SONiC project is all open source, and you can find it on GitHub and use it. SAI is open source. The container network interface plug-in is open source. So we really look forward to the community having a look, giving feedback, and building on top. Thank you.