Hello, everyone. Welcome to the SIG Azure intro and deep dive session. My name is Pengfei, and before the session starts, let's introduce ourselves. So, Craig, would you like to introduce yourself?

Thank you, Pengfei. I'm Craig Peters. I'm a program manager at Microsoft, and I work on everything that is container infrastructure and the upstream, open source container infrastructure software that Azure depends upon. Kubernetes is a key part of that, and the Azure cloud provider and all the related software are part of what I work on. My background is in distributed systems and open source. I've been with Microsoft for about six months now working on these kinds of things. Here is contact information for myself and Pengfei.

Yeah, and here are our two WeChat contacts, so you can scan them to reach us. I'm Pengfei, also from Microsoft, and my work mainly aims to improve the community's experience with Kubernetes on Azure. I also contribute to community projects such as the container runtime and the Azure cloud provider.

Okay. So, we wanted to start by understanding who is in the audience. First and foremost, I see some friendly faces from the Microsoft crew, so everybody from Microsoft, feel free to say, here I am. We contribute to making sure that Azure runs great and will be here to support your workloads. But I see some of you in the audience who are not, so I want to understand at some level who you are and what you're interested in. Why are you here? First, have any of you ever participated in a SIG Azure activity of any kind, a meeting, the Slack channel, anything? Nope. Okay. So, do you run any workloads or any Kubernetes-related things on Azure today? Does anybody do that? You do. Fantastic. Great. We've got a couple more people joining. Come on in, glad you're here.

So, has anyone in the room used AKS Engine? Are you familiar with AKS Engine? AKS Engine is an open source project that is essentially a template generator: it creates Azure Resource Manager definitions of Kubernetes clusters on Azure. Any of you familiar with that? Nope. Okay, that's good to know. How many of you have used Azure Kubernetes Service to run your clusters? A couple. Okay. Everybody else, how are you running Kubernetes on Azure? What's your method? Okay, so you use a third-party platform that handles creating the Kubernetes clusters on Azure and then runs its own resources there. I see, so it does all of the orchestration of the Kubernetes services on VMs for you. Okay, very good. And how many of you came here because you want some help with running Kubernetes on Azure? Do you have any question you can put out? I just wanted to understand where you're coming from. For the rest of you, I hope we can help answer some questions or raise some things so that you understand what's going on.

Okay, that's about what I expected, so let's give you a quick introduction to what SIG Azure is and what's happening with it. SIG Azure is a special interest group in the Kubernetes community. If you're not familiar, a special interest group is a community of people who are responsible for maintaining a set of capabilities in the Kubernetes ecosystem, in this case making sure Kubernetes runs great on Azure. As for the chairs, I'm a newly elected co-chair, and I share the co-chair responsibility with Stephen Augustus, who works for VMware.
VMware, of course, is very interested in making sure that Kubernetes runs great across all of the platforms VMware works with. And Pengfei is a technical lead, along with Cal, also from Microsoft, who works in the U.S. very closely with some of our crew here. You can learn more about the governance of SIG Azure from this link; all of the documentation for the way the special interest groups operate is available online on GitHub.

So, why do we need a special interest group for a specific cloud? That's a really valid question, and the answer is that it is now becoming obsolete to have a special interest group focused on a single cloud. All of the individual cloud providers are now working together under a common special interest group called SIG Cloud Provider. The transition from the individual Azure-focused SIG to the Cloud Provider SIG, with sub-projects under it, also reflects the change in how cloud providers are consumed by Kubernetes itself. We've come to a common agreement in the Kubernetes community that cloud providers should operate in a common way, so that when people run their Kubernetes clusters on different clouds, they have a common set of expectations and ways of working with them. And so, now we are going to transition the organization and governance around how we do that to a common methodology: SIG Cloud Provider, with a sub-project under it for each of the related clouds. Presumably, I may stay involved as a co-chair of the sub-project for Azure under SIG Cloud Provider.

There are several related efforts that are going to take over governance of some of the sub-projects that SIG Azure has been responsible for until now. There is an effort called Cluster API Provider Azure; that is going to move under SIG Cluster Lifecycle. The same thing is happening for the GCP, AWS, and other cloud providers, so all of those things are getting consolidated. There is also a sub-project of SIG Azure for the CSI drivers for Azure; that is moving as a sub-project under SIG Storage. Essentially, things are moving to their logical homes. Right, and actually Andy is maintaining the CSI drivers for Azure File and Azure Disk. Yeah, so we've got some of the principals involved in the room; if you have very deep questions about those, don't ask me, ask Andy.

I have some references here. Once the slides are uploaded to the schedule site, you'll be able to get your own copies. I did forget to mention at the beginning that the slides will be uploaded online, in both English and Chinese copies; thanks to Pengfei for the Chinese copies. I'll be happy to answer any questions about the transition.

Oh, there's one other thing I should say about this transition and the governance: today there is a very active Slack channel called sig-azure. There are two kinds of things that happen in that Slack channel today. One is discussion about the development and mechanics of managing the Azure cloud provider projects and engineering work. The other is user support. We have a significant number of people who use Azure and go to the sig-azure Slack channel to get help, both from Microsoft and from their peers in the community, and that's a fantastic thing.
Obviously, the engineering work is going to move under SIG Cloud Provider, SIG Cluster Lifecycle, and SIG Storage. The user support side is going to move under a new kind of structure called user groups. I actually don't know exactly what the structure of those user groups is going to look like, but we'll be communicating later this summer about how the user groups are going to work. So there will continue to be a forum for that.

Okay. Next, I will give a simple introduction to the recent accomplishments of SIG Azure. The biggest thing we are working on, probably over the next two or three releases, is moving the cloud provider out of tree, so that all of the cloud-specific code lives outside the core repository. For Azure, the Cloud Provider Azure effort has already started this work, and we have set up the repo under the Kubernetes org, named cloud-provider-azure. Based on this, we have also set up CI and e2e tests to validate its functionality. In the current Kubernetes code, the cloud provider implementations have been moved into the staging directory, and synchronization is set up to the separate repos. So, for cloud-provider-azure, we can vendor those changes: we are still making changes in the staging directory, and they are synced out and vendored into cloud-provider-azure.

Okay, can I just say a couple of words about why this is happening? If you're not familiar with the cloud provider work, it might come a little out of context. An important piece of context is that, historically, the cloud provider code grew up organically in the Kubernetes repository itself. The strength of that was tight integration: the ability to run Kubernetes on VMware or Azure or GCP or on-prem on bare metal was built into Kubernetes itself. Over time, it has become very unwieldy to manage very disparate implementations of how you run Kubernetes on different infrastructures in that core code base. It has bloated the repository, it has slowed down the ability of the cloud providers to evolve, and, most importantly, it forces releases of changes to cloud provider code to be in lockstep with the release of Kubernetes. Moving the cloud providers out of tree into their own repositories allows the different cloud providers to have their own release cycles, but it also forces us to pay a price, which is that we have to deal with the compatibility matrix and the testing: how do you version cloud providers independently from the Kubernetes release? That is all work that's ongoing in the SIG Cloud Provider group.

Actually, the out-of-tree cloud provider and also the next item, Cluster API Provider Azure, have the same goal: they decouple the Azure implementations from core Kubernetes and from the core Cluster API. That way, you can get the same experience across different clouds. Maybe you need a cross-cloud solution, for example part of your business runs on another cloud while your U.S. business runs on Azure; you can reuse your work based on the same solution, while the detailed implementations differ across clouds.
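To make the out-of-tree model a bit more concrete, here is a rough sketch of how a cluster typically consumes an external cloud provider: the kubelet and kube-controller-manager run with `--cloud-provider=external`, and the Azure-specific logic runs as its own controller manager inside the cluster. This is only an illustration, not the project's official manifest; the image name, deployment name, and namespace are placeholders, and RBAC and the full flag set are omitted.

```yaml
# Sketch of running an out-of-tree cloud controller manager.
# Nodes are registered by kubelets started with --cloud-provider=external;
# this component then fills in node addresses, zones, routes, and
# service load balancers by calling the Azure APIs.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-cloud-controller-manager     # placeholder name
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-cloud-controller-manager
  template:
    metadata:
      labels:
        app: azure-cloud-controller-manager
    spec:
      hostNetwork: true
      nodeSelector:
        node-role.kubernetes.io/master: ""  # run on control-plane nodes
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: cloud-controller-manager
        image: example.azurecr.io/azure-cloud-controller-manager:v0.x  # placeholder image
        command:
        - cloud-controller-manager
        - --cloud-provider=azure                    # select the Azure implementation
        - --cloud-config=/etc/kubernetes/azure.json # Azure credentials and cluster settings
        - --leader-elect=true
        volumeMounts:
        - name: cloud-config
          mountPath: /etc/kubernetes/azure.json
          readOnly: true
      volumes:
      - name: cloud-config
        hostPath:
          path: /etc/kubernetes/azure.json
          type: File
```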
The next item is about end-to-end testing. We have established end-to-end testing for cloud-provider-azure and published the results to the Kubernetes test grid, so everyone can check the status of our work. The testing is based on AKS Engine: we use AKS Engine to set up the testing environment, build the latest releases or the master branch, and validate various test suites, such as the Kubernetes conformance tests, serial tests, slow tests, and so on.

The next item is about managing the service identity. Azure supports many ways to authorize your applications, for example managed service identity (MSI), user-assigned MSI, and service principals, and you can choose any one of them. On the Kubernetes side, the kubelet needs to initialize itself and register itself as a Kubernetes node, and for that we support MSI, service principals, and user-assigned MSI. But since the latest release last week, 1.15, we also support initializing the node without any credential. You don't need MSI and you don't need a service principal: the kubelet initializes itself and gets the required information from the Azure instance metadata service, so it knows its internal IP, its availability zone, and its subscription. It can get all of that information, compose the provider ID and the rest of the node metadata, and register itself to the Kubernetes cluster.

Okay, and the next one is about the Azure Disk and Azure File plug-ins. The Azure Disk and Azure File plug-ins also depend on the Azure cloud provider to make requests to Azure, for example to create persistent volumes. For Azure Disk and Azure File, we have set up CSI drivers; this is part of the job of splitting things out to achieve the out-of-tree cloud provider. So we have set up the CSI drivers for Azure File and Azure Disk, we plan to set up e2e tests for them, and we are also working on Windows support for CSI. For CSI on Windows, there are still some small issues we need to figure out, so maybe in the next release, 1.16, we may add that support. During this period, we have also added some new features to the two plug-ins, such as Ultra SSD and Standard SSD support for Azure Disk and premium file shares for Azure File.

Okay, and the next is about the Azure load balancer. As you know, if you use a Kubernetes service with type LoadBalancer, the Azure cloud provider will set up an Azure load balancer and a public IP for you, so that you can access your service using that public IP. But many customers have specific use cases where they need to change the behavior of this Azure load balancer, so we support a couple of annotations. For example, you can specify that your load balancer should be internal only, or you can use a public IP that you have already created in the same resource group, or even in another resource group. A small sketch of a service using these annotations is shown below.

And the last one is about many new features, for example the cluster autoscaler, virtual machine scale sets, availability zones, and cross-resource-group nodes. We have moved such features from alpha to beta. Actually, virtual machine scale set and cluster autoscaler support have already graduated, and they are also supported now in the AKS product. And finally, if you want to check the recent updates, what we have done in the latest release and what bug fixes are there, you can check the Kubernetes release notes.
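For illustration, here is a minimal sketch of a LoadBalancer service using those annotations. The annotation keys are the ones handled by the Azure cloud provider; the service name, resource group, IP address, and selector are placeholders.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web                                # placeholder name
  annotations:
    # Use a pre-created public IP that lives in a different
    # resource group than the cluster's own resource group.
    service.beta.kubernetes.io/azure-load-balancer-resource-group: shared-ips-rg
    # Alternatively, to get an internal (VNet-only) load balancer instead:
    # service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  loadBalancerIP: 52.0.0.10                # placeholder: address of the existing public IP
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
```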
Okay, thanks. So, what is it that we're planning to do next? We're bringing things out of tree. The out-of-tree cloud provider is the largest piece of work that we need to do, and we need to finish it; it's a very large project. Yeah, and it also includes a couple of different projects, such as the CSI driver for Azure Disk, the CSI driver for Azure File, Cluster API Provider Azure, and Cloud Provider Azure.

Yeah. We're also working on, and we'll go into these in a little more detail in a minute, the ability to expose secrets through the Container Storage Interface as a driver, as well as the ability to use Azure Active Directory to identify and authenticate services running in your pods. There are two other really cool things: the ability to leverage an Azure service called Cosmos DB as the etcd for your clusters, which will give you scalability and cross-region cluster federation, which is going to be very interesting. And finally, a lot of our customers are running out of IPv4 address space, but they also can't completely move all of their services to IPv6, so there's a need to enable Kubernetes to support both IPv4 and IPv6; we'll drill into that a little bit. So I think Pengfei will talk a little bit about the Cluster API provider.

Yeah. Cluster API Provider Azure was actually set up only recently. We had a couple of interns from Microsoft and from Platform9 who set up the repo, did the initial work, and got it working to bring the Cluster API to Azure. Today, Cluster API Provider Azure has been brought into SIG Cluster Lifecycle, so it is a sub-project in Kubernetes, and it makes the Cluster API related projects work on Azure as well. The repo is in the kubernetes-sigs org; here is the link, so you can have a try, get started, and use it to set up a cluster on Azure. Originally we suggested using AKS Engine to set up clusters, but today you can also use this project, the Cluster API provider, to provision a cluster. You should know that the project is still in its early stage, so there are still some issues. For example, virtual machine scale sets are not supported yet. They are not supported not because we can't add them to Cluster API Provider Azure, but because the Cluster API concepts don't yet expose an interface for scale sets, so the individual providers have to be allowed to implement that. For this part, we already have a proposal on GitHub, published this week, I think, so everyone can comment and leave a message there. After the proposal gets reviewed by the whole community, we will add virtual machine scale set support. Then it can be on par with AKS Engine, and we can also add many new features to support more customers.

Okay, the next one. The key thing here is that we all know secret management in Kubernetes is pretty challenging. The real need that enterprises have, and that they keep coming to us with over and over again, is that they don't want to move their secrets into Kubernetes secrets. Handling that has been challenging, and we've decided to support it by enabling CSI inline volume support for mounting, essentially, your secrets from other providers. The initial support has come for Azure Key Vault and HashiCorp Vault. So with the release of 1.15, which just came out, you can now make the secrets that you use in your enterprise services or your existing infrastructure available there.
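The exact API of the Secrets Store CSI driver has evolved since this talk, but as a rough sketch of the idea, here is what it looks like with the current SecretProviderClass-based form of the driver. The vault name, tenant ID, object name, and image below are all placeholders.

```yaml
# Describe which Key Vault objects to expose to pods.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: app-keyvault-secrets                # placeholder
spec:
  provider: azure
  parameters:
    keyvaultName: my-keyvault               # placeholder vault name
    tenantId: "00000000-0000-0000-0000-000000000000"   # placeholder tenant
    objects: |
      array:
        - |
          objectName: db-password
          objectType: secret
---
# Mount those secrets as files through an inline CSI volume.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx                            # placeholder image
    volumeMounts:
    - name: secrets
      mountPath: /mnt/secrets
      readOnly: true
  volumes:
  - name: secrets
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: app-keyvault-secrets
```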
So, what we would like is to understand whether these two providers meet all your needs, or whether there are additional providers that our customers want to bring in. If you have a key management capability that you would like to see there, please come and help us understand the need, and even better, help implement a provider. The way the code is structured, it's very straightforward to implement your own additional provider and add it to the project. So I'd love to hear about that.

We also now have, in a kind of preview form, the ability to use Azure Active Directory as the authentication mechanism for the services that you're running in your pods. You see here the link to the GitHub repo for that; it has really straightforward documentation for how you apply it. The goal is that your pods can essentially run different authentication schemes, or take advantage of the roles and authentication that you have already set up in Azure Active Directory for the services you use in the rest of your enterprise, so that you don't have to create a separate authentication scheme specifically for Kubernetes services. The other piece of this is that you can now join nodes to your clusters and have authentication for services that aren't identified in Azure Active Directory; you can essentially create nodes that are independent of your authentication scheme. Do you have anything to add to that?

Yeah. For pod identity, once it is enabled in your cluster, every pod's access to the instance metadata endpoint when requesting a token is intercepted by pod identity. You can then control each pod's access and decide which pods can get a token from Azure. Yeah, it's a very nice feature, and I'd love to get feedback on how you use it. And we're working to make this an integrated capability built right into Azure Kubernetes Service; today, it's supported as an add-on.
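To give a feel for how this is wired up, here is a small sketch using the aad-pod-identity custom resources. The identity name, subscription, resource group, client ID, and image are placeholders, and the field names reflect the project's v1 CRDs rather than anything specific to this talk.

```yaml
# An AzureIdentity pointing at a user-assigned managed identity in Azure.
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentity
metadata:
  name: app-identity                        # placeholder
spec:
  type: 0                                   # 0 = user-assigned MSI, 1 = service principal
  resourceID: /subscriptions/<sub>/resourcegroups/<rg>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/app-identity
  clientID: "00000000-0000-0000-0000-000000000000"   # placeholder client ID
---
# Bind the identity to pods carrying a matching label.
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentityBinding
metadata:
  name: app-identity-binding
spec:
  azureIdentity: app-identity
  selector: app-with-identity
---
# Any pod with this label gets tokens for that identity when it
# calls the instance metadata endpoint.
apiVersion: v1
kind: Pod
metadata:
  name: app
  labels:
    aadpodidbinding: app-with-identity
spec:
  containers:
  - name: app
    image: nginx                            # placeholder image
```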
Yeah, and the next is about running Kubernetes with Cosmos DB. Cosmos DB is a globally distributed database on Azure, and it now has a preview feature that provides an etcd API. That means you can use Cosmos DB as the storage backend for your Kubernetes cluster: if you set up your Kubernetes cluster on Azure, you can configure it to use Cosmos DB just as if an etcd cluster were installed on your local machines. We can do this because Cosmos DB has implemented the wire-level protocol of etcd, so every protocol feature required by Kubernetes is available through the Cosmos DB etcd API. If you want to try this feature, you can easily follow the AKS Engine preview documentation: you can enable the Cosmos DB etcd API and point your Kubernetes cluster at it. And since Cosmos DB is globally distributed, your cluster can scale much further. Just a note of caution: that global distribution capability is not yet available for the etcd interface on Cosmos DB. During the preview phase right now, it's a single availability zone for etcd data. As Cosmos DB support for etcd grows, it will become possible to replicate it globally.

Yeah, okay, next one. The next one is about networking, where we have been working to support dual-stack IPv4 and IPv6 Kubernetes. As you know, in current Kubernetes you can only run one stack, either IPv4 or IPv6. We're working on the proposals to add dual-stack support to Kubernetes, and we actually have one pull request that is still under review today. We have also listed the pull request here; if you are interested, you can comment there, and if you have any more ideas, you can comment there as well. For this feature, we expect it to land in the next release, 1.16. Then everyone can try it, give feedback, and we can improve it in the next release cycle.

In 1.16, the initial support is going to be for pod-to-pod communication. The plan, over several releases, is to extend alpha dual-stack support to more and more of the communication paths across Kubernetes. The reason to do it this way is that the IP stack touches so many pieces of Kubernetes; as you will see with this PR, even this very limited scope touches a huge number of files. So we've worked very diligently with SIG Network and all of the other related projects to make sure we're doing this in a careful way, so that we are not disruptive in how we implement it.
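As a rough illustration of what dual stack looks like from a user's point of view, here is a minimal sketch of a kubeadm configuration that gives pods and services both address families. At the time of this talk the feature was still alpha behind a feature gate, so this reflects later releases where dual-stack support landed; the CIDRs are placeholders.

```yaml
# Sketch: kubeadm ClusterConfiguration with an IPv4 and an IPv6 CIDR
# for both pods and services. Pods then get two addresses, reported
# in status.podIPs.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 10.244.0.0/16,fd00:10:244::/56       # IPv4 CIDR, then IPv6 CIDR
  serviceSubnet: 10.96.0.0/16,fd00:10:96::/112
```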
So, I want to talk for a minute about how you can become involved. You're here to get questions answered or to understand why we're doing what we're doing; hopefully we've helped answer that, and maybe now you're interested. There are several ways. One: you've already heard that SIG Azure is decomposing into the Cluster API, storage, and other efforts, so stay tuned for how the user group is going to work, and get involved in SIG Cluster Lifecycle. Join the mailing list, because that's where we'll communicate about it. Also join the Slack channel, ask your questions, get help, and figure out what's going on there. You can participate in the SIG meetings. Today those SIG meetings occur at a time that is not at all convenient in this time zone, and we're very aware of that. In the next couple of weeks, I'm going to make a proposal that, when we move to the sub-project under SIG Cluster Lifecycle, we alternate the meeting times between Europe-friendly times and Asia-friendly times, so that we can get a broader set of participants.

To get involved in contributing to the community, start with filing issues and reviewing PRs; that's the basics of getting involved in the engineering. If you want to figure out where you can contribute, we maintain a GitHub project board, and here's the link to it. We do our diligence during the meetings to maintain and groom it, to get new issues labeled, such as good first-time issues and bugs, and then to assign them to people and track their progress through the project board. I encourage you to collaborate asynchronously that way. And with that, we'd like to open it up for any questions.

I have a question. What about the network plugin support policy for Azure? Calico released 3.7 last April, and it supports Azure. I have verified that plugin, and it can be used in a pure Linux Kubernetes cluster. So does anything change about the network plugin support policy?

Yeah, actually Calico has made some changes in the CNI plugin. Before the latest release, Calico could only run in policy-only mode on Azure, because the underlying protocol is not supported there. But with the latest release, it supports VXLAN as the overlay networking, and VXLAN works on Azure. So now you can run the full Calico CNI plugin and its network policy on Azure, and we may also introduce it into the AKS product in the future.

So, unfortunately, we had time for only that one last question; we've used up all the time. Any other questions? No. Well, thank you very much. I really appreciate your time and attention.