Welcome everyone to this session, and thank you all for joining. I'm Alberto from Cisco, and I'm here with Marc from Google. In this session today, we want to talk with you about cloud native and SD-WAN, and how SD-WAN and Kubernetes can deliver a better application experience if they talk a bit more with one another. We are going to start with an introduction of the current state of Kubernetes and SD-WAN and why it makes sense for them to integrate a bit more. Then we're going to jump into the Cloud Native SD-WAN (CNWAN) open source initiative. This is an open source project we released this past summer, around the time of KubeCon Europe. In the talk today, we're going to go through the architecture, we're going to show you an example of how this all falls together, and of course we are going to take a closer look at each of the components. We will end with a Q&A, and we hope to have a good discussion with you at the end. So with that, I will pass it to Marc, who will start the conversation. Thank you, Alberto. So let's talk about why we're having this discussion in the first place: what is the intersection of Kubernetes and WANs? In some ways, Kubernetes is a victim of its own success. Kubernetes has become a very common layer to deploy applications, so applications have a consistent execution environment no matter where they're running. Kubernetes is available as a managed offering from all the primary cloud providers, and customers are deploying it on-prem. So there are many places in which you can deploy an application on Kubernetes, including many regions and areas of the network around the world. And so while applications can be deployed in a really consistent way by using Kubernetes, the clusters themselves may be homogeneous environments for these applications, but the networks between them are not. 
And that is very much the problem that we're trying to address here, and a problem that I think many Kubernetes operators have: you have applications, maybe different applications, maybe different instances of the same application, across Kubernetes clusters that are in different cloud providers and different regions, and are accessed across different types of networks. In addition to that, your users are in different areas, right? They may be enterprise work-from-home users, or they may be users in your branch offices. So providing the connectivity between your Kubernetes-hosted applications and your users is not an easy thing to solve, and this is very much what SD-WAN does. This presentation discusses how we bring these two technologies together to help solve both of these problems, and make connecting to your Kubernetes-hosted applications across wide area networks an easier thing to do. So let's talk a little bit about what SD-WAN actually solves in the first place. Software-defined WAN is a group of technologies, offered by a lot of different companies, that solves some of the challenges we have listed here. You have applications and endpoints across many different networks, you have users and clients across many different networks, and you have heterogeneous connectivity between them. SD-WAN takes this heterogeneous connectivity, all these different types of network, and unifies them as a single controllable and programmable WAN fabric, so that your network operator now has a consistent view of the network. They have consistent control from a visibility perspective, a security perspective, and also a control and optimization perspective. This provides the kind of benefits that are really needed, especially as applications are deployed to multiple cloud providers, multiple regions, and across multiple different kinds of networks. 
So now let's look at this: we have these two groups of technologies, and how do we bring them together to offer the complementary benefits that they each have? In Kubernetes, you have different attributes of your application. We understand what the application is, what zone or region it is in, the environment or provider, annotations that may dictate different attributes of the application, and of course the IP and port where the application is accessible, perhaps within that provider or maybe on the public internet. There are also characteristics of the app, like its health or how many replicas exist within the application. Those are things that are stored inside your Kubernetes cluster. And then SD-WAN has its own attributes, its own picture of the world. It understands attributes of the user and the client, their identity and location. It understands attributes of the network path, its performance and health, and what the connectivity looks like between the clients and your applications. When we bring these two things together, we're basically taking all these attributes and creating policy. And really, these attributes are important for different personas. As we go through this presentation, you'll see that there are DevOps users, or service owners, that are deploying their applications in Kubernetes, and there are also NetOps users that are controlling the network and ensuring that there's a consistent, secure, and reliable network between users and applications. If we have a policy-based infrastructure connecting these together, then we have a very nice, role-oriented way in which NetOps and DevOps can work together, so that DevOps has more control over the types of network policies that are applied to their applications, and NetOps gets an understanding of how applications are being accessed and how clients are using them. 
And so this provides a lot of benefits, and over the next couple of slides, Alberto is going to show you some of the open source technologies that we're using to help users connect these two technologies together. Thank you, Marc, that was a fantastic introduction. So yes, as Marc said, let's spend some time now talking about open source and what we have done here. This is about the Cloud Native SD-WAN (CNWAN) project. The starting point for the CNWAN project is that today you have SD-WAN and Kubernetes, and the two may be running together perfectly fine, with both DevOps and NetOps making sure that things look nice on both ends. But there is little integration between the two, and we are missing some optimization opportunities. So the whole idea of the project is to take this picture that you have here and make it a bit more colorful, for each of the different services that you may have running on your cluster. In this example, a video conferencing application that is composed of different microservices, each with different requirements from the network point of view. The idea with the SD-WAN integration is to make the SD-WAN a bit more aware of what's going on on the application side, on the Kubernetes side, so that it can provide optimizations specifically tailored for each of the microservices that the cluster is running. So with that, let's take a look at the architecture. Everything starts from the same picture we were looking at: the NetOps and the DevOps, with NetOps taking care of the SD-WAN controller and DevOps taking care of Kubernetes. As probably all of you know, on the Kubernetes side a lot of things start by deploying a YAML file specifying your service. So the idea that CNWAN introduces is that in that YAML specification, the DevOps can put a really simple annotation, and we'll take a look at the annotation later in the presentation. 
A very simple annotation that specifies how the traffic for this particular service looks, that is, the kind of expectations that the service has on the WAN side of things. And that is something that, through the process we're going to talk about right now, the SD-WAN can use to optimize for this particular type of service, for the traffic that the service is generating. Once you have this annotation in the YAML specification, there is a component that we call the CNWAN operator that is running on the cluster and monitoring the services for this annotation. The moment it notices that a service has been annotated with this CNWAN metadata, it pushes the information to an external service registry. Today we support, for instance, Google Cloud Service Directory as the service registry, but in principle this can be any service registry. Once the information is available and easy to consume on the service registry, we have another component we call the CNWAN reader, which retrieves this information from the service registry: it retrieves the service with its IP and port, so it knows how the service looks from the outside world, for instance a public cloud address, and it retrieves as well the metadata associated with the service, so that it can pass that information to a component we call the CNWAN adapter. The idea with the CNWAN adapter is that this component knows how to interact with the SD-WAN controller, so you're going to have a different CNWAN adapter for each SD-WAN solution you are using. And the idea is that the NetOps is going to configure things in two places. First, on the SD-WAN controller, they configure some SD-WAN policies, as NetOps can do today. Then they also go to the CNWAN adapter and tell it: look, I have these policies configured on my controller. 
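To make the operator's side of this flow concrete, here is a minimal sketch of what it conceptually does when it sees an annotated service: filter out the CNWAN annotations and build the record that gets published to the external service registry. The function and field names here are illustrative, not the project's actual API.

```python
# Conceptual sketch of the CNWAN operator's core step: given a Kubernetes
# Service (represented here as a plain dict), extract the cnwan.io/*
# annotations and build a registry record. Names are illustrative.

CNWAN_PREFIX = "cnwan.io/"

def build_registry_record(service):
    """Return a registry record for a CNWAN-annotated Service, or None."""
    meta = service.get("metadata", {})
    annotations = meta.get("annotations", {})
    # Keep only the CNWAN annotations, stripping the prefix.
    cnwan_meta = {
        key[len(CNWAN_PREFIX):]: value
        for key, value in annotations.items()
        if key.startswith(CNWAN_PREFIX)
    }
    if not cnwan_meta:
        return None  # Not annotated: nothing to register.
    ingress = service.get("status", {}).get("loadBalancer", {}).get("ingress", [{}])
    return {
        "name": meta.get("name"),
        "address": ingress[0].get("ip"),          # how the service looks from outside
        "port": service["spec"]["ports"][0]["port"],
        "metadata": cnwan_meta,                   # what the DevOps declared
    }

svc = {
    "metadata": {
        "name": "video-streaming",
        "annotations": {"cnwan.io/traffic-profile": "video"},
    },
    "spec": {"ports": [{"port": 8080}]},
    "status": {"loadBalancer": {"ingress": [{"ip": "203.0.113.10"}]}},
}
record = build_registry_record(svc)
```

The record is what the reader later consumes from the registry: the externally reachable IP and port plus the DevOps-supplied metadata.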
I want you to map these policies to the information about the services you're receiving from Kubernetes. That way, when a service is retrieved from the registry by the reader and passed on to the adapter, the adapter knows, by looking at the metadata of the service and at the mapping of policy to metadata that the NetOps configured, which SD-WAN policies should apply to this service. So it can pass the information about the service to the SD-WAN controller, and the SD-WAN controller can apply the proper policy for this service. I know that this was maybe a bit too abstract and high level, so let's look at what this means in practice. For that, I'm going to use this super simple example here, which showcases an already common SD-WAN deployment. A typical enterprise may have multiple branches, and may be running some applications in the cloud on Kubernetes, right? This enterprise wants to have good connectivity between those branches and the application that is running on the cloud, so it deploys an SD-WAN. This is a pretty common deployment. On the SD-WAN side of things, you usually tend to have multiple links over the network, so you can play with the different performance that you are seeing on the different links, split the traffic between the links, and so on. In this particular example, we are showing an SD-WAN connection from branch to cloud that happens to go through an ISP network that offers two different types of connectivity, what we call in the picture public internet and business internet. Public internet is best-effort connectivity, very much what you can get at home. It may have lower bandwidth, but it is also cheap. So the enterprise decides to put as much of the traffic through the public internet as possible. 
But then it also has what we call in this example business internet: connectivity that has higher bandwidth, and may have lower latency as well, and so on. The thing is that it comes at a higher cost: if you want to use the business internet, you have to pay a bit more. That's why, by default, any traffic going from the branch to the application goes through the public internet, because it's cheaper. But then you may have a particular service that requires more bandwidth, for instance, and you want to deliver good bandwidth for this service. Let's say you have a video streaming application running on the cluster, and the NetOps has noticed that for video streaming, the public internet is not delivering good enough performance. So the NetOps sets up a policy on the SD-WAN that says: when you know that there is traffic for this particular video service, put that traffic on the business internet. That's a policy the NetOps may have set up on the SD-WAN. And the DevOps knows that they can use an annotation on the Kubernetes side, using the CNWAN integration, that is going to interact with the policies that the NetOps has provisioned. So the only thing the DevOps has to do is annotate the service with the agreed annotation; the NetOps and DevOps coordinate on what those annotations look like. By just deploying that annotation, and through the integration we have described with the CNWAN project, the traffic automatically shifts and starts using the business internet. And this happens behind the scenes, without any manual interaction on the NetOps or DevOps side: just by the fact that you have this service with this particular annotation, the traffic is automatically put on the link that the NetOps knows is best for this particular type of traffic. 
So, with that, let's double-click on each of the components of the CNWAN solution. I intend to offer a bit of an overview of the components, and for you not to get too lost, I'm putting a diagram at the top pointing at which component we are looking at at every moment. First, we're looking at the CNWAN operator. This is the component that runs within the Kubernetes cluster. It is your typical Kubernetes controller, which we have written in Go using the Kubebuilder framework. The idea with the CNWAN operator, again, is to look for annotations on services, and you have an example of this annotation on the right. You see here, for instance, cnwan.io/traffic-profile equals video. This is just an example of an annotation, one we use as a default in the PoCs that we do with this technology. But this can be any kind of annotation: as long as there is some understanding between the DevOps and the NetOps about which kind of annotation is expected, you can put anything you can think of, and then configure the CNWAN operator to understand those annotations. The CNWAN operator, again, is going to register this information with the service registry. Today we support Google Cloud Service Directory, but we have some ongoing work to support DNS in general, and also AWS Cloud Map. We're also making an effort to make the operator easy to install. Today we offer some scripts that sort of automate what would otherwise be plain kubectl commands, but we have some ongoing efforts, again, to offer this through OperatorHub and, of course, to offer the installation through Helm as well. You have at the bottom the link to the code, so you can take a look for yourselves. The second component is the CNWAN reader. This one can run anywhere, but it tends to sit closer to the SD-WAN controller, closer to the SD-WAN deployment. 
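To show how little the DevOps has to do, a Service annotated for CNWAN might look like the sketch below. The annotation key follows the cnwan.io/traffic-profile example from the slide; the service name, selector, port, and profile value are all illustrative.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: video-streaming            # illustrative service name
  annotations:
    # The CNWAN operator watches Services for annotations like this one.
    # The key and its accepted values are whatever DevOps and NetOps agree on.
    cnwan.io/traffic-profile: video
spec:
  type: LoadBalancer
  selector:
    app: video-streaming
  ports:
    - port: 8080
      protocol: TCP
```

Everything else about the Service stays a completely ordinary Kubernetes spec; the annotation is the only CNWAN-specific piece.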
Again, this is written in Go, and the idea with the CNWAN reader is that it's monitoring the service registry, be it Service Directory today, or DNS or AWS Cloud Map hopefully in the future, for new services and for annotations on those services. As for how it monitors those services, today we support gRPC polling towards the Google Cloud Service Directory, and we're also looking at some Kafka and pub/sub integration. Once it detects a new service, or new information for an existing service and so on, it passes that information to the adapter, which, if you remember, is the component closer to the SD-WAN controller. The way the reader interacts with the adapter is through what we call the CNWAN events API. That is an API that both the reader and the adapter have to speak, and we provide an OpenAPI schema in the GitHub repository, so you can take a look. On the right is an example of how this API is used. This example shows a new service on the service registry that was annotated on the cluster with traffic-profile equals video, for instance. Depending on what you put in the cluster and how you configure the operator, you may see different metadata attributes here, but the idea is that the reader passes this information to the adapter. So this example is saying: I've seen a new streaming service that was created, and here are its IP and port. These are the IP and port you can use to identify the traffic over the SD-WAN. And this is what the DevOps said about the service: the DevOps said, okay, the traffic profile for this service is video. With that, we get to the CNWAN adapter, which implements the CNWAN events API and takes care of this side. Now, the interesting thing about the CNWAN adapter is that it is specific per SD-WAN controller. So, if you have different SD-WAN solutions, you may need to use different adapters. 
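To make the reader-to-adapter hand-off concrete, here is a rough sketch of the kind of event payload just described. The authoritative schema is the OpenAPI file in the GitHub repository; the field names below are illustrative, not the project's actual schema.

```python
import json

# Rough sketch of a reader -> adapter event in the spirit of the CNWAN
# events API. Field names are illustrative; see the project's OpenAPI
# schema for the real contract.

def new_service_event(name, address, port, metadata):
    """Build an event announcing a service that appeared in the registry."""
    return {
        "event": "create",        # could also be an update or delete event
        "service": {
            "name": name,
            "address": address,   # IP the SD-WAN can use to classify traffic
            "port": port,
            "metadata": metadata, # whatever the DevOps annotated
        },
    }

event = new_service_event(
    name="video-streaming",
    address="203.0.113.10",
    port=8080,
    metadata={"traffic-profile": "video"},
)
payload = json.dumps(event)  # what would travel over HTTP to the adapter
```

The adapter only needs the address/port pair to identify the traffic and the metadata to pick a policy; everything else is bookkeeping.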
So, as of today, we only have one adapter, which is for Viptela SD-WAN, and we are now starting to work on supporting Meraki SD-WAN as well, so soon you will be able to use either of those two. And any SD-WAN solution can work: again, it's an adapter, and the idea with CNWAN is to be modular enough that anyone can write an adapter for the SD-WAN of their choice. Since the only adapter that we have today is the Viptela one, that's the only one I can talk about, because there's nothing beyond ideas for the others yet. The Viptela adapter is written in Python using the Viptela SDK, and the way it works is that it offers a mapping API that basically maps the metadata it got from the reader to a particular Viptela policy. This is specific to Viptela, but if you look at the right side of the slide, you'll see: when there is a metadata key traffic-profile with value video, map it to this particular policy. What we defined for Viptela is a policy of type application-aware routing, a specific policy type on Viptela; other SD-WAN solutions will have different policy types. So in this example, the NetOps created an application-aware routing policy named OptimizedVideo, and then went to this mapping API on the adapter to configure the mapping between traffic-profile video on the Kubernetes side of the world and the application-aware routing policy OptimizedVideo on the Viptela side of the world. In general, other SD-WAN solutions may have similar policies, so you can follow a similar model, and we hope that this adapter can serve as an example for supporting other SD-WANs in the future. So with that, we are at the end of the presentation, and we would like to have some Q&A with all of you. But before we jump into that, we would like to put some questions on the table ourselves. These are things that we are asking ourselves and don't have answers to yet, so we would like to see what you think. One is: can this model apply not only to WAN, but also to LAN, right? 
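The mapping the NetOps configures on the adapter is conceptually just a lookup from metadata key/value pairs to a policy on the controller. Below is a minimal sketch of that idea; the policy name "OptimizedVideo" and the function names are made up for illustration, and the real Viptela adapter does this through its REST mapping API rather than an in-memory table.

```python
# Conceptual sketch of the adapter's metadata-to-policy mapping.
# (metadata key, metadata value) -> policy name on the SD-WAN controller.
# Entries like this are what the NetOps would configure; names are illustrative.
POLICY_MAPPINGS = {
    ("traffic-profile", "video"): "OptimizedVideo",
}

def select_policies(metadata):
    """Return the controller policies that apply to a service with this metadata."""
    return [
        policy
        for (key, value), policy in POLICY_MAPPINGS.items()
        if metadata.get(key) == value
    ]

# A video service gets steered by the application-aware routing policy...
video_policies = select_policies({"traffic-profile": "video"})
# ...while a service with no matching mapping falls back to default routing.
unmapped_policies = select_policies({"traffic-profile": "bulk"})
```

With a lookup like this in place, the adapter's job on each event is just: select the policies for the event's metadata, then push the service's IP and port into those policies on the controller.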
So, what we have described so far is for SD-WAN connectivity, or WAN connectivity in general. But if you think about it, nothing prevents Kubernetes from having a similar framework in which you can specify requirements, the types of traffic, and so on that a service has, and a LAN, a local network, your cloud network, your data center network, whatever you have there, may also deliver specific optimizations for those services. That takes us to the second question: should Kubernetes have semantics in general for types of traffic? We don't have answers for these questions, and we hope to have a conversation with you about them. You can see at the bottom the GitHub repo with all the code. We also have some documentation there, and some nice automation that brings up a small PoC on the cloud, so you can also try that. There is also a mailing list where you can send us some notes. And with that, we'll be happy to take any questions.