Hi, and welcome to the session on cloud native for edge and service providers. This is a sponsored Cloud Native Computing Foundation session. My name is Taylor Carpenter, and I'm an owner in the software cooperative Vulk Co-op and an open source advocate. I've been using Linux for over 20 years, and I've been working with CNCF on CI/CD, DevOps, and cloud native networking since 2017. Today we're going to have a multi-part session. We'll talk a little bit about CNCF and the telecom initiatives it has, and then give time for six CNCF projects to talk about their project and how it can be used for telecom and edge solutions. We'll start with Falco, with Spencer from the Falco team; then Andres from the SPIRE team will be next; and then Wally from NATS will talk a little bit about the NATS project. We're gonna have a quick break, and then we will look at CNI-Genie with Susan Shetty, then we'll take a look at Network Service Mesh with Frederick Kautz, and our final project will be with Alex from the Operator Framework. We'll have a little bit more time for Q&A at the end. To get started, I'd like to hand it over to Priyanka Sharma, general manager of CNCF, to talk a little bit about the Cloud Native Computing Foundation, following up her keynote. Priyanka?

Thank you so much for that intro, Taylor. Hello, everybody. I am Priyanka, as Taylor said, and I'm the general manager for CNCF. A few minutes ago I was giving the keynote for the Open Networking and Edge Summit; I hope some of you caught that. I have been involved in the cloud native ecosystem for quite some time now. In the earliest of days, in early 2016, I was working on an open source project called OpenTracing, which became the third one to join the foundation. So as a project contributor, as an evangelist, as someone sharing the new observability technology we were building, I spent a lot of time working with end users, working with other technologies, building what we thought the future should look like. And today, in 2020, the foundation has just blossomed, thanks to the work of my predecessor Dan Kohn, my colleague Chris Aniszczyk, and the rest of the team. I'm very proud of where we have come. I think cloud native has evolved a lot in the last four years, and I talked a little bit in my keynote about the second wave of cloud native...

Sorry, someone's having audio issues; let me make sure people can hear me. Can people hear me? It seems like they can, but I'm now nervous because no one has replied to me saying so. Okay, people just said they can hear me. Okay, great. Thank you.

Well, yes. So, as I was saying, a lot has evolved in the last few years. Today we're in the second wave of cloud native, and what that means is we're looking at how key technologies from our ecosystem, such as Kubernetes and observability tech and all the great projects that are presenting here today, can support edge workloads and also help service providers. I'm really proud of all the work Taylor Carpenter has done to progress us in this direction, and I want to thank all of the panelists who came together very quickly to present to you about their specific projects' usability for edge workloads. So without much ado, I would like to say welcome; thank you for joining us. I believe this session will be very useful for you, and if you ever want to reach out to me, I spend a little too much time on Twitter. So you see my hand... oops, that's gone, I guess. You see my handle is on the slides.
It's @pritianka; I'll put it in the chat as well. Please feel free to find me there and continue the conversation. With that, I'll send it back to Taylor, and let you enjoy this tutorial.

Thanks, Priyanka. All right, those were all cool things, which I could go over, Taylor, if you recommend. Well, I think for those that didn't see them, these are the driving factors behind CNCF's success: keeping things open, with open governance across the board, and you see that really trying to pull in end users and committees. That's a big one. We are a different kind of open source. Absolutely. And then promoting multiple choices, so end users are able to come and pick things, and sharing across the different projects. That is, I think, a big part of open source in general, and CNCF has that as part of its driving principles. I would absolutely agree with that. Sorry, I'll just jump in there quickly. I think that what we have grown into is having diversity of voices: whether that's small projects that come into the sandbox and innovate in a neutral IP zone, whether it's end users who are giving guidance and direction to all these projects and also sometimes contributing their own, sometimes doing roadmapping with folks. There's a deep integration of all kinds of voices in CNCF, and I believe that has been our magic sauce (it's not secret, because we are open source, but it has been our magic sauce), and we believe in continuing this now with this conversation with edge developers and with service providers. We want to bring you into the fold and have an even more diverse group that is working together on the eternal quest of modern systems, and that's the path forward. Sorry, if you can go to that slide, please, the "our path forward" slide. Yes, thank you. So: a foundation of doers, with end-user-driven open source. We really believe in fostering developer education and engagement around the world, so we do a lot of certifications and trainings, which you can look at if you're interested in getting more conversant in your cloud native skills; I highly recommend you check those out. And finally, we are a global, robust organization with communities around the world that come together to build the gold standard of open source. So, thank you for being part of that journey.

Thanks, Priyanka. This ties in with, I'd say, not just the CNCF-specific things. I'll talk just a moment about the CNCF telecom initiatives; the intent is to foster them and make them available to the whole community, and also all of the projects, webinars, and everything else that you can see, including for the projects that we're talking about. You can get involved and learn more about those.

So, taking a look at the CNCF telecom initiatives, and tying this back to telecom and edge: CNCF is very much trying to provide the technologies, and how to navigate and bring the knowledge over into these domains, and make it so that developers, vendors, any type of creators are able to understand and utilize those cloud native technologies inside of telecom. With that, there are three initiatives that CNCF has specifically started: the Telecom User Group, the CNF Testbed, and CNF Conformance, all driven by those cloud native principles and technologies.

The Telecom User Group is, if you're familiar with user groups, exactly that: a place to come and share your ideas, and talk about problems you may have. You can take a look at technologies, discuss areas where there are gaps, or talk about things from the telecom side and say, "here's what we would normally do; what are best practices that we could follow from the cloud native and Kubernetes-native type of mentality?" It meets monthly on Zoom, and there's also a mailing list.

The CNF Testbed is a framework: a whole set of toolchains and examples that you can use. It deploys a Kubernetes-native platform and can deploy various add-ons to Kubernetes, like CNIs and storage or whatever you may want, and then tests how they work together, specifically exploring newer technologies that are coming out, and looking at use cases and examples that may be more traditional on edge or telecom and how you would actually use them. So it's a place to try those out, and it can be used for new development or to test out what's already there.

The CNF Conformance initiative is an open source test suite, and its goal is to provide testing and validation of cloud native and Kubernetes-native best practices for CNFs (cloud native network functions) and the underlying telecom platform. It's trying to provide service providers with a way to pick out solutions that follow these practices when that's important to them, and also to help developers, which includes vendors and open source projects, get feedback to improve the technologies they have. It follows the Kubernetes certification type of process and program, where you can download it and run it against your own platform or applications.

All of these have Slack channels on the CNCF Slack. We have the telecom user group channel, where there are conversations about all the different initiatives. Feel free to reach out to me, taylor@vulk.coop, or you can reach out to Dan Kohn, dan@linuxfoundation.org, as well, for specifics on any of these initiatives. And you can join a weekly technical-focused meeting on Thursdays for the CNF Conformance and Testbed, where different community members from other CNCF projects and different companies come together to discuss use cases and how to implement them, and then the monthly Telecom User Group, which is on the first Monday with alternating times to allow different time zones to join. We'll be talking more about all these initiatives on Wednesday at 1:45 Eastern if you'd like to hear more in depth about any of these.

All right, Spencer, if you are ready, then you're up to talk about the Falco project. Excellent, thank you. Sorry, a quick question from the audience: they're asking how they can access the slide deck. I'm happy to share it if someone wants to put it right in the chat; otherwise, we will upload the slides as a PDF to Sched after the session. There you go. Hey, Shem, you asked the question in the Q&A; thank you for doing that. Everybody is very much encouraged to populate the Q&A with questions. This is meant to be a tutorial, and you have these great experts here, so utilize it. And Shem, you'll have the PDF slides on Sched. Thanks, and sorry, Spencer. Okay, I'm gonna drop this in the Zoom chat... to everyone, all panelists and attendees. Okay, so there are some links, and then I'm going to share my screen (hopefully this will work), present, and put it in the chat. Can folks see the slides? Can they see me? Maybe give me a thumbs up or thumbs down... I'll just assume it's working. So: container runtime security with Falco. Thanks for that great introduction to the CNCF;
it takes some pressure off me to contextualize. I'll just say off the bat that Falco is an incubating project; the CNCF has three tiers, and we're in the middle tier. I work at IBM; there's my Twitter handle and such. So, about me: I have an operations background, so my job has typically been things like site reliability and DevOps, wrangling servers, putting disks into servers back when we used to do that kind of thing. I approach most of the technology I work with from that operator angle, and classically from the position of a team that provides services internally to other teams inside my company: how do people use technology to get that done? I live in Minneapolis, Minnesota, which of course has been in the news lately, but I promise everything's... maybe not great here, but it's not a lawless land either. I play a lot of StarCraft and a lot of Counter-Strike; if you'd like to play in the bottom leagues of either of those games, please hit me up. I've been doing containers professionally for the last three or four years, ever since I kind of left the OpenStack community, and I do a lot of stuff on Twitch, specifically with the IBM Developer brand.

So, DevOps has been defined and redefined a million times, but generally the way to think about it is breaking down the silo between a development team and an operations team. Very traditionally, in the past, development would write stuff and throw it over the wall to operations, who would run it with no real visibility into how it works; they'd be angry at each other. DevOps is increasingly about blurring those lines: making sure everyone is responsible for everything, that they're co-developing the solutions, and that there are quick feedback loops. If you're doing agile development, you might see a pattern where we walk from requirements to planning to design to develop to monitor, and then go right back to requirements, right? If you flatten that out and zoom in on the development phase, which is sort of what developers are doing, and then linearize it so I can talk about it a little easier: you write code; you build the application itself, whether that's a make or an npm install or something; you run some conformance tests; you build, probably, a Docker image of some type; you deploy that; and then you monitor. And what I want to talk about today is the way we can add security at three levels: at the build stage, at the build-image stage, and at the monitoring stage. Falco helps the most with the monitoring stage.

There are a lot of security topics out there (threat modeling, runtime detection, enforcement, security rules, fuzzing) that I am not the expert on, and Falco doesn't do all those things. What Falco really does is monitor runtime environments for potentially malicious activity. And we're not going to necessarily do all of these, because I only have a few minutes, but what we are going to do is create some synthetic security events, and then we're going to see Falco pick those up.

So: containers complicate the security picture a lot.
I think traditionally, the operations team and the security team had, by virtue of being in a particular place in the pipeline, a lot more control over what was deployed and when. You know, the classic example: the year is 2010 or 2011 or something like that, and the development team would like to use a new library. They had to ask the operations team to install it on the servers in production, and if they didn't get it, it wasn't going to run. So security and operations both had a very clear understanding of what was installed where. When you get into Node modules and Ruby gems, and then today with Docker, development can pretty much throw whatever they found on the internet into production, and it's very difficult for anyone, security or operations or otherwise, to put some kind of structure in place to prevent that from exploding. And in a way, you don't want to, right? Because developer velocity is probably tied directly to revenue for the company.

The other component of it is that, because they can grab anything they want, there's a diversity: most large organizations are using pretty much every language under the sun, not just Python, Ruby, Node, and Go; they're also using a bunch of different frameworks and libraries and things. The other component is speed. If you assume a Kubernetes deployment model, a container can spin up, do some work, and then spin down, and it can do this multiple times in an hour. Or, the next hour, it might not spin up at all. Security and operations used to be able to have this steady-state approximation and reason about things, right? If a malicious packet or whatever comes from this IP address, we know which host that was, we know what software is running on that host, we know which team is responsible for the software running on that host. Now containers might live for a few seconds, do their work, and then never come back for a month, and it's difficult to know when a container was launched, what node it was running on, what it was doing, what code was in there, and how to recover any of that. So that ephemeral nature of the container deployment model makes it somewhat difficult to track down exactly what we need to be doing here and how to trace any kind of security incident. Containers make everything harder because everything is more numerous and everything is more ephemeral. It's the giant bag of Lego bricks now. There's only so much chance we have. So: Falco.

Falco is a CNCF incubating project. It's open source, Apache 2. It's driven heavily by Sysdig, the company, but it has contributors from lots of other companies; for instance, I work at IBM. It's a runtime security tool, and the core functionality of it is kernel-level system call watching. If you're familiar with something like Wireshark, where you're watching every packet on the network and you can determine traces and things like that: this is that, but for the kernel. You're watching every system call, every open, every read, every write, every gettimeofday, and then you're tracing through that mess looking for potential security-incident events. It has an expressive rule set, which we'll get into, and it has really flexible alerting, so you can send stuff to logs, you can send stuff to Slack, you can send stuff to serverless functions. Whatever you want to do with these events, you can do. I need to update this slide:
there is a sysdig.slack.com, but it's actually the #falco channel in the Kubernetes Slack where most of the development discussion is happening at this point.

One thing to bring up, because it does come up most of the time: is Falco a prevention tool? The answer is no; Falco is an identification tool. It's sort of like your house: you have deadbolts for the doors, but you also have some kind of alarm system that makes a noise when a window gets opened, stuff like that. You kind of need both in order to feel like you're actually secure.

So the system architecture looks like this (I am bad at graphics, but bear with me): say Falco is this C++ program in the middle, which it is. It has a number of inputs. The first is the kernel module; there's also an eBPF probe for, generally speaking, just more safety. That's all the system call events running into Falco. Those are compared to rules and configuration, and then, if an alert is generated, because in this filtering section we found a system call that matched one of the rules, we go off into the alert system, and that can either exec a program or send an HTTP request; it can dump the alert into Slack or a database. There's also a gRPC output, so you can write a gRPC client to interact with the alerts in your own programming language.

Another thing that Falco can do, and we'll talk about this in a minute: Kubernetes has support for sending an audit log. You can configure the Kubernetes back end so that any time someone creates a pod or deletes a deployment or whatever, it generates an audit event. Those events all get sent somewhere, and you can dump them into Falco; Falco is one of the better tools off the shelf to read those, comparing them to a dialect of its rule set designed specifically to process Kubernetes audit logs.

Finally, if you look at the alert system: because Falco is in C++, it's kind of annoying to make all the endpoints under the sun work with it. So what we've actually done is built a tool called falcosidekick, which is a little Go program that just takes standard JSON events out of Falco, and this is where you write all the code to hit all these different services, like Datadog, Alertmanager, Loki, NATS, and so on. I think we have a NATS talk coming up here in a few minutes. So if you're thinking, "oh, my team uses... I'll just pick on something... HipChat" (is that real? is that working anymore? I don't even know), but let's say you and your team work in HipChat and you want Falco events: it's like a hundred lines of Go to add another output to falcosidekick.

So, I am not an expert in the edge, by any stretch of the imagination. I'm a total novice, but I wanted to sit down and try to contextualize some of the things you can do with Falco that are relevant to the edge. So, before we get into the use cases, what I want to do is show a little demo; let me know if this is big enough for everybody. This is just a Unix system, and we have Falco running; there's a lot of stuff running, but that includes Falco running as root. You can install Falco as a Kubernetes application with Helm or something.
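(For reference, here's a minimal sketch of that Helm route. It assumes the falcosecurity charts repository; "falco" is just an arbitrary release name, and in practice you'd likely pass values overrides:)

```sh
# Hedged sketch: add the falcosecurity chart repo and install with defaults.
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm install falco falcosecurity/falco
```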
For security purposes, though, given that it's a security tool, we actually recommend you install it on the node itself. It's difficult for it to secure something like Kubernetes if it's running on Kubernetes. And while Falco has great container and Kubernetes support, it really is a first-class Linux program as well, so it can run on pretty much anything. The exact numbers escape me, but the requirements of this binary are not crazy high. It's also not multi-threaded, so if you put it on some tiny edge ARM device, it's actually going to work okay. We have ARM support for Falco, though it's, I mean, it's not like we have a room full of ARM cores where we're testing every variation; it's more like best effort. Although, because we're in that sandbox/incubating phase, this is the phase where we really want to get more users and more use cases. So if you're sitting here thinking, "man, I've got 2,000 ARM chips deployed out in some factory somewhere and I want to use Falco," hit us up. We would love to add you to our core use cases.

Anyway, so the program is running; we can tail the logs. All right, now what I'm going to do is a quote-unquote security event: I'm going to cat /etc/shadow, and because I don't trust any of you, I'm sending the output to /dev/null. But /etc/shadow, if you didn't know, is a very important file in Unix; it's where all the passwords and the secrets are. If a malicious program is reading /etc/shadow, something has gone terribly wrong. All right, so we ran that, and then when we come in here and read the logs, we get this nice little error. And this is exactly what Falco does. It says: output, some timestamp, warning, sensitive file open for reading by non-trusted program, user root, program cat, command `cat /etc/shadow`, blah blah blah, the parent, the grandparent, the great-grandparent. This is what it's for: when things happen on your computer that you're afraid of, it will identify them. Because it's running in the kernel, there's no escaping the fact that this is running. It'll generate an alert, and right now it's configured just to send it to logs, but it's super easy to throw this into whatever else you're into.

Just to show them side by side, because that's kind of fun too: we can do a `netcat -l 4444`. We can do a `sudo touch` (and then rm) of a file under /bin. Right: Falco error, file below a known binary directory open for writing. And the way this is configured, as you can see when sketchy stuff happens and the program just picks it up and generates these alerts, is something called the rules file, which I guess I can spend 30 seconds on. This is a pre-baked file of rules; if you look at it, it's 2,000 lines long. These are the rules that are in our GitHub repo. You're not required to use those rules; they're just starting points for you.

So that's the example. When we talk about edge (and I'm no edge expert; I think I have to fast-forward through my slides here), I'm trying to empathize with what your needs might be, and if this doesn't exactly map, I do apologize. I'm available on Twitter and chat and such if you have any follow-up questions. So one example is you having a tiny little IoT device in some warehouse or something, a little Raspberry Pi type thing, and its job is to talk over serial to some even dumber chip, right?
A chip whose only purpose is to run a servo, or blink LEDs, or whatever. You can write a rule very simply that identifies when a program is trying to talk over that serial device. So you could say something like `open_write` (which is a macro that exists) plus a check that the file descriptor name contains ttyACM0, which is generally where that serial device is going to appear. You can obviously customize that. And while that's going to alert every time your own program runs, you can also just whitelist the name of your program, which of course can be spoofed; there are limits when you're just comparing names and such. But this does happen at the system call level, so it's not like you can escape it. It's not like setting a bash variable to the path of the file and then cat-ing that is going to escape this kind of, whoops, this kind of containment. If somebody writes a program called ./hackyou, runs it, and it talks to ttyACM0, you're going to get an alert out.

You can also alert on shells. Realistically, when we talk about edge, I imagine there are a lot of edge deployments, and those edge deployments don't get updated all the time. It's just a fact of life, right? So being able to identify immediately when someone is popping a shell, or trying to pop a shell with netcat or bash, or listening on some high-numbered port: we have pre-built rules to try to capture that stuff, and you can come in and customize them. There's a lot of whitelisting in there; you're probably not running GitLab, so you could probably delete that line, and you're a little more secure because of it.

Another option is sort of firewall-bypass detection. The example here is a little contrived, but imagine you have a little Linux edge thing, a tiny computer, locked-down network. The application listens entirely on localhost, and packets to it, you know, requests for data, are proxied through something like nginx or Envoy, right? And that nginx or Envoy is whitelisting IP addresses and checking tokens, and then it sends things on to localhost:8080. Which I think is actually pretty reasonable, right? Because I know when I'm talking to edge TPUs and video-grabbing webcam-type things and serial ports, my programs are pretty dumb. It's just enough Python to get the work done, because it's already kind of janky. So yeah, I might listen on localhost:8080 and trust the proxy for prevention. But what happens if another service is compromised, or somebody manages to do something where they can make connections to that port? Well, you can write a really simple Falco rule just to watch whether anything is hitting port 8080, and you can detect whether the process that's hitting it is that nginx process or not. You can check whether the home-base IP is used or not. You can put this rule in place and then read the logs later, just to determine whether anything sketchy has happened.
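(As a concrete illustration, here is a hedged sketch of what those two custom rules could look like in a local rules file. `open_write` is a macro from Falco's shipped rules, and the `fd.*`, `proc.*`, and `evt.*` fields come from its rule language; the device path, port, and program names, `my-sensor-app` and `nginx`, are assumptions for this example, not anything from the session:)

```yaml
# Hypothetical local rules, e.g. dropped into /etc/falco/falco_rules.local.yaml.
- rule: Unexpected serial port writer
  desc: Something other than our own program opened the serial device for writing
  condition: open_write and fd.name startswith /dev/ttyACM and not proc.name = my-sensor-app
  output: "Serial device opened by untrusted program (user=%user.name command=%proc.cmdline file=%fd.name)"
  priority: WARNING

- rule: Local service reached by something other than the proxy
  desc: Only the nginx proxy should connect to the app on 127.0.0.1:8080
  condition: evt.type = connect and fd.sip = 127.0.0.1 and fd.sport = 8080 and not proc.name = nginx
  output: "Unexpected client of local service (program=%proc.name connection=%fd.name)"
  priority: WARNING
```

Either rule just generates an alert; routing it to Slack or falcosidekick is a configuration change, as described above.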
You can also... there's a great blog post from Chick-fil-A: they put a small Kubernetes cluster into every single one of their franchises, like 2,000 franchise Chick-fil-As, and it runs all the edge services they need for each one. What you can do with Falco is read the audit information from all of those Kubernetes clusters. There's obviously a lot of management in running a big, scaled-out, multiple-Kubernetes-clusters thing, but you could filter all of their audit requests, which are really just HTTP POST requests, through your back-end network into one giant Falco process, or a horizontally scaled Falco process, to process all of those audit events against a Falco rule set, and either generate alerts or generate data that you can use to determine things. You can generate a lot of stuff from that in terms of actionable activities: these are the attacks that are in the wild, these are the attacks that have never happened to us before, this has happened in one restaurant but hasn't spread to other restaurants, stuff like that.

And then finally, this one is pretty similar, really, to the ttyACM0 one: you can also identify if something is talking to /dev/video0. This is an example of using fd.name, so that's the file descriptor, not just the arguments in the system call, and you can compare things like this. You can either whitelist the program that you have, or you could not whitelist it and look at the exact cadence of how many times this is being opened, and compare that to other things you know about your application and whether that's expected behavior or not.

This is an example of... if you look at this rule, there's this whole syntax in YAML for defining these rules. In this kind of intro we're not going to get too deep into it, but there's a macro called `inbound`, and this is an example where those 2,000 lines of open source rules really help you (and this isn't even all of it; well, I guess this is all of it, actually). It's a reasonably complicated condition to try to make sure that we match exactly all inbound connections, but not outbound connections. And now that you have this `inbound` macro, you can just say `inbound` plus whatever port I'm interested in, and you get all the connections to your service, and then you can filter down into what's sketchy, what's expected, et cetera.

So that's Falco, really quickly. There's a ton of resources here; I put some of them in the chat, and I can put them back in the chat. I am available for any questions; I think rather than having people come up, just put the questions in the Q&A and the chat. The Falco community meets once a week on Zoom, and our Slack is the #falco channel on the Kubernetes Slack. Thanks to all these wonderful people. And then, if you're really ambitious, you can go to this tiny URL where you can grab a Falco workshop. It's designed to be sort of instructor-led, but you could probably limp along self-paced if you'd like. So thank you. I think I'm ready to hand it back to the panelists, if everyone else is okay with that.

Okay, there's a question: can you use Falco on a regular Linux server, as a basic tripwire-like solution? Absolutely. That's exactly what we were doing in that demo; there were no containers.
And in fact the container support is kind of cool. It's not an afterthought, but if you think about it, Falco is just reading system calls, and it turns out a system call in a container is still just a system call; it just has some extra attributes to it. And what Falco will do is talk to Docker, talk to containerd, talk to Kubernetes, to figure out: okay, this is the namespace where that occurred; what's the Kubernetes pod, what's the Kubernetes namespace associated with that? Okay, it puts that all into a nice log line and sends it to you. So you know where on the file system that image is unpacked. You also know what Kubernetes thinks the pod name of that pod is. You know all the information needed to go take action. Great question.

Okay, thanks everybody. Thanks, Spencer. Let's see if I can get back in. Keep the questions coming, and for those that didn't see it, there is an ONES Slack channel for this session, as it's CNCF-sponsored, and probably we're going to have stuff in the cloud native networking channel as well. So if you have any other questions, join that, or the Kubernetes Slack, to talk with the rest of the Falco team.

All right. Next up is Andres, to talk with us about SPIRE. Andres, are you ready? I am; let me hit the share button and switch to my desktop here. Well, I hope you're all doing great. My name is Andres Vega. I work at VMware as a product line manager, and I am pretty much dedicated to all aspects of product, program, and project for the SPIFFE and SPIRE projects, which are at the incubation level, hosted by the CNCF.

So I like to think that the problem is generally well understood. With the advent of cloud and the proliferation of edge devices, and the fact that there's typically not one platform your end users are operating on but a number of them, we have a number of cloud providers, we have a mix of on-prem, and wherever you may be centralizing your orchestration systems, they are deploying out to the edge or elsewhere. So with all of that, the notion of perimeter has evaporated. There's really no defined perimeter in modern architectures. And, as Spencer said, well, you throw containers into the mix: your number of environments may be somewhat static, but the number of devices that come up and down in them is going to be a lot. Identity typically has been platform-mediated, but with all these different layers and all these different systems, there's no uniform or central identity system you can come to rely on, as in the past, which makes it extremely hard to reason about authentication and authorization across these administrative boundaries, across these different technology boundaries. An intermediate solution that is often used is that of shared secrets, but that in itself imposes a number of challenges. If you require access control to a network function, or a secret store, or say just service to service (you have a workload that needs access to a database, or another workload, or a cloud provider service), you come to rely on an API key or a password. Now you need to reason one level down: how are you going to protect that API key or that password?
You're likely going to have to encrypt it, and if you're encrypting it, you need to think of a decryption key, and that decryption key you will need to protect and put somewhere, like a secret store. Now, once you put that there: this is like an infinite regression. For any API key or any secret you need, you're going to need yet another secret to protect it. People often refer to this as the secret-zero problem. So the need arises for a solution that can solve precisely for how you provide a bootstrap credential, get around the operational challenge of the life cycle of these API keys or secrets, and move away from the reliance on having to embed them into your pipeline or hard-code them into your application. If you go do a GitHub search for "client secret," you're going to be astonished by what you find checked into code. So there's a very important need currently for moving away from this dependency on hard-coded credentials, on long-lived credentials, toward delegating it to something API-driven, frequently rotated, and not requiring any a priori knowledge in order to provide secure access.

So that bottom turtle, ending the turtles-all-the-way-down problem around secrets, can be a strong cryptographic identity, and this is where the SPIFFE and SPIRE projects come in, and the problem they intend to address. SPIFFE is really the specification: it defines what the URIs are, what the scheme of the URIs for these identities looks like, what the standard ways are for defining and retrieving those identities, and how those identities are meant to be transported. We're going to look into the specifics in the subsequent slides. So we talked about the identities. There's an API for issuing, retrieving, and managing all the operations around this credential, and through it performing what we call, within the context of the project, attestation: which is how you determine with certainty that this service or this machine is what it is supposed to be, that it should actually run in your environment, that it's not rogue, that it's not a malicious actor. And if it can meet the claims, if it can meet the policy of something that should run here, that is entitled to an identity, then it can authenticate, and you can authorize and encrypt all service-to-service communication. There is the notion of SPIFFE federation: how trust bundles get exchanged between different administrative domains, or between different top-level roots of trust, which may be of relevance in telco and edge architectures. There is the SVID, which is the identity encoding once you define your SPIFFE ID. We support two formats: it's either a JWT or an X.509. There are reasons why you may want to use one over the other: tokens are susceptible to replay attacks, so the recommended practice is to use X.509, but it's going to be a requirement of the architecture. If you have intermediary devices, take an API gateway or anything that does TLS termination, that's where you would want to use a JWT-SVID over an X.509. And the trust bundles are really the packaging that lets a workload validate the identity of any other system trying to authenticate to it, by virtue of being part of the same trust domain.

So, peeling that apart a little bit and looking deeper into SPIFFE IDs specifically: like I mentioned, URIs. What do these look like? The SPIFFE ID is very straightforward.
It's going to be `spiffe://`, followed by what we refer to as the trust domain, and the trust domain is really the top-level root of trust for an environment. This can be modeled after a particular individual, if these are machines that belong solely to you; it can be modeled after an organizational unit; it can be modeled around the development life cycle, so this could be the trust domain for dev, or the trust domain for staging. We don't get really prescriptive about how you derive or model these names; we just define the syntax around it. But it really models the certificate authority: anything belonging within this trust domain can validate against each other. And then this is followed by the workload identifier, or the workload name. This could be something that is ideally human-understandable, so you could have billing/payments, where any instance of that application or that service that is able to present the claims can obtain that identity; or this could be something opaque, just derived from the name Kubernetes gives to a particular service or to a particular container. So there's flexibility there. Unlike human identities (well, there's just a single one of you), machine identities are widely different, because you may have one application that has 100 instances of it, and there are going to be permutations depending on where it runs and what its attributes are, so you can perform those checks at the level of different trusted third parties. You may want to give it some thought, give it some reasoning, around how this best fits your conceptual, your logical, view of the architecture, but these are entirely composable.

So I've talked thus far about the SPIFFE specification: we define the URI; we say this URI can be encoded into an X.509 certificate or can go into a JWT token; and there's going to be a workload API that a workload can use to ask "who am I?" and retrieve that identity, if it meets the claims required for that certificate. How to implement it is left to you. So we went one step ahead in helping the community out, by building a runtime reference implementation of SPIFFE, and that is SPIRE. You could have many other systems that abide by the spec and conform to the SPIFFE standard, but we thought we'd take a stab, go ahead and make it pluggable and extensible, and have people say: well, this has been codified; you can now run with it, put it into your environment, and get a high-level, fully automated PKI for heterogeneous environments. The components SPIRE adds, on top of the SPIFFE IDs, the SVIDs, and having federation in place, are the server and the agent. The servers you can run in high availability, and we're going to look a little bit at the different topologies for SPIRE deployments, as they may fit differently: be it an SP that's trying to cater to multi-tenancy, or edge devices where you may want to consider that there's intermittent connectivity, and what happens if connectivity from an agent to a server goes away. So you have the server; the server is responsible, like, think of it as the global database for all identities. Identities get registered here through the registration API.
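(To keep the ID format just described concrete before the registration example: two illustrative SPIFFE IDs. The trust domains and paths here are made up for the example:)

```
spiffe://staging.example.com/billing/payments         # human-readable workload path
spiffe://prod.example.com/ns/default/sa/web-frontend  # opaque-ish, derived from Kubernetes names
```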
So just going back I would like register example.com slash my service there and I would say if this is running on an edge node and if it can meet a Dpm plane And if it's running this particular operating system and when I interrogate the kernel It has this security id and this process id When this workload comes up And talks to the workload api the agent is Going to look what registration entries are available If it meets the selection of the criteria And if so it's going to make a certificate sign request back to the server And the server is going to sign it and return a trust bundle back to the agent The agent's going to pass it to the workload and once uh, it's obtain its identity It can effectively use it to authenticate against any of the other workloads in the same trust me If you're using x509 you get the added benefit that With the mutual authentication, you also get encryption Let me just check real quick that there are no questions thus far from the audience No seems good carrying on so At a very basic level a spire server inspired agent deployment you're going to have Like if if you're looking at kubernetes since we're talking top native technologies You will have one or more spire servers for high availability purposes and just distribution of load depending how large how many Agents do you have and how many workloads per agent? And uh, the agent's going to run as a demon set on every kubernetes node That is great if you like have are doing like One big large cluster, but as You cross plot provider boundaries. It's hard to do data store replication across those cloud provider environments or If you have like shared multi-tenancy that is like a heart Uh shared multi-tenancy for obvious reasons you you may not want to have multiple ca So if it's soft multi-tenancy and you're providing like identity as a service Uh, you may want to think well, how how do I incorporate the use of intermediate certificates? so The not well-known keys, uh Don't don't come to be a factor and and to the mix. So we support what we refer to as nested a spire deployments in which A spire server can be brought up downstream And it will still meant Identities that can be signed by the top level uh root authority So you're you're growing the scope and size by just the number and we support any level we've tested up to Three downstream spire servers in this chain. 
You're chaining the the spire servers and The use case actually arrived arrived from a large, uh Telco, uh A large global telco where they were doing a lot of IOT and They had intermittent connectivity So they wanted to ensure that they could continue to issue certificates as new ephemeral endpoints came up And there would be a spire server available to to do the signing But that it would all go all the way to the top To the main certificate authority As You think of that you may say well, I I have the requirement for that that is great, but A individual like very large trust domain does represent, uh A large blast radius say you're using a spire How I support for upstream certificate Authorities to work with a ca you may already have before you put in a spire deployment And if your top level root of trust were to be compromised And you have to force rotate that well a way to mitigate the impact and compartmentalize that blast radius Would be to run a larger number of smaller size deployments This could also fit very well if like there is a big trend of As opposed to having a very large kubernetes cluster having Many smaller kubernetes clusters Depending what fits best your your architecture. So Having multiple smaller size clusters you may start to look at and I'll come back to the previous slide having uh, why do we call this a spire federation? Which is having distinct trust domains For the purpose of resiliency and availability And have these servers exchange their public keys once this exchange of uh Trust bundles has occurred a workload can effectively Cross authenticate to a foreign domain and that foreign domain may not necessarily be a Spire deployment it could be Something like a mystical service mesh that also bikes by the spiffy specification It could be hash corp console It could this is all modeled around oidc federation. So you could very well federate against aws I want this particular service That is spiffy identified to present its identity document to aws Without any api keys without any secrets. Just the identity document and upon presenting the document Get an sts token in exchange the scope to an iam a roll binding and it can go right to The s3 bucket or it could talk to rds. It could talk to a lambda function. So uh, a few different permutations of federation, but the idea is to uh grow interoperability of spiffy based identities across different administrative domains without having to rely on a centralized set of spire servers or just making your trust domains too big too large Where it just may be a little bit of overhead and it may just like expose you There is uh support for delegated authorities. This feeds a little bit and illustrating more nested deployments where you would have intermediate certificate authorities For different organizational organizational units for different environments Let me do a time check because I I don't have a clock and do want to end up leaving time for for questions So there there's quite a bit to unpack For spiffy alone and for for spire I didn't get I But I in hindsight I should have included into the slides a little bit more around The attestation workflow and how do we perform that for different platforms? But once you have the agents and you have these workloads as I mentioned initially Well We target heterogeneous environments. 
So whether this is a Kubernetes node running on bare metal, or an EC2 instance, or an ARM64 node running at the edge, we have attestation plugins that know the different attributes for that system, which they can resolve and attest. So once we've established trust to the agent, and we know, well, this is this particular image build, and it is orchestrated by this entity, and we have all this other metadata, we've established trust at that level; and again, those could be, respectively, any of the different supported platforms. You will find on the GitHub repository what those are. And when the workload comes up, we perform this multi-factor attestation check again, at the different levels: what do we know from the kernel; what do we know from the layer between the kernel and the underlying VM; what do we know from the VM, or from the bare-metal OS instance; what do we know from the hardware root of trust?

We recently added support for compilation for ARM64. The level of support was: well, you can run it on ARM64, but we don't officially run tests with every build. We recently, through a community contribution, added full support. We are in the process of migrating our build pipeline from Travis to CircleCI, and we're not sure if we're going to be able to continue doing tests for ARM64. So it does work; there needs to be some discussion among maintainers about where we fall on that, and it's something we should have an update on shortly.

A noteworthy collaboration we've had over the past couple of months has been with the Parsec project. If you're not familiar with Parsec: Parsec is a platform for performing crypto operations. It essentially abstracts HSMs and TPMs, and it gives you an API and a wire protocol to integrate with them securely. If you're running at the edge and you have multi-tenant workloads, how do you restrict and isolate access, so one workload can do crypto operations against the same secret store and be entirely isolated from any co-resident other tenant workloads? We are working on providing a SPIRE identity framework for that. There's been some progress there, so we're hoping it's something we can showcase and demonstrate to the community at large: the integration of the two projects. I'd be curious if that's something of interest to this group. And then we recently conducted an API refactor of the SPIRE server and agent APIs, to add support for workloads that may not run on a node. I talked about node attestation, but certainly there are considerations like: well, what if I don't have a node? What if I just need to attest the workload directly against the server, as would be the case in certain mesh deployments, or in the case of serverless deployments running in the cloud? So that's in the pipeline.

Thank you. Where's a Slack channel, or somewhere, where people can follow up more with you and the rest of the SPIRE community? Yes, the Slack is spiffe.slack.com. If you go to spiffe.io, or to the SPIRE GitHub page, you're going to find the link to register for the Slack directly. Sounds great. Thank you so much, Andres. We do have a Q&A question; we'll come back to it later. Right now, we're going to go on to the next presenter. Hey, thank you. All right, we have Wally with the NATS team, who is going to give us a presentation on messaging. Okay, so thank you. Go ahead; you should be able to share your screen, Wally. Yeah, seems okay. Looks good. Okay.
So, my name is Wally Quevedo, from the NATS team. I'm working right now at Synadia Communications, where many of the NATS core maintainers are working, and I've been involved in the NATS community since almost the very beginning. The NATS project started with Cloud Foundry, and I was operating platforms that were using Cloud Foundry with NATS as the message bus. So since around circa 2012 I have been operating NATS-based systems and developing using NATS, and it's one of the technologies that I just really, really like. I finally got around to writing a book about it a couple of years ago; it's Practical NATS, and I have it here behind me if you want to check it out. You can find me on Twitter as @wallyqs as well. I maintain a lot of the Kubernetes tooling and the Ruby, Python, and some of the Go tools in the ecosystem.

So NATS is actually fairly old. I think it's probably one of, if not the, oldest projects right now in the CNCF, in terms of the first implementation, which was in Ruby. The first commit for that one was in October 2010, so next month is a big milestone for the project. But essentially the protocol has been the same, and the traits of the project have always been about being performant, very simple, very secure, and highly available. Since, I think, five years ago, there has been a significant uptick in the community, and now there are more client implementations besides the officially supported ones: over 30 different client implementations now.

So this is essentially a text-based protocol using TCP-based connections. It's essentially the same, so you can still connect using clients from more than eight years ago to the latest version of the servers. We say that it's lightweight because it is very small and not very verbose as a protocol: around 10 megabytes in binary size, a very small Docker container, and it takes very little configuration. The clients only need to know what endpoint they need to connect to, and the credentials; and in NATS v2 we already have namespace isolation of the subjects, so you can have multi-tenancy, and you just present the credentials to an endpoint to establish which account you belong to. And the API is fairly straightforward. So these are some of the officially supported clients: Go, Ruby, Java, Elixir, and Python (Python 3). But in this talk, what I'm going to show is a very simple client using MicroPython on one of these little boards, to make some LEDs blink.

NATS, in a nutshell, is all about streams and services. So it's a pub/sub system, but in terms of what you can do, the abstraction that fits much better is this: a stream is a sequence of messages that you can consume, what you could say is a flow of data. Or you can also have services, essentially RPC endpoints, where you can send a request and expect a response back, very much like a system like gRPC, or plain protocols like HTTP: you just make a request and expect a response back. Those types of systems can be built with NATS, but NATS gives you this mix of being able to consume a stream, and to await not only a single response but even multiple responses, with built-in load balancing as well, by using queue subscriptions. Everything revolves around using subjects.
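(To make "text-based protocol" concrete, this is roughly what a minimal session looks like on the wire, with one client subscribing, another publishing, and the server delivering. The subject and payload are made up; the verbs are the NATS protocol's own:)

```
SUB greet.hello 1          <- client A subscribes, choosing subscription id 1
PUB greet.hello 5          <- client B publishes a 5-byte payload...
hello                      <- ...which follows on the next line
MSG greet.hello 1 5        <- server delivers to client A's subscription 1
hello
```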
A subject is what other systems call a topic. You can use the dot character to separate the tokens and match them with wildcards. For example, you can subscribe to everything that flows through the system by using the greater-than (`>`) wildcard, and audit all the messages that are flowing through NATS. And because it is still a pub/sub system (the request-response is actually built on unique, addressable inboxes for each one of the clients), all of that you can also inspect.

In terms of the client API: for services, we make requests and get responses; and for streams, we can publish messages and subscribe to a sequence of those messages. And either of those you can make load-balanced by using the queue subscription API call. Making them load-balanced means that whenever a message is published, from a certain group of interested clients only one is going to receive the message, and that one is going to be randomly selected by the server.

Publish and subscribe: essentially, you publish on foo, and you have a number of clients all subscribed to foo, and everyone will receive the message, if they are connected to the system. So by default this is an at-most-once delivery system; you have to be connected to the system to receive. There are some enhancements to NATS. There's a project named NATS Streaming, which is an API on top of NATS that gives you at-least-once delivery with a very similar API; and there's also another available project called JetStream, which provides these streams not using the same request-response protocol NATS Streaming uses for at-least-once delivery, but using core NATS APIs for persistent messages.

And for services, you can use the request-response API. Those are, under the hood, also published as simple messages, but they use inboxes so as to have one-on-one communication, and this can be load-balanced by naming the subscribers into a group. So you can have everyone subscribed to foo, but they're part of a group of subscribers named "workers," and whenever I publish something on foo, only one of them is going to receive it.

All of those subjects can belong to a single account, or subject namespace. So in this example, you can have two separate accounts, let's say the acme and cncf organizations, and by definition they're going to be isolated: there can be no data sharing between the services that belong to the acme account and the ones that belong to the cncf account, unless there is a permission, a binding between the exports and the imports from those two different accounts.
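(Here's a small sketch pulling together the publish/subscribe, queue-group, and request/reply APIs just described, using the official Go client. The subject names, URL, and payloads are made up for the example:)

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	// Connect to a NATS server (assumed to be running locally).
	nc, err := nats.Connect("nats://127.0.0.1:4222")
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Plain subscription: every subscriber on "greet.*" gets a copy.
	nc.Subscribe("greet.*", func(m *nats.Msg) {
		fmt.Printf("got %q on %s\n", m.Data, m.Subject)
	})

	// Queue subscription: of all members of the "workers" group,
	// only one (picked by the server) receives each message.
	nc.QueueSubscribe("jobs", "workers", func(m *nats.Msg) {
		m.Respond([]byte("done")) // reply into the requester's inbox
	})

	// Stream-style publish: fire-and-forget, at-most-once delivery.
	nc.Publish("greet.hello", []byte("hi"))

	// Service-style request: publish and await a single response.
	resp, err := nc.Request("jobs", []byte("work"), 2*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("service replied: %q\n", resp.Data)

	time.Sleep(100 * time.Millisecond) // let the async subscriber fire
}
```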
So that way, you can have multiple different teams being able to freely use all the subjects they want, however they want, but exporting the services they want explicitly, in the configuration, to other teams that want to import them. That simplifies some of the permissioning, which you can also define: for example, a certain user alice can only publish on foo, but cannot subscribe to bar. And similarly for services: you define whether a service is shared with another account, another namespace or team, that is, whether they're able to make requests to the service from another account.

So NATS v1 was essentially what we'd call a silo technology. That means that the classical application that was using NATS was within the same, what to say, availability zone, the same network domain, the same data center. And usually there was a load balancer behind which you would make an HTTP request, for example, and then for the internal communication within the microservices you'd use NATS. So NATS really excelled at that. But for a couple of years now, under the vision of making NATS a global utility that you can use to connect everything, the types of network topologies that you can achieve with NATS have evolved quite a bit. So you no longer only have single servers that can form clusters (those tend to be within the same network, say the same data center); you can also create clusters of clusters. So you can have multi-region, global NATS clusters: a cluster in San Francisco, another in Virginia, and another in Europe, and you create gateway connections between these multiple clusters, and those special types of connections follow a different protocol, to spread the interest as it travels from one cluster to another cluster. So basically, you can now stretch NATS across more than one network. You can create superclusters where you can publish a message from any of them, and then, depending on the interest, it will be consumed by a completely different cluster. And in case you're using queue subscriptions and the services, because they are load-balanced, there's automatic failover. For example, when you have a service that is running in one zone, but all the subscribers or services in that data center fail, then any other that was available, with a longer round-trip time, will be able to fail over transparently to a different data center. So those types of mechanisms are now built into NATS.

And to make it even more interesting, you can have leaf nodes, which basically extend the authorization domain of NATS. And this can be daisy-chained, so you can have multiple of them. And that is actually what I'm going to be using for the demo that I'm going to present right now, because these leaf nodes are a very good fit for IoT devices that have more constrained resources: you cannot run a NATS server there, for example, but you can have a basic client that is able to hold a persistent connection. So in this case I have a very simple device here that is able to use a less-than-200-line NATS client to consume messages.
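(For orientation, here's a hedged sketch of the server-side config for the two features being described: an account boundary with an explicit export/import binding, and a leaf node dialing out to a bigger deployment. The hostnames, ports, users, and subject names are all made up for this example:)

```
# Hypothetical nats-server config fragment.
accounts {
  ACME {
    users   = [ { user: alice, password: secret } ]
    exports = [ { service: "billing.payments" } ]   # shared explicitly
  }
  CNCF {
    users   = [ { user: bob, password: secret } ]
    imports = [ { service: { account: ACME, subject: "billing.payments" } } ]
  }
}

# Local leaf node extending a remote supercluster over TLS.
leafnodes {
  remotes = [
    { url: "tls://connect.example.com:7422", credentials: "/etc/nats/leaf.creds" }
  ]
}
```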
In this case I have a very simple board here running a NATS client of fewer than 200 lines, and because we're using a leaf node it can still connect to a much larger supercluster, with TLS, and securely publish messages to any client that is connected. So basically with NATS v2 we went from a single region to arbitrarily complex network topologies, with leaf nodes, superclusters, and clusters of clusters.

Okay, so now I'm going to show how this looks. Let me check if there are any questions first... okay.

In NATS version two, centralized authorization works by exchanging JWTs. In this case I have my own credentials, and the credentials define the permissions I have: on which subjects I can publish. Here I have a subject named led.toggle that I can publish on from my computer, but right now there is no client connected, so let's start the client. But first I need to make sure that we have the leaf node. Okay, here's the leaf node, defined by this snippet at the bottom: a leaf node connection that within my network is at .144, on port 4222. The MicroPython client is going to connect to the leaf node that is local to me, but anything that travels from my network to the supercluster goes over a TLS connection. And because this is also a NATS server, I can mix different authorization schemes. For example, the previous presentation was about SPIFFE, so you could actually do certificate-based authorization: I provisioned certificates for each of the clients that will be connecting, and can say that a user with a given name gets certain permissions. I can do this transparently, without having to reconfigure anything on the supercluster, because we're still within the leaf node. So you can repurpose this leaf node just for the IoT- or edge-related communications; this is just an example.

So I have the leaf node running and connected to the supercluster, the cluster of clusters; I'm connected to one of the regions closer to California. Now I have a session with MicroPython, and I will make a subscription to led.toggle. Whenever I receive a message on led.toggle, traveling from anywhere in the supercluster, in this case through my leaf node, the LED is going to toggle and blink.

Now I'll reload, and I have subscribed to NATS. I can see in the logs from the NATS server that I have a subscription on led.toggle, the first subscription, and the leaf node has forwarded the interest to the supercluster. Now I will publish a message, and if all goes well you should be able to see that it is blinking. I'll send another message, and it should toggle. And yes, again, it's a very straightforward client; I call it unats, and you can find it on my GitHub. I tried to do better at managing memory, but the protocol is still very simple to follow, essentially. That's it for the demo. I hope it's simple enough to get you started using NATS for fun edge and IoT activities.

Okay, so if there are any questions... yeah, okay, let me go through the questions. NATS versus MQTT: so, we're actually working on the roadmap to support MQTT.
There's a branch in the NATS server for it. So yes, MQTT is something we have been looking at; we have heard from people who like that it has many officially supported clients, and the two are a bit similar. As for performance, NATS is very well known for its performance. But yes, we're working on integrating better with MQTT.

The GitHub link, I'll share it here; it's basically wallyqs/unats.

On the MQTT pub/sub model: by default NATS has only at-most-once delivery, and you need either JetStream or NATS Streaming to get at-least-once delivery, which I think is one of the QoS levels supported by default in MQTT.

Is there any batching of messages to increase bandwidth or minimize latency? By default the NATS maximum message size is one megabyte, but that can be tuned as well. That's actually a great question to ask the experts in the NATS Slack. The team members on the NATS project have been building messaging technologies for a long time, so there's a lot of expertise in the NATS Slack if you want to chime in and ask there. You can find the NATS Slack at slack.nats.io, and there's a community docs site as well. We have a website now, so you can shoot us an email at info@nats.io, or reach out to me at wallyqs on Twitter and GitHub. So that's it for my talk. Thank you.

Thank you, Wally, great stuff. So far we've covered Falco, SPIRE, and NATS. Now we're going to have a quick five-minute break, and when we come back we'll take a look at CNI-Genie, Network Service Mesh, and the Operator Framework. Thanks, everyone. Feel free to keep adding questions: click the Q&A button in Zoom if you have any more, for anyone that's spoken so far.

Check, check, sound. Yeah, we've got you there; could you try your screen share also, please? Sure, just a second. Hello, I'm Susan... oh, there we go, perfect. Okay, Alex, we have you there, so you can take off your screen share, please. And Susan, can we do another check on your screen share? Yeah, it's already shared. You're now a co-host, so you should be able to do that. Alex, could you just take your screen share off? Excellent. Okay, let's wait for the break to finish. Yep, you can stop sharing your screen now. Thank you.

All right, thanks everyone, and welcome back from our break. We looked at NATS, SPIRE, and Falco, and next up we have Susan with CNI-Genie. I'm going to stop sharing and you can take over.

Okay, is my screen visible? Looks good. So hi everyone, I'm Susan, from the CNI-Genie team. I work at Huawei Technologies India, and my area of work includes Kubernetes, CNI, and CSI. Today I'm here to present our project, which is in the Kubernetes networking area: CNI-Genie, a user-friendly multi-networking plugin for Kubernetes. This project is a CNCF sandbox project, which got included early this year. So let's get into it: why did we create CNI-Genie?
Okay, so as we all know, in the Kubernetes ecosystem a particular pod can attach to only a single network. You may have multiple network plugins available, but at any point in time you can use only one of them. Yet there are use cases where you want to toggle between the available network plugins depending on your needs. On the left side of the slide, it is "A or B or C or D"; our aim is to make it something like "A and B and C and D", selected dynamically.

The basic problems we want to solve are the first two of the supported features listed here, and we will go through each of these in the coming slides. So let's get started. The two key features, or problem statements, this project tries to solve: one is dynamic plugin selection, meaning a user can install multiple network plugins in a Kubernetes cluster and select any of those plugins to get an IP for their pod. The second is multi-IP assignment: a user can select multiple plugins and get IPs for their pod from different plugins.

This is how a use case looks in a Kubernetes cluster. Here I have put a snapshot of a Kubernetes pod that uses CNI-Genie to get its IP addresses. As shown in the snapshot, we set an annotation called cni, and in this example Flannel and Weave are the installed network plugins. As part of the cni annotation we specify "flannel,weave,flannel", which means I want to get three IP addresses: two from Flannel and one from Weave. Once I create this pod, I end up with a pod with multiple IP addresses, and you can see the multi-IP preference list, which shows all of the IPs obtained and the interfaces they are assigned to in the pod. So this covers the two key problems Genie tries to solve: dynamic plugin selection and multi-IP assignment.

In addition, Genie also supports network attachments and network status. As many of us know, there's a Kubernetes multi-network plumbing working group which came up with a de facto standard for multi-networking, and Genie accommodates those standard specifications in terms of network attachment definitions and the resulting status objects. Through this, a user can specify IP provisioning and see the outcome. Here we have created network attachment definitions, per the specification in the de facto standard, describing two networks: one Weave and one Flannel. After creating these custom resources, the pod is created using the Genie plugin, and we can see the network status and the networks that are part of this pod. So Genie is one of the reference implementations of this standard.
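A minimal sketch of the multi-IP request described above, using client-go; it assumes a cluster where CNI-Genie, Flannel, and Weave are already installed, and is equivalent to applying a pod manifest carrying the cni annotation shown on the slide.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "multi-ip-demo",
			// The "cni" annotation is how Genie selects plugins, per the
			// talk: two interfaces from Flannel, one from Weave.
			Annotations: map[string]string{"cni": "flannel,weave,flannel"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "app", Image: "nginx"},
			},
		},
	}

	if _, err := client.CoreV1().Pods("default").Create(
		context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```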
Much before the standard, Genie had a concept of a logical network. When we install a plugin, that plugin accommodates a particular network range. What if the user wants to split this range, or get an IP address within a very specific range? That's the reason Genie came up with the logical network CRD, a custom resource. As shown in this snapshot, there is a custom resource object whose specification says which plugin we want to use and which subnet. Say Weave was actually installed to work on 10.0.0.0/16; now I want to split that network into a /24 segment and get IPs in that particular range. So I can create a logical network and reference it in the cni annotation. In this case, since a network is mentioned and cni is not, it means I don't want to get an IP the plugin's way, I want to get it the logical-network way. We can see here the IP coming from the /24 range we specified; otherwise it could have been something like .0.1 or .0.2, since the entire /16 range would have been a candidate. So in this case I am restricting the range to a desired range.

An extension of this is our network policy support. We have already seen, through logical networks, a way to split a bigger network into smaller ones. So consider a use case where there are logical networks with different subnets, and two pods using two different subnets; communication between these two pods can be controlled at the network level through network policies. We know Kubernetes has a NetworkPolicy object where, for a particular pod, we can define rules about whom it can talk to, or which set of objects it can talk to. In this mechanism, we create logical networks net1 and net2, just as we've seen, and then we have a way to specify how the communication is controlled. In this example, the snapshot shows a standard Kubernetes NetworkPolicy object carrying a Genie network policy annotation with two things: a network selector and peer networks. The network selector is net1 and the peer networks is net2. By default, since they are all part of the same underlying network, all communication happens freely, but our aim is to restrict some of it. So the expected behavior is that net1 and net2 can communicate with each other, but net1 cannot communicate with any network other than net2. If there are other networks, such as net3 and net4, they can all still communicate with each other; we have only isolated net1 to communicate solely with net2. Similarly, in the next case net1 can communicate with net2 and net3. This is what we call network isolation, done using the network policy object, with the isolation applied at the subnet level.
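Purely as illustration of the logical network CRD described above, here is a hedged sketch expressed as a Go struct marshaled to JSON. The group, version, and field names are assumptions, not the project's actual schema; check the CNI-Genie repository before relying on them.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical shape of a CNI-Genie logical network, based on the talk:
// a named /24 slice carved out of an installed plugin's /16 range.
type LogicalNetwork struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
	Metadata   struct {
		Name string `json:"name"`
	} `json:"metadata"`
	Spec struct {
		PhysicalNet string `json:"physicalNet"` // backing plugin, e.g. weave
		SubSubnet   string `json:"subSubnet"`   // the /24 carved out of the /16
	} `json:"spec"`
}

func main() {
	var ln LogicalNetwork
	ln.APIVersion = "alpha.network.k8s.io/v1" // assumed group/version
	ln.Kind = "LogicalNetwork"
	ln.Metadata.Name = "net1"
	ln.Spec.PhysicalNet = "weave"
	ln.Spec.SubSubnet = "10.0.25.0/24"

	out, _ := json.MarshalIndent(ln, "", "  ")
	fmt.Println(string(out))
	// A pod would then select this range with a network annotation
	// naming "net1" (instead of the cni annotation), per the talk.
}
```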
We also have another feature called default plugin support. Some use cases are like this: users are interested in getting multiple networks, but they don't want to choose the networks for each pod; they want multiple IPs and are okay getting them from any of those plugins. So we have a feature called default plugins. The user can specify a default plugin set, for example: every pod he creates needs three networks, in the order flannel, weave, flannel, meaning two from Flannel and one from Weave for all his workloads. He can just specify this in the Genie configuration file and go on creating his workloads without mentioning anything about the network; it will be taken care of internally to provide multiple networks in this fashion. This is how it works: here we see eth0, eth1, and eth2, with eth0 and eth2 coming from Flannel and eth1 coming from Weave. We can also state which network should be assigned to which interface by specifying the interface name; that provision is there too.

There is another feature, a bit ambitious I must say, called smart plugin support. The idea behind this mechanism is that the user shouldn't have to bother selecting the plugin that is best for his use case; instead he can give specific criteria, like network usage or cost, and based on those criteria Genie can be made to pick the best plugin. We implemented this feature just using network usage statistics from cAdvisor, so it is at a very preliminary stage, but it can be enhanced to consider multiple parameters and really bring it closer to a smart plugin.

That's about the feature set of CNI-Genie. Like some of our friends mentioned, we have not covered any use cases specific to edge, but our idea was to introduce the project so that we can think of use cases, or of how existing features can be enhanced, to support edge. For reference, this is the GitHub link, and we also have a Slack channel, cni-genie.slack.com, where you can post your questions, help us find more use cases, and add more features to the project. That's all I have; thank you for listening, and I'll get in touch with you in the Q&A.

Hi, thank you so much for that, Susan. Are there any of the Q&A items you want to answer directly? Yes. One of the questions is whether the IP address association is in the same order as the cni annotation specifies; the answer is yes. And how is the default route handled for pods with more than one IP, and can it be set? Right now the default route is only via the one interface, and we plan to support secondaries, and services for the secondary networks, as well. And how do you specify the association with a respective logical network? As I mentioned, we create a logical network by specifying the plugin and the subnet, and it can then be used directly in the pod annotation by specifying the network name; that way the logical network will be used for the pod.

All right, are there any other questions? Anyone can add them... Looks like we're okay. Thank you so much for the presentation. You can follow up in the ONES Slack channel for cloud native networking to ask more questions about CNI-Genie, as well as in the CNI-Genie Slack, to continue those discussions.

Next up we have Frederick Kautz from the Network Service Mesh team. What's up, can you hear me? Okay, I can hear you. Do you have access for sharing your screen? I appear to; let me give it a try. All right, I'll stop my share and you should be ready to go. Perfect, so you should be able to see my... It seems like we have an audio loop, Frederick. Okay, there we go; Frederick should rejoin.
We do have someone from the LF A/V team to help with that. Let's give it a moment for him to rejoin; if the problems continue we can move on to the next talk. Okay, I've switched microphones, is this any better? Yeah, you sound good to me; so as long as you can share your screen, we can continue on. Okay, let's give this a try.

Okay, so what is Network Service Mesh? Network Service Mesh is a cloud native L2/L3 service mesh. It is orthogonal to Kubernetes and other orchestrators. What I mean by this specifically is that it doesn't use CNI; it is designed specifically to avoid conflicting with those particular environments. And the reason we did that is that we also support other types of networking interfaces besides kernel interfaces: we have support for shared-memory-based interfaces, devices, and so on, in a way that plays well with Kubernetes and also with things that are off of Kubernetes. We also designed it with zero trust as a first-class citizen; you'll see what I mean by that in a few moments. We also have APIs designed for integrating highly heterogeneous environments: everything at the top starts with an API, and that API does not know whether you're on Kubernetes or something else; it's designed to be agnostic to that. And we focus on a smaller primitive: we focus on connections, as opposed to the subnet and VLAN, as the core primitive.

I'll fly through a couple of things real quick, because we have a demo to show. In short, there are a few patterns that we look at as non-solutions. First is connecting Kubernetes networks together directly: what ends up happening is that you have to deconflict every network that you add, and the complexity of this gets out of hand very quickly, especially when you start having to deconflict a variety of subnets and synchronize services. A second pattern we're seeing is people trying to ferry everything under a single SDN. Its semantics are not particularly compatible with the way Kubernetes works, and it also introduces problems when you try to join it up with other solutions, or run it in a different environment; this starts to break down. There's also another path, inter-cluster gateways, which are a potential solution for some of these, but the problem you run into is that these are designed for L7, not for low-level connectivity; you still have to set up the underlay to make them work. So it's still a useful pattern, but you still need something to establish the low-level connection to begin with.

So the realization is that the connectivity domain should not be wired directly to Kubernetes. Instead, you should be able to make custom-shaped domains: if you want to do database replication, only the things that need to connect to that database specifically, the pod, not the whole Kubernetes network, gain access to it. You could also create an Istio domain that spans multiple clusters. There are examples of these in other NSM videos you can find on YouTube; I won't jump too deeply into that particular path here. We are moving from this traditional environment to a zero-trust environment, where we connect workloads to workloads rather than networks to networks.
And a nice side effect of this is that if an attacker enters your network, they don't immediately gain access to other systems. We depend on things like SPIFFE and SPIRE, which you saw earlier, to get identity into your systems. We also depend on Open Policy Agent, which allows you to declaratively state the types of transactions you want, even across multiple systems and multiple networks. We do multi-domain federation, where based on the source and destination you can have different CAs, each owned by a different organization, with trust set up between them. That allows the systems to communicate with each other as if they were on the same network, despite the fact that they're actually rooted in different trust domains, and the administrators control the trust as to whom they allow into the system. We are also set up to work with environments where you don't know exactly what the system looks like when you connect in: you don't assume the underlay exists, so you have to establish that connection.

We can render a service function chain, which in this example might look like this: an application trying to connect to a secure corporate intranet goes through a firewall, an intrusion detection system, and a VPN, established by policy. What this looks like in NSM is that we establish a control plane and a data plane and separate them out. We have an endpoint that controls a firewall, an endpoint that controls an intrusion detection system, and an endpoint that controls the VPN, and we give each one of them a unique identity. When they have a unique identity that is cryptographically secure, we're able to establish those connections and perform negotiations at the control level, which then feed information down to the individual devices being configured. And a device such as the firewall could be a Kubernetes-based one, or it could be a physical device; these are all abstracted away behind this top-level distributed control plane.

So in essence, in this particular example, Sarah wanted something that looks like the first diagram, and what it ends up looking like is the second: SPIFFE in each organization providing identities on each side, and from there, NSM negotiating each of these connections and establishing them, with Open Policy Agent implementing policies at each relevant location. This lets us create more interesting topologies, such as network slicing, where you could have a hospital with its identities, an edge data center with different vendors each having their own set of identities, and, apologies, there's a typo here, that should be a cloud like AWS or GCP, which then connects a VPC to a Kubernetes workload running there. It allows us to establish something that brings in only what you need to see across these domains, rather than trying to deconflict everything globally: deconflicting the whole hospital network from the whole edge data center network from the whole cloud network. It eliminates that problem and scopes it down to only the things that are local to that connection.

So, let's jump over to a demo. First, can everyone see my screen? I assume so.
So what we'll do is start by installing the clusters; this is going to use kind to establish a test environment on my local system. We're going to install two clusters, and while we're waiting for that to start up, I'll show you what some of this looks like. This one is literally just creating a cluster; you'll see NSM is installed by Helm. There are no special steps outside of that: you just do a helm install and it does the right set of installations onto a given cluster. I have two watches bringing up the two clusters, so we can see the pods as they start to come up, and this one is the second cluster coming online. Okay, so that one's deployed; it takes a few moments for it to properly download and install the initial pods, but you'll see them come up in a moment.

Okay, so these are the initial Kubernetes pods that have popped up; now SPIRE is starting, and that will provide the cryptographic identities, the X.509 SVID certificates, that we'll be using. Once SPIRE starts up, it will unblock NSM from starting. So we have both the SPIRE server and agent; it's waiting for this last one... there it is.

Now that that's running, the next thing we'll install is GitLab, and what this is doing is saying: I have a Helm chart that installs GitLab, go run it. While that is installing, let's take a look at what's inside. In the template, we are injecting a sidecar into the GitLab pod; this one is explicit, just to show it as an example, and it assigns a specific network to use. You don't have to assign one; in fact, that's considered not to be best practice. Best practice is to allow the system to choose a set of addresses and networks for you. There's a later version of this that I will eventually do in a future demo, showing the automatic selection of addresses, where it selects something that does not conflict across your whole stack, from workload to workload.

So GitLab is now running. The next thing we do is expose the network service to the other cluster. We have two clusters running; this will download the custom resources that exist and install them into the other cluster, so we're pulling them from the GitLab cluster and installing them into the client cluster. And they're now installed. Just to give you an example of what's inside one of them (it's probably the wrong one to show): it has a payload of IP, it has a name, gitlab, and a whole bunch of metadata that the custom resource carries, standardized across all of them.

Now that we have that installed, let's install the client, and we should see the client appearing in the second cluster in a brief moment. So I now have a server running GitLab on one cluster and a client running in a second cluster. Let's go ahead and log into that client. We can see in here that it has been assigned an IP address on an NSM interface; you can name this however you want, this is a default name. And if we ping from this particular system, it pings that particular system as expected.
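For orientation, here is a hedged sketch of how an NSM client pod was typically declared in this era of the project: an annotation on the pod requests a named network service, and NSM wires up the extra interface. The annotation key and service name below are illustrative assumptions; consult the networkservicemesh repository for the exact convention of your version.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A client pod requesting connectivity to a named network service.
	// The annotation key and value are assumptions for illustration;
	// NSM versions differ in the exact convention.
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name: "gitlab-client",
			Annotations: map[string]string{
				"ns.networkservicemesh.io": "gitlab", // hypothetical service name
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "client", Image: "alpine", Command: []string{"sleep", "infinity"}},
			},
		},
	}

	// Print the manifest; in practice you would apply it to the cluster.
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```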
But let's go ahead and show it off. We try connecting, and we can see there's nothing there yet; authentication fails. So what we'll do is open up localhost here... oops, I forgot to establish the tunnel properly. That sets up the tunnels so we can access that particular cluster; that's just a standard Kubernetes tunnel. We'll give it a default password and log in real quick, and we'll create a new project. We'll call it hello, initialize it with a readme, and create the project. Now that we have the new project in place, we try cloning it again with the username and password, and we can see that it works.

So what we have seen here is a multi-cluster connection, created between a client on one cluster and GitLab on a second cluster, with an overlay network where everything between those two systems is tunneled, with all the IP addressing and related information established. That concludes this particular demo, so let me jump back to the presentation.

We have our GitHub repo at github.com/networkservicemesh; that is our organization. We have meetings every Tuesday, Pacific time, that everyone is welcome to join, and we are also in the #nsm channel on the CNCF Slack. We also have work we've been doing integrating with a variety of different projects; not all of them are shown here, and this list is non-exhaustive. Just for information, the data plane we used in this example was FD.io: we were actually driving everything through a user-mode switch, which was VPP. And with that, that concludes my presentation. Thank you very much.

All right, does anyone have any questions for Frederick and Network Service Mesh? Well, you can join the cloud native networking channel on the ONES Slack and ask questions there, as well as Network Service Mesh's #nsm channel. Seeing no more questions, with that we'll move on to Alex from the Operator Framework team and hear a bit about his project. Alex, are you ready? Yeah, sure; can you hear me well? Sounds good, and you should share your screen; I'll stop my share. Let me try, just a second. Can you see my screen now? Looks good.

Okay, cool. So it seems that I have a little bit more time, so I don't need to rush. Hello everyone, it's a pleasure to be here speaking to you. My name is Alex, and I work at Red Hat on the operator enablement team. What I basically do is help partners and customers develop their operators, or get their operators onto the operator hubs, which is a concept we are also going to look at in this presentation.

What will we discuss today? I have three simple points I'd like to address. One is: what is a Kubernetes operator? The second is an introduction to the Operator Framework. And I just noticed I have a typo here involving a question mark, but that question mark makes me think about something important: it is a framework, and I'm going to show you why it is a framework, but at the same time an operator is something that is already there in Kubernetes. We just need to put the important pieces together, so we are actually taking advantage of something that is already there.
And we call it an operator. So we are going to explore this concept, and what value an operator could bring to our clouds, our workloads, our applications.

First of all, I'd like to give the general definition we can find in multiple repositories across the internet; this is the most popular one: an operator is a method of packaging, deploying, and managing a Kubernetes application by taking human operational knowledge and encoding it into software. So what does it actually mean, taking human operational knowledge? We've seen a lot of applications presented at the Open Networking and Edge Summit, and we'll see many at KubeCon and in other places. All of them have some operational toil: configuration tasks, concerns related to the outer environment, and others related to the inner environment of the application as we build it as a resource in our clusters. Managing all those pieces together, when they begin to get really big and you combine many resources in a microservices approach, becomes hard, as does keeping a deep and wide view on top of them.

In very simple words, an operator is an API extension, in the sense that we are extending what is already there in Kubernetes. We already touched on these concepts in some of the presentations we just saw, because the word CRD comes up in many, many applications: we are building custom resource definitions and custom resources to be applied to our clusters. That's already well known. At the same time, we can say that an operator is a controller. Why? Because if we build something custom, based on what Kubernetes already has, we need something to watch that object, to look after it, to fix it, and to reconcile it, in the sense that we declare the state we want for that resource, and we need some piece of software that is all the time watching the state of that resource and reconciling it.

So with that we have two important concepts, and we can also say that an operator is a deployment, because we are not using the regular embedded controllers that Kubernetes has; instead we build a custom one, one that holds all the logic we need and looks after our application exactly the way it should be. We build that deployment to run our controller inside of it, constantly talking to the API endpoint to watch our CRDs, our CRs, and the running resources built from those CRs in the Kubernetes environment, and to reconcile them to a particular state.
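Since the controller is described as an infinite watch-and-reconcile loop, here is a minimal, hedged sketch of what that loop looks like with the controller-runtime library that the Operator SDK scaffolds. To stay self-contained it watches a built-in Deployment rather than a custom type; a real operator would watch its own CRD, and the reconcile body here is only a placeholder.

```go
package controllers

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// MyAppReconciler reconciles the resources our hypothetical operator owns.
type MyAppReconciler struct {
	client.Client
}

// Reconcile is the loop described in the talk: it is invoked whenever a
// watched object changes, compares desired state (the spec) with observed
// state, and edits the cluster to converge them.
func (r *MyAppReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// 1. Fetch the object named by req.NamespacedName.
	var dep appsv1.Deployment
	if err := r.Get(ctx, req.NamespacedName, &dep); err != nil {
		// The object may have been deleted; nothing to do.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	// 2. Create or update the resources it should own (omitted here).
	// 3. Update its status field to report what was observed (omitted).
	return ctrl.Result{}, nil
}

// SetupWithManager registers the controller and what it watches.
func (r *MyAppReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&appsv1.Deployment{}). // a real operator would watch its CRD here
		Complete(r)
}
```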
From that, we can say that our application, whatever it is, can be designed after a certain pattern we already know. All objects in Kubernetes have these related fields, or if not fields, owned pieces of metadata relating the resources we have. We have the type metadata, which tells us which API we are reaching, which endpoint, and which version inside the API. And we also have the object metadata, which brings the name, the namespace, annotations, labels, and things like that. We also have our spec field, with everything we want for our application in terms of state: every aspect of its configuration and its running state, for example replicas, or specific data to be available to the application, volumes to be mounted, everything we are already used to in Kubernetes. And the status field is the one that should be reporting, all the time, the status of that whole object.

To try to make a little more sense of this, let me see if I can grab a little of the Kubernetes API. Here's the canonical API, where we can find every single object in Kubernetes. Let's take something very well known, probably to all of us. This is the beginning of the API path; if I go into core/v1, I'm looking at a specific API endpoint of version one, and if I go into types, I can find all the specifications a Kubernetes object can have. So, to the question behind CRDs: if I find here, for example, the PodSpec, we have the spec field for a pod, with volumes, containers, ephemeral containers, termination grace periods, and everything you can have in a pod. What if we could design our application to have exactly something like that? That is a CRD; that is what we are doing with CRDs: we can build the data structure for our application the way it should be.

Getting back to the presentation: when we look into that API and go a little further, what we get from it is a whole list of behaviors. We have the watches, we have the basic operations we use to request things from that API, we have patch operations, all kinds of things that are already available to us when we build a CRD. This is why it's so useful. And an operator ends up putting together the controller, which is the infinite loop I was describing, all the time looking after our application, tracking the types, tracking the objects we put in to run, reconciling its objects all the time. When we put together the controller with our CRD, which is how we model our application, we have what we call an operator, and then we pack that thing into a deployment. So it's pretty simple, right? Although it has a lot of complicated pieces if we try to do it without a framework, because we need to learn a lot about the Kubernetes API, it's conceptually simple if we imagine just these two pieces: CRDs and controllers. With that, we build the primary resource that takes the name of our application, or of the multiple microservices that actually run to represent a running application, and that CRD may own multiple other Kubernetes resources such as Deployments, DaemonSets, StatefulSets, and ConfigMaps. Basically, that's what an operator is.

And to make things easier, we have the Operator Framework. What is the Operator Framework? First of all, it is a CNCF project, which you can find among the incubating projects; it just got into the CNCF world.
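Before moving on to the framework itself, here is a sketch of the TypeMeta/ObjectMeta/spec/status pattern just described, roughly what a kubebuilder-style Go type for a hypothetical MyApp CRD looks like; the spec and status fields are invented for illustration, not a real API.

```go
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// MyAppSpec is the desired state: what the user asks for.
// These fields are illustrative placeholders.
type MyAppSpec struct {
	Replicas int32  `json:"replicas"`          // how many instances to run
	Version  string `json:"version,omitempty"` // app version to deploy
}

// MyAppStatus is the observed state, reported back by the controller.
type MyAppStatus struct {
	ReadyReplicas int32  `json:"readyReplicas"`
	Phase         string `json:"phase,omitempty"`
}

// MyApp follows the standard Kubernetes object pattern from the talk:
// type metadata, object metadata, spec, and status.
type MyApp struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   MyAppSpec   `json:"spec,omitempty"`
	Status MyAppStatus `json:"status,omitempty"`
}
```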
The Operator Framework has its main site over here, which you can check out afterwards, and it has a bunch of tools that are quite useful for developing an operator really quickly. Wrong button... okay, let's keep going.

Inside the Operator Framework project, we have a couple of sub-projects that I think are the most important for getting in touch with the operator world. One of them is the Operator SDK, a tool that scaffolds all the necessary code to talk to the Kubernetes API, so you don't need to develop everything from scratch out of client-go; you have a very well-organized, standard pattern to use with the generated code, and it also scaffolds a lot of resources for us. We have the Operator Lifecycle Manager (OLM), which is the one that gives us a bit of an app-store-like experience; I'll show in a moment what that means, because OLM acts as a catalog. And there are the community operators: a repository where we can put our operator metadata in order to have those operators published to the operator hubs. We'll talk about each of these in a little bit. We also have the Operator SDK website, which you can follow.

Talking about the Operator SDK: what it gives us is basically a tool based on the controller-runtime project. There we have controller-tools, we have controller-gen, a lot of automation inside the Operator SDK. And it just merged with the Kubebuilder project. If you are really looking to develop an operator, I'd point out another resource that I consider extremely valuable, because the documentation has been there a long time: the Kubebuilder book. If you look for "kubebuilder book" on the internet, you'll find it, and you can learn all of those pieces in a very deep way: the API itself, its parts, how to communicate with it, how to design the API around your application; the controller, how you build one, how you register one; and then what if I have multiple APIs in my application, what if I'm serving multiple CRDs, and so on. So I definitely recommend taking a look at the Kubebuilder book; it has very, very valuable information.

The Operator SDK, as I was saying, has actually merged with Kubebuilder, so you can leverage all of Kubebuilder's power, but you also get testing tools and all kinds of tooling to build your manifests, to check them and your operator metadata, and to publish your operators as applications on the operator hubs. And there's a list of things it does for us: at the end of the day, it will build and publish your container with a special Dockerfile, and you can publish that to the community operators project, which is the one I was talking about.

The Operator Lifecycle Manager, the second piece I mentioned, is the catalog; it's what builds what we call the OperatorHub. OLM is itself a set of operators that holds all the operators' metadata; I hope to make that a little easier to visualize in just an instant. It advertises installed operators, and it takes care of all dependencies.
So if you build your application and install an operator: let's say I need authentication, or an identity provider, and the other applications I'm using for that also have operators. I can depend on them and install them as dependencies, and those operators then become kind of owned by my operator, because they are being used by it. OLM also takes care of things like ensuring stability, upgrade paths, and so on.

From that, we have the community operators project, which helps us put our operators into OperatorHub.io and the OpenShift embedded OperatorHub. They are essentially the same: the repository where we put our metadata is the same, but they have different paths within it, because OperatorHub.io is 100% Kubernetes and the OpenShift embedded hub is 100% OpenShift. Some applications run seamlessly in both, so you just put a copy of the metadata in both paths; some applications require something special from Kubernetes or from OpenShift, and then you may describe your operator and make it work with the catalog differently in each of them. It integrates everything so that we can pull the operators, and the applications themselves, onto clusters, whether Kubernetes or OpenShift, that are running OLM; both can run OLM seamlessly.

Let me just see here... yeah, okay. So now what I can quickly do is show a little bit of how it looks. This is the OperatorHub inside OpenShift. I've never tried it myself, but this graphical experience can also be had on Kubernetes; this code is actually open source. Here we have an OpenShift cluster, and basically the OperatorHub receives all the metadata from the community operators project. As a quick demo, I'll try to install one operator here, and since my friend Frederick was presenting NSM: we are trying to build the NSM operator. So we have an operator here that is in alpha version; it's being migrated, I have some work to do on it, and it installs a previous release of NSM, so on OpenShift it's not fully functional, but it can demo the installation of an operator. I just filtered for this operator, and it opened this window with some instructions and information. I can install the operator choosing particular channels or specific namespaces; I've created a namespace for it, and I can provide multiple APIs if I want. Then I click install, and the operator begins to install. Everything I'm doing here can be done from the command line using kubectl, as a regular thing, because the OperatorHub is actually a bunch of CRDs that we are manipulating through Kubernetes itself. So here the operator is running, and if I click on it I can see that it is a deployment with some service accounts. I can also come back to it and deploy a whole NSM infrastructure like this: I could look at a YAML view of it, this is a very simple example, and click create, and it will try to create the infrastructure over there. Just a few pieces, and I can click on resources and see the resources being created and running.
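As an aside on the point that everything in this demo can be driven by kubectl: installing an operator that way typically means creating OLM's own custom resources. Below is a hedged Go sketch that prints what a minimal OLM Subscription object looks like; the channel, package, and catalog-source values are illustrative guesses, not taken from the talk.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Minimal shape of an OLM Subscription, the CR that asks OLM to install
// and keep upgrading an operator from a catalog.
type Subscription struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
	Metadata   struct {
		Name      string `json:"name"`
		Namespace string `json:"namespace"`
	} `json:"metadata"`
	Spec struct {
		Channel         string `json:"channel"`
		Name            string `json:"name"` // package name in the catalog
		Source          string `json:"source"`
		SourceNamespace string `json:"sourceNamespace"`
	} `json:"spec"`
}

func main() {
	var sub Subscription
	sub.APIVersion = "operators.coreos.com/v1alpha1"
	sub.Kind = "Subscription"
	sub.Metadata.Name = "nsm-operator"
	sub.Metadata.Namespace = "nsm" // the namespace created in the demo
	sub.Spec.Channel = "alpha"     // illustrative channel name
	sub.Spec.Name = "nsm-operator" // illustrative package name
	sub.Spec.Source = "community-operators"
	sub.Spec.SourceNamespace = "openshift-marketplace" // varies by cluster

	out, _ := json.MarshalIndent(sub, "", "  ")
	fmt.Println(string(out)) // in practice, pipe to: kubectl apply -f -
}
```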
I can go into the pods and watch them here to see whether they run correctly, and everything else. So this is the OperatorHub; that's the idea behind it. If I filter, for example, for networking, which is the subject of big interest here, we have a bunch of operators from companies and communities doing a lot of cool stuff. So if you have an application and you want to publish your operator to get this kind of app-store experience, with upgrades and everything taken care of, that's possible with the operators themselves.

Just real quick, to show the workflow: basically we create our operator as a deployment, and we put it on our clusters by deploying it, and we have the metadata, which is published on the community-operators GitHub. With that metadata, OLM running on your cluster, whether OpenShift, Kubernetes, or any distribution, makes the operator available, because OLM works against this community-operators catalog.

And yes, operators have a maturity model with multiple possible phases. Basic install is just installing the application and taking care of the installed version. With the Operator SDK we do have the opportunity to develop an operator using just Helm, not Golang, for example, but then we're capped at phase two. Then, if you have an Ansible playbook that runs your application, you can bring that into your SDK-generated code and run Ansible instead of Golang, and then you can go up to phase five, which is autopilot. But if you really want the full experience and flexibility, Golang would be the way to go. Among those phases, the dream, and we have quite a few operators in that phase as well, is autopilot: horizontal and vertical scaling, auto-config, auto-tuning, noticing anomalies. For example, workloads that are stuck and not communicating can be restarted, or if suddenly you need to do some traffic shaping on the fly, the operator could configure that, provided it has the deep insights, the metrics, and the alerts embedded in it.

That gives us a lot of flexibility, because the operator sits at the center of the cluster. It runs a standard controller and talks to the Kubernetes API, so it can see your CRDs and your application, but it can also talk to external workloads, whatever they are. You name it: block storage on a cloud provider, on-prem data centers with very specific APIs. As long as you have that operational knowledge, you can code it into your operator, give the operator the right credentials, and it will go and configure what you need to integrate your cluster with your external workloads. That goes for anything represented here: cloud services, external applications that may even be legacy applications, special gateways and routers, network appliances, load balancers. We may have very special rules, or let's say configurations, that we want to apply to those to make them communicate well with an application that is already cloud native, a network service application, and so on.

So in the end, the question is: what exactly is the value we bring if we bring operators into our game?
If I want to empower my application, have a very easy way to install it, and at the same time embed all the technical knowledge around the application, the Kubernetes way and the cloud native way, I think that definition, especially the highlighted words here, is pretty well aligned with what the CNCF cloud native definition says: build and run scalable applications, no matter where, in public, private, and hybrid clouds, using declarative APIs, so that we have systems that are resilient, manageable, and observable. And observable is an important point, because the Operator SDK also delivers a built-in metrics endpoint for you to use and connect to your Prometheus instance, if you want, or whatever you prefer; you will have tools to build all of that. And that gives us the robust automation we want, with very minimal toil. So I guess that's it for me today. I hope you enjoyed it; stay safe in these hard times we are living through in the world. Thanks for your presence.

Thank you, Alex. Does anyone have any questions for Alex on the Operator Framework? Really, does anyone have any other questions or comments? You can type them into Zoom if you'd like, or add them right to the Q&A in Zoom.

I'd like to thank all of the speakers who joined us: Spencer from Falco, Andres from SPIRE, Wally from NATS, Susan from CNI-Genie, Frederick from Network Service Mesh, and Alex from the Operator Framework. Thank you all so much. If anyone has questions for them, you can join the ONES Cloud Native Networking Slack channel or any of the communication channels for the projects themselves.

Please join us for the CNCF intro to the telecom initiatives, where we'll talk a lot more about the CNF conformance test suite, the CNF Testbed, and the Telecom User Group; that's Wednesday at 1:45 p.m. Eastern. And mentioning those weekly meetings: we'd love to see any of y'all join, and the projects, all of y'all are welcome as well; we can talk about how to use these in telco use cases and domains. There's also the monthly Telecom User Group, with the next one on October 5th at 15:00 UTC. There will be talks like these and more at KubeCon in November, so please register; I'm sure there will be a continuation of more telco talks at KubeCon. So continue the conversations in the ONES Slack channel, cloud native networking, too. If you want the slides, they will be uploaded after the session to Sched; you should see them there. This recorded session should be up and available for everybody later this week. Thanks for your participation, and have a fun conference.