Hi there, welcome to the CNCF CI working group call, the first one of the year. Hello. Happy to be here. Nice to meet all of you. Yeah, hi everyone. I'm glad to have you, thanks for joining. We'll get started at about five past the hour, allowing a few more minutes for folks to join. If you have any agenda items, please feel free to add them to the Google doc, and please feel free to add your name as well. All right, great. Looks like we've got about 18 folks on today's CNCF CI working group monthly meeting. A little bit about this meeting: it's held once a month on the fourth Tuesday at 12 noon Pacific time, and this is our first meeting since October due to the holidays. Thanks so much for adding items to the agenda; this should be good. These meetings are recorded and the recordings are available on the CNCF YouTube channel. I guess we'll jump right into some upcoming events. I think we're good to have this CI working group call on the fourth Tuesday of February, that's the 25th at 12 noon Pacific, as well as March 24th and April 28th. There do not seem to be any conflicts with any conferences or holidays that I can see. The next KubeCon + CloudNativeCon will be in Amsterdam at the end of March, March 31st to April 2nd. The announcement should be public tomorrow, so we'll be able to share that after tomorrow. Looking forward to it. About four weeks after that will be the Open Networking and Edge Summit in Los Angeles. There's still time to submit to the CFP; it closes end of day Monday, February 3rd, and the ONES schedule will be announced on Thursday, March 5th. Accompanying these notes is a shared slide deck, which you're welcome to drop your slides into if you'd like. Otherwise, I can unshare my screen. We'd love to see a demo on Litmus, to see chaos engineering in CI pipelines. Uma, are you here on the call? Yes, I am. Hi, thank you so much. All right, let me go and share my screen. Hopefully you can see my screen. Awesome. 
Hello, my name is Uma Mukkara, and thank you for giving us an opportunity to present Litmus here at this working group, as well as to talk a little bit more about how Litmus can be helpful in adding some new functionality into the CNCF CI pipelines themselves. I'm here with a couple of my colleagues, so we will do this in two parts. First, I'm going to introduce the project and the technology and how we think it will be helpful to this group, and then my colleague Raj, and probably Karthik as well, will do a quick demo of one of the pipelines they were able to build. It's probably going to be a recorded demo, because running a pipeline takes about 20 minutes or more, and I'm sure we don't have that much time in this call. So I will take about 10 to 15 minutes to skim through the slides pretty quickly, and then Raj will take about 10 more minutes. With that, let me talk a little bit about myself. I'm a co-founder and COO at MayaData, a cloud native data management company for Kubernetes, and I'm a co-creator of the following open source projects: OpenEBS, which is cloud native, containerized storage for Kubernetes; Litmus, which is specifically of interest here, a chaos engineering tool set for Kubernetes that is helpful both in CI pipelines and for doing chaos engineering in production; and KubeMove, a cross-cloud control plane for data movement across Kubernetes. As you can see, we're all about data management on Kubernetes, and that's what led us to create these three projects. Specifically here, I want to talk quickly about what led us to create Litmus in the first place and how we embraced chaos engineering in a cloud native way, then the principles of cloud native chaos engineering, and then some quick use cases. 
I'll talk about what Litmus is today, and then I'll introduce an exciting concept called the chaos hub. It's very similar to OperatorHub, but it is for chaos experiments for cloud native applications, and we'll see an example of the chaos hub. Then we have some proposals, which we shared with some of you during the last KubeCon, where they were received very enthusiastically. Thank you for that; we were then asked to talk here and do a demo. So I'm happy to report that we were able to clone the cross-cloud CI project and implement Litmus as a reference implementation of chaos for CoreDNS, which is what we'll be demoing. The genesis really started about 18 months ago, or a little more, when we wanted to start building chaos pipelines for our first open source project, OpenEBS, which is now a sandbox project inside the CNCF itself. At that time, our idea was that we needed to be able to convince the community that this project is well tested. We wanted to show them that a lot of negative test cases were run, that end-to-end (E2E) tests were run in a very complete manner, and we wanted to give the community an opportunity to run those tests themselves. So we started looking at what chaos tools were available on Kubernetes to run those tests for an application like OpenEBS. Kubernetes itself was still coming up; the tool sets around it are relatively better now, but two years ago you needed to build the tool sets yourself. So we started building the actual E2E tests in a Kubernetes native way, which means we started writing them as jobs. We did that for OpenEBS, but soon we realized we needed to run the same jobs for other applications on Kubernetes as well, because OpenEBS is the storage underneath those applications. 
So then we realized chaos engineering is a need for all of these applications, and that this infrastructure should really be opened up to a larger community; it could become a project in itself, with a community built around it. So we announced the Litmus project at KubeCon Europe 2018, we're now almost two years from there, and we recently did a 1.0 release of Litmus. In the process we defined a real chaos infrastructure architecture, and we also started the chaos hub, which is really the central piece for the community to come together and build chaos experiments around various applications for Kubernetes. So that's the genesis; now let me actually define cloud native chaos engineering. We all know what chaos engineering is: breaking things on purpose to think about resiliency. Cloud native chaos engineering is doing chaos engineering in cloud native environments in a cloud native way, or in a Kubernetes native way. So this specific topic is about chaos engineering for Kubernetes, or for applications that run on Kubernetes infrastructure. Why is this so important? I took a slide from Dan from one of the GitLab conferences, where he talked about the mix of the software. If you look at the cloud native stack, you have your 50,000 lines of code, but it's really sitting on top of a very dynamic environment. Linux may be the most stable piece in the stack, because it's been around for a long time, but in Kubernetes there is a lot of change going on and it's a huge code base, and the applications that run on Kubernetes are running as microservices, which means updates come very, very often, and that's many more lines of code. 
So if you look at this entire thing, the code that you're really bothered about is less than one percent, with a lot of change going on underneath. So how can you really define the resiliency or quality of that? CI pipelines are important, but you run CI pipelines in one environment and you run production in a totally different, changing one. The real answer to that is chaos engineering: when your environment is very dynamic and you need to continuously validate it, the answer to keeping your system resilient is doing chaos engineering. That's the big difference. But for cloud native chaos engineering you need one specific thing: whatever you do, you need to do it in a GitOps model. You need to have YAML manifests so that you manage chaos the same way you manage other applications, where you define the YAML spec, you do kubectl apply, and things happen; somebody is watching for that YAML, and an operator does its job. That's exactly what cloud native chaos engineering is. In other words, if you do chaos engineering through YAML files, it's really called cloud native chaos engineering. So what do we need? For regular development you have the spec defined by Kubernetes, including some CRDs, but for chaos testing or chaos engineering you need chaos resources, that is, new CRDs. That's what we started defining. In that process, we defined some CRDs, developed and stabilized an operator, and also built some chaos metrics around it, so that you can actually go and see: I did some chaos, what's happening, and can I put the results of the chaos over a period of time into perspective? That is the set of chaos resources we built, and we call it cloud native chaos engineering. 
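To make the GitOps idea above concrete, here is a minimal sketch of what such a chaos manifest can look like. This follows the Litmus v1alpha1 ChaosEngine shape as I understand it around the 1.0 release; treat the field names, the `engine-nginx` and `nginx-sa` names, and the env values as illustrative assumptions, not a verbatim spec:

```yaml
# A ChaosEngine ties a target application (selected by namespace,
# label, and kind) to one or more chaos experiments pulled from the hub.
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: engine-nginx
  namespace: default
spec:
  appinfo:
    appns: default          # namespace of the app under test
    applabel: app=nginx     # label selector for the target pods
    appkind: deployment
  chaosServiceAccount: nginx-sa   # RBAC identity the experiment runs with
  experiments:
    - name: pod-delete      # experiment previously installed from the chaos hub
      spec:
        components:
          env:
            - name: TOTAL_CHAOS_DURATION
              value: "30"   # run chaos for 30 seconds
            - name: CHAOS_INTERVAL
              value: "10"   # wait 10 seconds between pod kills
```

Applying this with kubectl apply is what wakes up the operator; when the runner finishes, the verdict lands in a ChaosResult CR, exactly the "define a spec, apply it, somebody watches" pattern described above.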
So to summarize the principles of cloud native chaos engineering: it has to be open source, because anything that's cloud native is in general built around the CNCF, which means it's open source, Apache licensed, and so on. You need to have very generic APIs which are commonly accepted, which means it needs to be built around a community. A particular project should not say "this is exactly how you do chaos"; the chaos itself should be pluggable. You can kill a pod one way, I can kill it differently, so somebody else can write a binary or a library of their own, plug it into this infrastructure, and do that type of chaos their own way. And it should be community driven, because the APIs need to become more robust over a period of time, and the road map also needs to be driven by the community. These are the principles of cloud native chaos engineering, and I've written a blog, published on the CNCF site, that talks about this in a little more detail. With that, let me talk about Litmus. Litmus really started about two years ago. We got some good contributors in the early days, and after we opened up the chaos hub we're really seeing core contributions coming our way, with people defining new experiments and so on. There are about 600 stars overall, coming from various geographies, it's on the CNCF landscape, and there's a blog about it. One more point that I forgot to add here: a few weeks ago, on the 15th of January, Litmus went 1.0, and that really means the API set is stable. There are solid implementations, and we ourselves use Litmus day in, day out (when I say we, I mean the OpenEBS community), so there is a good amount of usage for Litmus to say that it is stable enough, it can be used in various other projects, and the chaos hub can be expanded on this set of APIs. The typical use cases for Litmus: the first is negative testing for your applications in CI pipelines. For any application, testing starts in CI pipelines, and then you need to do some negative testing, and you need not build the entire set of negative test cases yourself. Somebody could have built that for you, and you can just pull it and use it; just like there's a Docker image you pull and then run, you can pull a chaos experiment and run it. Another use case is staging testing or UAT: before going live, people want to make sure that their deployments are good, and as you have seen, only 1% of the code is what you own, so you need a way to do a lot of negative testing in staging, and you can use Litmus there. One of the other major areas where we are seeing Litmus used is Kubernetes upgrades: you may be running a big set of applications on Kubernetes, but Kubernetes itself needs to be upgraded quite often, maybe once in six months if not more often than that. So how do you validate that in your pipelines? You want to make sure Kubernetes itself is good, so you can run a set of Kubernetes chaos test cases and certify that this Kubernetes is good for a given set of applications; then in your real production you can upgrade Kubernetes. And of course, the last use case is chaos in production itself. For this presentation, we are going to concentrate a little bit more on how we can do chaos in pipelines, and CNCF CI is really about that: it's a project that defines the CI pipelines for all the CNCF projects. So how can you improve the resiliency or credibility of those pipelines by adding a chaos stage for each of those projects? Litmus is cloud native, as I mentioned repeatedly; it's open source; and it has pluggable chaos, because it's not just Litmus libraries. It also includes two other well-known chaos libraries: PowerfulSeal, which is from Bloomberg, and Pumba, which you might have heard about as a chaos tool for disturbing applications. So right now the Litmus chaos infrastructure embraces three different sets of chaos libraries; it's got the CRDs, and it's got a way for the community to contribute various experiments. These are the different CRDs: a ChaosEngine is the way you tie your application to one or more chaos experiments; ChaosExperiments are the actual experiments with the logic of the action, for example to kill something; and ChaosResult is a CRD that encompasses all the results. You will have multiple chaos experiments for a given application in a given ChaosEngine. And it's got pluggable chaos, like I said: we have our own libraries, plus a few more, and you can add another library if you want, though most likely there will be enough experiments that you'll need to add more experiments rather than more libraries. As an example, for PowerfulSeal, this is how we built it: we just built a Docker image out of PowerfulSeal, then we created an experiment and set the chaos library to point to PowerfulSeal. So it's very simple to plug in chaos. And then you have the chaos hub, which is really the most user-centric piece of Litmus. You have a bunch of experiments in one place, which I'll talk about in a little bit, and what developers do is, after developing an experiment, if they want those experiments to be used once their application is shipped in production by the 
users, they can push them into the chaos hub, and the users of that application can pull those experiments from the chaos hub and run them in production, or they may be running their own pipelines before pushing to production, before doing CD, so they can use these experiments in those pipelines. This is how the cloud native architecture looks: there are some experiments, there are some libraries, and you have some CRDs. That's how users interact, and developers interact by developing the experiments. So that's a quick look at it. How do you start using Litmus? You already have a chaos hub with a set of experiments, and you have your app running. It's pretty simple: you can use either a Helm chart or a YAML file to install Litmus, which installs the libraries and the operator, and then you pull whatever charts you want (you may not want all of them, because there are plenty), pull them onto your Kubernetes cluster, and inject chaos by creating a new ChaosEngine CR. Once you create the CR, the chaos operator picks it up, introduces the chaos on that given application, and creates another CR called ChaosResult, where you can go and see what has happened. Then you have the chaos exporter, a metrics exporter which you can use to put time-series-based metrics into perspective and say: hey, this application was working well all the time, but now there are some issues observed when a particular chaos is introduced. So you can get some analytics out of it, make sense of what is going on, and take corrective action. That's how Litmus really works, and it is developer friendly: just like you create deployments or other resources, you inject chaos as well. Injecting chaos means creating a set of YAML specs, namely the ChaosEngine CR, where you specify which experiments you want; then you run it, it gets executed, and you get the result. It's done in a completely Kubernetes way: however you do your object creation, you do your chaos creation as well. This is how the chaos hub looks. It is split into multiple charts, and the charts are generic or application-specific. As you can see, there is generic chaos, and then there are multiple applications; OpenEBS is one of the first applications, and for the purpose of this demo we actually created new chaos experiments for CoreDNS, and there are more applications in the pipeline, so we hope to see more applications coming onto this hub pretty soon. What do the generic experiments look like right now? We have pod delete and container kill; you can introduce a CPU hog at the pod level, network latency at the pod level, network loss, and network packet corruption; and at the node level you can introduce a node CPU hog and fill up the disk as well. These are all already available, and you can create a combined effect from a particular sequence of attacks using all of these and see how your application behaves. If you want even simpler automation, you can take the application view. An application is really defined by a pod, a service, and some data, so you can create some logic by creating a new CR: put in some pre-checks on how the application should behave, then induce the actual experiment, then run some post-checks. That becomes your application-specific experiment. You can write your own experiments and push them onto the chaos hub, so that users need not redo everything you have already done, and it can be used as a new CR itself: an application-specific chaos experiment on the hub. For example, OpenEBS. OpenEBS is a complex application: it is of course very simple to deploy, but it's got various components, and it's not just about killing a pod and seeing how OpenEBS is doing; the logic is more complex. So you have application-specific experiments that really talk in the language of that application, for example: I want to kill a target of OpenEBS, or an actual replica; is everything happening as per my expectation? Underneath, you'll be doing some Kubernetes resource kill, but the logic that you write above and below it is really going and verifying the application, not the Kubernetes resource. That's how application-specific experiments come in. So the proposal that we have is: just like we are using Litmus in our pipelines, why can't other CNCF projects use Litmus for doing end-to-end chaos testing? It's really as simple as that: start using Litmus. It should be fairly simple; for example, for CoreDNS it took about two weeks, but most of the time was spent understanding the cross-cloud CI, not really writing an experiment. So anybody with a good understanding of the pipeline and reasonable knowledge of Litmus should be able to do it fairly quickly, in about a week or so. There are easy-to-use chaos experiments for Kubernetes already, we added CoreDNS, and more can be easily developed with help from the respective teams. We think the project teams should come forward, because they know their applications best; for projects such as Envoy or etcd, we should be able to help those teams develop experiments based on Litmus and add them as chaos stages into CNCF CI. And to begin with, Kubernetes itself has a lot of experiments that we ourselves have defined, so Kubernetes chaos can be added into the pipelines. With that, I would like to see if there are any questions before I pass it on to my colleague Raj for a demo. All right, it's a very 
good intro, thank you so much. Awesome. So Raj, I'm going to stop sharing in a moment. Let me ask a quick one on the hub itself: is that configurable? Can you self-host charts somewhere? If you're talking about, say, network functions, which were mentioned early on, or network services that could be used, and you want to be able to test those in the stack, then thinking about telecom operators, they may want to run their own images and have more control over what's there. So would it be possible to run your own chart hub for the chaos tests? Yes, absolutely, it is possible. The hub itself is open sourced, so you can clone the hub and set it up. Documentation is probably missing around how to set it up; I'll take that as a note. But it is very easy to set up: you can have your own hub, and you can even set up some synchronization to the upstream. Yeah, it's very much possible. Thanks, Uma. Is there any other question? Thanks, Taylor, that was a great question, and it prompts us to add documentation around that, so I'll create an issue and hopefully somebody picks it up and adds a note on how to do that. Yeah, thanks, Uma, thanks for the introduction to Litmus. Hi guys, my name is Raj Babu Das, I'm a chaos engineer on the Litmus project. As we discussed earlier in the Slack channel, we are trying to add a chaos stage to the CoreDNS pipeline, so my talk will be related to that. We took one project from the CI dashboard, which is CoreDNS, and I will explain the workflow and how we can integrate Litmus into the pipeline. As you can see in my block diagram, whenever a developer commits to the source code management system, it goes to GitHub or GitLab, where we can trigger a pipeline. This is the pipeline of CoreDNS; currently it has two stages. The first one is the build stage and the second one is the package stage. In the build stage, as I saw from the code, it builds the source code and uploads the artifacts, which are used by the packaging stage, and in the packaging stage we're just building the image and pushing it to Docker Hub. After these stages, the test stage comes in; in the test stage we have multiple jobs, and in this stage we are using Litmus. In Litmus we have an experiment called pod delete, the CoreDNS pod delete experiment, and this is the workflow of the experiment. First we create one Kubernetes cluster, a Kubernetes-in-Docker (kind) cluster, where we install Litmus with all the operators and CRDs. The main functionality is this part: it replaces the kind cluster's CoreDNS image with the latest build that was pushed to Docker Hub, and tests this latest image and its functionality. We install the CoreDNS pod delete experiment and run it, and based on the experiment passing or failing, the build is decided to pass or fail. That's the workflow; I will explain a little more about the experiment later in the session. Moving forward, we have the pipeline; I cloned it from the CoreDNS configuration. Now we have three stages: the first one is build, as I told you, the second one is package, and the third, which I added, is the test stage. I will explain the code and show you a demo of it later in the session. These are the earlier build pipelines; they took around nine to ten minutes to build. This is a swimlane activity diagram of the CoreDNS pod delete experiment. Here we have three lanes: the first lane is the user lane, the second one is the Litmus chaos lane, and the third one is the CoreDNS application. In the first lane, the user installs the CoreDNS experiment; based on the checks, if Litmus is not installed, an error is shown, and if it is successful, then we have to annotate the CoreDNS deployment so it can be used by the Litmus operator. After that we have the main component, called the ChaosEngine; I will show you how a ChaosEngine spec looks. On creation of the ChaosEngine, it automatically creates one runner pod, and what the runner pod does is create one experiment pod, which runs the CoreDNS pod delete job. Then there are the pre-chaos and post-chaos checks: as we know, CoreDNS's main functionality is service resolution, so what it checks is this: it creates one NGINX pod and another pod, the liveness pod, and the liveness pod recursively checks the NGINX service. If that fails, it shows in the logs, and if all things are up and good, then it keeps running. There's a demo in the coming section where I'll show how to inject the chaos on CoreDNS. For the CoreDNS pod delete we have two libraries, as already mentioned: the first one is the Litmus library and the second one is PowerfulSeal, which was brought to us by Bloomberg. It kills one of the replicas of the CoreDNS deployment, and based on the result it saves a ChaosResult custom resource; the result may be pass or fail. So that is the activity diagram. Moving forward, we have a demo of killing a pod of CoreDNS. As Uma already told you, this is the chaos chart hub, similar to OperatorHub; we have around 19 charts, and you can see that there are different applications, such as CoreDNS, Kafka, and OpenEBS, and around 19 experiments. We have to install the Litmus operator and CRDs; after installation I check the status of the Litmus operator and then the CRDs. Then we are going to install this experiment; currently we have only one experiment, called pod delete, and we are also planning to add a few more experiments for CoreDNS. So I install all the experiments of CoreDNS, which is just the one; I had already installed it, which is why it shows "already exists" when I apply. If you get the experiments in the kube-system namespace, you will see one CoreDNS experiment, which I created 16 minutes before the recording. Every experiment runs with a service account, and the service account has the permissions. Before going to the service account, we have to annotate the application to be used by the Litmus operator. If you look at the service account spec, the service account name is coredns-sa; here I grant around six resource permissions which are necessary to run the experiment, with actions like create, delete, and list, and I bind it with a cluster role. I apply the RBAC. If you look at the ChaosEngine, this is the main component of Litmus. Here you can see we have information about the application: by default CoreDNS has a label k8s-app=kube-dns, so we put in this app label, it is under the kube-system namespace, and the app kind is always deployment. For the chaos type, currently we support two types: one is application-level chaos and the other is infrastructure-level chaos, and we classify this experiment under infrastructure-level chaos, so I set the chaos type to infrastructure. Here you can see the service account name: I am using the coredns-sa service account I created earlier. Then there are some tunables, which are optional: you can set the chaos duration, that is, how much time you want to inject the chaos, and the chaos interval; suppose you have two pods, as in CoreDNS, then the time between killing the first pod and killing the second pod is the chaos interval. The rest of the things are optional. So I apply the ChaosEngine; immediately it creates one pod, the CoreDNS chaos runner, which, as in the flow diagram I showed you, creates one job. The job creates two more pods, the NGINX pod and the liveness pod, and the liveness pod recursively checks the NGINX service. As you can see, the two pods have been created, the liveness app and the NGINX app, and if you look at the logs of the liveness pod you can see it working. You can also see that a pod is terminating: six seconds ago the first pod went down, and it waits for the second pod depending on the chaos interval; then the second pod also goes down. You can see here that the liveness check failed, because it failed to curl the NGINX service. If you don't want this error, you have to make sure that you have a sufficient number of replicas of that deployment; that is what chaos engineering is for, checking the resiliency of the CoreDNS application. One more thing I forgot to mention: this was all done in Minikube, but you can do it in GCP or your own clusters as well. After the job completes, it automatically deletes all the extra resources it created; you can see the liveness app terminating. Checking the result, you can describe the ChaosEngine, and here you can see that it passed. Yeah, quick time check. We may have a minute here; do we have any questions regarding Litmus before we move on to one of the other topics? Thank you everyone for giving us this opportunity; we'll talk on the channel about how we can take more feedback on this and whether there's a need to get this into one of these pipelines, like Kubernetes or CoreDNS. Thank you again. Thank you so much for your time; we'd definitely like to see more of this, and on some other initiatives too, including the CNF Testbed, which comes to mind; there are a lot of other things happening where we need chaos testing to improve resiliency. So thank you. Yeah, thank you everyone; if there are no questions, I'll stop sharing my slides. Thanks so much. All right, 
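For reference, the end-to-end flow Raj demonstrated can be sketched as a handful of commands. This is a hedged outline rather than a verbatim replay of the demo: the manifest file names are placeholders, and the operator manifest URL and annotation key follow the Litmus v1.0-era docs as I recall them, so verify them against the current documentation before use.

```shell
# 1. Install the Litmus operator and CRDs (URL per the v1.0-era docs)
kubectl apply -f https://litmuschaos.github.io/pages/litmus-operator-v1.0.0.yaml

# 2. Install the CoreDNS pod-delete ChaosExperiment pulled from the chaos hub
#    (placeholder file name)
kubectl apply -f coredns-pod-delete-experiment.yaml -n kube-system

# 3. Annotate the target deployment so the operator is allowed to act on it
kubectl annotate deployment coredns litmuschaos.io/chaos="true" -n kube-system

# 4. Create the service account / RBAC and the ChaosEngine CR
#    (placeholder file names; the engine references the coredns-sa account)
kubectl apply -f coredns-rbac.yaml -n kube-system
kubectl apply -f coredns-chaos-engine.yaml -n kube-system

# 5. Watch the runner, experiment, NGINX, and liveness pods come and go,
#    then read the pass/fail verdict
kubectl get pods -n kube-system -w
kubectl describe chaosresult -n kube-system
```

These commands assume a running cluster (kind or Minikube in the demo); the pass/fail verdict in step 5 is what the pipeline's test stage keys off to fail or pass the build.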
I think, Pati, you are next, if you're available to talk about the CDF SIG Interoperability. Yeah, it will be quick, just a summary and where to find more information, and thanks for giving me a chance to talk about it. I updated the slide on the deck, so if one of you can share that slide. So, as I said, this is just a short update. The CDF is a relatively young foundation, founded around February 2019, so it's nearly one year old, and the purpose of the foundation is to bring different continuous integration and continuous delivery projects together with users to work on CI/CD in a collaborative manner and provide a neutral platform. It has many members, which we can check from their website to see who they are. As some of you might have already faced, when you look at the different CI/CD tools and technologies out there and you intend to move from one tool to another, you might have found that things are not really streamlined between these tools and technologies, and apart from not being streamlined, they can't interoperate together. The CDF governing board came up with nine strategic goals around Q2 last year, and one of those goals was to work on tool interoperability. Based on this feedback, and based on our own learnings from the communities we are working with and from our employers' companies, we said maybe we should go and propose a special interest group for interoperability, to bring users and projects together to work on this important area. Based on those discussions, we proposed the formation of this SIG to the CDF, and about two weeks ago the formation of the SIG was approved by the technical oversight committee. So, to summarize, this SIG aims to bring users and projects together to collaborate on the interoperability area of CI/CD, because CI/CD, if you look at it, is so vast that it is nearly impossible to tackle all the challenges in one group. That's why we went ahead and created this group to work on the interoperability aspects of the CI/CD landscape domain. We have representatives from various companies in this SIG, such as Netflix, Google, Ericsson, Wolf, Cope, China Mobile, Biskit Love, Puppet, and Lumina Networks, and apart from the company representatives, we have representatives from various projects like Jenkins, which I'm sure all of you either use or have heard about, Jenkins X, Spinnaker, and Tekton, and the CNCF cross-cloud CI team also takes part in these conversations, to work on whatever challenges we see and whatever ideas we have, and share them with the rest of the participants of the SIG. The way the SIG works is basically: you come together with people and just start talking about the problems, the ideas, and the possible solutions, and then perhaps form some work streams to narrow the problem domain, identify the questions, and work on those things, perhaps ending up with some kind of de facto standard, or at least a call for action for broader participation. To enable that, we as the SIG meet every second week, on Thursdays at 3 p.m. UTC, and we just talk about these things. Our first meeting was last Thursday, and the first thing we started working on is simply documenting the vocabulary we are using, and that these tools are using, so we can at least identify or come up with some shared vocabulary to communicate across humans. That is one of the first things we started doing, and there are other ideas, like pipeline standardization and maybe CI/CD event standardization and so on, and that work will hopefully start soon. If any of you is interested to at least look at what we are doing, or to come and contribute to this work, you can look at our repository on GitHub under the CDF organization: just add your name under members, send a pull request, comment on existing pull requests, share a document where you might have put your ideas, and just talk with other people. So that's all. If you want to collaborate in this area, just come and join us. And thank you to the cross-cloud CI team for joining this effort, since we have been talking about this for years, and perhaps this is our chance to do some good work in this area. That's all, thank you. Excited to see where it goes. Does anyone have any questions or comments about the new SIG, or anything related? I have one question: will there be any events with this SIG at KubeCon Europe?
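To make the shared-vocabulary and event-standardization ideas mentioned above a bit more concrete, here is a minimal illustrative sketch. The term mappings and event field names below are invented for illustration only; the SIG had not defined any actual vocabulary or event schema at this point.

```python
import json

# Hypothetical "shared vocabulary": one neutral term mapped to the word
# each CI/CD tool uses for the same concept (entries are illustrative).
VOCABULARY = {
    "pipeline": {"jenkins": "Pipeline", "spinnaker": "Pipeline", "tekton": "Pipeline"},
    "step":     {"jenkins": "Stage",    "spinnaker": "Stage",    "tekton": "Task"},
}

def neutral_term(tool, word):
    """Translate a tool-specific word back to the neutral vocabulary term."""
    for term, per_tool in VOCABULARY.items():
        if per_tool.get(tool, "").lower() == word.lower():
            return term
    return None

# Hypothetical standardized CI/CD event envelope (field names made up for
# illustration; event standardization was only an idea at this stage).
event = {
    "type": "pipeline.run.finished",
    "source": "jenkins",
    "subject": "build-42",
    "outcome": "success",
}

print(neutral_term("tekton", "Task"))  # prints: step
print(json.dumps(event, sort_keys=True))
```

The point of such a mapping is that two tools can describe the same run using different words, and a neutral vocabulary gives humans (and eventually tools) a common reference point.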
CDF is arranging a day-zero event called the Continuous Delivery Summit, and it's in the planning phase now. The full-day event with talks will happen during that Monday, and we have plans to submit a talk giving updates on the work we are doing under the SIG. We also plan to have some kind of get-together there: find a coffee machine, stand around it, and just talk about this stuff, and perhaps have dinner and continue talking. If you want to keep yourself up to date with what's happening and this type of information, you can subscribe to the mailing lists, both the CDF community mailing list and the SIG mailing list; I can put the links on the slide. All right, thank you. Yeah, thanks. Does anyone have any other questions? We've just got a few more minutes. Let me ask: Watson and Denver, I believe, are on this next one, on CI for the CNF Testbed, but there are only two minutes left. Is that enough time, or do we need to defer this until next month? I'm fine with deferring if you want. Oh, is it four minutes? My clock is fast. All right, you have four minutes, so I'll give it to you, and if we need to continue, then we can do it next month. Do you want to take over the screen? Yeah. Okay, go ahead. All right, so this is a review of the CI for the CNF Testbed. If you're not familiar with the CNF Testbed, you can find it on GitHub under CNCF as cnf-testbed. It's a project that has a bunch of different use cases for networking functions on Kubernetes, and it tries to solve problems or test different technology within that space. There are, like I said, a bunch of different use cases, some that are pretty simplistic, all the way up to trying to get to the point where we're maybe testing things like evolved mobile technologies. For CI with the CNF Testbed, there are some challenges as far as needing access to certain hardware resources, keeping things in-band, and having a proper way of installing things with Kubernetes, so there are plenty of challenges that go along with that. For networking, some of the challenges are deploying hardware, provisioning data-plane technology such as VPP, installing it, and then customizing those things so that other people can run the tests in their environments; these all have their own unique challenges. For deploying hardware, you can think of specific networking hardware that you might need; in our case we have things like SmartNICs and other types of hardware where you might need to set some type of boot option or BIOS option or make changes like that. For deploying hardware, we deploy everything on Packet, so we have a neutral environment. We are using a script that we created for hardware provisioning; basically it's customizable, you have the node structure that you want to set up for Kubernetes, the different facilities, and the different other options like plans and node types for Packet, and what it's going to do is output a list of nodes, or IPs, when it's done provisioning the hardware. The next step is provisioning Kubernetes. We use Kubespray, and we have a wrapper around Kubespray under the cross-cloud CI called k8s-infra. Again, it's another script, a Kubernetes provisioning .sh, and when you're done with that, it's going to output a YAML file that Kubespray can use. There are some options you can change, such as the release type for Kubernetes, stable or master, that type of thing, and then something a little more specific to the network domain: how you configure some of the network-specific things, like the data plane. Most of our use cases are going to have VPP, which is an open-source data plane, and we use it to configure the vSwitch on the node itself. I believe it's still on the actual node, so this is going to be one of those things that might be out of band, because you're configuring the node, but this ends up being something you need for performance. There are different options here for configuring the VLAN, and there are specific playbooks that you might use for doing the installation, but we have a specific one we use. Another input could be a specific kubeconfig that you're using for your Kubernetes. If you're getting going on trying to do some of this stuff on your own and you want to have more options, you can use the Makefile directly, and there are options there for firing off the different stages yourself, manually. So if you end up wanting to run some of your own networking functions and use the CNF Testbed to do that, this is going to help you there. We have a packet generator that generates all the packets for the tests, and some of that is more granular. So I think that's it. Any questions? I think we're out of time, so does anyone have any questions for the CNF Testbed? Yeah, it's a good one, and definitely looking forward to taking a look at it and seeing if we can use it in all this networking testing, in all projects, for sure. Thank you. Great. We'll probably see more of this as the months go on and there's more of the automated testing on the CNF Testbed. All right, so the other topics we'll defer, and we'll have more folks reach out with some talks for next month. The next call is Tuesday, February 25th at 12 p.m. Pacific time. Thanks for joining. Thank you. Thank you, everyone. Have a good one, everybody. Thank you.
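The CNF Testbed CI flow walked through above (hardware provisioning on Packet that outputs node IPs, then a Kubespray wrapper that turns those into an inventory, then the tests) can be sketched with simple Python stand-ins. The function names and the inventory shape here are simplified illustrations of the stages described in the talk, not the actual cnf-testbed scripts, which live at github.com/cncf/cnf-testbed.

```python
# Illustrative sketch (not the real CNF Testbed code) of the three CI
# stages described above: hardware provisioning outputs node IPs, the
# Kubernetes stage turns those into a Kubespray-style inventory, and a
# real pipeline would then run workloads against the cluster.

def provision_hardware(node_count):
    """Stand-in for the Packet provisioning script: returns node IPs."""
    return ["10.0.0.%d" % (i + 1) for i in range(node_count)]

def build_inventory(node_ips):
    """Stand-in for the k8s-infra Kubespray wrapper: emit a simplified
    inventory mapping hostnames to IPs (the real output is a YAML file)."""
    return {"node%d" % (i + 1): ip for i, ip in enumerate(node_ips)}

def run_pipeline(node_count=3):
    ips = provision_hardware(node_count)
    inventory = build_inventory(ips)
    # A real pipeline would now invoke Kubespray with this inventory,
    # configure the VPP vSwitch on each node, and fire the packet
    # generator against the deployed network functions.
    return inventory

print(run_pipeline(2))
```

The design point the talk makes is that each stage's output (node IPs, then an inventory file) is the next stage's input, which is also why the Makefile exposes the stages individually for manual runs.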