On Zoom, it's not showing as connected. Okay, if you want to dial back in, that would be awesome. I'll keep my finger close to the mute button so that we don't get that feedback loop. In the meantime, I've posted the notes and agenda for today's call in the chat. Thank you so much for adding your names and contact information. We've got a slide deck prepared for today's demonstration that'll kick off the meeting; I've shared that link in the notes, and I'll share it here in the Zoom as well. Thank you for your participant ID — you're in the meeting now. I think the audio works. Good morning, afternoon, or evening — hey, early morning, it's just five this time of the year. Welcome to the working group call. The calls are held every second Tuesday and fourth Tuesday, and you're welcome to take a look at the CNCF public events calendar; that's a handy way to copy the CNCF public events to your own personal account.

I think we can get started as a team. But I was expecting your screen share — I'll give a visual of the slides. Yeah, I'm not sure what's showing. It's just a black screen for me. Just a moment — it's a black screen for me too. Are you able to stop the other screen share? I see it now. It was actually completely black. There we go. Sorry, I couldn't even stop it on the computer. Now I'm good — let me know if you can see it this time. Good. Okay, great. Sorry about that; I actually switched computers, and I don't want to debug that right now.

So, I think for some of y'all this is probably the first time seeing the cross-cloud CI project, while some of y'all have looked at some things before. I'll give a little bit of an overview, and we'll go through where we are — there have been a lot of changes over the last several months — and then talk a little bit about how to integrate projects, which is one of the goals of this CI working group.
Some of the folks on the team have been doing the work on this last release. So, why are we here? CNCF has a lot of projects, as y'all know, and it's growing every month — a lot of projects and a lot of clouds coming on. The goal is to test the interoperability between all the projects and get results on how they work on as many cloud providers as possible. That's the goal of building this out. We have a few of the people that helped found this project on the call, including Denver.

The project consists of a testing system. This includes some of the stuff that you may be familiar with: CI, a status repository server, and a dashboard. The testing system has the build components — that's your traditional CI stuff — and other parts: cross-cloud, the provisioning, so provisioning Kubernetes on all the cloud providers; and cross-project, which does the app deployments. Together, these parts validate how the projects work, running tests for stable and head across all the providers. And then we collect all the results and show them on a dashboard. That's our project, targeting all the projects.

What's live right now on the CNCF CI is Kubernetes. CoreDNS will be releasing soon, and we have a few others in the works. Likewise for clouds: we'd like to target all public, bare metal, and private clouds. We have AWS, Google GCE, Packet for bare metal, and Azure; we have Bluemix in testing, ready to go real soon; and we're working on OpenStack.

So, we recently had a release of the dashboard for the CNCF, and we actually have another one coming up, so we'll go over some of those changes. I'm going to go ahead and open up that dashboard to take a look, and then kind of work our way backwards. So, right here we have the CNCF dashboard. Let me know if y'all aren't able to see anything on the screen.
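The project-by-cloud-by-release matrix just described is essentially a cross product: one dashboard cell per combination. A minimal sketch, using project and cloud names from the talk (the function itself is hypothetical, not part of the actual codebase):

```python
from itertools import product

def build_matrix(projects, clouds, releases=("stable", "head")):
    """Cross product of projects, clouds, and release channels --
    one dashboard cell per combination to test and report on."""
    return [
        {"project": proj, "cloud": cloud, "release": rel}
        for proj, cloud, rel in product(projects, clouds, releases)
    ]

cells = build_matrix(["kubernetes", "coredns", "prometheus"],
                     ["aws", "gce", "azure", "packet"])
print(len(cells))  # 3 projects x 4 clouds x 2 releases = 24 cells
```

Every new project or cloud multiplies the matrix, which is why the results are collected centrally and rendered on one dashboard rather than checked by hand.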
So, this dashboard shows the status of builds, app deployments, E2E tests, and the Kubernetes provisioning on all the clouds, based on the upstream commits. When I go through here, you can see this was a Kubernetes 1.8.1 commit, and a Prometheus commit on master, so the head release. We take the builds and go through deploying those to each one of the cloud providers. You can see there are some issues with Google right now; those are actually account issues, and we're increasing quota. Some of these, though, could be upstream: right now we have successful builds, but if there were, say, an upstream CoreDNS issue, then we would see that as we come through here. One of the recent items, something we just released in the last few days, is a small thing, but something that's been asked for.

So, where is that project? These are your standard things: going to the cross-cloud GitHub org. This is the actual org for the cross-cloud CI project, with all the different components that make it up and how the whole thing works. One other item we were asked for was a prominent place to report bugs on all of these. The main approach is to go into the cross-cloud CI org and add issues there, unless it's specific to a certain component.

So, let's go ahead and see how some of this runs. Normally, this runs daily for the CNCF CI: it goes through, does builds, and checks the status of whatever is currently available. We actually pushed out a few fixes for projects, so it was run recently. I'd like to show an actual app deployment. What I'm going to do is go to our staging environment, which will look very similar. And then I'm going to go ahead and start a deployment over here — just a little command line that connects to the backend system. I can get that started and go back over here.
I had all these tabs ready on the other computer, but let me reload them here. So, on cross-project, which I mentioned earlier — the app deployment component of the testing system — we can see that a bunch of jobs got kicked off. These are the pipelines doing the app deployments. And if we go back and look at the dashboard, we can see the projects getting deployed on some of these clouds. It may not work on the ones where provisioning failed — this is a test run right now — but as these go through, we'll see some more. So, let's look a little bit more at what these parts are, and then we'll come back and check here in a bit.

So, the testing system. First, the build pipelines. They're per project, and we can optionally use a project's build artifacts: if Prometheus has a CI system where artifacts are stored and usable by the public, or potentially by us, we could pull those in, or we build our own. That was something we were looking at earlier. Next, the cloud provisioning pipeline — that's cross-cloud, the Kubernetes deployment to all the cloud providers. Then the app deployment pipeline — that's cross-project, which we were just looking at over here, and that's deploying Prometheus, or any other application, onto those Kubernetes clusters or potentially some other environment.

The build pipeline — let's look at those stages. Again, your traditional stuff: compiling the binaries, E2E tests, and getting all of those things ready. One thing that's done a little differently is that we take the E2E tests, put them in a container, and get those ready to deploy. We also create a pinning configuration with all of the versions and information for the environment and the containers, and make those available at the different stages. So, even if we're using an external system for the artifacts, we still create those pinnings so that we know what they are and can use them in the next pipeline.
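A pinning record like the one described would capture exactly what a build produced, so the provisioning and app-deployment pipelines consume the same artifacts. A minimal sketch — the field names and registry URL are assumptions for illustration, not the project's actual schema:

```python
import json

def make_pinning(project, commit_sha, image_ref, binaries):
    """Pin a build's outputs: the exact commit, the container image
    holding the E2E tests, and the binaries that were compiled."""
    return {
        "project": project,
        "commit": commit_sha,
        "container_image": image_ref,
        "binaries": sorted(binaries),
    }

pin = make_pinning("prometheus", "f4e9c21",
                   "registry.example.com/prometheus-e2e:f4e9c21",
                   ["prometheus", "promtool"])
print(json.dumps(pin, indent=2))
```

The point of the pinning is reproducibility: even when artifacts come from an external CI system, the downstream pipelines deploy the pinned versions rather than "whatever is latest."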
So, the cloud provisioning: again, this is Kubernetes provisioning. We collect all the pinnings from the Kubernetes builds and then deploy those to the clouds, updating the dashboard status badges as we go along. So, as Kubernetes is getting deployed, those will start saying running, and we can look at the provisioning. Then we go to the app deployment stage once the Kubernetes provisioning is done and those clusters are ready — which may have happened beforehand, or otherwise these run consecutively. We use Helm charts to deploy each one of those projects, then deploy the E2E test containers and run those E2E tests. Based on the results of each one of those stages, we update the dashboard status badge: if anything fails, it's marked as failed and the badge links back to the job that failed; if it passes, then it links back to the appropriate job for that.

Let's go take a look real quick before we look into the tools. Let's see — I want to go to the staging environment again. Okay, so we can see, for example, linkerd on Packet for master passed. Let's go ahead and take a look at that job. So, we can see this deployment — these are the Helm charts going in here. We can go back and look at the pipeline itself. Here are some of those stages we talked about: building any of the source for the pieces that do the deployment, the actual app deploy — that's what we were just looking at — and running the end-to-end tests. I did click; I'm just waiting on that to go through here — internet. Okay, well, each one of these jobs will be similar to what we just looked at: this again was the app deploy, the next job was the end-to-end test, and we saw it went all the way through and passed. And then if we go back here, we can see more of these starting to complete: Prometheus on Azure looks good, Prometheus version 2.0.0 on AWS looks good.
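The badge logic just described — mark the run failed and link to the first failing job, otherwise link the passing run — could be sketched like this (the dict shape and URLs are illustrative, not the actual status-repository API):

```python
def badge_for(stages):
    """Reduce an ordered list of (stage, passed, job_url) results to one
    badge: the first failure wins; otherwise link the final passing job."""
    for stage, passed, job_url in stages:
        if not passed:
            return {"status": "failed", "stage": stage, "ref": job_url}
    stage, _, job_url = stages[-1]
    return {"status": "success", "stage": stage, "ref": job_url}

run = [("build", True, "https://gitlab.example.com/jobs/101"),
       ("provision", True, "https://gitlab.example.com/jobs/102"),
       ("e2e", False, "https://gitlab.example.com/jobs/103")]
print(badge_for(run))  # failed at the e2e stage, linking job 103
```

Linking each badge back to its job is what makes the dashboard useful for triage: a red cell is one click away from the log that explains it.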
We can go through each of those, and we can look at any failures similarly. Like this failure — well, as we know, on GCE it's probably not going to work because it fails at the provisioning stage. But if it were a failure for, say, something Prometheus-specific, we could look into that.

So, some of the technology behind this. We were just looking at GitLab — let me move the Zoom window here. GitLab is one of the main components that we're using right now. If you're not familiar with it, it's a unified CI/CD platform: it provides runners, the ability to do builds, your normal jobs. It also mirrors the Git repos so that we can trigger on certain things. For the provisioning, we're using Terraform and cloud-init, and then we do a custom Kubernetes configuration for each cloud, enabling us to set various flags. The app deployment is Helm, plus the E2E tests; and we hope to work as much as possible with the upstream projects, like Prometheus, to build those E2E tests, which we then containerize. The dashboard — we were referring to that as two parts. There's the status repository, which is the backend API; that's Elixir and Erlang, which gives us high performance with multiple clients talking to the one API. The frontend that we were showing is primarily Vue.js.

Let's go back and take a quick look at how that looks on staging. So, it looks like we're mainly green on the available clouds that worked, and since Google was completely out here, that one did not go for us. Okay. So, what's next? We're in the midst of adding support for ONAP right now — specifically their service orchestrator — and we're doing an integration with their CI systems. We're actually using their build artifacts, and using the test results from their CI system to update the build status badge. And then we'll be doing deploys of those artifacts onto Kubernetes. That's in progress right now.
On the Google side, just real quick: as I mentioned earlier, we're increasing some quota on the account that we have right now, so that it can be re-enabled. IBM Cloud — the Bluemix side — is close to being released; that should be real soon. And then OpenStack, we're in development, finishing that out. linkerd, as I said before, we'll be enabling and testing, and we'll be enabling Kubernetes' newest 1.9 release. Here are some quick mock-ups of some of the things we're adding. We're going to be continuing collaborations — this is the ONAP integration, so we're working with Shea from Cloudify, with folks from OpenStack, and with Prometheus on the E2E tests; happy to be doing that. And we have some other demos coming up, and some events that we're attending: next after our working group is Mobile World Congress, the 27th, at the end of this month; in March, we're going to be attending ONS, and there's face-to-face prep for that that we'll be going to; and then KubeCon in May.

Okay. So, are there any questions? Happy to go back through any of those items.

Maybe you could show the GitHub repo for Prometheus — the one that's used for Prometheus — because I think this is the one we'll use to open issues and start the discussions, I assume, right?

As far as E2E tests, is that what you're referring to — the E2E tests for Prometheus? Yes. Yeah. Okay. So, for that specific one, we may want to switch to a new ticket that's focused on just adding E2E tests and improving what we have; it's pretty minimal. I think I just clicked on the wrong place. Let me go down here — the good first issues. I think we have one. Yep. So, this ticket right here, ticket seven. Can you add that ticket to the chat?

Have you had a chance to look at what we have in prombench, the Prometheus benchmarking — the one that Fabian did? I haven't.
I think the last thing that we had reviewed was when we were looking at the CI/cluster needs, and it was kind of an overview. Yeah. So, I think our most urgent need is some sort of way to automatically run performance tests, because Prometheus is so highly optimized in places that it's super easy to make a mistake and ruin the whole performance tuning. So, we're looking to somehow automate that, and the prombench tool that we have there is basically what we've been doing manually — but it's a lot of work to do that every time. So, we were hoping to somehow integrate this with your efforts. Okay. Because the general setup is very similar to what you're already doing, if I understood correctly: we just set up a Kubernetes cluster, put some resources in there, let Prometheus run for some time, and see how it goes. We even have dashboards and all the things to do all of this; we basically just need the automated Kubernetes setup with enough resources. Right now, we usually do something like a 10-node cluster with — I don't know, I would have to look up what the actual resources are — but we then run some 500 pods that expose metrics and see how Prometheus does with scraping those.

Okay, that sounds good. So, right now, the test results we're seeing here are more about functionality in the end-to-end tests, versus performance. Although I understand you do need some capability on the performance side, so we're trying to see where the overlap is. Makes sense. Actually, I have to run, but the others know this just as well as I do — sorry about that. No problem. Okay. Yeah, thank you for the presentation. Absolutely.

Is there anyone else from Prometheus with questions or comments on the end-to-end tests — I guess the more general end-to-end tests? Right now, we're looking at ensuring functionality on Kubernetes with each build, essentially, that it works.
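The setup described — roughly 500 metrics-exposing pods on a 10-node cluster — implies a scrape load you can estimate up front. A back-of-the-envelope sketch; the series count and scrape interval here are illustrative assumptions, not prombench's actual configuration:

```python
def scrape_load(pods, series_per_pod, scrape_interval_s):
    """Approximate steady-state ingestion: each pod is one scrape target,
    exposing series_per_pod time series per scrape."""
    samples_per_scrape = pods * series_per_pod
    return samples_per_scrape / scrape_interval_s  # samples per second

# 500 pods x 100 series each, scraped every 15s -> ~3333 samples/s
print(round(scrape_load(pods=500, series_per_pod=100, scrape_interval_s=15)))
```

Even a rough number like this shows why the benchmark needs dedicated cluster resources rather than piggybacking on the functional-test clusters.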
I think we'll need to look more into the performance side, since you're doing a large set, but we can look at how that would fit into this. Right now, we want to make sure we cover all the functionality. Any other questions?

Well, my main question was: how do we actually work on this thing together? I mean, I don't see any specific GitHub repo or any separate project where we can collaborate. How do we ask questions, and how do we work together on this?

Okay, sure. So, in the cross-cloud CI org — are you all able to see my screen? Yeah. It's coming up; it's just spinning here right now, that's not what I'm showing. Clicking on cross-cloud CI — there we go. Okay. So, the two primary parts would be the builds — how we build, and making sure that the builds are working fine if you don't have an artifact that you can provide — and then the E2E tests. For the builds: how we're building Prometheus is in the prometheus-configuration repository. Is that showing for y'all? Yeah. I guess it's delayed. Okay. So, I'm in this prometheus-configuration repo — there we go. I'm not sure why the Zoom screen's delayed, but okay. So, this repository has the information for doing the builds, for running GitLab, and for pulling in all the different parts that we're using for the tests. We have these GitLab CI YAML files — this would be like Travis CI or Circle CI or anything else — describing how to build the different parts. So, this would be one area where Prometheus could help: making sure that it builds successfully, or telling us we should really add another component. And then the other area would be the actual end-to-end tests. For the end-to-end tests, what we try to do is take an upstream test — and Denver, if you have any thoughts on this, speak up.
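For a sense of what those GitLab CI YAML files contain, a build job follows the usual GitLab CI shape. This is a hypothetical sketch — the stage names, image tag, and make target are assumptions, not copied from the actual prometheus-configuration repo:

```yaml
stages:
  - build
  - e2e

build:
  stage: build
  image: golang:1.9          # build container; the version is illustrative
  script:
    - make build             # compile the Prometheus binaries
  artifacts:
    paths:                   # pass the binaries on to later pipeline stages
      - prometheus
      - promtool
```

Anyone familiar with Travis CI or Circle CI configs will recognize the structure: stages, a script per job, and declared artifacts that downstream jobs can pick up.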
For example, CoreDNS — it's taking a while to load for some reason on my internet here — has some test functionality under here, and we have instructions on how to run its specific end-to-end tests. Kubernetes has end-to-end tests as well that can be run specifically for testing pods, besides the full conformance tests. So, if we're doing an app test, there's also the kube-dns test that we can plug CoreDNS into. So, what we could do with you is define what end-to-end tests cover all the functionality of Prometheus that's reasonable to run here. And then, where would you want that? Maybe you have a prometheus-test repo, or maybe you say you want to put it inside the main prometheus/prometheus repo, with something in there describing how we would run those specific tests — not the unit tests for builds, but actual end-to-end tests. We've sometimes built smoke tests that will run for a project, but you know the tests, and you know how it should be working for Prometheus, so we don't want to spend a whole lot of time trying to do that ourselves if you can help with it. It should be similar to what you're doing with the benchmark, except that right now we're not looking at a large set for performance; we want to test all the functionality.

Yeah, thank you. Well, Connor is on the call as well, and he started working on some end-to-end tests for the service discovery. So, maybe this would be a good test run to see if this can be added to the tests that Prometheus has. I think he wanted to show a short demo, but I'm not sure if he had time to prepare it for today.

Hello, can you hear me? Yeah. Yeah, sorry, I got back from Paris last night, like 2 a.m. But yeah, I've been working on — I stood up a Jenkins server that's basically set up for the service discovery tests with Prometheus.
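A functional smoke test of the kind being discussed usually boils down to querying Prometheus's own HTTP API and asserting on the response. Here is a minimal sketch of the checking half, fed by a canned response since there's no live server in this example; `/api/v1/targets` is a real Prometheus endpoint, but the helper function is hypothetical:

```python
def all_targets_up(targets_response):
    """True when every active target Prometheus discovered reports
    health 'up' -- a basic service-discovery smoke check."""
    active = targets_response["data"]["activeTargets"]
    return bool(active) and all(t["health"] == "up" for t in active)

# Canned response shaped like the output of GET /api/v1/targets.
resp = {"status": "success",
        "data": {"activeTargets": [
            {"labels": {"job": "node"}, "health": "up"},
            {"labels": {"job": "kube-dns"}, "health": "up"}]}}
print(all_targets_up(resp))  # True
```

In a real run, the response would come from an HTTP GET against the deployed Prometheus instance, which is what makes this an end-to-end check rather than a unit test.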
So, I've set up a few examples, like with EC2: we spin up a slave out there on EC2 and see whether the Prometheus service discovery is working with it and the target comes up in Prometheus, and things like that. I wasn't actually aware, to be honest, that the CNCF had this continuous integration set up, so I'm not sure if it's worth continuing down the road I was on — but maybe some of the tests I've written could be integrated with what you guys have, because we are lacking service discovery tests and it's an area we want to cover. I can drop links to the Jenkins server and the tests into the chat, actually, which might be easiest.

Yeah, that sounds like a good match for this ticket 7. For the end-to-end tests that we're trying to run and show results for on the CNCF side, it seems like service discovery for Prometheus is a good choice.

Yeah, I guess — it's funny, I have a few examples sort of thrown up at the moment, but it's kind of a work in progress, and I guess the response from the rest of the Prometheus developers was quite quiet, but yeah, it fits in, I guess, and happy days.

So, what we need to do is find out how to reuse what you have already done and fit it into this CI system, if I understand correctly. Yeah, I think so. I guess I need to... Sorry, go ahead. I think that's exactly what we want to do, and it sounds like the service discovery tests will work. So, I think a good idea would be, if y'all want, to look at a standard place in Prometheus where you could put those tests so they can run outside of your system: what are the requirements to run the tests, what are you expecting, and then the instructions for running them. If we can figure that out, then we'll be able to build the containers and run the end-to-end tests, and we can discuss, once you've figured some of that out, how we actually put them in containers and what the expectations are in our environment.
Yeah, if you give me those questions, I can definitely chalk up answers to them. I guess it depends on each test what's required, because the requirements will be different for, say, the tests on EC2 versus testing something like Zookeeper — but basically it's a box, or a container, that has Go, can build Prometheus, and can reach out to those services. But yeah, I can write up a more detailed document on what we need if you give me those questions.

Okay. And so, are you referring to running it directly on EC2, versus in a Kubernetes cluster? Directly on EC2. Haven't done anything for Kubernetes yet? Okay. So, right now what we're targeting — and what we're seeing with Prometheus here, I'll just go back to that — is Prometheus deployed on Kubernetes. So right here we have Kubernetes, a cluster that's running on AWS, and then Prometheus was deployed onto that Kubernetes cluster on AWS. We actually deployed the different Prometheus components into the Kubernetes cluster, they're up and running, and then we run the end-to-end tests against Prometheus running on top of Kubernetes. I don't know what Connor's tests look like, but I imagine it doesn't really matter for us where Prometheus is running, as long as it can hit EC2 or something else. That's what it seems like. So I guess, Connor, if you can go through and define what you need just to run the tests — that might be the first step — and when you feel comfortable with that, then we can look at containerizing those, so that we can deploy just the container of E2E tests and see how it runs in the environment and what we may need to adjust. Yeah, sure. That sounds good. Okay. And if you would like us to look at anything, drop it in this issue 7; this is where we'll be working to collaborate and build the E2E tests for Prometheus. Cool.
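Containerizing a test along the lines just described — a box with Go that can build Prometheus and reach the services under test — might look like this hypothetical Dockerfile; the paths, base-image tag, and test entrypoint are all assumptions for illustration:

```dockerfile
# Go toolchain, since the tests build and exercise Prometheus itself.
FROM golang:1.9
WORKDIR /go/src/github.com/prometheus/prometheus
COPY . .
# Confirm the source tree compiles before any end-to-end runs.
RUN go build ./...
# Run the end-to-end suite against whatever endpoints the CI passes in.
ENTRYPOINT ["go", "test", "-v", "./e2e/..."]
```

Packaging the suite this way is what lets the cross-project pipeline deploy "just the container of E2E tests" onto any of the provisioned clusters.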
Yeah, for us, the prombench side is quite urgent, but it seems like this will not fit in so easily. Yeah, I think that may be almost a separate project from what y'all are needing there versus what this is; I think that one might be more appropriate to continue in the other ticket, potentially — the Prometheus ticket. Okay. Well, I think that was a good start; now we know where to submit issues, and we can take it from there, I guess. Awesome. Well, if there's not anything else, I think we can call this the end of the meeting. Got anything? Nope. Any other questions or comments? Thanks, everyone. Cheers. Okay, bye. Bye, everyone. See y'all next time. Cheers.