Hello, can you hear me? Yep, we hear you. Okay, so it's over to you. We have the slides up; please take us to the agenda. Great. Okay, so today we'll have a presentation, but before that I want to talk a bit about the working groups. Next slide, please.

So we've had five people vote for the SAFE/Security working group. I'm not really somebody who feels very comfortable talking about security, which is one of the reasons I haven't voted, although I think something like this is needed. Is there somebody else on the call today who has a TOC vote who would like to be the sixth vote, or at least say clearly what they're concerned about that we need to finish to get this over the line?

I would say I have not voted. I generally have concerns about working groups: what they're trying to achieve, what they have achieved, and what our mechanisms for oversight are. Since there have been a number of cases where disagreements have started in working groups, I question whether we have enough community management to properly manage an expanding number of working groups. Other than the serverless working group, which produced the whitepaper, it's not clear to me what the other working groups are doing. We have occasional brief updates, but I don't feel like they've been that reflective of what value the groups are actually providing to us.

Okay. Does anyone else want to add to that? This will be a good segue into the next topic, actually, about categories and working groups generally. Does anyone else want to say anything? Bryan, Quinton, are you online?
Yeah, I mean, I share those concerns. The thing that also concerns me is that it's unclear what authority we are investing in these working groups, and I become concerned when people feel that they've got more authority than they might have, or when other people, people not on the working group, are vesting it with more authority than we intend. So I've got all the same concerns.

Yeah, that has definitely come up in the past. Those would be things that we absolutely need to get more clarity on, especially if we're going to be creating more subgroups within the CNCF. Those subgroups will need some degree of responsibility and accountability in order to function well. And, as we saw recently with CloudEvents, we would also like more clarity around the projects coming out of the working groups: what their role is, and the expectations around that.

So we could try to discuss that today, or we could try to work it out offline. Brian and Bryan, what would your preference be? I don't feel comfortable asking you to vote if you have these doubts, and I'm not going to vote until we've got a bit more clarity. So what would you like to do? Should we carve out a working session on this and try to plan it out, or is it a face-to-face topic? What do you think?

Yeah, I think planning some meeting time to discuss this, so we can prepare ahead of time, and so the people who have been more involved with some of the working groups can bring to the rest of the TOC what they feel the value is that the working groups are delivering, and have an opportunity to address the concerns or propose changes to how we provide oversight, or don't, to the working groups. I don't know if we have all of the working group sponsors on today. I don't see Brendan, but I think Ken is on for networking, and Quinton is not on. Are you on, Quinton?
Okay, yeah, Ken and Camille are on, I guess. You know, the working groups have charters that are fairly broad and vague in general, and this one is no different. I'd be more comfortable with more specific goals. I don't know if we just plan for these groups to exist forever as we expand the scope of the CNCF, fracturing into sub-areas or something like that; I think that's worth figuring out. But other than the one whitepaper that was produced and the CloudEvents project that came out of the serverless working group, it's not really clear to me what the other working groups are providing to us. And I think we have had some examples where the working groups have churned more than we would like, where it feels less productive than they, or we, would like.

Yes, right. Okay, well, this definitely ties into the next topic, so let me take you through that, because I think that will be useful. Let's go over to the next slide, please. Let's just skip this slide: project presentations, go check them out. Skip the next slide too, please.

Okay, so this is the categories topic, which overlaps with working groups. This is trying to figure out a way to make the CNCF project universe a little more manageable and a bit clearer, and to have more focused efforts around the sub-areas. I talked about this a couple of meetings ago, and there was some good feedback on it. This is a very slightly tightened-up proposal.
It's not a fully written-out document, and I do think we need that, but I wanted to offer this as another update and try to figure out how we're going to break this off and make it work. Because if we don't do something like this, I think we're just going to end up with a massive sprawl of projects ranging from large down to small.

So the proposal is to have a concept of a category. A typical category might be observability or security, or it could even be a very large project such as Kubernetes: it could be its own category, with affiliated projects around it in the same category. The things associated with that category could include a landing page on the CNCF website, which describes the category, what's in it, and what it's for; a supporting whitepaper along the lines of the serverless whitepaper; and a supporting landscape along the lines of the serverless landscape.

Then the second point is a mechanism for essentially federating the TOC. So what could that look like?
First, the sandbox projects attached to a specific category could, when they get regularly reviewed, be reviewed within that category, because I think that would create greater focus, and it would also save the TOC from doing a lot of that kind of work. Then have a quarterly and an annual check-in with the TOC. As a small example, I've suggested a quarterly check-in could be a 15-minute presentation from the working group running the category, and an annual check-in might have a more formal structure, with a document of what we've done this year and what we'd like to do next. (You'll see the answer to your question is yes, on the next slide.)

Another thing that might happen is that we could delegate some of the work supporting technical marketing to the category leads and the members of the category: user guides, talking to analysts, talking to other people. I don't know if this is a good or bad idea, but I do think there is work to be done here that could be done in a different way.

So, Alexis, can I ask: what's the problem we're trying to solve here?

Well, I'm worried that we're just going to have a ton of projects and the CNCF TOC is going to be completely overwhelmed.

Okay, which is fair. Maybe it's worth exploring that a little, and what some of the alternatives are, because I've got a lot of concerns. I mean, you've got domains where you've got a couple of different approaches, where there's been a lot more
heat than light, I think, in some of the public discussions anyway. And I would be very concerned that we would be recreating some of the problems we saw where the working group model didn't work, where you effectively have a battlefield for belligerents without any sort of control or oversight, and everyone believing that they need to fight a battle to the death because that working group will decide the future of that particular domain. I don't know that that's going to be productive, and I think it's definitely contrary to what we want: to avoid being kingmakers. I'm concerned that if we federate ourselves, we are effectively saying that the job of kingmaking is so exhausting that we're going to delegate the kingmaking out, which is to me a recipe for this kind of federated conflict.

Those are all good points. The TOC should remain the decision-maker on project acceptance and fundamental graduation issues, but I think there is a benefit to trying to channel the community's desire to contribute into productive areas; otherwise the TOC could become a bottleneck. So the question is: what are the things that are really important for the TOC to do, and to be as objective, fair, and vendor-neutral about as possible, and what are the areas where
we allow for the fact that contributors who volunteer may have some kind of agenda?

One thing that would be helpful would be to have some of these domains operate the way investigative staff do in a legislative body, where they are tasked with findings of fact, to actually set out the landscape and help inform decisions that we might make. They're not trying to say which approach is best; they're just trying to say, here are the five different approaches that are out there in the observability world, say, and maybe talk about the trade-offs that each is making, in their own words. That, I think, would be helpful.

My concern... I mean, observability in particular. Maybe I've been spending too much time on Twitter, which is almost certainly the case, but I just feel like that's a good example of where there's been a lot of fierce combat from people who should otherwise agree about the abstract problem they're trying to solve, but who happen to work for different companies, so they're at one another's throats. I really, really don't want to create a venue for that under the CNCF banner. I don't want the CNCF to be a lightning rod for people's anxieties about the success or failure of their company or project, either. That would be very demoralizing.

But Bryan, what do you think are the good aspects of this proposal? You mentioned the investigative groups in the US legislative system. What else do you think is important here?
Because it obviously touches on the role of the working groups: trying to formalize what working groups actually do around these areas.

Yeah, I think it would be helpful if we can find some way to inform the TOC, because obviously we're not domain experts in everything, by a long shot, and there are a lot of these areas where there are subtly different approaches that vehemently disagree with one another, and it's hard for us to figure out what's actually going on. It would be very helpful to have documents that inform that. I guess my question is how. I think that is a good problem to solve; I just don't know how we can solve it without creating additional ones. That's the challenge. I don't have an answer. I think it's tough.

Okay. Well, I think if we create a bunch of subgroups, one thing that's definitely going to be required is actual, active community management to keep the peace. Someone mentioned, isn't that what we have a code of conduct for? But unless you have people who are very skilled at dealing with these kinds of interpersonal issues and inter-vendor arguments and disputes, things can escalate very quickly, despite some people's good intentions. Engineers aren't necessarily the best people to do this, especially if they're involved in the heated technical discussions themselves; you really need people whose role that is. We've definitely seen this explode in a few instances in the past, so I also share that concern.

To answer the question of what could be good about this: I do think we need some more structure.
I'm not super happy with us just reviewing projects as they express interest. I feel like we need to get more proactive, and we are bandwidth-constrained. So if we could say: look, we aren't going to accept any new projects in this area until we have a landscape overview and we actually understand what the significant projects in that space are; then we will ask them all to present, and we will decide which ones are a fit and which ones are not, according to our various principles and other criteria. If we don't do that, I just don't see how we scale this. The straw-man list of categories is so many categories, and it's already a ton of work. Frankly, it's already a reasonable amount of work doing TOC calls, looking at projects, voting, and keeping an eye on things; adding active working group commitments on top of that... At this point I feel like we are not scaling the TOC very well. I don't know if my fellow TOC members feel that way, but obviously each of us has a different ability to commit time to this process. The more fine-grained we break things down, the more useful it is in certain ways, but we just can't expect that there's going to be a TOC member who's going to be in it for each one of these categories, trying to mediate and provide advice to the other TOC members. It's just not scaling, I don't think, and I think that's a big problem that we're seeing right now.
I'm not sure. Breaking this down into categories is not a bad idea if we can figure out how to make it so that delegates can do the work, and not have it be TOC members. But it will be political if we do that, period, end of story. I mean, the fact that Bryan suggested that this is like legislative analysis: yes, and guess who's influencing them. The lobbyists are all over the place.

Yeah, that's a totally fair point. I totally agree, Camille, that we don't have the time or bandwidth to do this, and I also feel like adjudicating between two rival startups is something I've got zero interest in, even if it's a domain in which I've got expertise. Maybe especially if it's a domain in which I've got expertise. I think part of the problem is that we have implicitly been tasked with too much, in that there are too many people who believe that being a CNCF project means more than I think we think it should. As a result, they have created this kind of very high-stakes interaction that I don't think we want. That's part of my concern: we are under siege, in part because of the popularity of Kubernetes, which is great, and the popularity of a lot of these technologies. But the flip side is that everyone believes that being a five-pixel-high icon on a landscape website is going to make or break them. Of course it won't, but it's very hard to prevail when that belief is so strongly set.

This is Luis from Portworx. I think the main issue is that customers are making decisions due to those rankings.

Yeah, they're not. They're really not, sorry.

No, no. Maybe it's about you raising your next round, but customers...

No, no, it's nothing like that.
I'm not involved in that; I'm an engineer. But I'm just saying that we have seen customers make decisions because, they say, this project is part of CNCF. They're very clear about that.

Well, you know, Bryan, I have to add one on top of that: I also think there are plenty of people at big companies who are saying, my next promo is based on me getting my project in.

Yeah, I am a hundred percent sure that is happening.

Right, with our sandbox projects, right? So we've got a project explosion on our hands that's coming from startups who believe that this is a competitive advantage, and from people at big companies who are trying to get their staff engineers promoted to senior staff engineer, or whatever. So we are literally standing between them and their bonus, which is not where we want to be.

Yeah. So how do we change that perception? I don't think adding a blurb to the website, having some disclaimer saying, hey, even though this is on here, don't make your decisions based on that, is going to cut it. And I have to agree with Camille: just from the number of projects, trying to really do a technical analysis of each of them is impossible, at least coming from me, if I want to really look deeply into each one of these projects. It's not scaling. So how do we change the culture of what that means, and then how do we get enough coverage on these, so that if the stakes are as high as people are putting on them, regardless of what we say, we're actually doing enough due diligence for each one? Personally, and I'm only going to speak for myself, I don't feel like I'm doing a good job at that.

I mean, we're doing quite a bit of due diligence for incubation. We've deliberately lowered the bar for sandbox, right?
One of the benefits of the categories is to push the sandbox projects under each category, so they're even more out of the way: you're just in the sandbox for some particular category. The idea is that, whether it's three categories or ten, whoever is associated with managing that category, which I'd hope would be more of a community thing, would come to the TOC periodically with an overview of what's going on, so that the TOC's interaction with a lot of this noise could come through that signal. Of course, that then creates potential for politics, but I can't see another way of doing it without just increasing the number of voting members of the TOC, which brings another set of issues.

I don't think that we've successfully distinguished the sandbox from incubation for most people. I just looked through some press articles this morning, and none of them mentioned sandbox in the headline; many of them don't even mention it in the article, or if they do, they certainly don't explain it. So I think the sandbox projects are still getting more publicity than is warranted, based on our press releases. And, you know, we talked about how we lower the stakes. As we accept more projects, I am starting to hear from people that "CNCF will take anything now", which, again, is because they don't distinguish sandbox from the other projects. I don't think that's what we want either. So I don't know how we can more strongly brand the sandbox projects differently, but we're not succeeding at that right now.

Right. Okay, I don't think we're going to reach a conclusion today, so I'm going to call a halt to this part of the discussion. But I think this has been a good airing of the issue, and I do think it's very much around scalability and politics. Can we go to the next slide, please?
That was just basically a parsing of the landscape into some buckets. And then go to the next slide, please.

Wait, I have a question about the buckets. We have some buckets on the trail map and in the reference architecture; these don't look like they exactly line up with those?

No, this is based on the landscape rather than the trail map. I'm happy to use any other bucketing; this was, again, just a straw man.

Okay, yeah, I have no emotional attachment. Okay, thanks.

Okay, so let's move on to the presentation, because we've got 30 minutes left: Netdata. Do we have a speaker on from Netdata, please?

Yes, here I am.

The floor is yours.

Okay, thank you very much. So, thank you very much for giving me the opportunity to present Netdata. Let's go to the first slide immediately. I would like to give you a brief overview of the motivation behind Netdata: why people love it, and why we built it. I worked for the last ten years for a FinTech company; you know, financial transactions are quite demanding. We had a very simple SLA: 90% of the transactions have to be completed within one second, and the maximum duration for any successful transaction is five seconds.
We had a large infrastructure based on Hadoop and OpenTSDB, all this kind of Apache ecosystem, bundled with several commercial services; everything was hosted on Azure, etc. The problem was this: okay, we had a clear indication of when a transaction failed to meet the SLA, but beyond that we had nothing. So at the end of the day, we were forced to run console tools all over the place, almost permanently, to be able to troubleshoot performance issues.

Then we started developing this tool. Netdata was initially a very small data collector: it collected metrics at high resolution, per second, kept them in memory, and was able to visualize them using Google Charts and the like. We immediately understood something. Most companies consider it enough to have a time-series database where people can query the metrics and do stuff like that. We immediately figured out that this does not work. Why? Because people do not understand the metrics; they don't know the metrics. A developer is capable of understanding his own metrics, the metrics of his application, but beyond that he knows almost nothing. He needs the expertise of sysadmins, of network admins, of DBAs, etc. So we understood that, of course, we needed more metrics in order to find correlations between them, but we also understood that we needed a different visualization: one that could provide insights into the correlation of the metrics and how they relate to each other.
Let's go to the next slide. As time passed, we added more data collection mechanisms. Everywhere, we used the most resource-efficient approach: we want this thing to be very fast, very thin, and very resource-efficient. We also changed the format of the database. Instead of working on a bunch of opaque metrics, the way almost all time-series databases do, it is something different: the database carries the meaning of the metrics. Each metric has units, and calculations to convert the units; it is correlated with other metrics in charts; charts belong to families; families belong to an application, and so on. The idea is that we wanted to give users something they can browse immediately, without knowing anything beforehand. And we wanted to be open and embeddable, not to introduce yet another engine, another ecosystem, into the enterprise. Let's go to the next one.

So what is Netdata today? Netdata is a data collection agent. You install it on your machines. It has a round-robin database, so it caches a few hours of data, let's say. Data collection is all real time; everything happens per second. It can stream metrics between Netdata servers, so you can have a headless Netdata collector and then a master server, or a proxy server that receives and retransmits the metrics. It can archive its metrics to time-series databases like Prometheus, Graphite, OpenTSDB, etc. It has an embedded health monitoring watchdog.
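As an aside, the "database that carries the meaning of the metrics" described above can be sketched as a toy model. The class and field names below are invented for illustration; they are not Netdata's actual internal schema.

```python
from dataclasses import dataclass, field

@dataclass
class Dimension:
    """One metric inside a chart: a raw value plus the conversion
    needed to present it in the chart's units."""
    name: str
    value: float
    multiplier: int = 1   # unit conversion applied at query time
    divisor: int = 1

@dataclass
class Chart:
    """A group of correlated dimensions. The family groups charts
    into a dashboard section, so users can browse without writing
    any query."""
    chart_id: str
    title: str
    units: str
    family: str
    dimensions: list = field(default_factory=list)

# A plain TSDB stores only ("nginx.requests", 42.0); here the
# presentation metadata travels with the value:
chart = Chart("nginx.requests", "Requests", "requests/s", "nginx")
chart.dimensions.append(Dimension("requests", 42.0))
```

In a model like this, a dashboard can be rendered directly from the stored objects, which is the point the speaker is making about browsing metrics without understanding them first.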
The health watchdog is lock-free, has statistical algorithms, and supports rolling windows and all that kind of stuff in order to evaluate alarms and generate health monitoring events; and it can dispatch these events, of course. Then the API. The API is pretty simple: it is five calls, all of it. But on top of this API we built a visualization engine as a static web app: just HTML, CSS, and JavaScript that queries the API and presents the whole dashboard. Let's go to the next, please.

So that's the history: we released it in March 2016, two and a half years ago. Here is what we built through these two years. Today we have 32k GitHub stars, about a million unique users, about half a million unique nodes monitored, 24 million Docker pulls as of today, a lot of contributors, and the community is growing by 2k new unique users and 1,000 new nodes per day. Next one, please.

Collecting metrics at per-second resolution is tough. Here is an animated chart of a busy cloud VM, where you can see that data collection latency on the proc filesystem randomly reaches as much as a hundred milliseconds. This is a problem, because it introduces about a ten percent error, randomly, in the data we collect. Netdata does this right: it measures the collection latency for each proc file separately, so that the interpolated data are always accurate in the end. Let's go to the next one, please.

Data collection in Netdata comes as internal plugins, written in C, which run as threads inside Netdata, and external plugins, which can be written in any language you like. Of course, we wanted to simplify things, so we support data collection orchestrators; the main orchestrator is written in Python. We will see it later.
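The latency correction just described amounts to interpolating each late sample back to the exact per-second boundary. Here is a rough sketch of the idea in Python; this is a simplification for illustration, not Netdata's actual C implementation.

```python
def value_at_boundary(prev_t, prev_v, now_t, now_v, boundary_t):
    """Linearly interpolate a sample that arrived at now_t (possibly
    tens of milliseconds late) back to the exact second boundary,
    so collection latency does not distort per-second rates."""
    fraction = (boundary_t - prev_t) / (now_t - prev_t)
    return prev_v + (now_v - prev_v) * fraction

# The sample meant for t=2.0s arrived 100 ms late (t=2.1s) with
# value 220; the previous sample was taken at t=1.0s with value 100.
v = value_at_boundary(1.0, 100.0, 2.1, 220.0, 2.0)
# v estimates the value at exactly t=2.0 (about 209.1 here)
```

Measuring the read latency per proc file, as the speaker describes, gives an accurate `now_t` for each source, which is what keeps the interpolated series honest.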
The Python orchestrator has a lot of modules, but we support all the other languages too. Let's go to the next. These are the collection slides, just to give you a brief overview of what Netdata does: it collects a lot of information from /proc. Let's go to the next one.

This one is nice, because it collects the containers via cgroups (control groups). So, from the host, it is able to collect resource utilization metrics for the containers of any orchestrator or container manager. As a side effect, this is also how it supports VMs (libvirt and QEMU VMs) and systemd services. Two things are missing from cgroups. One is the networks attached to each container: for this, it queries the network stack of the system to find out the network interfaces attached to each container. It also has another helper that queries Docker, or Kubernetes, or whatever, to find out the name of the container. Let's go to the next one.

It has an internal statsd server, a high-speed statsd server, and because Netdata is distributed, installed all over the place, this means that all applications send statsd metrics to localhost, which is faster and follows the availability of the server, let's say. And because we require the metrics to have some metadata attached to them, we support synthetic charts: although the statsd protocol supports just a bunch of bare metrics, with synthetic charts you can have patterns that assign the metrics to charts, applications, families, etc. Let's go to the next one.

More plugins: netfilter, quality of service, enterprise hardware via IPMI; and it runs on FreeBSD and macOS. Let's go to the next one. The interesting ones are near the end, so I'm trying to go fast.
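The statsd wire format the embedded server accepts is plain text over UDP, one metric per line. A minimal client sketch follows; the metric name is made up, and 8125 is the conventional statsd port rather than anything this transcript confirms.

```python
import socket

def statsd_line(name, value, mtype="g"):
    """Format one statsd metric: "name:value|type".
    Common types: g = gauge, c = counter, ms = timer."""
    return f"{name}:{value}|{mtype}"

def send_statsd(name, value, mtype="g", host="127.0.0.1", port=8125):
    """Fire-and-forget one metric over UDP to the statsd listener
    running on the same node."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(statsd_line(name, value, mtype).encode(), (host, port))

# Every node runs its own Netdata, so applications always send to
# localhost (hypothetical metric name):
send_statsd("myapp.checkout.duration", 187, "ms")
```

The synthetic-chart configuration the speaker mentions is what maps name patterns from these bare lines into organized charts, applications, and families on the receiving side.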
This apps plugin is very important, because it processes the whole process tree every second, all of it, even inside the containers, from the host. Unlike all the other tools we have seen so far, it has a unique feature: it can break down the total CPU utilization of the system across the processes running. It does this by also using the utilization of the exited children of a process. So if you have a shell script that spawns hundreds of commands per second, each command running for a few milliseconds, this plugin can correctly allocate the total CPU utilization, and not only CPU but the total resource utilization, of the shell script to that process. Let's go to the next one.

This is the Python plugin. We collect databases, web servers, queue engines, many plugins. Three are of special interest. The web log plugin tails web server access logs or proxy log files and turns them into real-time metrics. The httpcheck module runs HTTP checks against localhost or remote web servers. And the portcheck module examines remote TCP connections. These are written in Python. Actually, I think we have reached the limit with Python, due to its locks.
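The attribution trick described for the apps plugin relies on the kernel folding a reaped child's CPU time into its parent's cutime/cstime counters. Here is a toy model of that accounting, with invented data structures rather than real /proc parsing:

```python
def total_cpu(procs, pid):
    """CPU attributable to `pid`: its own utime, plus cutime (the
    accumulated CPU of its already-exited children), plus a
    recursive walk over children that are still running."""
    p = procs[pid]
    return (p["utime"] + p["cutime"]
            + sum(total_cpu(procs, child) for child in p["children"]))

# A shell script (pid 100) spawned hundreds of short-lived commands;
# their CPU was folded into its cutime as they were reaped. One
# child (pid 101) is still running.
procs = {
    100: {"utime": 2, "cutime": 450, "children": [101]},
    101: {"utime": 30, "cutime": 0, "children": []},
}
assert total_cpu(procs, 100) == 482  # 2 + 450 + 30
```

This is why a sampling tool that only sees currently-running processes misses the script's real cost, while an accounting walk like this one does not.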
Python has some issues when too many things are done in parallel, so we are moving almost everything to Go. Let's go to the next one, please.

In the Node.js plugin we have, among other things, an SNMP collector. We have a bash plugin for someone who wants to quickly collect metrics with a few shell commands, and an fping plugin that collects latency, jitter, and packet loss against remote endpoints. Let's go to the next one.

Metric streaming. Each Netdata can run in one of these modes. It can do autonomous monitoring, which is the default installation. It can be a headless collector or a headless proxy; when we say headless, we mean that Netdata does not listen on an API: the API does not exist, there is no database, there is nothing. So: autonomous monitoring, headless collector, headless proxy, store-and-forward proxy (this proxy can also expose the metrics via the API), and the master, which collects everything together. The whole idea is to build hierarchies of Netdata. In the images on the right, the top image shows a setup with ephemeral nodes that push all their metrics to a Netdata master, which visualizes and does everything for the ephemeral nodes. On the lower right you can see a setup with three teams, where two teams are each responsible for a part of the infrastructure and the third team has full visibility of everything. Let's go to the next one.

Metric archiving. Netdata can send all the metrics it collects to these backend time-series databases, and it can send them in two formats. "As collected" is what all other collectors normally do: it does not process the metrics, it does not interpolate them, it does not do anything to them; it just receives them and sends them on to the time-series database. And "average" is where it exports the metrics as stored in its own database.
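The two export modes can be sketched like this; it is a toy model, not Netdata's exporter code. "As collected" forwards raw samples untouched, while "average" sends one mean of the stored per-second values per push interval, lowering the write rate on the backend.

```python
def export_as_collected(samples):
    """Forward every raw sample untouched, one point per value."""
    return list(samples)

def export_average(samples, every=10):
    """Send one averaged point per `every` seconds of per-second
    data, so a node with ~2000 metrics does not overwhelm the
    backend time-series database."""
    return [sum(samples[i:i + every]) / every
            for i in range(0, len(samples) - every + 1, every)]

per_second = [10.0] * 10 + [30.0] * 10   # 20 s of one metric
assert export_as_collected(per_second) == per_second
assert export_average(per_second, every=10) == [10.0, 30.0]
```

With `every=10`, a node pushing 2,000 per-second metrics writes 200 points per second to the backend instead of 2,000, which is the frequency-lowering trade-off the speaker goes on to describe.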
Those exported averages are interpolated metrics: everything is gauges, presentation-ready, with the right units. And because Netdata collects thousands of metrics per node (the typical setup is about 2,000 metrics per server, or per container, let's say, everything per second), in order not to overflow these time-series databases, it can lower the frequency at which it pushes metrics. Let's go to the next one.

Health monitoring. Health monitoring is an independent thread within Netdata. It is lockless, actually lock-free. It supports statistical algorithms, rolling windows, and the like. It also supports alarm templates, so you can say: okay, I want this to happen for all my network interfaces, all my disks, all my MySQL servers, or all my containers. So it is templatized, and there is a mechanism for sending notifications to a lot of destinations: Slack and email and web notifications and all that kind of stuff. Let's go to the next one.

What I didn't mention about health monitoring is that Netdata comes with hundreds of alarms pre-configured. You just install it, and immediately it will tell you if it detects any anomaly on your system.

Meaningful presentation. As I said, meaningful presentation was very important, because most people do not understand the metrics; they don't have a clue what the metrics mean. So what we did is that, out of the box, you get a presentation like this. Everything is very well organized; on the right of the slide you can see the main menu of a dashboard. Let's go to the next one.

The dashboards Netdata provides are optimized for visual anomaly detection. We have done a lot of work on that. You can see, for example, that you can mark an area, and once you mark it, all the other charts are marked too; all the charts act like one. So you do something on one chart,
It happens on all the charts; there is no point in going one by one. Let's go to the next one. You can see all this if you navigate a Netdata dashboard.

Snapshots: the snapshot system is a nice feature for when you want to open a ticket. For example, we had to open tickets to hosting providers or vendors, and we wanted to share the metrics with them somehow. So what Netdata does is allow you to save a snapshot. The snapshot is a download including everything the dashboard includes, for the visible time frame. You just pan and zoom the charts to the time frame you like, you press a button, and a snapshot is saved; it's a file on your disk. This can be shared with third parties or attached to incident support tickets. When loaded back, you get exactly the same view you had when you saved it, with all the data at the desired resolution. The snapshots are not uploaded to any server; this is purely a web browser feature. The data is loaded in your browser, on any Netdata server. So you go to a Netdata server, you say "import the snapshot", and it loads it, even if it comes from another server. Let's go to the next one.

Embeddable visualization: this allows Netdata to integrate its charts into any third-party app, any web page actually. Here you see four web servers from all over the world on a Confluence page. This is funny, because when we developed this at the company I work for, we had a QA team that had collected all the errors that could happen in production. They put them in a table on a Confluence page, let's say. Then the DevOps guys saw that page and said: OK, I can add a chart for each of these errors. And they did it.
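Embedding like this is done with Netdata's dashboard.js; a sketch of the idea (the host name and chart ID are placeholders, and attribute names may differ across versions):

```html
<!-- load the charting library from any Netdata server -->
<script type="text/javascript"
        src="http://your.netdata.host:19999/dashboard.js"></script>

<!-- a self-refreshing chart embedded in an ordinary web page -->
<div data-netdata="web_log_nginx.response_codes"
     data-host="http://your.netdata.host:19999"
     data-chart-library="dygraph"
     data-width="100%"
     data-height="180px"
     data-after="-600"></div>
```

Each such `div` becomes a live chart showing, in this case, the last 600 seconds of data; this is how sparkline-style charts can end up inside an ordinary wiki table.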
They added sparkline charts right in the table. Then the operations guys saw that and said: OK, look, this should not happen, this error should not happen; I can talk to the banks and try to solve them. Suddenly they started attaching Jira tickets to it, with the communication we had with the banks. So three different teams, out of nowhere, in a few days, managed to build a very powerful tool to examine, troubleshoot and, you know, scrutinize all the errors the infrastructure had. It was very nice; I loved it, I find it magical. OK, let's go to the next one.

Comparison: this is a typical setup. A user runs a number of nodes, needs automatic health checks and alarm notifications, and wants a visual overview of the performance of their systems and applications. Let's go to the next one.

The traditional model is that you have a number of data collectors on each of your nodes; usually you need many. All these data collectors have to push the metrics in real time to a time-series database. But because this database is a bottleneck and needs resources, you have to cherry-pick the metrics and not push them too frequently; you try to be gentle with it, because doing it at full resolution will be expensive in the end. This database, either itself or via a third-party engine, will send alarms; most probably you will have to build each alarm one by one. And to visualize the data, you have to learn a query language and a visualization tool to get some dashboards. But in the end these dashboards are usually too poor for real-time performance monitoring and troubleshooting. So yes, of course they give you an overview, but most probably they are not good enough for visual anomaly detection or for troubleshooting the underlying performance issues. Let's go to the next one.

This is the simplest model, the simplest form of Netdata.
You take what you have and just install Netdata on it; that's it. Every node is autonomous. If you have a time-series database, you can stream your metrics to it. You immediately receive health notifications, and you get real-time, interactive, automated dashboards that visualize every single metric and are perfect for visual anomaly detection. And of course you do not have to learn a query language for any of that. So: no additional resources. Let's go to the next one.

This one is a setup with ephemeral nodes. With ephemeral nodes, you know, you don't know the URL of the server; which server should you connect to? So in this case, what we do is install Netdata at the master node of Kubernetes, let's say, or somewhere the ephemeral nodes can reach, and all the ephemeral nodes, with the same configuration, push their metrics to it in real time. The permanent node, Netdata in this case, is self-cleaning: if hundreds of ephemeral nodes have been spawned, it knows to clean up after some time; you define all this kind of stuff. Again, this is a lot more cost-effective: you actually get real-time performance troubleshooting and visual anomaly detection at no additional cost. Let's go to the next one.

So the bottom line of Netdata today is that we believe it changes the economics of monitoring. It uses only already-available resources; we never require dedicated resources from users. It requires minimal human skills: you just install it, and all the expertise and all the knowledge is embedded in it, so it will guide you, it will drive you. And we have automated everything that can be automated; of course, there is a lot more work to do. In contrast with other solutions, Netdata has been designed for sysadmins, DevOps people and developers. So it's not a data scientist's tool.
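The ephemeral-node setup described above is driven by Netdata's stream.conf; roughly like this (a sketch: the destination and API key are placeholders, and key names may differ by version):

```
# stream.conf on every ephemeral node: push everything to the master
[stream]
    enabled = yes
    destination = netdata-master:19999
    api key = 11111111-2222-3333-4444-555555555555

# stream.conf on the master: accept nodes presenting this key
[11111111-2222-3333-4444-555555555555]
    enabled = yes
    default history = 3600          # keep one hour of per-second data
    health enabled by default = auto
```

Because every ephemeral node ships with the same static configuration, nodes can come and go without anyone registering them; the master is the only URL they need to know.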
Netdata is not something a data scientist queries to do analysis; the idea is to have a tool that improves operations immediately. You just install it: immediate results. Let's go to the next one.

This is what we have on our roadmap. We want to improve the internal metrics database: today we need about 2.2 bytes per point, and we believe we can go below one. We need an improvement of the streaming protocol to support active-active clusters (today Netdata supports only active-passive clusters) and to make it more robust for IoT cases where the network is not reliable. We want to integrate with service discovery tools, port Netdata to Windows, add automated anomaly detection, support annotations, and make many, many dashboard improvements. We want to build custom dashboard editors directly into it, so you just point and click and nothing more needs to be done. Let's go to the next one.

So, Netdata in the CNCF ecosystem: we believe Netdata is a smarter data collector, a superset of most other data collectors. It provides high-resolution metrics, it is lightweight, all the things we said. So we strongly believe Netdata can be useful across the whole CNCF observability ecosystem, providing real-time troubleshooting, and we believe it simplifies data collection a lot. Let's go to the next one.

Today it is GPL v3.
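To put the database numbers above in perspective, here is a back-of-the-envelope calculation, assuming only the figures quoted in the talk (about 2,000 metrics per node, one sample per second, and the stated bytes-per-point values):

```python
# Rough storage footprint of Netdata's internal metrics database,
# using the figures quoted in the talk (illustrative, not measured).

def db_size_bytes(metrics: int, seconds: int, bytes_per_point: float) -> float:
    """Bytes needed to retain `seconds` worth of per-second samples."""
    return metrics * seconds * bytes_per_point

DAY = 24 * 60 * 60  # seconds in a day

current = db_size_bytes(2000, DAY, 2.2)  # the engine as presented
target = db_size_bytes(2000, DAY, 0.9)   # the "below one byte" goal

print(f"current: {current / 1024**2:.0f} MiB per node per day")
print(f"target:  {target / 1024**2:.0f} MiB per node per day")
```

So a day of full-resolution history for one node costs a few hundred MiB today, and the roadmap target would cut that by more than half; this is also why the default setups keep history short and rely on streaming or archiving for longer retention.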
We have created a company, and all the contributors have assigned their copyright to it, so today 99% of Netdata is owned by Netdata Inc. We will collect the remaining one percent, and then we can change the license to Apache 2.0.

So, that was the presentation. What we want from CNCF is mainly everything on your list of services; I mean, we need help in all areas to grow. That's it. Thank you.

We're very low on time, but a couple of minutes of questions; I'll start. You mentioned that it handles ephemeral systems. Have you got any examples where you're managing Kubernetes specifically, and the kind of container and microservice environments that epitomize cloud native?

Yes. The whole idea is that you install Netdata, which is very simple, on the hosts of all the Kubernetes nodes, not in the containers but on the hosts. Once you do this, then, even for the ephemeral ones, you can have Netdata stream metrics with a static configuration to whatever your master is. Do you want a demo of this setup?

Anyone else got a question? How many nodes do people typically monitor with Netdata? I mean, obviously there are many nodes being monitored with Netdata, but how many do people tend to monitor: in the hundreds and thousands, or in the tens and hundreds?

Well, I can't answer that; I don't know. I know that there are a lot of people that collect a lot. Actually, because Netdata is distributed, you install it on all your servers and it doesn't affect them; it is just about one percent CPU utilization of a single core. So it's negligible; it does nothing to the server. Once you do that, then, whenever you need them, you will immediately get alerts.
Alerts are something you will get wherever you are, on all the channels you like. But at the same time, if a server gives you an alert, it will also send you a link to connect to that server and see what is happening there. The whole idea is that Netdata scales infinitely, because we do not require the metrics to be centralized; the metrics are kept inside the servers. Of course, you can centralize them if you want longer history than, I don't know, 24 hours, but this is not required. So you can just install Netdata on hundreds of thousands of servers and nothing will happen; it will have no impact on anything. Is that clear? Yep, thank you.

OK, we're out of time. Thank you very much, Costa, for the presentation. And thank you, Matt; I just saw that you posted about the earlier discussion on the TOC list. Please, I want to solicit feedback on this topic of categories, scalability, projects and working groups. It's very clear that, if the CNCF is to grow sustainably, we need a solution to the fact that there are only nine TOC members, and Camille is not the only one of us who is time-poor; I think we are all time-poor. We need to come up with a good solution that goes beyond what we've done so far. So thank you very much, everybody. Talk to you soon. Bye, bye. Thanks. Bye.