We're live once again, ladies and gentlemen. Good morning, good afternoon, good evening, good night. My name is Michael Waite, and we're here for another episode of the OpenShift Commons briefings, Operator Hours. We have Yair Cohen from Datadog. He's a product manager, and we're going to be talking about containers and ecosystem trends. How are you this morning?
Hey Michael, good, thanks. It's great to be here.
We're both East Coast. You're in New York, in Brooklyn, right?
Right. It's no longer snowing here.
When we were trying to do the dry run for this conversation, you were down in Texas somewhere, and we had to reschedule a couple of times. What happened down there?
Yeah, I decided to take a short trip after many months at home. I wanted to get away from the freezing cold of New York, so I made it all the way to Texas, and got stuck in another storm. So much for being in the desert.
Where'd you go in Texas?
I was in Austin for the first time. It's a really great city.
I've been down there a couple of times; DockerCon used to be there, and I think they held KubeCon at the convention center one time. Long enough to visit Elon Musk? How long did you get stuck there for?
That was a week. My flight was canceled three times, and I got lucky on the fourth.
Oh, wow. You could maybe have just rented a car or something to get out of there.
Yeah, I considered all the options.
Well, we're glad you made it back.
Thank you. I hope everyone who lives in Texas now has power and water back.
Yeah, Red Hat just donated, I think, ten or twenty thousand dollars to the American Red Cross to help out down there.
Oh, that's amazing.
Yeah. So you work at Datadog. Tell us about Datadog.
It's certainly one of the sexier software companies out there these days. Tell us about Datadog.
Yeah, well, I work at Datadog, which is a SaaS monitoring and security platform. We really try to be container-first and cloud-agnostic. It's been a pretty exciting journey so far. The company was started almost ten years ago; I joined a year and a half ago, and every year we're releasing new products, now focusing a lot on security as well. It's been very exciting, given all the changes happening with the digital transformation that was accelerated by an order of magnitude over the past year, as well as all the modern cloud-native technologies that are appearing, which is more my focus. We're really trying to break down the silos between teams, make the experience of running applications more efficient and more effective, and bring everybody together: devs, ops, and security analysts, as well as business decision-makers.
You know, when I asked you the other day, I was like, oh, okay, so Datadog, that's APM, right? That's application performance monitoring. And you're like, no, no, Mike, it's much, much more than that. You folks are pretty diversified as far as the types of software you're making these days, right?
Exactly. Datadog started as a monitoring platform focused on infrastructure monitoring and metrics, a time-series database. Later on we launched our log management product as well as application performance monitoring. But Datadog is really a platform, a unified platform. We make it easy to correlate between all these types of telemetry and jump from one section to another. They're not really different products.
It's a unified experience, and over the past few years we've continued to launch new products, such as network performance monitoring, continuous profiling, security monitoring, compliance monitoring: all things monitoring.
So I work with a lot of software companies who certify their products and offerings on Red Hat Enterprise Linux, Ansible, OpenStack, OpenShift. It seems like there's no shortage of companies who do what you do out there. What makes you folks better than any of the others?
Yeah, I think the last year, with the unfortunate events of COVID and all these difficult times we're experiencing, made me, and I think a lot of our customers, understand where we are unique in the industry. In order to move fast in this world, and in order to transform your organization and your business faster, you just need fewer tools. You want to work together with the same view, and that's, I think, what sets Datadog apart: the ability to scale really fast and adjust quickly and seamlessly to your business needs, whether you scale up, or sometimes down in times like this, and bring everybody together without difficult permissions or deployments, or cases where you need to hire more people. One of the main benefits of Datadog is to take away some of the complexity of running applications in the cloud, and some of the complexity of monitoring them. For example, with cloud-native technologies and ephemeral infrastructure, traditional tracing, logs, and metrics solutions quickly became inefficient. With Datadog we really put the focus on the developers, on being container-first and cloud-agnostic, and we allow our customers to run on any runtime, on any type of infrastructure, cloud and on-premises and so forth, using the same tools, the same agent, the same platform.
What is it about Datadog that makes it easier for people to manage their cloud environment? Meaning, if someone stands up an OpenShift environment and they don't have something like Datadog monitoring it, what's the experience like, as opposed to when you folks are involved?
Yeah, that's a great question, Michael. First of all, most companies are still on this journey to the cloud, the digital transformation to modern architectures and modern cloud stacks, and that journey is what we focus on the most. Traditional solutions do not, or I think still do not, support both legacy and modern cloud environments. With Datadog, you basically use the same agent and the same platform for all your infrastructure, your whole stack: both your legacy, on-premise environments and your cloud ones, regardless of the runtime you're using. Just getting started with Datadog and being able to get everything together is where we really shine; we really focus on the experience. Instead of moving between different monitoring tools when you do this migration from legacy or on-premise to the cloud, you have the ability to monitor and measure your performance as you're doing this journey, as you're lifting and shifting your architecture, as you're going to multi-cloud and hybrid architectures, with the same tool. I don't want to give specific examples of traditional tools yet, but they usually targeted one type of stack, either the legacy or the cloud, but usually not both. So the focus is really the user experience.
And so it's a SaaS offering. So how does that work?
You folks have an infrastructure back end in your NOC, in your data center, and there's just an agent that people set up and run on every pod, or every node, or every server? How does that work?
Yeah, exactly. First of all, Datadog is a SaaS platform, like you said, for monitoring and security, which means that all your monitoring telemetry and data is sent to our cloud. We run on multi-cloud, so it's not just one cloud; we have data centers in different regions of the world, running on Google Cloud, on Azure, and of course on AWS. And we have two types of integrations. We have agent-based integrations, which use that unified single agent that runs on any runtime and collects metrics, logs, and traces from within your workloads, containers, and hosts. And we also have web and cloud integrations, which directly fetch data from cloud providers and different technologies, mainly SaaS and PaaS, using public APIs. We have more than 400 integrations, and every time I check we're adding a few more; that's part of what I work on with my team. We basically make it really easy, in a single page, which I'll definitely demo later, to add more and more integrations into your Datadog platform.
So you're going to provide a demonstration here, and then we're also going to talk later about the container survey that you folks put out every year, right? How many years have you been doing this container survey now?
Oh, that's a good question.
I think it's about five years. And if I can add something: what's unique about this study, I think, is the fact that we're really relying on real data. We're trying to give our customers, and anyone in the community, visibility into the latest container trends we're seeing across more than a billion and a half containers and tens of thousands of customers.
Yeah, I was going to say it's probably five years. You must have a tremendous amount of telemetry information about the apps.
Yeah, that's an understatement.
Isn't that almost an unfair advantage? You have so many customers that you're managing and monitoring, you must have your fingers on exactly what's running where, when, and how, right?
Yeah, exactly. As a product manager, my colleagues and I try to build Datadog using a data-driven approach, making mindful decisions based on what we're seeing our customers use and where they are. So that really helps; it's a huge advantage. But also, you know, a great responsibility: we need and want to stay ahead of those trends. To give an example, our own move to Kubernetes started a few years ago, before Kubernetes was even very popular, and about a year ago we got to the place where we're running 100% of our workloads on Kubernetes. So taking risks, betting on new technologies, being there before our customers so we know the practices, that's another thing we do.
What was it running on before, if it wasn't running natively on Kubernetes? Was it originally just written for bare metal or something?
Yeah.
We were running monolith applications, then Docker containers, and now we're running all those containers in an orchestrated environment. We have a multi-cloud architecture with a pretty robust Kubernetes platform that we built ourselves. It allows running physical Kubernetes clusters on bare metal, and by bare metal I mean on cloud VMs, and developers can create their own virtual Kubernetes clusters within those physical clusters, so you can think about it like child and parent. And that, I think, really amplifies the benefits of orchestration, which we focus on a lot in the report we'll talk about later. Orchestration abstracts the complexity of the cloud away from users. With a container-first approach, which we're also taking at Datadog, one of the main benefits is that you can run containers everywhere: you can move them anywhere, you can run them in multiple places, and Kubernetes is one of those technologies that abstracts that complexity for you and lets you move containers anywhere.
Okay. Well, we're happy to have you on the show here today. I was expecting Ilan to be on, but he must just be a little too busy; he's your VP of marketing, and he's usually the guy we work with. We've been working with Datadog for many years now to make sure that your software runs, and runs well, with OpenShift and our other products. You guys have your annual conference in New York, Datadog Dash, right?
Yeah, we have Dash around the summer. It's usually the most exciting event for our company, where we announce a lot of new products and features and invite our customers to try them out, hear more about them, and talk with us.
Of course we weren't there this year, because of the challenging times we're all working in, but I was down there last time, in New York on the waterfront by the piers, and I've got to tell you, it was really impressive seeing how many customers were there, and the excitement around Datadog, the platform. And, you know, I mentioned this to your marketing people the other day, but it was also probably one of the best trade shows I've been to. The food was phenomenal. It wasn't the little mini sliders that are cold with the hamburger bun stuck to the patty; it was a really well-run event. I'm hoping we're going to be able to get back to in-person events again, and I can't wait to head back down. Are you guys planning on having it in New York again the next time we can all travel and go?
Of course. Hopefully this will be this year, but it doesn't look like all of us will be able to do that. As soon as things get back to normal, I'm sure those events will return.
That's because your headquarters is in New York City, isn't it?
Sorry, what? Yeah. Like last year, we had a great virtual Dash. I think it was a very unique experience for all of us at Datadog, and it was a pretty successful event, even though, unfortunately, we couldn't hand out swag and delicious food to our guests and users.
All right.
Well, anyway, I was talking about Ilan, and the reason we have you folks on the show today is not because you're just some random company. We consider Datadog and the services you offer a pretty key workload for helping our customers be successful running Kubernetes, and specifically OpenShift, in production environments. You folks have a Red Hat certified container, you have a Red Hat certified Operator for OpenShift, and I think you're available in the Red Hat Marketplace as well. And you're a member of our OpenShift Commons community, which I know Diane Mueller is very excited about. I don't know if anybody who's listening has had an opportunity to meet Diane, but she's probably one of the most amazing people I've had the pleasure of working with. She's responsible for the OpenShift Commons program and all of its events, which are held all over the world; if anyone ever has a chance to attend one of the Commons briefings, they're pretty terrific. Anyway, we have a demonstration that you're going to show us: what it is, how it works. Do we need a drum roll, or what do we need to set the stage for what you can show us here today?
Yeah, I can just go ahead and give you a quick demo. I'll go ahead and share my screen. Are we ready?
I think we're ready to roll.
That's great. Just give me one second; I'm going to start sharing the screen. Can you see it all right?
I can see your screen.
Cool. So we'll do a quick demo here. Again, for those who haven't seen it before, Datadog is a SaaS monitoring and security platform that combines your metrics, traces, and logs in a single place, to enable visibility across any kind of stack for all teams and stakeholders.
This means that everyone, devs, ops, and security teams, is able to break down silos and collaborate more efficiently. What we're looking at here is a dashboard, used to bring in critical data such as metrics, logs, and traces across your environment in a single view. This specific dashboard is for our demo app, Shopist, which we'll use for this demo; it basically powers an e-commerce retailer that we've set up. As you can see, it's showing key information about our applications, such as system health and uptime, and some more advanced things like synthetics tests, network performance, and real user monitoring. You asked me before, Michael, about our integrations, right? So we have the agent, and we have cloud integrations; you can see the 400-plus integrations on the screen, and each of them can be set up in very few steps. Our agent supports Red Hat OpenShift as well as all the other types of infrastructure and runtimes. It's a single agent that can usually be deployed in one or two steps. Once you set up some integrations in your environment, each of these integrations comes with out-of-the-box dashboards. For example, this is one of our many out-of-the-box dashboards for Kubernetes, where you can see an overview of your Kubernetes clusters, and OpenShift as well. Once your integrations are set up, you can also take a look at your infrastructure. Here, for example, we're seeing all the hosts and VMs. We can use tags to group them and slice and dice, for example by cloud provider and availability zone, and choose any metric to color them. For example, we can choose user CPU, notice that one instance here is pretty busy, and drill down to understand what is running on it and what might be the cause. If we want to switch to containers, we can also take a look at all our live containers, including our Kubernetes and OpenShift workloads.
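For readers following along without the screen share, the "group by tag, color by metric" idea from the host map can be sketched in a few lines of plain Python. All host names, tag keys, and metric values below are made up for illustration; this is not the Datadog API.

```python
# Hypothetical host inventory, mimicking the tag-based grouping shown in the demo.
hosts = [
    {"name": "i-01", "cloud_provider": "aws",   "availability_zone": "us-east-1a", "cpu_user": 12.0},
    {"name": "i-02", "cloud_provider": "aws",   "availability_zone": "us-east-1b", "cpu_user": 91.5},
    {"name": "vm-1", "cloud_provider": "azure", "availability_zone": "eastus-1",   "cpu_user": 33.0},
]

def group_by_tag(hosts, tag):
    """Bucket hosts by the value of a single tag, like the host map's 'group by'."""
    groups = {}
    for h in hosts:
        groups.setdefault(h[tag], []).append(h["name"])
    return groups

def busiest(hosts, metric="cpu_user"):
    """Return the host with the highest value of the chosen 'color' metric."""
    return max(hosts, key=lambda h: h[metric])["name"]

print(group_by_tag(hosts, "cloud_provider"))  # {'aws': ['i-01', 'i-02'], 'azure': ['vm-1']}
print(busiest(hosts))                         # i-02
```

The same slicing works for any tag (availability zone, cluster name, service), which is why tag hygiene matters so much in container environments.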
Here, for example, I'm again using a tag, the cluster name, to pull all my pods, so I can quickly get an aggregated view of how many of them are in each state, for example those that are in CrashLoopBackOff. With a single click, I can drill into a specific pod, look at all the containers running in it, and correlate between logs, if I have any errors, metrics specific to my pod or container, the processes running in it, as well as network data, traces, performance, and more.
Can I just jump in here for a second? I wanted to say to the people who may be watching and listening: we're live on YouTube, on Facebook, on Twitch, and certainly on our bridge here. If anyone has any questions, we'd like to play Stump the Product Manager today. I wanted to share the screen because I have something fun. Can I share the screen for one second?
Yeah, absolutely.
Now, can you see my screen?
I see this beautiful shirt, yes.
We're going to play Stump the Product Manager. If anyone has a question and can stump Yair about something specific to his area of expertise, we're making up these T-shirts that I think everybody can relate to these days. Can you see the "You're on mute" edition here? If anyone has any questions for Yair, please put them in the chat, and we'll get you one of our new challenging-times T-shirts.
Right, thanks, Michael. And since we're very interested in getting questions, I'll just quickly explain my domain of focus, my area of expertise, at Datadog. I'm the product manager for containers, which means a bunch of things.
First of all, it means focusing on making Datadog a container-first platform: with ephemeral workloads such as containers, that means the ability to basically run everywhere, on any runtime, as well as handling the challenges that the modern cloud stack brings. We really focus on making those challenges disappear when you use Datadog. For example, with the growing number of workloads, containers, and microservices, the tagging, the number of signals, and how we classify them has exploded by an order of magnitude, and one of the things we're working on at Datadog is making it easy to control that cost, that cardinality, so you can control your metric tags, and control the traces and logs that you index, and so forth. The second thing is that my team and I work on different Kubernetes open source projects to contribute to the community, such as our ExtendedDaemonSet, the Datadog Operator, our Watermark Pod Autoscaler, and so forth, which I can talk about later. So we're trying to build developer tools that make monitoring Kubernetes and other environments easy. We also work with the major cloud providers and different CNCF projects on monitoring them with our integrations. And lastly, we're working on the different open standards, such as OpenMetrics and OpenTelemetry, to meet our customers where they are and keep them free from vendor lock-in. That's my area. Should we go back to the demo?
Yeah, I didn't mean to interrupt. Well, actually, I did mean to interrupt, but I do want to offer up these shirts; I think they're pretty cool. We're going to have some co-branded with Datadog and Red Hat, and we'll send some down to you guys as well.
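The tag-cardinality point above is easy to make concrete: the number of distinct time series one metric can produce is bounded by the product of its tag cardinalities, so a single high-cardinality tag like a pod ID dominates the cost. The tag names and counts below are invented for illustration, not real Datadog figures.

```python
# Illustrative tag value counts for a single metric; not real Datadog numbers.
tag_values = {
    "service": 50,    # microservices emitting the metric
    "version": 10,    # active versions
    "pod":     2000,  # ephemeral pod IDs (the high-cardinality culprit)
    "az":      6,     # availability zones
}

def max_series(tag_values):
    """Upper bound on time series for one metric: the product of tag cardinalities."""
    n = 1
    for count in tag_values.values():
        n *= count
    return n

print(max_series(tag_values))  # 6000000 potential series for one metric
# Dropping just the 'pod' tag collapses the bound by three orders of magnitude:
print(max_series({k: v for k, v in tag_values.items() if k != "pod"}))  # 3000
```

This is why controlling which tags get indexed, rather than indexing everything, is the lever for keeping costs sane with ephemeral workloads.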
So, the Stump the Product Manager challenge starts today. And having said that, please resume your demo, and I won't interrupt you again. I promise. Maybe.
No, I don't mind at all. Feel free to let me know if there are any questions from the audience. Cool. So we've looked at the hosts and the containers in the infrastructure. The next thing I want to move to is our APM services. We're now looking at the service map. In today's paradigm of microservices, we run a high number of different services, and it can be difficult to keep on top of the dependencies between them. So what we're seeing here is a map of all our services, and we can understand how each of these services behaves with the requests it receives. For example, if I have an incident and I wake up in the middle of the night, I can quickly understand which service has higher latency or a higher error rate, and which other services might be impacted, based on the dependencies and the communication between them. Now we'll switch to the service page of one of the services we just saw, the web store. Here you can see an overview of the application performance of the web store service. For example, we can see the requests the service is receiving, where each color represents a different version, as well as the latency, which I can choose from, and many other things. And there are cool things, such as comparing the performance of my most recent version to the previous one, to understand whether there's a difference, whether maybe an application bug introduced a higher error rate, and to investigate that really quickly. This goes all the way down to the infrastructure itself: the service is running on Kubernetes, and we can see all the containers, how many pods are running there, and so forth.
All this information comes from the Datadog agent, which collects traces and sends them to Datadog. Here we can see all the traces being received, and I can, for example, use one of those tags to filter the traces to show only errors of type "payment service unavailable". I click on one of those traces, one of these application requests, and as you can see, with this flame graph I can quickly understand all the services that were involved, all the way down to the payment API that returned an error. Notice how easy it is to quickly pivot between infrastructure metrics, logs, and so forth. One of the nice things we added, again thinking container-first, is the ability to see all your traces live, with no filtering and no sampling, for the past 15 minutes. That's extremely helpful when you're troubleshooting an issue in production, where you don't really need to index and store those traces for a long time; you just want to understand what's going on at a specific moment.
Yeah, I have a question for you. In a distributed computing environment, people notice that something's wrong, or that something's consuming too many processes or too much memory. How does Datadog help with effecting a change to fix that? Or is it purely monitoring?
Datadog does not roll out changes to your applications; it receives telemetry from those applications. What Datadog does is make it very easy to detect issues, and also to investigate and understand the root cause when they happen, so that the application developers, or any other users, can get their applications back up and running as quickly as possible. For example, if we go back to my page, I can look at my deployments. Let's filter them to show a specific app that is deployed to multiple clusters in different regions.
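The tag-based trace filtering described above amounts to matching key-value pairs against each trace's metadata. Here is a toy sketch in plain Python; the record shape, service names, and the `filter_traces` helper are all hypothetical, not the Datadog trace query API.

```python
# Toy trace records, shaped loosely like the tagged spans in the demo.
traces = [
    {"service": "web-store", "status": "ok",    "error_type": None},
    {"service": "payment",   "status": "error", "error_type": "payment service unavailable"},
    {"service": "payment",   "status": "error", "error_type": "timeout"},
]

def filter_traces(traces, **tags):
    """Keep only traces whose fields match every given tag, like the trace search bar."""
    return [t for t in traces if all(t.get(k) == v for k, v in tags.items())]

errs = filter_traces(traces, status="error", error_type="payment service unavailable")
print(len(errs), errs[0]["service"])  # 1 payment
```

In a real system the interesting part is that these tags are attached automatically by the agent (cluster, pod, version, and so on), so the same filter works across every service without instrumentation changes.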
I can use this screen in real time, when I roll out a new version, to see how the rollout performs: whether there are any errors, whether all the replicas are up and running as expected, and to view metrics and logs and so forth. Once the application is up and running, I can use application performance monitoring to compare between versions. So I can, for example, open the active version, compare it to the previous one, and see whether there's a higher number of errors, which I can then drill into to investigate the issues. Then, of course, there are also monitors I can set up to automatically alert me when an error rate goes up, or when my replicas are not available, and so forth, to really reduce the time to detection and the time to investigate. Moving forward: we're looking at the logs of this application request that returned an error from the payment service, and I can now move to our logs product to look at this log. As you can see, each log message is tagged with all my infrastructure tags as well as my application ones, and with the trace, which allows me to understand what happened before this log line. One of the nice things we have here, in addition to the ability to filter and group by different tags, is the ability to understand what's happening overall. Application logs are usually very noisy; if I don't know exactly what I'm looking for, it's hard to find what I need. With pattern detection, I can quickly identify the repetitive patterns that Datadog automatically discovers, which helps me understand whether there are any outliers or specific issues I should look into. Similar to our application performance monitoring, which lets me send traces without limits, our logs product does that as well.
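The log pattern detection mentioned above can be illustrated with a deliberately simple trick: mask the variable parts of each line (IDs, durations) so that lines differing only in those details collapse into one pattern, then count. This is a toy sketch with invented log lines, not how Datadog's pattern detection is actually implemented.

```python
import re
from collections import Counter

logs = [
    "payment failed for order 1841: upstream timeout after 3002ms",
    "payment failed for order 2210: upstream timeout after 2987ms",
    "user 77 logged in",
    "payment failed for order 9514: upstream timeout after 3120ms",
]

def pattern(line):
    """Mask digits so lines that differ only in IDs or durations share a pattern."""
    return re.sub(r"\d+", "<NUM>", line)

counts = Counter(pattern(line) for line in logs)
for p, n in counts.most_common():
    print(n, p)
# 3 payment failed for order <NUM>: upstream timeout after <NUM>ms
# 1 user <NUM> logged in
```

Even this naive version shows why pattern grouping helps: three noisy lines collapse into one actionable signal ("payment upstream timeouts"), which stands out immediately from routine traffic.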
And I can switch to Live Tail, where I can see all the logs received in my environment, from any containers and any cloud services I'm using. Those logs are not indexed, so they're very cheap. We built this because we understand that some logs you need to keep, store, and index, which you can control and choose, and some are not that important day to day, but in case of an incident they can be extremely important. So with Live Tail, you get all the logs, without limits, available to you. Lastly, let's move to our security products. Our security monitoring product automatically detects issues; we collect and store the security signals that Datadog detects for up to 12 months, I think, so you can really understand the patterns in your environment and keep it safe. Here we're looking at one security signal for an account takeover with a brute-force attempt, and we get a message that also tells us how to triage and respond to it. Lastly, I want to show Watchdog. Watchdog is a page that shows you a feed of all the unusual things you would be less likely to detect yourself. We use machine learning and advanced algorithms to identify issues in your services. For example, here we're looking at a Watchdog story on one of our MongoDB databases that shows a higher error rate for some queries at a specific time, and we can quickly create a Datadog monitor that will notify us, via Slack or any other notification system you have, the next time it happens. So I'm going to finish here and see if we have any questions before we move on to the container report.
I have a question. What size clusters are you folks monitoring out there? Are we talking any kind of size?
We have a lot of customers; some of them are small and medium, some of them are very large.
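The Watchdog idea above, surfacing a spike you wouldn't have gone looking for, can be caricatured with a deliberately naive outlier check: flag any point far above the recent baseline. Watchdog's actual algorithms are more sophisticated; this sketch, with made-up error rates, only shows the shape of the problem.

```python
from statistics import mean, stdev

# Hypothetical per-minute error rates for a database; the last point is the spike.
error_rate = [0.8, 1.1, 0.9, 1.0, 1.2, 0.9, 1.0, 1.1, 0.95, 6.5]

def is_outlier(history, value, k=3.0):
    """Flag a point sitting more than k standard deviations above the baseline mean."""
    m, s = mean(history), stdev(history)
    return value > m + k * s

baseline, latest = error_rate[:-1], error_rate[-1]
print(is_outlier(baseline, latest))  # True
print(is_outlier(baseline, 1.15))   # False: within normal variation
```

The point of a managed feature like Watchdog is that nobody hand-writes a threshold like this per metric; with thousands of services and metrics, the detection has to be automatic.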
I can tell you that we run some of the biggest clusters, I think, in the world, and I'm talking about thousands of nodes and more per cluster.
So how do you deal with configuration management, then? If your agent needs to be deployed on every node that's stood up, how do your customers manage updates and changes to the Datadog agent running on those nodes, and keep everything in sync?
That's a great question. As I said, we try to stay agnostic to whatever cloud technologies and tools our customers use, and they use a huge variety of tools, which we support. Some of them, for example, have adopted the GitOps approach, where they keep everything in source control and deploy changes with CI/CD. Our agent provides Helm charts and an Operator, so you can keep those manifests, the YAML files, in your source control and deploy them across multiple nodes and multiple clusters. With Kubernetes, and OpenShift of course, we use the DaemonSet approach, where the DaemonSet basically keeps the Datadog agent updated on each of the nodes. We also support Ansible and Chef recipes, where people use VMs directly and deploy the agent on them. So the goal is a single agent with support, you can find everything in our documentation, for whatever CI/CD and configuration-management tools you have.
Okay. And does everybody run this in the cloud? Or are there people who say, sorry, but our policy is that we don't want anything outside of our own infrastructure? Can people use Datadog on site? Do you have something other than a SaaS model?
We do not have anything other than the SaaS model, but we do provide a lot of capabilities that allow customers to securely and efficiently monitor their on-premise clusters.
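For a rough idea of the Helm-based deployment described above, the public Datadog Helm chart is typically installed along these lines. This is a hedged sketch: `<DATADOG_API_KEY>` is a placeholder, and the exact chart values vary by chart version, so the chart's own documentation is the authority.

```shell
# Sketch: deploying the Datadog Agent as a DaemonSet on Kubernetes/OpenShift via Helm.
# Values shown are common options; check the chart docs for your chart version.
helm repo add datadog https://helm.datadoghq.com
helm repo update
helm install datadog-agent datadog/datadog \
  --set datadog.apiKey=<DATADOG_API_KEY> \
  --set datadog.logs.enabled=true
```

Because the release is just a versioned chart plus a values file, this fits the GitOps flow Yair describes: the values live in source control, and CI/CD runs `helm upgrade` to roll agent changes across clusters.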
These capabilities include things like automatic redaction of data and scrubbing of sensitive data, using log processors to remove any sensitive information. Metrics are usually not that sensitive, but we also provide capabilities to remove tags and things like that. The point is that even if you're running on-premise, you can keep all your sensitive information and keep your applications running there, while you still get a unified and reliable monitoring solution in the cloud. I can tell you that we have many different types of customers, from different industries and verticals. Some of them are, for example, financial customers with the strictest compliance requirements; they use Datadog, and we work with them to meet those requirements. The Datadog agent gives you all these capabilities to customize and control what is being sent and delivered. And I think that specifically for monitoring, having a reliable SaaS platform is really one of the main reasons for using Datadog in the first place.
I was just curious, because I would imagine there are some companies who are extremely paranoid, maybe some government agencies, or the IRS.
Yeah, well, for example, I think we announced a cloud offering for government customers. The cloud we built for government customers is isolated from our public cloud offering and is more secure in some ways, or meets different compliance needs. Does that make sense?
Sure. Okay. So we said earlier that you were going to talk about the results of your survey. You put out a survey every year; I think it comes out in October or November, right?
Right. We usually release the report around KubeCon North America.
And so this survey you put out is a status of container adoption. What's the sample size? It must be at least 100 different sites that provide information for this, right? Yeah, I mean, for the report we're basically examining more than 1.5 billion containers run by our customers. Sorry, did you say 1.5 million? 1.5 billion. Nine zeros. Oh, with a B. Okay. Actually, I knew that, I was just trying to tee it up. Right. Yeah, that's a lot of data, as you can imagine. We have a really talented data science team that helps us produce this report and find all these trends that we publish every year. Okay. And so we're going to go over the one that you folks published this past year. Correct. Okay. Yeah, so the first trend that we wanted to start with is about Kubernetes. Kubernetes, of course, has a lot of flavors, such as OpenShift. And our finding shows that more than 50% of containers are now running in Kubernetes. That's pretty exciting to see, the rapid and steady rise of Kubernetes. As opposed to running on what? So, Kubernetes is an orchestration platform, right, which, as I mentioned before, abstracts some of the complexity of the cloud and of managing the infrastructure. Before that, organizations still used containers, or in some cases they ran monolithic applications and deployed them directly on the machines themselves, right? So you needed to say, I am going to run this container or this application on host X or Y. With Kubernetes, things are changing. Basically, the orchestrator is responsible for scheduling those containers on your behalf, on your infrastructure.
One of the changes in terms of the user experience, for example, is that the application teams do not need to know or care much about the infrastructure or where they are deploying, whether it's cluster X or cloud provider Y. Instead they just tell Kubernetes, I want to run these applications, and Kubernetes treats the cluster as one big machine and runs them there. Before Kubernetes, to complement this answer, there were other orchestration services, right? One of the most popular orchestration services, from what we see, is Amazon ECS, which provides a simpler way to run containers in terms of the different types of options you can customize compared to Kubernetes. And Amazon was also one of the first companies that released a managed orchestration platform, which became super popular, right? So fact number two was that by now we see that 90% of containers are orchestrated. That means that all these Docker containers — and now we're also seeing the rise in popularity of other container runtimes — are managed by an orchestrator such as Kubernetes or ECS. Moving forward, this was a pretty surprising fact, right? What we found was that the majority of the workloads being deployed to Kubernetes are not utilizing CPU and memory efficiently. So for example, with CPU, about 30% of all the containers are using less than 10% of their requested CPU, and 49% of the containers are using less than 30%. With memory, we've seen a similar picture. And that's kind of counterintuitive to Kubernetes being able to bin-pack and automatically schedule containers in the most efficient way. And there are several reasons, which I can talk about quickly, why this is currently happening, right? One of them has to do with what the journey to Kubernetes looks like, right?
Most companies had their own applications that they ran before Kubernetes. And the first phase of this journey to Kubernetes, or to orchestration, is more like a lift and shift of your applications to Kubernetes. During this process you really try to preserve high performance, you want to scale, especially during the past year, where we see the digital transformation accelerating. And you do not want to, for example, risk your application being OOM-killed or throttled by Linux. So that's kind of the first phase, right? The other thing is that when you think about where the customers we work with are now, most of them are relatively new to running Kubernetes. And we think that in the next year we'll see the focus shifting from performance — now that performance is good and automatic scaling is working — to cost optimization, which basically means utilizing the CPU and memory, which are usually some of the major expense factors in running cloud services and applications. I was going to say, so what's the ideal number? I would think you'd probably want to be sitting around 80%, right? Ish? Exactly. Yeah. And if you think about it, right, the applications you had before moving to Kubernetes were not necessarily monoliths, but they were composed of a relatively small number of services. With Kubernetes, you basically need to specify for each container how much CPU and memory it uses — those are the requests. The problem is that if you have very large containers and you want to schedule them, or bin-pack them, efficiently on nodes, there is only a certain number of large containers that you can bin-pack on a single node.
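The utilization numbers above boil down to comparing what a container requests against what it actually uses. A minimal sketch, with entirely illustrative request and usage figures (not data from the report):

```python
# Sketch: flagging over-provisioned containers. Requests and measured
# usage are in CPU cores; the 30% threshold and all numbers are
# illustrative assumptions, not Datadog's methodology.

def utilization(requested: float, used: float) -> float:
    """Fraction of the requested CPU a container actually uses."""
    return used / requested

containers = {            # hypothetical measurements: (request, avg usage)
    "web": (2.0, 0.15),
    "api": (1.0, 0.85),
    "jobs": (4.0, 0.30),
}

# Containers using less than 30% of their request are candidates for
# right-sizing, i.e. lowering their requests to free up cluster capacity.
oversized = sorted(
    name for name, (req, used) in containers.items()
    if utilization(req, used) < 0.30
)
print(oversized)  # → ['jobs', 'web']
```

This is the kind of per-container check that the cost-optimization phase described above involves: lowering requests for the flagged containers lets the scheduler pack more work onto the same nodes.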
The reason I'm mentioning it is because another trend, which we'll show a little later in this report, is about the move to microservices, or the adoption of microservices. Microservices is basically an application architecture where you have a high number of services, a high number of containers that are smaller. And if you try to take a lot of small stones and put them in a jar, you'll probably have less air left than if you try to put a few large stones in a jar, which will leave a lot of gaps in between. So that's kind of what we're seeing in play here, and we believe that as companies move more towards microservices and service mesh architectures, that will also increase and improve the utilization of cloud resources on Kubernetes. So that kind of captures what we've seen here. It's pretty interesting. Let's scroll down a little bit and talk about Fargate, right? So Fargate is a compute service by AWS that allows you to run containers on a serverless compute platform. So it basically abstracts away the need to manage and use hosts. As you can see in this report, we've seen Fargate usage increasing to more than 30% — a pretty high number of serverless containers running on a single service such as Fargate. Pretty exciting. Serverless containers, I think, will unlock a lot of use cases and benefits over the next few years, and it's worth mentioning here that Fargate is probably a good representation for a lot of other serverless compute and orchestration platforms that are a bit more nascent than Fargate, which has been around for a few years, but will become popular as well, right?
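The stones-in-a-jar analogy can be sketched as a toy bin-packing exercise: first-fit placement of container CPU requests onto fixed-size nodes. All numbers here are illustrative, not taken from the report.

```python
# Sketch: why many small containers pack nodes more tightly than a few
# large ones. 16-core nodes and all request sizes are assumptions.

def pack(requests, node_cores=16):
    """First-fit: place each container on the first node with room.
    Returns the remaining free capacity per node."""
    nodes = []  # free cores on each node opened so far
    for r in requests:
        for i, free in enumerate(nodes):
            if free >= r:
                nodes[i] -= r
                break
        else:
            nodes.append(node_cores - r)  # no room anywhere: new node
    return nodes

def cluster_utilization(requests, node_cores=16):
    nodes = pack(requests, node_cores)
    return sum(requests) / (len(nodes) * node_cores)

# Same total demand (27 cores), different granularity:
large = [9, 9, 9]   # a few large services: only one fits per 16-core node
small = [3] * 9     # many small microservices fill the gaps
print(round(cluster_utilization(large), 2))  # → 0.56
print(round(cluster_utilization(small), 2))  # → 0.84
```

The small containers reach higher utilization with fewer nodes, which is the mechanism behind the prediction that microservices adoption will improve cluster efficiency.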
Even OpenShift and IBM have serverless container services, such as OpenShift Serverless, which uses Knative — a really interesting technology as well. And serverless containers are especially interesting because containers are already ephemeral, and a host is not something that you tear down every second, right? So having the ability to scale your containers up and down and run them without any infrastructure, completely abstracting it away, makes a lot of sense in many interesting use cases. So that was about serverless. Michael, let me know if there are any questions or if you'd like to ask me anything. I was just pinging Chris Short to see how we're doing on time. I think he said we can go over a little bit if we need to. Sounds good. How much more time do we have? 10 minutes maybe? About five. Sounds good. So a couple more trends here, right? Kubernetes node sizes, as we can see in this fact. I'm sorry, Chris said we can actually run over, so we're good. Node sizes in Kubernetes are changing as clusters become larger. What we found is that in small clusters, the use of small nodes is still pretty common. But as you move towards larger clusters, those small nodes kind of disappear and we see more large nodes with 16 cores or more — and of course that includes 32, 64 and even more. That actually makes a lot of sense, because when you run a Kubernetes node, you have kind of a sunk cost of processes such as the kernel, the hypervisor, the container runtime, as well as Kubernetes-specific components like the kubelet, that take resources that are expensive. And those basically do not scale linearly when you use larger nodes, right? Because you can run a lot of containers on a single large node, and your allocatable CPU and memory resources just increase. The other thing is that with Kubernetes, a failure in a node is less of an issue.
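The sunk-cost argument above is simple arithmetic: per-node overhead (kernel, hypervisor, container runtime, kubelet) is roughly fixed, so its share shrinks as nodes grow. A quick sketch; the 1.5-core overhead figure is an illustrative assumption, not a measurement.

```python
# Sketch: larger nodes leave proportionally more CPU allocatable for
# application pods, because per-node system overhead is roughly fixed.

OVERHEAD_CORES = 1.5  # hypothetical fixed cost per node (system + kubelet)

def allocatable_fraction(node_cores: float) -> float:
    """Share of a node's CPU left over for application workloads."""
    return (node_cores - OVERHEAD_CORES) / node_cores

for cores in (4, 16, 64):
    print(cores, round(allocatable_fraction(cores), 3))
# A 4-core node keeps ~62% of its CPU for workloads;
# a 64-core node keeps ~98%, under this assumed overhead.
```

That non-linearity is why the overhead "does not scale linearly" and larger nodes look more attractive as clusters grow.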
And with large clusters that have 1,000 or more nodes, the failure of a single node is probably not going to have a severe impact on performance, which is something that organizations are starting to accept more and more. So that's pretty exciting. And the next one is about networking technologies. Kubernetes is doing a great job of abstracting the cloud complexity. But one of the things that is sometimes left to the application developers and the platform engineers is managing the networking between containers. That complexity also increases as the number of containers increases. The main technologies that deal with container networking and security, as you can see here, help containers discover each other and really simplify that communication for the application developers themselves. One of the interesting findings we had was that Calico, which is a great networking technology, is the most popular. We see a lot of other technologies, and this segmentation, this diversification, shows us that this is an area in which no one is yet dominant, and it will be very interesting to see what happens in the next few years. We believe that the number of technologies for container networking and security will continue to increase. We have some technologies, such as NGINX and Istio, that are super popular — Istio, for example, is used by Red Hat and a lot of other companies such as Google to build service meshes — and that's something we don't think will change anytime soon. So, related to networking technologies — and I think with this we will maybe wrap up the container report — we also published a fact about service mesh adoption. Service mesh technology is really used as an abstraction over the application networking for applications that consist of a lot of small containers or small services. The infrastructure layer of the application networking is not solved today by Kubernetes, right?
So if you're using, for example, the AWS cloud, you might want to use the AWS VPC for networking, right? But if you're running your containers on other runtimes, such as on premise or in virtual clusters, the underlying network infrastructure might be different. That's one of the core benefits and promises of service mesh technologies, which is really exciting. However, what we found in this report is that while a lot of companies, compared to our last year's report, are now experimenting with and trying service mesh technologies, the adoption is still early. If you look at how many organizations are actually running the majority of their workloads using service mesh technology, those numbers are still relatively low. When we were talking about this the other day, I basically admitted that I'm no expert on service mesh, but is this because the sizes of the containers are rather large, comparatively speaking, and service mesh adoption is going to increase when the containers get smaller and smaller, and there are just millions and millions more of them? Exactly. I think that is the core reason. Most containers are still relatively large. And when you're using a services architecture — not a microservices architecture; a services architecture, which is still way more popular — you already have solutions that provide you some of the main benefits that service meshes do, such as, for example, blue-green and canary deployments, right? You could use an ingress controller such as NGINX to route traffic between different application versions or different replicas of services. However, once you move to a microservices architecture and the number of services grows by two or three orders of magnitude, to thousands of microservices, right, an ingress controller, which is more of a centralized way to route traffic, is no longer scalable or granular enough for that.
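The canary-routing capability mentioned above comes down to a weighted traffic split between a stable and a canary version. Here is a toy model of that idea in Python — it is not how NGINX or any ingress controller is implemented, and the 90/10 weights are illustrative:

```python
# Sketch: weighted round-robin split between two backend versions,
# mimicking what an ingress controller does with canary weights.

import itertools

def weighted_router(weights: dict):
    """Cycle through backend names in proportion to their weights."""
    schedule = [name for name, w in sorted(weights.items())
                for _ in range(w)]
    return itertools.cycle(schedule)

router = weighted_router({"stable": 9, "canary": 1})  # 90/10 split
first_ten = [next(router) for _ in range(10)]
print(first_ten.count("canary"), first_ten.count("stable"))  # → 1 9
```

With a handful of services, one central router like this is fine; with thousands of microservices, per-service sidecars in a mesh make the same split decision locally, which is the scalability point made above.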
And we think that as the number of services that organizations run increases, service mesh adoption will follow as well. Fair enough. Cool. I don't know if we have time for one more, or do we want to wrap up a little bit? I think we do. This is good stuff. We'll take the time. Sounds good. So one of our last facts was focused on the most popular technologies that are running in containers today. Not a lot of new surprises here, since the dominant technologies are still NGINX, Redis and Postgres, but we had a few newcomers, right? I think one of the interesting ones is Vault, which came in 10th, I think, in the ranking. Vault is a really exciting technology by HashiCorp that allows application developers and platform engineers to keep secrets and passwords safe for environments like production, where basically each pod carries an identity and fetches them from the secure vault during deployment and continuous integration and deployment. And related to that, we saw that in Kubernetes specifically, and OpenShift, the top container images running in StatefulSets — StatefulSets are for stateful applications that require some persistence of state — we found that those are databases, or data services, such as Redis, Elasticsearch, Postgres. And that's pretty interesting, I think, given that Kubernetes in its early days was not very friendly to running those technologies. And a couple of things changed over the years, of course — dozens if not hundreds of improvements to Kubernetes, but also a lot of support that came from those open source technologies and the commercial vendors that maintain them, to make them easier to run on Kubernetes as well.
That also makes a lot of sense, because for organizations that use Kubernetes, the benefits of running all your services, including the data that connects them together, in a single cluster, a single environment, a single network, are obviously very important. It makes a lot of sense that we now see all those technologies becoming popular, which means that the journey to orchestration and Kubernetes is safer and more predictable. Great. Mike, I think that we hit the mark on that report. I think so, I think so. And that comes out every year, right? So the next one's going to be coming out November-ish. Exactly. What are your predictions? I mean, as I said, right, we think that more and more customers and organizations will move to Kubernetes and the different flavors of Kubernetes, like OpenShift and all of those. We think that serverless containers will become more popular this year. I think that with service meshes, as microservices architecture becomes the more recommended approach for cloud-native applications where you want to run containers everywhere, microservices adoption will increase, as well as service meshes. And, you know, the last thing is about security, right? A lot of these technologies are built and designed for containers, and they support the security requirements that come with running containerized applications at scale. So that would probably be another major factor. We see a lot of open source and commercial solutions for securing containers, and we're pretty excited to see what the dominant technologies will be a year from now. Okay, well, we'll find out. We'll find out. Datadog, ladies and gentlemen. So, yeah, you guys have a free trial. If people want to use the free trial, we have it here on the screen. Yep. You can't click on it, but you can type that in. And thanks for coming. Yeah, yeah.
It was, it was really good. I know that, you know, you guys are a great partner of ours and, you know, thanks for being on the show. Looking forward to having you folks back again in the near future. Likewise. It's always great. Thank you very much. Okay. Wonderful. Everybody have a great rest of your day.