And we are live once again. Ladies and gentlemen, good morning, good afternoon, good evening, good night. My name's Michael Waite, and we are here for another episode of the OpenShift Commons Briefings Operator Hours. And today, now I'm gonna have to go here and do my screen sharing, if I can find it again here. Can you see my screen? I'm making T-shirts. I'm making T-shirts that say, can you see my screen? We have Yair Cohen from Datadog. He's a product manager, and we're gonna be talking about containers and ecosystem trends. But having said that, I'm going to now stop my screen sharing. Yair, how are you this morning? Hey, Michael, good, thank you. It's great to be here. We're both East Coast. You're in New York in Brooklyn, right? I'm in Brooklyn, and it's no longer snowing here. No, no, no, when we were trying to do the dry run for this conversation, you were down in Texas somewhere and we had to reschedule a couple of times. What happened down there? A little bit of snow? Yeah, I decided to take a short trip after many months at home and wanted to get away from the freezing cold of New York. So we made it all the way to Texas and got stuck in another storm, apparently. So much for being in the desert. How much snow? Where'd you go in Texas? I was in Austin for the first time, which is a really great city. Yeah, it is. I've been down there a couple of times; DockerCon used to be there. I think KubeCon they had down at the convention center one time. Yeah, next time you can visit Elon Musk. How long did you get stuck there for? That was a week. My flight was canceled three times and I got lucky on the fourth. Oh, wow. You could have maybe just taken a floss or something to get out of there, but... Yeah, considered all the options. Well, we're glad that you made it back. Thank you. I hope for everyone who lives in Texas that they now have power and water back. That was definitely...
Yeah, Red Hat just donated, I think, $10,000 or $20,000 to the American Red Cross to help out down there. Oh, that's amazing. Yeah, yeah. So you work at Datadog. And... I do. Tell us about Datadog. It's certainly one of the more, you know, sexy software companies that's out there these days. Tell us about Datadog. Yeah, well, I work at Datadog, which is a SaaS monitoring and security platform, where we really try to be containers first and cloud agnostic. It's been a pretty exciting journey so far. I think we started the company almost 10 years ago. I joined a year and a half ago. And, you know, every year we're releasing new products, focusing a lot now also on security. It's been very exciting, you know, given all the changes that are happening now with the digital transformation that was really accelerated by an order of magnitude over the past year, as well as, you know, a lot of the modern cloud-native technologies that are appearing there, which is more my focus. And we're really trying to basically break the silos between teams, make the, you know, experience of running applications more efficient and more effective, and really bring everybody together: devs, ops, and security analysts, as well as business decision makers, essentially everybody. Now, you know, I asked you the other day, I was like, oh, okay, so Datadog, that's APM, right? That's application performance monitoring. And you're like, no, no, Mike, it's much, much, much more than that. You folks are pretty diversified as far as the types of software that you guys are making these days, right? Exactly, yeah. I mean, we started Datadog as a monitoring platform that is focused on infrastructure monitoring and metrics, a time series database. Later on, we launched our logs management product as well as application performance monitoring. But Datadog is really a platform.
It's a unified platform where we make it easy to correlate between all these types of telemetry and jump from one section to another. It's not really different products, it's a unified experience. And, you know, over the past few years, we've continued to launch new products such as network performance monitoring, continuous profiling, security monitoring, compliance monitoring, all things monitoring. But aren't there... So I work with a lot of software companies who certify their products and their offerings on Red Hat, you know, with Red Hat Enterprise Linux, Ansible, OpenStack, OpenShift. Seems like there's no shortage of companies who do what you do out there. What makes you guys better than, name any of the others? Yeah, I mean, I think that the last year, right, with the unfortunate events of COVID and all these difficult times that we're experiencing, made me, and I think a lot of our customers, understand where we are unique in the industry, in the sense that in order to move fast in this world and in order to transform your organization and your business faster, you just need to have fewer tools; you want to work together with the same view. And that's, I think, what gives you the ability to scale really fast and adjust quickly and seamlessly to your business needs, whether you scale up or sometimes down in times like this, and bring everybody together with no difficult permissions or deployments or cases where, like, you need to hire more people, right? One of the main benefits is to take away some of the complexity of writing applications in the cloud, some of the complexity of monitoring, where, for example, with, you know, cloud-native technologies and ephemeral infrastructure, traditional tracing, logs, and metrics solutions quickly became inefficient.
With that, we really put the focus on the developers. We put the focus on being containers first and cloud agnostic, and we allow our customers to run on any runtime and any type of infrastructure, cloud and on-premises and so forth, using the same tools, the same agent, the same platform. What is it about Datadog that, you know, makes it easier for people to manage their cloud environment? Meaning, like, if someone stands up an OpenShift environment and they don't have something like Datadog monitoring it, what's the experience like as opposed to when you folks are involved? Yeah, that's a great question, Michael. I think, you know, first of all, most companies are still in this journey to the cloud, that digital transformation to modern architecture and, you know, a modern cloud stack, and that journey is what we're focusing on the most, right? Traditional solutions do not, or I think still do not, support both legacy and modern cloud environments, right? With Datadog, you basically use the same agent and the same platform for all your infrastructure, all your stack: both your legacy environments, your on-premise environments, and your cloud ones, regardless of the runtime you're using. And I think that, you know, just being able to get started with Datadog and get everything together is what makes us really shine, and we're really focusing on the experience here, right? Instead of moving between different tools for monitoring when you do this migration from legacy or from on-premise to the cloud, you have the ability to monitor and measure your performance as you're doing this journey, as you're lifting and shifting your architecture, as you're going to multi-cloud and hybrid architectures, with the same tool. I don't want to give specific examples yet of traditional tools, but those usually targeted one type of stack, either the legacy or the cloud, but usually not both.
So the focus is really about the user experience. And so it's a SaaS offering, so how does that work? You folks have an infrastructure back end, a NOC in your data center, and there's just an agent that people set up and run on every different pod or every node or every server? How does that work? Yeah, exactly. So first of all, Datadog is a SaaS platform, like you said, for monitoring and security, which means that all your monitoring telemetry and data is sent to our cloud. We are running on multi-cloud, so it's not necessarily one cloud. We have data centers in different regions in the world. We're running on Google, on Azure, and on AWS, of course. And we have two types of integrations. We have agent-based integrations, which use that unified single agent that runs on any runtime and collects metrics, logs, and traces from within your workloads and containers and hosts. And we also have web integrations and cloud integrations that directly fetch data from cloud providers and different technologies, mainly SaaS and PaaS, using public APIs. We have more than 400 integrations. Every time I check, we've added a few more; that's part of what I also work on with my team. And we basically make it really easy, in a single page, which I'll definitely demo for you later, to add more and more integrations into your Datadog platform. So I was gonna say, so you're gonna provide a demonstration here, and then we're also gonna talk later on about your container survey that you folks put out every year, right? So how many years have you been doing this container survey now? It's a good question. I think it's about five years. And if I can correct you, what's unique about this study, I think, is the fact that we're really relying on real data.
We're trying to provide visibility to our customers and anyone in the community into the latest container trends that we're seeing, across more than a billion and a half containers and tens of thousands of customers. Yeah, I was gonna say it's probably five years. I know you guys must have a tremendous amount of telemetry information about the apps. Yeah, that's an understatement, right? I mean, isn't that almost like an unfair advantage that you folks have? I mean, you have so many customers that you're managing and monitoring that you guys must have your fingers on exactly what's running where, when, and how, right? Yeah, yeah, exactly. I mean, as a product manager, I am trying, and my colleagues are trying, to take a data-driven approach to making mindful decisions based on what we're seeing our customers use and where they are. So that really helps. I mean, it's a huge advantage, but also, you know, a great responsibility, right? We need and we want to stay ahead of those trends. To give an example, our move to Kubernetes started a few years ago, before Kubernetes was even, you know, very popular. And about a year ago, we, you know, got to a place where we're running 100% of our workloads on Kubernetes. So, you know, taking the risk of betting on new technologies, and Kubernetes is just one example, is another thing that we're doing, which is, you know, being there before our customers and learning those practices. What was it running on before, if it wasn't running natively on Kubernetes? Was it originally just written for bare metal or something? Yeah, we were running, you know, monolith applications, then running Docker containers. And now we're running all those containers in an orchestrated environment.
We have a multi-cloud architecture where we have a pretty robust Kubernetes platform that we built ourselves that allows, you know, running physical Kubernetes clusters on bare metal (and by bare metal, I mean also on cloud VMs), and developers can create their own virtual Kubernetes clusters within those physical clusters, right? So you can think about it like child and parent. And that really, I think, amplifies the benefits of orchestration, right? Which we're focusing on a lot in this report that we'll talk about later. Orchestration kind of abstracts the complexity of the cloud from users, with the container-first approach, which we're also taking at Datadog. One of the main benefits is that you can run containers everywhere, right? You can move them everywhere. You can, you know, run them in multiple places, and Kubernetes is one of those technologies out there that abstracts that complexity for you and allows you to move containers anywhere. Okay, well, we're happy to have you guys on the show here today. I was expecting Ilan to be on, but he must just be a little too busy. He's their VP of marketing. He's usually the guy that we work with. I know, you know, we've been working with Datadog now for many years to make sure that your software runs, and runs well, with OpenShift and our other products. And you guys have your annual conference down there in New York, or you used to. It was Datadog Dash, right? Yeah, we have Dash around the summer, which is usually the most exciting event of our company, where we are announcing a lot of new products and new features and inviting our customers to try them out and hear more about them and talk with us. I went down there... Well, of course, we weren't there this year because of the challenging times that we're all working in, but I was down there last time, down in New York, down on the waterfront there by the piers.
And I got to tell you, I mean, it was really impressive seeing how many customers were there and the excitement around Datadog, the platform. And I mentioned this to your marketing people the other day, but it was also probably one of the best trade shows I've been to; the food was phenomenal. It wasn't like the little mini sliders that are cold and the hamburger bun is stuck onto the patty. I mean, it was a really well run event. That's great to hear. We really put a lot into those events. I'm hoping that we're gonna be able to get back to in-person events again, and I can't wait to be back down there. Are you guys planning on having it in New York again the next time that we can all travel and get out? Of course. Hopefully this will be this year, but it doesn't look like all of us will be able to do that. But as soon as things get back to normal, I'm sure that those events will happen again. Your headquarters is in New York City, isn't it? Our what, sorry? Your headquarters. Yeah, our headquarters are in New York, yeah. By the way, last year we had a great virtual Dash. I think it was a very unique experience for all of us at Datadog, and it was a pretty successful event, even though we couldn't hand out swag and delicious food to our guests and users, unfortunately. Yeah. All right, well, anyways, so I was talking about Ilan, and the reason why we have you folks on the show here today is not because you're just some random company, but we consider Datadog and the services that you guys offer a pretty key workload for helping our customers be successful running Kubernetes, and specifically OpenShift, for production environments. And you folks have a Red Hat certified container, you have a Red Hat certified operator for OpenShift, and I think that you're available in the Red Hat Marketplace as well, is that right? That's right. Okay. And you're a member of our OpenShift Commons community, which I know Diane Mueller is very excited about.
I don't know if anybody who's listening has had an opportunity to meet Diane, but she's probably one of the most amazing people that I've had the pleasure of working with. She actually is responsible for the OpenShift Commons program and all of its events, and those are all over the world, if anyone ever has a chance to attend one of the Commons briefings. They're pretty terrific. Anyway, so we have a demonstration that you're going to show us: you know, what it is, how it works, what do you need? Do we need a drum roll? What do we need to set the stage for what you can show us here today? Yeah, I can just go ahead and give you a quick demo. I'll go ahead and stop my screen sharing. Are we ready? I think we're ready. I think we're ready to roll. Sounds great. If you just give me one second, I'm going to start sharing my screen. Can you see it all right? I can. I can see your screen. Fantastic. Cool, so we'll do a quick demo here. Again, this is Datadog. For those who haven't seen it before, Datadog is a SaaS monitoring and security platform that combines your metrics, traces, and logs in a single place to enable visibility across any kind of stack for all teams and stakeholders. This means that everyone, devs, ops, and security teams, are able to break down silos and collaborate more efficiently. So what we're looking at here is a dashboard used to bring in critical data, such as metrics, logs, and traces across your environment in a single view. This specific dashboard is for our demo app Shopist, which we'll use for this demo, and which basically powers an e-commerce retailer that we've set up. As you can see, it's showing key information about our applications, from system health and uptime to more advanced things like synthetics, network performance, and real user monitoring tests. You asked me before, Michael, about our integrations, right? So we have the agent, we have cloud integrations.
You can see the 400-plus integrations on the screen. Each of them enables you to quickly set it up using very few steps. Our agent supports Kubernetes and OpenShift, as well as all the other types of infrastructure and runtimes. It's a single agent that usually can be deployed in one or two steps. So once you set up some integrations in your environment, each of these integrations comes with some out-of-the-box dashboards. For example, this is one of our many out-of-the-box dashboards for Kubernetes, where you can see an overview of your Kubernetes clusters, and OpenShift as well. Once your integrations are set up, you can also take a look at your infrastructure. For example, here we're seeing all the hosts and the VMs. We can use tags to group them and slice and dice, for example by cloud provider and availability zone, and choose any metric to color them. For example, we can choose user CPU and notice that there is one instance here that is pretty busy, and drill down to understand what is running on it and what might be the cause. If we want to switch to containers, we can also take a look at all our live containers, including our Kubernetes and OpenShift workloads, right? Here, for example, I'm again using a tag, the cluster name, to group all my pods, right? So I can quickly get an aggregated view of how many of them are in each state, for example those that are in a back-off state. And with a single click, I can drill into a specific pod, look at all the containers that are running in it, and correlate between logs, if I have any errors, metrics that are specific to my pod or container, processes running in it, as well as network data, traces, performance data, and more. Yeah, can I just jump in here for a sec? I did want to say to the people who may be watching and listening, we're live on YouTube, we're live on Facebook, as well as Twitch and certainly our bridge here.
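The tag-based slicing Yair demos here (group hosts by any tag, color by a metric, find the outlier) can be sketched in a few lines. This is an illustrative sketch with made-up host data and tag names, not real Datadog output or its API:

```python
# Sketch of tag-based grouping: every host carries key:value tags,
# and any tag key can become a grouping dimension in the host map.
from collections import defaultdict

hosts = [  # hypothetical sample data
    {"name": "web-1", "tags": {"cloud_provider": "aws", "availability_zone": "us-east-1a"}, "cpu_user": 22.0},
    {"name": "web-2", "tags": {"cloud_provider": "aws", "availability_zone": "us-east-1b"}, "cpu_user": 91.5},
    {"name": "db-1",  "tags": {"cloud_provider": "gcp", "availability_zone": "us-central1-a"}, "cpu_user": 35.0},
]

def group_by_tag(hosts, tag_key):
    """Bucket hosts by the value of one tag, like the host map's 'group by'."""
    groups = defaultdict(list)
    for h in hosts:
        groups[h["tags"].get(tag_key, "untagged")].append(h)
    return dict(groups)

def busiest(hosts, metric="cpu_user"):
    """Sort/color by a metric and surface the outlier instance to drill into."""
    return max(hosts, key=lambda h: h[metric])

by_provider = group_by_tag(hosts, "cloud_provider")
print(sorted(by_provider))     # ['aws', 'gcp']
print(busiest(hosts)["name"])  # 'web-2' -- the busy instance
```

The same grouping idea applies to pods and containers, where the tag might be a cluster name or a pod phase instead of a cloud provider.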
If anyone has any questions, we'd like to play Stump the Product Manager here today. I wanted to share the screen because I have something really... May I share the screen for one second, Yair? Yeah, absolutely, I'm stopping mine. Yeah, I'm gonna share. Now, can you see my screen? I see this beautiful shirt, yes. We're gonna play Stump the Product Manager. If anyone has a question and they can stump Yair about something that's specific to his area of expertise, we're making up these T-shirts that I think everybody can relate to these days. Can you see my screen, and the "you're on mute" edition here? So, if anyone has any questions for Yair, please put them in the chat and then we'll get you one of our new challenging-times T-shirts. Right, thanks, Michael. And since we're very interested in getting any questions, I'll just maybe quickly explain what my domain of focus, or expertise, is at Datadog, right? So I'm the product manager for containers, which means a bunch of things. First of all, focusing on making Datadog a container-first platform, which means, with ephemeral workloads such as containers, the ability to basically run everywhere on any runtime, as well as taking the challenges that the modern cloud stack brings and really focusing on making those challenges disappear when you use Datadog. For example, with the number of workloads and containers and microservices, the number of tags and signals, and how we classify them, exploded by an order of magnitude. And one of the things we're working on in Datadog is making it easy to control that cost, that cardinality, right? So you can control those metric tags, you can control the traces and the logs that you index, and so forth. The other thing is my team and I are working on different Kubernetes open source projects to contribute to the community, such as our ExtendedDaemonSet and the Datadog Operator, our Watermark Pod Autoscaler, and so forth, which I can talk about later.
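The cardinality problem Yair describes is easy to make concrete: each unique combination of tag values on a metric becomes its own time series, so one high-cardinality tag (like an ephemeral pod ID) multiplies the series count. A rough sketch with made-up tag sets:

```python
# Sketch of metric tag cardinality: the number of distinct time series
# is the product of the number of values each tag can take.
def series_count(tag_values):
    """Distinct series = product of per-tag value counts."""
    n = 1
    for values in tag_values.values():
        n *= len(values)
    return n

tags = {
    "service": {"web", "api"},                    # low cardinality: cheap to keep
    "env": {"prod", "staging"},
    "pod_id": {f"pod-{i}" for i in range(500)},   # ephemeral, high cardinality
}

before = series_count(tags)
after = series_count({k: v for k, v in tags.items() if k != "pod_id"})
print(before, after)  # 2000 4
```

Dropping the one ephemeral tag collapses 2,000 series back to 4, which is the kind of cost control being described.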
So we're trying to build developer tools to make monitoring Kubernetes and other environments easy. The other thing is that, of course, we work with the major cloud providers and different CNCF projects on monitoring those with our integrations. And lastly, we're working with all the different open source standards, such as OpenMetrics, Prometheus, and OpenTelemetry, to meet our customers where they are, to help them where they are, and to keep them from vendor lock-in. So that's kind of my area. Should we go back to the demo? Yeah, yeah, I didn't mean to interrupt. Well, actually I did mean to interrupt, but I do wanna offer up these shirts. I think they're pretty cool. So we're gonna send some down, we're gonna have some co-branded with Datadog and Red Hat, and we'll send some down to you guys as well. So the Stump the Product Manager challenge starts today. And having said that, please resume with your demo and I won't interrupt you again, I promise. Maybe. No problem, not at all. Feel free to let me know if there are any questions from the audience. Cool, so we kind of looked at the hosts and the containers in the infrastructure, right? The next thing I want to move to is our APM services. So we're now looking at the service map, and in today's paradigm of microservices, where we run a high number of different services, it can be difficult to keep on top of the dependencies between them. So what we're seeing here is a map of all our services, and we can understand how each of these services behaves with any request that it receives. For example, if I have an incident and I wake up in the middle of the night, I can quickly understand which service has higher latency or a higher error rate, and which other services might be impacted, based on the dependencies and the communication between them. We'll switch to the traces... Sorry, we'll switch to the service page of one of those services that we just saw, the web store.
Here you can see an overview of, basically, the application performance of the web store service. For example, we can see the requests that the service is receiving, where each color represents a different version, as well as the latency, which I can choose from, and many other things. And there are cool things such as comparing the performance of my recent version to the previous one, to understand if there is any difference, if maybe an application bug introduced a higher error rate, and investigate it really quickly. All the way down to the infrastructure itself, right? So for example, this service is running on Kubernetes, and we can see all the containers, how many pods are running there, their resource usage, and so forth. All this information is received by the Datadog agent, which collects traces and sends them to Datadog. Here we can see basically all the traces that are received, and I can, for example, use one of those tags to filter the traces to only show me errors that are of the type "payment service unavailable," right? Let's click on one of those traces, one of these application requests. And as you can see, with this flame graph I can quickly understand all the services that were involved, all the way down to the payment API that returned an error. Look how easy it also is to quickly pivot between infrastructure metrics, logs, and so forth. One of the nice things that we added, again, when we think about container first, is the ability to see all your traces live, with no filtering and no sampling, for the past 15 minutes. That's extremely helpful when you're troubleshooting an issue in production, where you don't really need to index and store those traces for a long time, but rather just wanna understand what's going on at a specific time. Yeah, I have a question for you.
So in a distributed computing environment, people notice that there's something wrong, or that something's consuming too many processes or too much memory. How does Datadog help with effecting a change to fix that, or is it purely monitoring? I mean, Datadog does not roll changes out to your applications; it just receives telemetry from these applications. What Datadog does is make it very easy to detect issues, and also to investigate and understand the root cause when they happen, so the application developers, or any other users, can get their applications back up and running as quickly as possible, right? For example, if we go back to my page, I can look at my deployments. Let's filter them to show a specific app that is deployed to multiple clusters in different regions, right? I can use this screen in real time, when I roll out a new version, to see how the rollout performs: if there are any errors, if all the replicas are up and running as expected, review metrics and logs, and so forth, right? Once the application is up and running, I can use the application performance monitoring to compare between versions, right? So I can, for example, open the active version, compare it to the previous one, and see if there is a higher number of errors, which I can then look into to investigate the issue. Then of course there are also monitors that I can set up to automatically alert me when an error rate goes up or when my replicas are not available, and so forth, to really reduce the time to detection and the time to investigate. So moving forward, right, we're looking at the logs here of this application request that returned an error from the payment service, and I can now move to our logs product to quickly look at this log. As you can see, each log message is tagged with all my infrastructure tags as well as my application ones.
Along with the trace, that allows me to understand what happened before this log line. And with the logs, one of the nice things that we have here, in addition to the abilities to filter and group by different tags, is the ability to understand what is happening, right? When I look at application logs, they're usually very noisy. If I don't know exactly what I'm looking for, it's hard to understand and find what I need. With this pattern detection, I can quickly identify repetitive patterns that we automatically discover, and it helps me understand if there are any outliers or specific issues that I can quickly look into. Similarly to our application performance monitoring, which allows me to send traces without limits, our logs product does that as well. I can switch to the Live Tail, where I can see any logs that are received in my environment, by any containers and any cloud services that I'm using. Those logs are not indexed, so they're very cheap. And we built this because we understand that some logs you need to keep and store and index, which you can control and choose, and some are not that important, but in case of an incident they can be extremely important, right? So with the Live Tail, you get all the logs, without limits, available to you. Lastly, let's move to our security products. Our security monitoring product allows you to automatically detect issues; we collect and store the security signals that Datadog detects for up to 12 months, I think, so you can really understand the patterns in your environment and keep it safe. Here we're looking at one security signal for an account takeover with a brute-force attempt, and you can get a message that also tells us how to triage and respond to it. Lastly, I wanna show Watchdog. Watchdog is a page that shows you a feed of all the unusual things that you would be less likely to detect yourself. We're using machine learning and advanced algorithms to identify any issues in your services.
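The log pattern detection described a moment ago (collapsing noisy logs into a handful of repeated templates) can be sketched simply: mask the variable parts of each line, then count how many lines share each resulting template. A minimal sketch with hypothetical log lines, not the actual algorithm Datadog uses:

```python
# Sketch of log pattern detection: replace variable tokens (numbers, hex ids)
# with placeholders so repetitive lines collapse into one countable template.
import re
from collections import Counter

def template(line):
    line = re.sub(r"\b0x[0-9a-f]+\b", "<hex>", line)
    return re.sub(r"\d+", "<num>", line)

logs = [  # illustrative lines
    "payment request 1041 failed with status 503",
    "payment request 2210 failed with status 503",
    "payment request 977 failed with status 503",
    "user 42 logged in",
]

patterns = Counter(template(l) for l in logs)
top, count = patterns.most_common(1)[0]
print(count, top)  # 3 payment request <num> failed with status <num>
```

Four noisy lines collapse into two patterns, and the dominant one immediately points at the payment failures.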
For example, we're looking at a Watchdog story on one of our MongoDB databases that shows us a higher error rate for some queries at a specific time, and we can quickly create a Datadog monitor that will notify us with alerts, via Slack or any other notification system that you have, the next time it happens. So I'm gonna finish here and see if we have any questions before we move on to our container report. I know I have a question. So what size clusters are you folks monitoring out there? I mean, are we talking, you know... Any size, really. We have a lot of customers. Some of them are small and medium, some of them are very large. I can tell you that we run some of the biggest Kubernetes clusters, I think, in the world. And I'm talking about thousands and more of nodes per cluster. So how do you deal with configuration management, then? If your agent needs to be deployed on every node that's stood up, how do your customers manage, you know, updates and changes to the Datadog agent that's running on those nodes, and keeping everything in sync? That's a great question. And, you know, as I said, we're trying to stay agnostic to whatever cloud technologies and tools our customers use; they use a huge variety of tools that we support, right? Some of them, for example, adopted the GitOps approach, where they keep everything in source control and deploy changes with CI/CD. And our agent, you know, provides Helm charts and an operator, so you can keep those manifests, those YAML files, in your source control and deploy them across multiple nodes and multiple clusters. With Kubernetes, of course, and OpenShift, we use the DaemonSet approach, where the DaemonSet basically updates the Datadog agent on each of your nodes. We also support Ansible and Chef recipes, where people use VMs directly and deploy the agent on them.
So, you know, the goal is to create a single agent that provides you, and you can find everything in our documentation, which covers this, support for any CI/CD and configuration management tools that you have. Okay, and does everybody run this in the cloud? I mean, are there people who say, well, sorry, but our policies are that we don't want anything outside of our own infrastructure? So can people use Datadog on site? Do you have something other than a SaaS model? We do not have anything other than a SaaS model, but we do provide a lot of capabilities that allow customers to securely and efficiently monitor their on-premise clusters. You know, these features include things like automatic redaction and scrubbing of sensitive data, using log processors to remove any sensitive information. Usually metrics are not that sensitive, but we also provide capabilities to remove tags and things like that. But the point is that, you know, even if you're running on premise, you can keep all your sensitive information and you can keep your applications running there, but you still want a unified and reliable monitoring solution in the cloud. I can tell you that we have a lot of different types of customers from different industries and verticals, and some of them are, for example, financial customers with the most strict compliance requirements, and, you know, they use Datadog and we work with them to meet those requirements. Our Datadog agent provides you all these capabilities to customize and control what is being sent, what is being delivered. And I think specifically for monitoring, right, having a reliable SaaS platform is really one of the main reasons to use Datadog in the first place. I was just curious, because I would imagine that there's some companies who are extremely paranoid about, you know, maybe there's some government agencies or, you know, the IRS or, you know.
Yeah, we have a couple of options for that. For example, I think we announced a cloud offering for government customers, right? The cloud that we built for government customers is isolated from our public cloud offering and is more secure in some ways, or meets different compliance needs, if that makes sense. Sure, okay. So we said earlier that you were going to talk about the results of your survey. You put out a survey every year. I think it comes out in October or November, right? We usually release the report during KubeCon North America. And so this survey you put out is a status of container adoption within... what's the sample size? It must be at least a hundred different sites that provide information for this, right? Yeah, for the report we're basically examining more than 1.5 billion containers that are run by... Sorry, did you say 1.5 million? Billion, nine zeros. Oh, with a B, okay. With a B. I actually knew that, I was just trying to tee it up. Right, yeah, that's a lot of data, as you can imagine. We have a really talented data science team that helps us produce this report and find all these trends that we publish every year. Okay, and so we're going to go over the one you folks published this past year. Correct. Okay. Should I go ahead and share my screen, or do you want to? Yeah, you can share your screen, sure. Sounds good, let's get started. Can you see it showing up? I can now see your screen. Great, yeah, so one of the first trends that we wanted to start with is about Kubernetes. Kubernetes, of course, has a lot of flavors, such as OpenShift, and our findings show that more than 50% of containers are now running in Kubernetes. It's pretty exciting to see the rapid and steady rise of Kubernetes. As opposed to running on what? So, before Kubernetes... well, Kubernetes is an orchestration platform, right?
Which, as I mentioned before, abstracts some of the complexity of the cloud and of managing the infrastructure. Before that, organizations still used containers, or in some cases they were still running monolith applications and deploying them directly on the machines themselves. So you needed to say: I'm going to run this container, or this application, on host X or Y. With Kubernetes, things are changing, and the orchestrator is basically responsible for scheduling those containers on your behalf, on your infrastructure. One of the changes, in terms of the user experience or the behavior, is that application teams do not need to know or care much about the infrastructure or where they are deploying, whether it's in cluster X or cloud provider Y. Instead, they just tell Kubernetes, I want to run these applications, and Kubernetes goes to that pool of machines and runs them. Before Kubernetes, to complement this answer, there were other orchestration services, right? One of the most popular orchestration services, from what we see, is Amazon ECS, which provides a simpler way to run containers, in terms of the different options you can customize, compared to Kubernetes. And Amazon was also one of the first companies to release a managed orchestration platform, which became super popular. All right, so fact number two was that by now we see that 90% of containers are orchestrated. That means, again, that all these Docker containers (and we're now seeing the increased popularity of other container runtimes as well) are just managed by an orchestrator such as Kubernetes or ECS. Moving forward, this was a pretty surprising fact, right? What we found was that the majority of the workloads being deployed to Kubernetes are not utilizing CPU and memory efficiently.
So for example, with CPU, you can see that about 30% of all containers are using less than 10% of the CPU they requested, and 49% of containers are using less than 30%. With memory, we've seen a similar picture. And that's kind of counter-intuitive, given that Kubernetes is able to bin-pack and automatically schedule containers in the most efficient way. There are several reasons why this is currently happening, which I can talk about quickly, right? One of them has to do with what the journey to Kubernetes looks like. Most companies had their own applications that they ran before Kubernetes, and the first phase of this journey to Kubernetes, or to orchestration, is more like a lift and shift of your applications to Kubernetes. During this process, you really try to preserve high performance and you want to scale, especially during this past year, where we've seen the digital transformation accelerating, and you do not want to, for example, risk your application being OOM-killed or throttled by Linux. So that's kind of the first phase, right? The other thing is that when you think about where the customers we work with are now, most of them are relatively new to running on Kubernetes. And we think that in the next year we'll see the focus shifting from performance, now that performance is good and automatic scaling is working, to cost optimization, which basically means utilizing the CPU and memory, which are usually some of the major expense factors in running cloud services and applications. I was going to say, so what's the ideal number? I would think you'd probably want to be sitting around 80%, right? Ish? Exactly, yeah. And you know, if you think about it, right?
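To make the utilization stat above concrete, here is a toy calculation comparing each container's actual CPU usage against the cores it requested from the scheduler. The container names and numbers are invented for the example, not Datadog data.

```python
def utilization(used_cores, requested_cores):
    """Fraction of requested CPU actually consumed."""
    return used_cores / requested_cores

# Hypothetical containers: requested cores vs. actually-used cores.
containers = [
    {"name": "web", "requested": 2.0, "used": 0.15},  # <10%: heavily over-provisioned
    {"name": "api", "requested": 1.0, "used": 0.25},  # 25%: still well under-utilized
    {"name": "db",  "requested": 4.0, "used": 3.30},  # ~82%: near the ~80% sweet spot
]

# Count containers in the "<10% of requested CPU" bucket from the report.
under_10 = sum(1 for c in containers if utilization(c["used"], c["requested"]) < 0.10)
print(f"{under_10}/{len(containers)} containers use <10% of requested CPU")
```

In practice these "used" numbers come from the monitoring agent's metrics, and right-sizing means shrinking the requests until utilization approaches that roughly 80% target.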
If you already had your applications before moving to Kubernetes, those were not necessarily monoliths, but they were composed of a relatively small number of services. With Kubernetes, you basically need to specify for each container how much CPU and memory it uses; those are the requests. The problem is that if you have very large containers and you want to schedule them, to bin-pack them efficiently onto nodes, there's only a limited number of large containers you can bin-pack onto a single node. The reason I'm mentioning it is because another trend, which we'll show a little later in this report, is the move to microservices, the adoption of microservices. Microservices is basically an application architecture where you have a high number of services, a high number of containers, that are smaller. And if you take a lot of small stones and put them in a jar, you'll probably have less air left than if you try to put a few large stones in the jar, which will leave a lot of gaps in between. So that's kind of what we're seeing in play here. And we believe that as companies move more towards microservices and service mesh architectures, that will also increase and improve the utilization of cloud resources on Kubernetes. So that kind of captures what we've seen here. It's pretty interesting. Let's scroll down a little bit and talk about Fargate. So Fargate is a compute service by AWS that allows you to run containers on a serverless compute platform. It basically abstracts away the need to manage hosts. As you can see in this report, we've seen Fargate usage increasing to more than 30%, a pretty high number for serverless containers on a single service such as Fargate. Pretty exciting. Serverless containers, I think, will unlock a lot of use cases and benefits over the next few years.
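The "stones in a jar" point can be sketched with a tiny first-fit bin-packing simulation: many small containers fill fixed-size nodes with less wasted capacity than a few large containers requesting the same total CPU. The capacities and request sizes are purely illustrative.

```python
def pack(requests, node_capacity):
    """First-fit bin-packing: place each CPU request on the first node it fits."""
    nodes = []  # each entry is the CPU already allocated on that node
    for r in requests:
        for i, alloc in enumerate(nodes):
            if alloc + r <= node_capacity:
                nodes[i] += r
                break
        else:
            nodes.append(r)  # no existing node fits; provision a new one
    return nodes

cap = 16.0
large = [6.0] * 7    # 7 containers x 6 cores  = 42 cores requested in total
small = [1.5] * 28   # 28 containers x 1.5 cores = 42 cores requested in total

for name, reqs in [("large", large), ("small", small)]:
    nodes = pack(reqs, cap)
    util = sum(reqs) / (len(nodes) * cap)
    print(f"{name}: {len(nodes)} nodes, {util:.0%} average utilization")
```

With these numbers the large containers need four 16-core nodes (only two fit per node, leaving 4 idle cores each), while the same total demand in small containers fits on three, which is the utilization gain the speaker expects from microservices.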
And it's worth mentioning here that Fargate is probably a good representation of a lot of other serverless compute and orchestration platforms that are a bit more nascent than Fargate, which was released, I think, a few years ago, but that will become popular as well, right? Even OpenShift and IBM have serverless container services, such as OpenShift Serverless, which uses Knative, a really interesting technology as well. And serverless containers are especially interesting because containers are already ephemeral, while the host is not something that you tear down every second, right? So having the ability to scale your containers up and down and run them without any infrastructure, completely abstracting it away, makes a lot of sense in many interesting use cases. So that was about serverless. Michael, let me know if there are any questions or if you'd like to ask me anything. I was just pinging Chris Short to see how we're doing on time. I think he said we can go over a little bit if we need to. Sounds good. How much more time do we have? 10 minutes, maybe? About five. Sounds good. So, a couple more trends here, right? Kubernetes nodes, as we can see in this fact... I'm sorry, Chris said we can actually run over, so we're good. Oh, node sizes in Kubernetes are changing as clusters become larger. What we found is that in small clusters the use of small nodes is still pretty common, but as you move towards larger clusters, those small nodes kind of disappear and we see more large nodes with 16 cores or more, and of course that includes 32, 64 and even more. That actually makes a lot of sense, because when you run a Kubernetes node, you have kind of a sunk cost of processes such as the kernel, the hypervisor, and the container runtime, as well as Kubernetes-specific components like the kubelet, that take resources that are expensive.
And those basically do not scale linearly when you use larger nodes, right? Because you can run a lot of containers on a single large node, and your allocatable CPU and memory resources just increase. The other thing is that with Kubernetes, a failure in a node is less of an issue. And with large clusters that have 1,000 or more nodes, the failure of a single node is probably not gonna have a severe impact on performance, which is something that organizations are starting to accept more and more. So that's pretty exciting. And the next one is about networking technologies. Kubernetes does a great job of abstracting the cloud complexity, but one of the things that is sometimes left to the application developers and the platform engineers is managing the networking between containers. That complexity also increases as the number of containers increases. The main technologies that deal with container networking and security, as you can see here, help containers discover each other and really simplify that communication for the application developers themselves. One of the interesting findings we had was that Calico, which is a great networking technology, is the most popular. We see a lot of other technologies, and these technologies are very, very important. And this segmentation, this diversification, shows us that this is an area in which no one is yet dominating, and it will be very interesting to see what happens in the next few years. We believe that the number of technologies for container networking and security will continue to increase. We have some technologies such as NGINX and Istio that are super popular; Istio, for example, is used by Red Hat and a lot of other companies, such as Google, to build service meshes. And that's something that we don't think will change anytime soon. So, related to networking technologies, and I think with that we will maybe wrap up the container report.
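The node-size argument above can be shown with a back-of-the-envelope calculation: system daemons (kernel, container runtime, kubelet) cost a roughly fixed slice of each node, so the allocatable fraction grows as nodes get bigger. The one-core overhead figure is an illustrative assumption, not a measured value.

```python
# Assumed fixed per-node cost of OS, container runtime, and kubelet.
FIXED_OVERHEAD_CORES = 1.0

def allocatable_fraction(node_cores):
    """Fraction of a node's CPU left over for workload pods."""
    return (node_cores - FIXED_OVERHEAD_CORES) / node_cores

for cores in (2, 4, 16, 64):
    print(f"{cores:>2}-core node: {allocatable_fraction(cores):.0%} allocatable")
```

Under this assumption a 2-core node loses half its capacity to overhead while a 64-core node loses under 2%, which is why small nodes fade out of large clusters.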
We also published a fact about service mesh adoption. Service mesh technology is really used as an abstraction over the application networking, for applications that consist of a lot of small containers or small services. The infrastructure layer of the application networking is not solved today by Kubernetes, right? So if you're using, for example, the AWS cloud, you might want to use the AWS VPC for networking, right? But if you're running your containers on other runtimes, such as on-premise or in virtual clusters, the underlying network infrastructure might be different. That's one of the core benefits and promises of service mesh technologies, which is really exciting. However, what we found in this report is that while a lot of companies, compared to our report last year, are now experimenting with and trying service mesh technologies, the adoption is still early. If you look at how many containers, sorry, how many organizations, are actually running the majority of their workloads using service mesh technology, those numbers are still relatively low. And when we were talking about this the other day, I basically admitted that I'm no expert on service mesh, but is this because the sizes of the containers are rather large, comparatively speaking, and service mesh adoption is probably going to increase when the containers get smaller and smaller, and there are just millions and millions more of them? Exactly. I think that is the core reason. Most containers are still relatively large, and when you're using a services architecture (not a microservices architecture; a services architecture, which is still way more popular), you already have solutions that provide you some of the main benefits that service meshes do, such as blue-green or canary deployments, right? You could use an ingress controller such as NGINX to route traffic between different application versions or different replicas of services.
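The canary routing an ingress controller gives you, as mentioned above, boils down to weighted traffic splitting: send a small fraction of requests to the new version. Here is a minimal sketch of that idea; the version names and the 90/10 split are illustrative, not any particular controller's configuration.

```python
import random

def route(weights, rng=random.random):
    """Pick a backend from {name: fraction}; fractions should sum to 1.0."""
    r = rng()
    cumulative = 0.0
    for backend, w in weights.items():
        cumulative += w
        if r < cumulative:
            return backend
    return backend  # fall through on floating-point rounding

# Simulate 10,000 requests through a 90% v1 / 10% v2 canary split.
random.seed(0)
hits = {"v1": 0, "v2": 0}
for _ in range(10_000):
    hits[route({"v1": 0.9, "v2": 0.1})] += 1
print(hits)
```

This centralized approach works fine for a handful of services; the speaker's point is that it stops being granular enough once there are thousands of microservices, which is where a per-service mesh sidecar takes over.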
However, once you move to a microservices architecture and the number of services grows by an order of magnitude or two, to thousands of microservices, an ingress controller, which is a more centralized way to route traffic, is no longer scalable or granular enough for that. And we think that as the number of services organizations run increases, service mesh adoption will follow as well. Fair enough. Cool. I don't know if we have time for one more, or do we want to wrap up? I'll leave it to you. I think we do. This is good stuff. We've got time. Sounds good. So one of our last facts focused on the most popular technologies running in containers today. Not a lot of new surprises here, since the dominating technologies are still NGINX, Redis and Postgres, but we had a few newcomers, right? I think one of the interesting ones is Vault, which came in 10th, I think, in the order. Vault is a really exciting technology by HashiCorp that allows application developers and platform engineers to keep secrets and passwords safe for environments like production, where basically each pod carries an identity and fetches them from the secure vault during deployment and continuous integration. And related to that, we saw that in Kubernetes specifically, and OpenShift, the top container images running in StatefulSets (StatefulSets are for stateful applications that require some persistence of state) are databases or data services such as Redis, Elasticsearch, and Postgres. And that's pretty interesting, I think, given that Kubernetes in its early days was not very friendly for running those technologies.
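The Vault pattern described above, where each pod presents an identity and fetches its secrets at startup instead of baking credentials into the image, can be sketched like this. `fetch_secret` is a hypothetical stand-in stub; the real HashiCorp Vault flow goes through its HTTP API (for example via a client library and the Kubernetes auth method) and is not shown here.

```python
def fetch_secret(vault, pod_identity, path):
    """Stub of a Vault read: check the pod's policy, then return the secret.

    `vault`, `pod_identity`, and the policy layout are illustrative; a real
    implementation authenticates with the pod's service-account token.
    """
    if path not in vault["policies"].get(pod_identity, []):
        raise PermissionError(f"{pod_identity} may not read {path}")
    return vault["secrets"][path]

# Toy in-memory "vault": secrets plus per-identity read policies.
vault = {
    "secrets": {"db/password": "s3cr3t"},
    "policies": {"checkout-pod": ["db/password"]},
}
print(fetch_secret(vault, "checkout-pod", "db/password"))
```

The point of the design is that secrets never live in the image or the manifest; access is tied to the workload's runtime identity and can be revoked centrally.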
A couple of things changed over the years, of course: hundreds or thousands of improvements to Kubernetes, but also a lot of support that came from those open source technologies, and from the commercial vendors that maintain them, to make them easier to run on Kubernetes as well. That also makes a lot of sense, because for organizations that use Kubernetes, the benefit of running all your services, including the data that connects them all together, in a single cluster, in a single environment, in a single network, obviously matters a lot. So it makes a lot of sense that we now see all those technologies becoming popular, which means that the journey to orchestration and to Kubernetes is safer and more predictable. Great, Mike, I think we hit the mark on that report. I think so, I think so. And that comes out every year, right? So the next one's gonna be coming out November-ish. Exactly. What are your predictions? I mean, as I said, right? We think that more and more customers and organizations will move to Kubernetes and the different flavors of Kubernetes, like OpenShift and all of those. We think that serverless containers are becoming more popular this year. I think that with service meshes, as microservices architecture becomes the more recommended approach for cloud-native applications, where you want to run containers everywhere, microservices adoption will increase, as well as service meshes. And the last thing is about security, right? A lot of these technologies are built and designed for containers, and they support the security requirements that running containerized applications at scale brings. So that would probably be another major factor, because we see a lot of open source and commercial solutions for securing containers, and we're pretty excited to see what the dominant technologies will be a year from now. Okay, well, we'll find out, we'll find out.
Datadog, ladies and gentlemen, on the show today. I'm gonna share my screen if I can figure out where the share button is on my own tool again. It's in the same place, Michael. Yeah, I know, I'm joking. So, yeah, you guys have a free trial. If people want to use the free trial, we have it here on the screen. Yep. You can't click on it, but you can type that in. And thanks for coming. Yeah, yeah. I know that you guys are a great partner of ours, and thanks for being on the show. Looking forward to having you folks back again in the near future. Likewise. It's always great. Thank you very much, Michael. Okay.