introduction. So again, thank you everybody for joining us today. And on the call, we have myself, Eva, and Drew. Eva and Drew, would you guys like to add a little bit more of a personalized introduction, say what you do for Instana and Turbonomic, before we get the ball rolling? Yeah, absolutely. It is a pleasure to meet everyone. Happy to be here. My name is Drew Flowers. I'm the director of solution engineering for North and Latin America here at Instana. I've been with the company about three years now in a solution engineering role, recently promoted to director, and for the previous 17 years I basically architected data centers for all varieties of organizations. So I've been around the block on many different types of observability challenges, whether you're talking about going from on-prem to the cloud or staying cloud native. And in my time with Instana, I've seen many different flavors of customers doing many different things with many different tools. So it's been an interesting experience, and I'm excited to share what I have for you folks today. And I'm Eva Tutsi. My primary role here at Turbonomic has been product management, and it's been a fun five years being involved in the Kubernetes community and watching this evolution of what I think is a great platform to support these cloud native principles. So, like Drew, I've been around the block for about 23 years in IT. I've watched everything from applications going off of mainframes, I probably shouldn't even say that, to open systems, to finally now containerization, and even some exciting stuff with functions and serverless. Like Drew, I've watched an interesting evolution of people adopting new technology, monitoring it, and realizing that the answer is: I need to be able to quickly identify and solve problems. So looking forward to sharing as well. Thank you both for that. And now to dive into today's presentation. Drew, would you like to kick it off? Absolutely.
So let's talk about what it is that made IBM pick up Instana to begin with. Really it comes down to what you're seeing right here: the impact of a single second of application latency. With IBM's legacy stack, they had plenty of great tools facilitating monitoring, whether you're looking at Tivoli or things like ITSM. But overall, for their application stack, especially in more modern architectures, they weren't quite seeing the efficacy they were after with those tools. At Instana, the genesis of our tool really came about because operators tend to be using tools that give you anywhere between 15 and 30 seconds of polling. They're doing aggregation, they're rolling up a lot of different things. But understanding the impact of a single second is very important. Dina, could you click through that for me? When you talk about what a second actually means, you're talking about customer conversion and customer satisfaction. It's a statistical fact that users will abandon a website after just three seconds of delay. So understanding that second-to-second, moment-to-moment interaction, whether we're talking about application performance provided through tracing or time series data like monitoring SQL deadlocks, is really important. When you're dealing with 15- to 30-second aggregates, you're dealing with a polling rate that might be hiding customer issues or latency on the other end. So we created a platform that never samples the transactional monitoring. It collects time series data every single second. It's very high fidelity and highly automated. It fit in perfectly with IBM's forward vision, as well as their goals in picking up something like Turbonomic, and you'll see a very great synergy between the two across this presentation. Next slide for me, Dina.
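To make the aggregation point concrete, here is a small illustrative sketch (the numbers are invented, not from the webinar) showing how a 30-second average can completely hide a two-second latency spike that per-second collection would catch:

```python
# Illustration only: a 2-second latency spike hidden inside a 30s average.
# 28 healthy seconds at 80 ms, then two seconds of severe latency.
per_second_latency_ms = [80] * 28 + [2000, 2100]

thirty_second_avg = sum(per_second_latency_ms) / len(per_second_latency_ms)
one_second_peak = max(per_second_latency_ms)

print(f"30s aggregate: {thirty_second_avg:.0f} ms")  # looks tolerable
print(f"worst second:  {one_second_peak} ms")        # well past the 3s abandonment zone
```

The 30-second roll-up reports roughly 211 ms, while two individual seconds were over the two-second mark, which is exactly the kind of moment-to-moment detail a 15-to-30-second polling rate misses.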
The reality is that, as I mentioned, I've seen a lot of different companies go from on-premises to the cloud, or hybrid, or remain cloud native entirely. And as you go from, say, a SOA-based or monolithic architecture to something more like Kubernetes, containerization, and microservices, you're also increasing the complexity of the environment. Orchestration is changing things every second, every minute, and again, that goes back to why every single second of monitoring matters. And when you're talking about automation as well, what you're looking at is automating the instrumentation and discovery of new services and platforms. Without that level of automation, without a tool that keeps pace with the rate of change of modern architectures, you wind up with something very similar to what you see here: an incomplete vision of what you're actually attempting to serve out to the world. So when things break, it creates a problem where operators and teams now have to rope together these massive war rooms to try to figure out who's at fault, who needs to fix their services, or where the application went wrong. Whereas if you had a full stack observability tool, something that keeps pace with the rate of change of your environment, perpetually discovering everything placed inside of it, this picture doesn't happen. You retain 100% fidelity in your understanding, which means when things break, you can pretty easily figure out where. When we actually start looking at the tool later on in the presentation, you'll see exactly what I mean when I say understanding where everything is and how it breaks. And that's also got a lot of follow-on effects that feed into Turbonomic and its ability to create decisions for resource management. Go ahead, next slide, Dina. So this is where we wanted to open things up and introduce a poll for the group.
So in this case, what we're trying to ask is where you feel, as a DevOps practitioner, a developer, or even the owner of a team, your team is spending their time. Are they doing development? Are they configuring tools to monitor or maintain that development? Are you performing resource management, chasing app issues, onboarding new apps, performing capacity planning, or maybe it's everything? Go ahead and shoot an answer in there for us, and we'll address how those are looking toward the end of the presentation as well. And go ahead to the next slide. Here we go. And I believe this is where Eva steps in. Can you speak on speed and how the rate of change can affect performance within our applications? Sure. Yeah, to follow along with what Drew was saying, this new paradigm that people adopt does introduce complexity. But it's all for good reason. We make these decisions with application architectures and the technology platforms they run on because there has to be business benefit. And one of the key drivers of the agile, elastic, and portable cloud native principles, I think, is getting stuff to market faster. If you're running a mission-critical business and you rely on these applications, built on these technology stacks, to deliver business capabilities, you probably have time-sensitive requirements. You want to get new features out there faster, probably to be more competitive or to have a better user experience. When you solve a performance issue or a problem and you have a fix, you want to get it out there faster. But with this complexity, it also means there are a lot more moving parts now. So what you have is a business need to run faster because you made the investment in cloud native, you have more moving parts, and now you have more stuff changing quickly.
So in the past, in my olden days, when we had waterfall methodologies, we had several months we could use to test our processes, to make updates, and to roll out changes. Not anymore. This bullet train is leaving the station, traveling at 110 miles per hour. So relying on processes that require days and months to allow applications to deploy into production, or to allow change, isn't going to cut it anymore. You have to rely more on automation, to make sure that not only are you moving things down the pipeline, but that when you have, excuse me, an issue, a performance problem, you have software to help you spot, resolve, and even prevent these issues faster, because it just can't wait. Would you agree, Drew? Oh, absolutely. That's 100% correct. It's all about automation. It's all about being on that train and staying on that train. So that, yeah, go ahead, Eva. I was just gonna give a quick introduction to the slide, but take it away. Yeah, so what slows down the train? Part of the challenge here is that even with this great new paradigm, for the application developer, who's really chartered with creating all this wonderful new functionality, you want to obfuscate away the infrastructure, right? So if you're a portable, agile, elastic service developer, don't worry about the infrastructure. But in exchange for that, we put a burden back on this developer, because now they're system owners. And why? Because the definition of resource management has now become part of the manifest of my workload. Not only am I creating this manifest of my desired state, I've got to do things like create a spec around memory limits and CPU limits. And what's one of the most fun things an application developer can work on? Yeah, it's not sizing things. That's not the most fun. That's right up there with a root canal without Novocain.
Right. But it has a downstream impact. It has an impact not only on the service, because what if there is a performance problem related to resources? Now somebody who should be working on features and functionality is working on trying to pluck away at where the bottleneck was, where the constraint was. It has a downstream effect on everybody else using that platform, because if something is not sized properly, it could be impacting resource availability for somebody else. Right? So that's an interesting problem. And I think what Drew and I want to talk about is: we need to take this low-hanging fruit of resource management away from these folks. You need to focus on building applications. Your DevOps teams need to focus on helping you get onto that bullet train and run applications in the best way possible, but also handle the speed challenge. So we need help to solve that problem of assuring performance. This is where, in all these steps, Drew and I want to advocate: get the right data to the right people, and have that data work for you, so that we have a feedback system but also the right level of automation. That way we can avoid the pitfall of, gee, everything worked okay in build, testing, and integration, and now I've deployed into the running environment, where I'm co-mingling with the other tenants, and things are behaving in different ways than I expected. So we should have software make those decisions on what resources you need and when you need them, so that you have not only an automated pipeline but automation solving for these issues, and you don't have to sit there and figure out what my mem limit should actually be, right? I mean, this is right in line with the same story that we like to tell as well. When you're talking about not just resource management but even observability, the question becomes:
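For readers less familiar with the burden Eva is describing, here is a hypothetical sketch of the resource spec a developer now has to hand-author per workload, expressed as the equivalent Python structure (the image name and all the numbers are illustrative guesses, which is exactly her point):

```python
# Sketch: the resources section a developer must guess at in a Kubernetes
# manifest, shown as the equivalent Python structure. Values are invented.

def parse_millicores(cpu: str) -> int:
    """Convert a Kubernetes CPU quantity ('500m' or '1') to millicores."""
    return int(cpu[:-1]) if cpu.endswith("m") else int(float(cpu) * 1000)

container_spec = {
    "name": "shipping",
    "image": "example/shipping:latest",  # hypothetical image
    "resources": {
        "requests": {"cpu": "250m", "memory": "256Mi"},  # scheduler reservation
        "limits":   {"cpu": "500m", "memory": "512Mi"},  # throttle / OOM ceiling
    },
}

req = parse_millicores(container_spec["resources"]["requests"]["cpu"])
lim = parse_millicores(container_spec["resources"]["limits"]["cpu"])
assert req <= lim, "a request larger than its limit is invalid"
print(f"cpu request={req}m limit={lim}m")
```

Get those four numbers wrong and you either throttle your own service or starve your neighbors on the node, which is the "co-mingling with the other tenants" problem.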
If the goal of a DevOps organization is to try to automate as much as possible, to focus on product-forward development, then why do many DevOps organizations adopt tools that force them to spend a lot of time configuring that tooling or manipulating resource management? You're spending less time innovating your product as a result of some of these tools being adopted. And that's one of the goals that we at Instana, and especially Turbonomic, seek to resolve: to get your operators, your DevOps folks, your developers out of that configuration of external tooling, and to have that tooling keep pace with you, so that you can focus on what really matters, innovating that product, continuing your own forward development, and facilitating your roadmap, not dealing with our product. Yeah, and just so everyone can get a visualization, excuse me, this is how Instana and Turbonomic work together. Eva, Drew, do you want to speak to this relationship right here? Yeah, absolutely. I mean, if you think about it very simply, work gets into the pipeline of automation just like all of the steps that your folks are doing for development and deployment. But our goal here is, instead of trying to make you manipulate your monitoring tool, we want to automate that. So we come in with a highly autonomous, intelligent agent that performs auto-discovery, auto-configuration, and auto-instrumentation of your application. So we begin generating that high fidelity content with very limited interaction by you or your users. And as you manipulate that application, introduce new containers, new services, the agent detects these things and continues operating with that level of autonomy. Then we feed that data over to Turbonomic through various API connectors, and Turbonomic now has very high fidelity, high resolution information by which it can make those resource management decisions. Anything you want to add to that, Eva?
You're exactly spot on, Drew. In fact, one of the other things we can do, and I apologize if you already alluded to this, is that resource management is important, but you should also be targeting to manage to an SLO, right? And that's the great thing about Instana. Not only am I going to get, and what Drew is going to demo for you guys is some great information here, that very first, most important level of triage: is it the code, or is it the environment? Because if it's the code, Instana is going to help you know exactly what to do. But you should be managing your environment to a desired SLO. I want to have 30 seconds, excuse me, 30 milliseconds of response time, right? That's important information that also should be driving your problem identification, resolution, and automation. Absolutely. And that's something that Instana can definitely help out with as well. We'll show you in the demonstration how you can take that information and wrap SLOs around it, and then you can even see those SLOs within Turbonomic when Eva shows her part as well. Yeah, and just before we hop into Drew's Instana demo, this is just another visual across the pipeline, sort of where we work together. Is there anything on this slide you guys want to touch on, or should we just hop on into the demo? Actually, I think this is kind of an interesting time to bring up some of the results of that poll, because when you think about this map here specifically, it really talks about the different things that we just polled you about. When you look at those results, all of the above clearly won, with 39% of the vote. But when you consider that your team is spending a lot of time on all of these actions, there are tools out there that can facilitate automation of resource management and automation of observability.
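The idea of "managing to an SLO" can be shown in a few lines. This is a hypothetical sketch with made-up latencies, checking a stream of requests against a 30 ms response-time objective with an assumed 99% compliance target:

```python
# Sketch (hypothetical numbers): checking request latencies against a
# 30 ms response-time SLO with an assumed 99% compliance target.
SLO_MS = 30.0
TARGET = 0.99  # fraction of requests that must meet the SLO

latencies_ms = [12, 18, 25, 29, 31, 15, 22, 45, 19, 27]  # invented sample

within = sum(1 for latency in latencies_ms if latency <= SLO_MS)
compliance = within / len(latencies_ms)

print(f"compliance: {compliance:.0%} (target {TARGET:.0%})")
print("SLO met" if compliance >= TARGET else "SLO violated")
```

A violated SLO is the trigger Eva describes: it is what should drive problem identification, resolution, and ultimately automated resourcing decisions, rather than a human watching dashboards.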
You have to start asking yourself: is your team spending their time where it's most beneficial, or could we be providing them better tooling, to perhaps not have them doing all of the above, but to focus a little more on their development efforts or on the application itself? Because these are the things that are going to drive your customer interactions. Your customers don't notice what you're monitoring your application with. They don't notice what's behind the scenes manipulating resource management. They just see the end result. And that's what's really key: we want your teams focusing on that end result. Are your customers getting what they need, or are your teams spending too much time trying to generate the data to get there? That's what we want to help with. Exactly. And one of the other important things about our data model integration is that it's all about context. Instana is giving us the application definition and KPIs. I'm going to stitch that together with other data from the Kubernetes platform, but then I can continue to stitch this information further down the stack. So while Drew is going to tell you that we've got an issue in some key areas of your application, I can help isolate if it's something in the underlying infrastructure, even below the platform, that is the source of the issue, right? And I think this is how it marries together, because we want to get you going faster and not have you spend all your time trying to triage what's going on. Precisely, right on the money. Now, Drew, are you ready to give us a peek at Instana? Absolutely. Let me go ahead and steal the screen share from you, Dina. Of course. Now, the interesting thing about Instana is that our tool is really designed around one very core principle: we don't want to inundate you with data that is meaningless.
A lot of times when you look at monitoring tools, what you're going to see are massive, sprawling dashboards with tons of information, but a lot of this stuff tends to turn into noise. They're not metrics that are beneficial to any sort of troubleshooting effort. They're just metrics thrown up there so they can say they did it. So what you're going to see when I go through this is that Instana is designed to show you the data that you need, when you need it. It's curated with a very specific workflow that's designed to help any user of any knowledge level understand what we're putting inside of here. Now, there are four core components to this tool. There's an end-user monitoring suite that looks at websites and mobile apps. This of course uses JavaScript on the website side, and a little Android or iOS library on the mobile side. We can contextualize this data back to our application traces, which we do not sample. We retain 100% of your traces that go through the system, so you don't have to worry about a sampling algorithm eliminating an actually useful trace. On top of this, we merge it together with an understanding of the underlying platform, which we define as Kubernetes, or VMware Tanzu, formerly Pivotal Cloud Foundry, and VMware ESXi. Kubernetes also includes OpenShift, the flavor from Red Hat, now part of IBM. And then there's the underlying infrastructure: the worker nodes, the VMs, the cloud services. Now, like I mentioned in my initial conversation here, we start with a highly intelligent agent that, as you can see here, supports a variety of technologies, whether it's Docker, Kubernetes, OpenShift, Cloud Foundry, Linux, Windows, the usual suspects. This agent is designed to automatically, dynamically discover all of the assets, these things you're seeing here on the bottom left-hand side, from Docker to Spring Boot, that exist on your hosts.
On top of this, when we find things like Spring Boot applications, we're going to implement the Java tracing for you automatically and let you know if there were any failures there and anything you might need to fix. Now, as the agents are doing this, we build this. And this is really where the secret sauce is happening. This is what we call the dynamic graph. At its core, as the agents report everything they find, we are in real time building a dependency track and inventory of your entire environment: from the hosts the agents are installed on, to the containers that exist on the host, to the processes in the containers, to the endpoints under the service and process itself. All of it is tracked for its underlying dependencies. So when an execution happens, when an endpoint gets hit, we know exactly what host, what container, what service, and what endpoint that all was, and we show that information dynamically. Now, when you do something like open up a dashboard like this, what you've actually done is navigate to that asset on the dynamic graph, which exposed the underlying inventory. Here we can see what we mean when we say high fidelity infrastructure metrics. You can see these graphs updating in real time as agents stream data back. And this information is collected at this cadence every step of the way, whether I'm jumping down into the Spring Boot apps to see requests and sessions, or using my buddy up here, which we call the context guide, to jump back through the infrastructure to look at the JVM, or maybe even bouncing over to the Kubernetes side of the fence to understand how the pod is behaving today. Now, what you just saw there, jumping from the infrastructure to my platform category and drilling directly into the pod, might seem like a minor feature set. But in reality, this is something a lot of tools struggle with: that carrying of context with your actions. And that's one of the things Instana is built on top of: context, context, context.
So every time you do something like this, using the context guide to jump between parts of the product, looking at traces and seeing the underlying infrastructure, that context is carried with every action that you perform. Now, over here on the platform side, we see a lot of cool stuff, whether it's the events happening under the orchestrator, or even diving into the DaemonSet of Instana and seeing the actual deployment specifications. So within this tool, we've autonomously collected infrastructure data for my CPU and memory usage, all of the information being exposed by the Kubernetes platform. We've contextualized that back to the individual entities, and we've added some additional data, like the K8s events and the underlying deployment specs. On top of this, we've instrumented the services that Kubernetes is running. Now, this view here that you're seeing, including my dependency mapping right here, is fully dynamically generated by tracking the executions that go through the system and inspecting them. You do not tell us what dependencies exist. We simply instrument your services, we track the requests, we inspect every request that goes through the system, and we pick out the dependencies for you. Down to my service breakdown: what services do I have, what are my health checks, and basic error and log messages that are picked up by virtue of instrumentation. All of this is put together in this completely curated view with a base level of configuration: identification of three tags, two radio buttons. That's it. By doing this, our application perspective system, which is what we call this system here, immediately becomes infrastructure aware. It knows all of the underlying containers tied to the namespaces, the processes underneath the containers, the hosts all of these processes and containers run on top of, through to the clusters on the back end that the databases might be hosted on.
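The core idea Drew describes, that dependencies are never declared but derived from observed calls, can be sketched in a few lines. This is an editorially added illustration with invented span data, not Instana code:

```python
# Sketch of dependency discovery from traces: edges exist only because a
# call was actually observed. The (caller, callee) pairs are invented.
from collections import defaultdict

observed_calls = [
    ("nginx-web", "catalogue"),
    ("catalogue", "mysql"),
    ("nginx-web", "cart"),
    ("cart", "redis"),
    ("catalogue", "mysql"),  # repeated calls don't create duplicate edges
]

graph = defaultdict(set)
for caller, callee in observed_calls:
    graph[caller].add(callee)

for service in sorted(graph):
    print(f"{service} -> {sorted(graph[service])}")
```

Because the map is rebuilt from live traffic, a new service or a rerouted call shows up in the dependency map as soon as its first request is inspected, with no manual topology configuration.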
And what this system allows me to do is incredibly deep. For example, if I just come here and key in on this little bump in my error count, you'll notice, going back to my dependency map, all of a sudden things are really colorful, because stuff is breaking, or broke, rather, at this point in time. But check this out. Say I'm somebody who's a new DevOps practitioner on the team. My only responsibility at this point in time is my catalogue demo service, and I want to know why it's breaking. Using application perspectives, I can just click through to it off the dependency map. You'll notice this summary page is exactly the same as we saw on the front end. That's by design. That's part of the workflow that I've been talking about. From here, I can see what endpoints are currently generating all of my errors. And right about here is where most APM tools are going to say: all right, you know your application, you know your service, you know your failing endpoint, and you know it's an error-based issue. Go to your analytics mode, throw out a query, inspect your traces. But we want to make troubleshooting easier for folks who maybe don't understand query languages as well, or just aren't attuned to the tool yet. So now I can just jump into my endpoint, and where it said dependency, notice how it now says flow. From here, we'll just flow-map this entire API chain: the front end executions coming out of my EUM service and my NGINX web, all the way down to the database interactions, what tables we were hitting, et cetera. I can now add the actual traffic pattern and colorize based on the fact that we know this is an error-based event. And immediately we can see this back here is most likely my problem: 168 attempts to connect that failed 100% of the time. Now to troubleshoot, all I have to do is click that and inspect my error messages. There's my root cause. It's that simple. And this is all put together dynamically. This is not something that I had to configure.
In fact, this is highly out-of-the-box functionality. And remember how I said we carry context? I click this link here, we get brought back over to my analytics section, and my query has been generated for me dynamically. Now I can simply look at a trace. These traces are what define my dependency map and all those metrics that you just saw. And what's really cool is I can do a lot of different things underneath this, not just understand my executions and what log messages interacted with these executions, but also see things like my JDBC strings, like I would expect, and even decompile the instrumented binary live, directly off of the stack trace, to show the code inside the UI. So there's a lot of great functionality here that spans both the DevOps and the developer side of the fence. And the last thing I'll show you here, before I hand it off to Eva for a little show of Turbonomic, is of course our event management, because this is something a lot of folks always want to talk about. Specifically because the demo I just showed you is a major MySQL failure, so I should be finding that proactively. And in fact, we do generate an event off of this. But to understand our events, you need to understand a couple of things. For starters, we understand events by understanding incidents, issues, and changes. Changes can be just about anything: container values that change, new values of the last messages from my IBM MQ service, or new services or hosts that go online or offline as a byproduct of orchestration. Issues are going to be low-level problems, things like error rate being too high, but maybe it's a deviation of 5% on one service, or maybe my CPU is starting to bump up against limits. We want to understand every low-level problem affecting the system, so that when we fire an incident, we don't just tell you there was a sudden increase in the number of errors on this service, good luck, God bless, go figure it out. Instead, what we want to show you is this.
What you're seeing here are 17 individual events tracked to eight changes, where apparently one service had a significant problem bouncing off and on that affected four application or infrastructure entities. This is what brought me here: my jump in error count going up to about 16%. But we also saw a problem where my latency spiked to about two seconds. My discount service on top of catalogue demo saw similar problems, along with my MySQL. The key here is that we tracked some significant changes where the mysqld went off on this specific host. Not only did it go offline, our eBPF sensor detected it as a SIGKILL. So we're actually flagging your potential root cause before you've ever started troubleshooting, just by looking at the traces, understanding the interaction points, and seeing, hey, there was a change at an infrastructure level that very likely impacted this chain of requests. Now, obviously this is not everything Instana can provide. We didn't even touch the end-user monitoring. But we do have a shortened time frame here, and I want to make sure Eva has time to speak as well. If you're interested in anything beyond what I've shown here, please go to instana.com and request a demo. You might even wind up with me on the line giving you another, deeper dive. With that, I'm going to hand it over to Eva and let her show you what Turbonomic has to offer. Oh, thanks, Drew. I think it's always important that you've got a great place to start. So what I'm going to do is basically try to pick up the story. Okay, now, are we seeing the Turbo UI? Just a quick check. Yes, great. All right, so if you guys are okay, I'm just going to bounce between a couple of different demo environments right now. When I'm interacting with Turbonomic, I can come in and see one thing that is key to this whole definition of making data work for you, turning data into analytics into action: the supply chain.
The supply chain, for us, is this representation of the full stack of the business application that we are getting from Instana. So there are the definitions of what you define in Instana, and again, those are some things that Drew can go over with you in a little bit more detail. How do I define an application? I can do something as simple as saying everything in this namespace, I'm trying to find an example, everything in this namespace is an application. Then it comes back here to Turbonomic, where we discover those definitions. We will start collecting response time and transaction throughput. These are KPIs we get just by adding Instana as a target. In fact, allow me to come in here and quickly show you. Right here, we have Instana as a target. We've got the Kubernetes clusters as a target, and then we have the infrastructure, which could be public cloud, private cloud, or even extensions of the infrastructure, things like converged fabric and storage details. It all comes back and renders for me this supply chain, so I can see, through the stack, the relationships of application demand, the components that are running, which require compute, storage, and network to run effectively, and then the providers of that compute, storage, and network. So hopefully that makes sense, because I think that's a key aspect. And by the way, I'm switching between a couple of demo systems, because that one was showing you things related to everything running in public cloud. Here's an example where, again, this supply chain can go in all sorts of directions, depending on the data that we're able to get from you. The more data you give us, the more in-depth this information becomes, and the more points of possible resource constraint, resource contention, and bottlenecks I can represent in the supply chain.
And these are also points that I can now generate decisions from, decisions to adjust the way resources are being managed. These kinds of resource adjustments can be dynamic, and they can also be automated. We have a definition of what we call executable decisions. Let me see if I can get to a nice little scope here of actions. So when I look at this data, I'm also asking the question: is this action executable? Say I want to resize my favorite application component here, Robot Shop, right, who doesn't like Robot Shop? Let me drill into a shipping component. I am looking at KPIs of not only resource consumption but also throttling, CPU throttling for example, which is a factor in how fast I can actually process workload. So I'm going to ask the question: does the underlying cluster have the resources I can give in order to right-size this workload? So I'm giving you data to support my decision, but more importantly, I'm giving you exactly what action you can take. I think that's an important thing, because a human being can only partly process this information, and other point solutions that operate on thresholds can say, oh, you hit a threshold, go change something. But they don't continue to ask the question: can I execute this action in this environment? Does my cluster, do the nodes that I'm running in this cluster, support the execution of this action? What is the overall trend of resources? What is the resource consumption per node? I factor all of those things into consideration for you. And then, additionally, I can represent factors of cost. For example, this particular cluster happens to be running in AWS. I can give you some insight by marrying in the information I'm getting from my cloud provider, and also showing you necessary investments that I have to make. Because here, I have a constraint in this node.
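The "is this action executable" check Eva describes can be reduced to a simple capacity question. This is a deliberately simplified sketch with invented numbers (real placement analysis weighs many more factors, such as trends and per-node consumption):

```python
# Simplified sketch (invented numbers): before recommending a resize,
# keep asking whether the cluster can actually absorb it.
node_allocatable_mcores = 4000  # CPU the node exposes to pods
node_requested_mcores = 3600    # sum of current pod CPU requests on the node

current_limit_mcores = 500      # workload's current CPU allocation
proposed_limit_mcores = 800     # right-sized allocation from analysis

headroom = node_allocatable_mcores - node_requested_mcores
needed = proposed_limit_mcores - current_limit_mcores

if needed <= headroom:
    print(f"executable: +{needed}m fits in {headroom}m of node headroom")
else:
    print(f"blocked: need +{needed}m, only {headroom}m free -> move pod or scale nodes")
```

A threshold-based tool stops at "you need more CPU"; the extra step here, checking the proposal against what the node can actually supply, is what turns a recommendation into an executable action.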
I've got CPU congestion risks, and I have actions to mitigate that risk. But if you're not going to do anything about them, I can show you that we need this additional capacity, and here's the cost impact. I'll also show you the good news: for this particular cluster, I can redistribute workloads better, I can be smarter about that redistribution, and I can even suspend this node. And by the way, when you set up this cluster, look at your choice of storage tier for the PVCs or PVs, the persistent volumes that may have been created. You're using this default storage class of gp2, and you kind of don't need it. This allows you to be even a little more intelligent about what we call responsible efficiency. I'm showing you the exact usage of resources: this pod is using this much on this PV, and you just don't need that storage class. By the way, these actions are executable. I can be clever about the way I modify this PV's storage class tier; I can actually do that from the cloud provider side as well. So my integration of these three points, application, platform, and infrastructure, also affords me different ways to execute these actions and achieve this goal of the best state possible, while being as non-disruptive as possible. Is there anything else? I think we're getting to that magic point of having 15 minutes left, but we can again give you the insights. I'm going to stay focused on Kubernetes now and how I can prevent issues as well. I understand the fluctuating demand, I understand what my SLOs are, and I can continuously make redistribution decisions, which is something that is actually missing from Kubernetes. Kubernetes does a great job of saying, hey, here's where my pods need to deploy, but it never asks the question again: is that the best place to stay running?
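The storage-tier point, a PV sitting on a default class it doesn't need, is ultimately a small cost calculation. A minimal sketch of that "responsible efficiency" arithmetic follows; the per-GB prices are made-up placeholders, not real AWS list prices, and the class names are just the ones mentioned in the demo.

```python
# Hypothetical illustration: compare the monthly cost of a persistent
# volume on its current storage class against a cheaper class that still
# covers the observed usage. Prices are assumed, not quoted from AWS.

PRICE_PER_GB_MONTH = {"gp2": 0.10, "gp3": 0.08}  # placeholder rates

def monthly_savings(size_gb, current_class, target_class):
    """Dollars saved per month by moving a PV between storage classes."""
    current = size_gb * PRICE_PER_GB_MONTH[current_class]
    target = size_gb * PRICE_PER_GB_MONTH[target_class]
    return round(current - target, 2)

print(monthly_savings(500, "gp2", "gp3"))  # 10.0 per month for a 500 GB PV
```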
So now I can have a node that is approaching congestion, and I can do one of two things: I can have a human being get involved, or I can let the eviction policies take effect. But the problem with eviction policies is that this node is already under stress. If I wait for the eviction policies, evicting these pods is not always the best way to manage and assure performance. Our analytics are not only helping identify what you should do about these bottlenecks; we want to prevent these bottlenecks, drive decisions to prevent issues, and also define when to manage the scale and size of my workloads. And all of these things are analyzed in context with each other. I know I probably threw a lot of things at you, but isn't it easier to come to a system and say, okay, I need to mitigate CPU throttling for this particular workload, because I've got a problem here with throttling, and have it answer: here's what to do, you need to increase your CPU limit by 100 millicores. And oh, by the way, you're not really using any of these others, so you can optimize the other way, down; I can reclaim some resources. You don't need all those requests you've actually specified. I think it's easier to interact with an action than it is to go look at a ton of data and try to figure out what to do. I totally agree with you on that point, Eva. As somebody who's spent 17 years doing hybrid cloud and data center architecture and management, it's certainly a lot easier to have a one-stop shop for resource sizing your environment than it is to go back and forth between the AWS console, back into VMware, back all day. It's far simpler. You're so correct. Yeah, and I'm sorry it's taking a little long for this to pop up. But again, this is both Instana and Turbonomic: multi-cloud, multi-cluster. If you want to invest in your cloud native strategy and be really, truly hybrid,
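The "increase your CPU limit by 100 millicores" action Eva describes can be sketched as throttling-driven limit tuning. This is a simplified illustration, not the product's algorithm: the cgroup-style counters (`nr_periods`, `nr_throttled`), the 20% threshold, and the 100-millicore step are all assumed parameters.

```python
# Sketch: if the observed CPU throttle ratio exceeds a threshold, raise
# the container's CPU limit by a fixed step; otherwise leave it alone.

def recommend_cpu_limit(limit_millicores, nr_periods, nr_throttled,
                        threshold=0.20, step=100):
    """Return a recommended CPU limit based on the throttle ratio
    (fraction of CFS periods in which the workload was throttled)."""
    if nr_periods == 0:
        return limit_millicores  # no data: keep the current limit
    throttle_ratio = nr_throttled / nr_periods
    if throttle_ratio > threshold:
        return limit_millicores + step
    return limit_millicores

print(recommend_cpu_limit(500, nr_periods=1000, nr_throttled=350))  # 600
print(recommend_cpu_limit(500, nr_periods=1000, nr_throttled=50))   # 500
```

A real system would, as the speakers stress, pair this with the cluster-side check: the bigger limit only helps if the node can actually deliver the extra CPU.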
we're Switzerland. We will represent all of this for you, and through the actions and the risks, hopefully give you a better idea of where to start. But of course, automation means that I can do a better job of continuously... sorry, there's nothing like a live demo, right? True. ...of continuously managing an environment and taking care of what I call the low-hanging fruit, so that you can focus on the higher-value work. So here again, I've got AWS, Google Cloud, and Azure all represented here, and it's a lot easier to come to this, see where some of my trends are and what actions I need to take, and come right in here and focus in on that. And to that point, this seems like a really good time to also mention that Instana on the observability side is tailored to the exact same use case. It doesn't matter if we're in a cloud native or hybrid cloud environment or an on-prem data center; we can span all sides of the fence and cohesively show you the same one-stop-shop view. Yep, absolutely. So here, these agents and this environment, I mean, they are running. Oh, maybe you can help me do something about the license there. You have me after the call. Yeah, I'll talk to my people, we'll talk to your people. And they're very easy to deploy, too. By the way, I'm going to give you a little plug: I've been working in Kubernetes for over five years, and I wouldn't say all things are so simple. They're not. And I was really impressed with the ease with which I could deploy these agents, and the fact that out of the box they were already configured to perform and to work. That is the goal. That is the goal. Yep. Okay, Dina? Yes. Thank you for those very insightful and informative demos; I hope everybody found those as eye-opening as I did. Very good, loved them. And then, just to keep plugging along.
This is just showing us: why stop at observability? Let's keep going. Drew, Eva, would you like to talk about this before we answer some questions? Yeah, absolutely. From our side of the fence at Instana, we couldn't be happier with the partnership that IBM has created between Instana and Turbonomic. We've always seen our tool as something beneficial not just as an observability platform, but as a data source for third parties to make decisions. We've partnered with other CI/CD platforms, and now of course with our friends over at Turbo, now that we're all under the IBM umbrella, and the partnership couldn't be yielding better results. Out in the field, as part of a sales team myself on the SE side, we've engaged with a lot of organizations that are using this combination highly effectively right now, whether they're still in a POC or they've gone through with a purchase. It's an incredibly powerful combination, and I think anyone who adopts it and actually puts it in place will see a huge amount of benefit. Eva, do you have anything to add? So, strategic resource management is also about automation, and I'm going to say it again: we have built our collective solutions to generate actionable decisions. I'm going to stress that, because sometimes we think that, you know, evicting a pod is a good answer, and it's really not. Or even something like HPA, the horizontal pod autoscaler: it just says spin up another replica. It doesn't ask whether it can schedule it. If the replica goes into a pending state, I guess somebody needs to spin up another node, right? We have intentionally designed automation based on the definition of executable, actionable actions. I can't stress that enough; you can see my hands going wild. I think people should keep in mind that just because something pops off another pod,
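Eva's HPA point, that scaling out says "add a replica" without asking whether the replica can schedule, can be illustrated with a toy placement check. This is a greedy first-fit model for illustration only; the real Kubernetes scheduler considers far more (affinity, taints, memory, ephemeral storage), and all the capacities below are hypothetical millicore figures.

```python
# Sketch: before trusting a scale-out decision, estimate how many of the
# desired new replicas would actually fit on the cluster's nodes, rather
# than landing in a Pending state.

def replicas_that_fit(nodes, pod_request, desired_new_replicas):
    """Greedily place replicas on nodes with spare CPU and return how
    many would schedule. Anything beyond that would sit Pending."""
    free = [n["cpu_capacity"] - n["cpu_allocated"] for n in nodes]
    placed = 0
    for _ in range(desired_new_replicas):
        for i, headroom in enumerate(free):
            if headroom >= pod_request:
                free[i] -= pod_request
                placed += 1
                break
        else:
            break  # no node fits: remaining replicas would be Pending
    return placed

nodes = [{"cpu_capacity": 2000, "cpu_allocated": 1500},
         {"cpu_capacity": 2000, "cpu_allocated": 1800}]
print(replicas_that_fit(nodes, pod_request=400, desired_new_replicas=3))  # 1
```

In this example HPA would happily ask for three new replicas, but only one can be placed; the other two would go Pending until someone, or something, adds a node.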
has it really done the analysis to know that this pod will run, that this change will take? I would challenge that. And to do a little plug: if you would like a demo of Instana and Turbonomic, please let us know. Instana and Turbonomic also have sandbox environments if you would like to go try things out for yourself. I am going to drop those links in the chat for everybody; feel free to go check us out if that's interesting to you. And now we're going to answer some questions that were asked in the chat. Let me just pull those up as well. Okay, so Eva and Drew: is there scope in cloud computing in the future? That was one question that was asked. And it really depends on what you're going to define as scope, right? The entire point of the cloud is to make compute and storage services available to folks without their having to build their own data center. As compute technologies continue to escalate in their R&D, just think of the impact something like quantum computing might have once the cloud makes it available to the public. That is effectively an end goal of every public cloud at this point in time: to reach the stage where you can have something like a quantum compute system that developers can make use of without needing to build their own. Right now that's only available to the Facebooks, the Googles, and a few other select companies of the world, like IBM. So when you consider that as in scope for the future of cloud, no, there really isn't much of a limit. The scope is defined by where technology takes it. So it's really down to the chip makers and the compute manufacturers to define what that's going to look like in the future, and to the cloud companies themselves to figure out how they want to make use of it. Thank you. And another question that we have is: what is the approach to profiling an application and making good resourcing decisions? I can guarantee Eva has got some stuff to add on this one too.
So I'm going to start with the very beginning part of what the best approach is. From our perspective on the Instana side, it comes down to a few key factors. One is the collection of every metric you need at high fidelity, because if you're going to feed something like Turbonomic to make resource decisions, it's using machine learning, and anyone who knows anything about machine learning knows one simple fact: the depth of the data set you feed that machine learning algorithm has a direct correlation to the efficacy of the output. So having something high resolution and high fidelity being fed into that system is paramount to its success. Now, on top of this, it's not just about time series data. In a distributed system, the interactions between those microservices are more important than just knowing how many CPU issues you might have, or deadlocks on a database. So that tracing is super critical to being able to generate those KPIs and SLOs, and for us it's very important that you always collect every single one of those traces, just so you have a firm foundation of observability. Then you take that information and feed it over to something like Turbonomic, and I'll let Eva talk to her piece of it and how Turbo makes the best resource decisions. Thank you. Yeah, so our philosophy is that, again, we turn data into actions through analysis. And when we talk about profiling an application, this is actually an important aspect of our data model: we are tracking actual resource usage against allocations, historically, and also looking at the constraints in the environment. Because you can't just plot CPU usage for your particular workload; you need to understand what is going on in a shared environment with parallel processing.
And when we profile an application, we definitely do that: we use historical information to generate decisions on how to right-size workloads, and again, make sure they are actionable within the constraints of the environment, but also drive actions to mitigate those constraints. If the cluster is constrained, you can profile your application all you want; you're in a shared environment, and you either need more resources or you need to redistribute your resources better. So that's the point of our analysis: it's multi-dimensional, multi-layer. For you Trekkie fans out there, it's Mr. Spock's three-dimensional chess game, right? There are multiple layers to this. And we absolutely need to understand not only current conditions but historical conditions, to be able to converge you to the best state possible. Hopefully that answered the question. I think it did, do not worry. And this one is also going to be directed at you, Eva: is Turbonomic able to track the interdependency between cloud native applications and integrate with a customer's on-premises legacy applications? So, the short answer is yes. In fact, some of that we can even get from Instana through the integrations into our platform. A classic example from Instana: I've got some business logic running in containers in Kubernetes, maybe up on AWS, but it's making JDBC calls, for example, back to some database, because that data needs to stay behind the DMZ in the private cloud, right? Right, true. Drew, you understand these external calls, and I'm going to pick on Java here, these external JDBC calls; you've got an understanding of what that endpoint is. Let's say it also goes down to the underlying infrastructure as well. I mean, the funny thing is that this question can actually be tackled by both of our products, because that database is sitting on, let's say, a VM, and that VM is part of the vCenter.
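One common way to turn the historical usage-versus-allocation tracking Eva describes into a concrete right-sizing decision is percentile-based sizing. This is a generic sketch, not necessarily Turbonomic's actual analytics; the 95th percentile and the 20% headroom factor are assumed tunables, and the sample values are invented.

```python
# Sketch: recommend a CPU request from a history of usage samples by
# taking a high percentile of observed usage plus some headroom, then
# comparing it to the current allocation.

def right_size(samples_millicores, current_request,
               percentile=0.95, headroom=1.2):
    """Return (recommended_request, direction) where direction says
    whether this is a downsize or an upsize versus the current request."""
    s = sorted(samples_millicores)
    idx = min(int(percentile * len(s)), len(s) - 1)
    recommended = int(round(s[idx] * headroom))
    direction = "down" if recommended < current_request else "up"
    return recommended, direction

usage = [120, 150, 140, 130, 160, 155, 145, 135, 150, 170]  # recent samples
print(right_size(usage, current_request=1000))  # (204, 'down')
```

Here a workload requesting 1000 millicores but peaking around 170 gets a downsize recommendation of roughly 204 millicores, which is exactly the "reclaim the requests you don't need" action from the demo. The multi-dimensional part the speakers emphasize is what the sketch leaves out: the recommendation still has to be checked against node-level constraints before it's executable.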
Guess what: between the two of us, what you have is the full stack of that application, and you can drive it from the application itself. In Turbonomic, what you would see represented is all the components running in AWS, the pods and the nodes, in what region, stitched together with cost and the storage. And on the private cloud side, we're going to go from that database server onto the VM, onto the ESX host in the data center, with the related datastore, and there's a back-end converged or hyperconverged infrastructure, the backplane of the east-west network traffic. That's kind of cool, and that's what we can do. Absolutely. And it's the same story on the Instana side of the fence: if you have an on-premises data center and a cloud-based environment with Instana agents installed, and they're reporting to the same server, you will see one cohesive map of interactions, whether they go from on-prem to cloud or stay isolated in one environment. Right. I just think we go a little deeper under the covers, right, with the ESX layer and below. Yeah, that's definitely true. In Instana, even though we have support for things like ESXi, which is actually very cool and does a lot of really neat contextualization, Turbo does look a little deeper under the hood of the hypervisor, and I believe you guys even go down to the hardware layer, don't you? And the point is, we use this data to make decisions, because stuff is going on right now. Moving a VM? It's cheap, right. Moving a pod? I made it cheap, by the way; I made it happen. So why wait to find a problem? You need to use the data to drive the automatable decisions that will mitigate these issues, so that you can stay focused on issues that perhaps involve more intervention to actually remediate. And I think that's the important part of the story. It's great to see the context, but you want that context to work for you, to drive decisions that a human being doesn't need to get involved with.
Absolutely. I think that's the end point for any decent AIOps story. Okay, two more quick questions. This one is directed at both of you: is it possible to manage, automate, and monitor on-prem data centers, including hyperconverged solutions, SQL servers, Windows Server environments, and Windows and KVM virtualized environments, etc.? So I'm going to be very upfront with this one: yes, we can do it within Instana. I know Turbonomic has a little bit deeper interactions with some of these technologies, but I do want to make sure folks understand the delineation lines with our tool. We are very focused on the VM layer and up. So when you start talking about monitoring a SAN, that's not something we do today. We do have an integration with ESXi that can show you datastore latency, but it's not going to directly monitor, say, your NetApp SAN that's parked in a data center and tell you the performance stats coming out of it. Now, when you start talking about hyperconverged solutions, yes, we absolutely can help with that. But again, I'm going to throw another asterisk up, just in the interest of transparency: the depth of this is going to depend on your hypervisor. If you're doing hyperconverged with ESXi, we have a lot of really cool stuff we can show you there. But if you've gone, say, native Nutanix and you're using Acropolis, we don't have any interaction points with Acropolis as a hypervisor layer, so that would be considered a bit of a miss depending on your environment. But I'm sure Eva's got something there for us. Yeah, we've been around for 11 years, and we really have what we would call a rich set of integrations when it comes to these kinds of infrastructure compute, storage, and network solutions. So what I invite you to do is go to turbonomic.com and go to our integrations page. I guarantee, if it's relevant technology, we have a way to collect relevant data from it.
Again, my goal is not to be a reporting tool for you. My goal is to take information from your applications, platforms, and infrastructure, and the compute and storage around them, and drive actionable decisions to prevent resource contention. Right. Thank you both for that. And the last question, which I can answer real quick, was whether we have some labs or learning centers. I dropped two links in the chat that will take you to Instana's observability sandbox and the Turbonomic sandbox we just launched, so feel free to use those links to go check us out and learn on your own. But again, if you'd like to learn more, please let us know in the poll and we'll be happy to reach out and give a demo. So I just want to say thank you from all of us for joining us today, and I would like to pass it back to the Linux Foundation, unless, Eva, Drew, you have anything last you want to add. Just thanks for joining us today; I hope you liked everything you saw and found it beneficial. I dropped an extra link down there to our YouTube page if you want to check out any of the learning materials there, and hopefully, if you like what you see, we'll be talking in the near future. Thank you, and let's pass it back. Awesome, thank you all so much for your time today, and thank you everyone for joining us. Just a quick reminder that this recording will be up on the Linux Foundation's YouTube page later today. We hope that you'll join us for future webinars. Thank you so much again, and have a wonderful day.