Hello, everybody. Again, thank you all for joining us today. I'm Dina Henderson, and just a little bit more about what I do over at Turbonomic: I'm the junior product marketing manager on the cloud and cloud native side of things, so I market to, and educate, the market on all fun things cloud and cloud native. Eva, Irfan, would you like to say a little bit more about yourselves?

Yeah, let me start, because Irfan's got a great resume that I want him to share. I've been here at Turbo a little over nine years, but in IT for over 20. My role here is to help people that deploy containerized applications leverage Turbonomic's capabilities to optimize those applications and optimize the platform. Not just to demystify things a little bit, but to help people accelerate and onboard more applications onto their Kubernetes technology stack; we don't want any empty clusters. We've been doing this for five years now, so I'm very excited to share some of the experiences we've had with our customers on how to arrive at automation, and also how to manage that against the source of truth. So, Irfan?

Yeah, hi, I'm Irfan. I've been with Turbonomic for close to three years now. In my past experience I've worked with companies like Motorola and Huawei. I've also been pretty active in the upstream Kubernetes community for almost four to five years, nearly since the start of the Kubernetes landscape itself. I'm associated with the Multicluster SIG, and I maintain a project called Kubefed. At Turbo my main responsibility is bridging the gap with upstream, whatever the cool things are over there, bringing those cool things into Turbonomic and integrating some of them with our product.

Yeah, great. Thanks, Irfan. And then, Dina, we want to just let people know... Yes, if we go to the next slide I'll go over that. All right.
And just so everyone knows, for this presentation we'll be talking about what is generally available today, but Eva and Irfan will also be touching on some things that are in private preview and will be available down the road; the distinction will be made clear. If there's a question around that, or a question in general, drop it in the chat and we'll get to it during the presentation or at the end. Eva, over to you.

Excellent. Let's put some framework around what we're going to talk about. First, congratulations to the people that have built cloud native applications. There are some key processes you've tackled in building, testing, deploying, and managing change in your technology stack, such as a container platform. The DevOps pipeline is really the bread and butter of the DevOps process. Adding on to that: now we have an environment, we have applications, and we want to manage change, but we also want to leverage automation so that we can rapidly deploy, and potentially even reverse, these changes. GitOps is the term that I think best describes these processes and methodologies. Today we're really going to focus on the GitOps source of truth. This is where your application or infrastructure as code resides, and this is the process by which you want to manage and deploy change. And then of course there's the CD pipeline, the continuous deployment pipeline. This is also important because when you've made and approved a change, you want to rapidly deploy it and automate the deployment and update. Shout out to Weaveworks; I think they really coined the term GitOps back in 2017. This methodology helps you achieve continuous delivery of applications and changes in a cloud native way, because first of all you have a declarative description and configuration of both applications and infrastructure.
Sometimes people start out with that declarative description of infrastructure, as infrastructure as code; that was very popular starting out several years ago. But your applications have it too: you have a manifest, a YAML, that describes your deployment or whatever resources you're going to configure for your application. So the source of truth will be the repository of these manifests. If you look at Git and the methodologies around it, it provides a great way to manage versioning, change, the approval process, and the whole commit process. Kelsey Hightower, and I think everybody knows Kelsey Hightower, wrote: "GitOps: versioned CI/CD on top of declarative infrastructure. Stop scripting and start shipping." What we want to talk about today is going even beyond that, and using this methodology to stop scripting, start shipping, and start automating. Automating what? What we want to propose is automating optimization. Another concept we'll talk about is, in your automation, how many approvals do you need for what types of changes? So we wanted to start out by level-setting terminology and our expectations around GitOps. One of the other reasons we advocate more automation, and even automated ways to optimize your applications, is this study from the Harvard Business Review: 60% of DevOps teams will be evaluated on KPIs and performance metrics, including criteria tied to business outcomes. We're seeing that with the DevOps people we talk to; DevOps is a key persona for Turbonomic. And what does that translate to if you're being evaluated on the performance metrics of your applications? You want to automate decisions that optimize the performance of those applications.
And when we talk about the concepts we're advocating, the feedback we get from DevOps is "I wish I'd spoken to you yesterday; I wish we were doing this yesterday," because managing for performance cannot be manual. Kubernetes itself does manage the desired state; if a pod stops or crashes, it will restart it. But Kubernetes doesn't automate optimization. We all want to keep this cloud native innovation train moving, and the success of cloud native and containerization really depends on onboarding more applications. So with these processes you're putting in place around CI/CD and GitOps, you want to ask yourself an important question: if I want to keep the innovation train moving, shouldn't I be optimizing, shouldn't I be automating those optimizations, and making sure I'm not introducing more manual processes just to onboard more applications? I also want to talk about capacity planning questions around how to introduce optimization changes. Try not to think manual; think automation. In our experience working with customers, what we see slowing down that innovation train is this: yes, there are the specialized skills of your DevOps teams and SREs, and you want those people primarily leveraged for helping the application teams understand what it means to onboard an application into this environment. The applications themselves are getting more complex. One of the key value propositions of cloud native is decoupling into microservices. While that hopefully streamlines the application development process, because I can make changes to key components instead of having to make changes to everything, it introduces change, because where I had one monolithic system, now I'm herding cats: I've got 50 services, each with five replicas. And application growth has really tested the definition of capacity planning.
And while we may be more tolerant of over-provisioning, you can't sustain that as your only model for making sure you've got plenty of capacity to onboard the next application, and over-provisioning doesn't guarantee performance. What happens is that our key personas, DevOps and application teams, end up in a resource management guessing game, and that just adds more labor. Now, Irfan, Dina, and I are big proponents of cloud native, and we love containerization as the platform that makes cloud native, and all its business benefits, a reality. But what we've done is turn our application developers into operators. As part of the manifest that says "this is the definition of my application that gets deployed," we're asking them: hey, put some limits and requests in there. Or: hey, I'm going to give you a quota on your namespace, and you've got to fit within that quota. Maybe quotas are another good topic, Marisa, because I could stand on a soapbox all day long about quotas. But the reality is that application developers are asked to put in specs, and if you're going to ask me to size something, I'm going to size it with plenty of capacity, right? I don't want to keep revisiting this. And why do I keep revisiting it? Because the DevOps team is eventually going to say: hey, you're not the only application on this cluster, it's multi-tenant, I've got to make room for people, can we fine-tune this? It's frustrating, it takes a lot of time, there's data flowing around, and the return on investment on this guessing-game effort is not good. So what does a Kubernetes user have at their disposal? Kubernetes does have some great projects, and there are some really great answers to many problems. But let's talk about the resource management guessing game. What does someone have? They have VPA, the Vertical Pod Autoscaler. Okay, that's a thing.
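For readers less familiar with the specs Eva is describing: the limits and requests live in the container spec of the workload manifest. A minimal, hypothetical example (the workload name, image, and numbers are all made up for illustration):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app            # hypothetical workload
spec:
  replicas: 5
  selector:
    matchLabels: {app: demo-app}
  template:
    metadata:
      labels: {app: demo-app}
    spec:
      containers:
      - name: demo
        image: example/demo:1.0
        resources:
          requests:         # what the scheduler reserves for each replica
            cpu: 100m
            memory: 128Mi
          limits:           # the ceiling the kubelet enforces
            cpu: 250m
            memory: 256Mi
```

These numbers are exactly the "guessing game": size them too high and the cluster over-provisions; too low and the container throttles or gets OOM-killed.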
But you're asking someone to set thresholds, and you're asking them to understand the VPA methodology. And what if they have horizontally scalable services as well? Okay, Mr. Application Developer, now I want you to set an HPA policy too. That's another threshold-based mechanism. The challenges here are: what if I want to use these two together? I have a horizontally scalable service, but shouldn't I also want to optimize each individual replica? And what do these threshold-based mechanisms actually understand about the running infrastructure? They don't. When a threshold is triggered and HPA says "here, fire up a replica," it just hands it over to the scheduler and says: you place it, you determine whether there's capacity. Well, that's a thing too, but here's the challenge with the Kubernetes scheduler: it only looks at the capacity of the environment on initial placement. It never continuously optimizes the placement of existing pods; instead it relies on other mechanisms. We could talk about the scheduler, but I would still say the scheduler is just another in a long line of thresholds that do not correlate. So now HPA hands it over to the scheduler, and the scheduler goes: oh, I don't have anything; I'm going to put the pod in Pending. Or, if a node happens to be under enough pressure, you'll get an eviction, and evictions for some services are not great answers, because maybe I'm not so stateless, right? And at the end of the day someone says: well, set another threshold around pod autoscaling. Look, the reality is, the best answer is an analytics system that understands vertical scaling, horizontal scaling, and continuous placement; that represents your environment as a supply chain with its dependencies; and whose analytics understand those dependencies, so that when you generate an action...
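For reference, the threshold-based HPA policy Eva is describing looks like this; the target workload and the numbers are hypothetical:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-app          # hypothetical target
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # the static threshold that triggers scaling
```

Note that `averageUtilization` is measured against the container's CPU *request*, so a bad request value propagates into every scaling decision, which is part of the argument for combining horizontal scaling with per-replica right-sizing.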
...it is, first of all, not a myopic threshold trigger; it also simulates taking the actions for you, and lets you understand the dependencies in the infrastructure before you take them. So when you automate, the definition of automation is also full stack. That is what Turbonomic brings to the table: the ability to get away from monitoring at specific layers of the stack with separate tools, a tool that does this, a policy that does that, where both DevOps and application developers have to manage separate myopic policies and DevOps has to put it all together for a multi-tenant cluster. We want to get out of that, get people actionable decisions, and then look at how to automate. I do want to come back to Turbonomic's analytics model, but I'd like to pivot back to what we were originally talking about: you've got a pipeline, you've got a GitOps process and methodology, and then the elements of this CI/CD and GitOps process for successful automation. I'm going to propose that in your source of truth, where you should be managing change to your manifests and to your applications, you also allow management of these right-sizing decisions: the container specs, the limits and requests. If you're defining application resources, allow your source of truth to accept the introduction of this change, because you can take a decision and action coming from analysis of your running environment, and you should leverage that in your source of truth, and let your source of truth now define an optimized spec. Right. Also, think about the relationship of your source of truth to your runtime environment. Are you going to have one-to-one: here's my prod cluster, and I have a prod definition of my application? Or one-to-many: maybe I'd like the definition of my application to be the same across UAT, production, and test environments? Right.
So now, when you update something in your source of truth, your CD pipeline can deploy that change out into the environment; your CD pipeline actually automates that change. Now, you may have definitions of where you'd like to have your approvals. But if you think about changing a limit or a request, and if you trust the analytics, you want to avoid introducing manual approval processes right there. Obviously you want to make sure that any change you make works, but once you've built trust, think about direct commits and direct updates. Continuous optimization should be something you think about as part of your definition of GitOps and your CD pipelines. Continuous optimization can modify resource limits, can modify the number of replicas, and can even modify your cluster capacity, by leveraging either built-in cluster autoscalers or your infrastructure-as-code source of truth and automation. At Turbonomic, we take data and turn it into actions, and we want those actions to reflect back on what matters most, whether that's the response time of an application or transaction throughput. Our full-stack analysis capability really drives actionable things: if we need to horizontally scale something, or vertically scale something, and we need additional infrastructure, we're going to tell you, because our actions correlate. Right. So, we're getting to it: we're going to do a demonstration for you. The scope of the types of automation we can drive includes pod moves, cluster scaling, and SLO-based scaling, but the one we want to focus on for you is vertical scaling of the workload. I think this applies to all types of workloads: you need to optimize size, because even if you're horizontally scaling, you could be propagating a bad configuration. So, for vertical scaling:
This is a great example of the point that, while there are mechanisms Turbo provides to resize something in the running environment, you should actually resize right back here in your GitOps repo, at your source of truth. So I'm going to hand this off to Irfan; Irfan, I'm just going to set this up at a high level and then go to the logical diagram of your demo environment. What we're going to do is piggyback on an environment that is using Argo CD, with Git as the source of truth. Let me take a step back. Turbonomic has a mediation probe that runs in the environment, in your cluster, called KubeTurbo. In fact, I think there's a link to that project, where we also have documentation that talks about all these use cases. One of the benefits of being a mediation probe that runs in the environment is that we can discover other custom resources. Argo CD has an Application custom resource, which provides some interesting information for us: where the source of truth is, along with directory and branch information, which we can turn around and use to make a commit against that source of truth. In the demonstration, Turbonomic is going to generate a resize action; Irfan will talk about the application resize action that we have. And we can then execute the change to limits and requests back to the source of truth. Irfan, are we going to do a direct commit, or are we going to do a pull request? We're going to do a direct commit, as of now. Okay, great. One of the things Irfan's building out is the capability to do either; that's really just mechanics, but it's still the same mechanism: we make a change against the source of truth, changing limits and requests, and Argo CD automatically detects that change and deploys it. Right.
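The Argo CD Application resource Eva mentions carries exactly the repo, branch, and directory details the probe needs for the commit. A hypothetical instance (the repo URL, path, and names are made up):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/app-config.git  # where the source of truth lives
    targetRevision: main                                # the branch to commit against
    path: apps/demo                                     # the directory holding the manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: demo
  syncPolicy:
    automated: {}   # Argo CD redeploys on its own when the repo changes
```

Because KubeTurbo can discover this custom resource in the cluster, it knows which workloads are GitOps-managed and where their specs actually live.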
So this is a private preview feature, as Dina said. What that means is it's beyond beta, and as a product manager I engage with customers to deploy private preview features: we test them out in your environment, we get feedback, and we can even incorporate that feedback into the definition of the MVP before we go public. So Irfan, do you want to walk us through here? I think I've already said a lot of it, but I'd like you to go through it, and then we can go right over to your environment.

Right. It looks like there's a lot of detail on the slide, but to simplify: the block on the left-hand side of the screen is Turbo. This is the brain, or the engine, you could say; this is where all the analytics happen. It also comprises the UI, where you can see the results of the analytics. Those are actionable insights, in the form of actions that can be taken, and when you execute those actions, the changes to the configuration are carried out in the given Kubernetes cluster, which is the central block. In the normal mode, what we do is use an agent running inside the cluster, which we call KubeTurbo; Eva named it a mediation probe, which is terminology specific to us. The agent interacts with the Kubernetes API server and changes the configuration of the resources to bring them to the desired state. A simple example: you have a Deployment running with some replicas, and there are limits and requests set on it, but Turbo determines that the current limits and requests are not appropriate. So it recommends changing those requests or limits; those are the actionable insights, and on execution, the KubeTurbo agent updates the Deployment spec.
In a world where this cluster is not managed by a CD pipeline, this is a direct update to the resource. But in a world where there is a CD pipeline and a source of truth configured, the update ideally should go back to the source of truth, for example GitHub, from which a tool like Argo CD is pulling the changes. So what we've also implemented, which is a private preview feature right now, is that our agent can discover which applications are being managed by the CD pipeline from the CD tool's Application definition itself. Oh, sorry, maybe this was the slide we wanted? Yeah, that's okay. So this is the flow: Turbo sends the actionable insight to the KubeTurbo probe, which is our agent in the Kubernetes cluster, and when the action is executed, the KubeTurbo probe updates the source of truth. In the form I'm going to show you right now, it pushes a commit onto the specified branch in the source of truth. A tool like Argo CD, which is observing the source of truth, can then update the resource, the Deployment, directly. I'll stop sharing, and we'll hand it over to you. Yes. I'll move on to the environment that I set up. I'm assuming the audience might not be entirely familiar with this software that we have.
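Conceptually, the probe's "direct commit" boils down to an edit plus a `git commit` on the tracked branch. Here is a minimal sketch of that flow, using a throwaway local repository in place of the real source of truth; this is an illustration of the idea, not Turbonomic's actual implementation, and the file contents and values are made up:

```shell
set -e
# Stand-in for the config repo that Argo CD watches.
repo=$(mktemp -d)
git -C "$repo" init -q
cat > "$repo/deployment.yaml" <<'EOF'
resources:
  limits:
    cpu: 15m
EOF
git -C "$repo" add deployment.yaml
git -C "$repo" -c user.name=kubeturbo -c user.email=bot@example.invalid \
    commit -qm "initial spec"

# "Executing the action": rewrite the limit the analytics recommended,
# then commit straight to the branch. Argo CD would pick this up on its
# next sync and roll the change out to the cluster.
sed -i 's/cpu: 15m/cpu: 200m/' "$repo/deployment.yaml"
git -C "$repo" -c user.name=kubeturbo -c user.email=bot@example.invalid \
    commit -qam "Resize cpu limit per resize action"

git -C "$repo" log --oneline
```

The benefit Eva and Irfan keep returning to is visible in the last line: the resize shows up in `git log` like any other change, with full history and attribution.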
So I'll give a small preview of that. This is something we call the supply chain, and the supply chain represents the environment we've collected data and information from. Here I'm showing a supply chain with the details we've collected from Kubernetes clusters, and the individual entities here each represent one of the resources in the Kubernetes cluster. These are the clusters themselves; these are the nodes in the cluster, the virtual machines; namespaces are self-explanatory; workload controllers are the actual workloads, the applications running in the cluster. A Deployment, a ReplicaSet, or a StatefulSet is what we reference as a workload controller, and if you go into the details of these workload controllers, we also have information about what kind of resource each is. These are pods, which are self-explanatory; containers are self-explanatory. We also have a representation of the spec of the containers, as container specs: a Deployment might have a pod template spec with one container definition in it, and it might have five replicas. That container definition is what we call the container spec, and each of those replicas is represented as an individual container. Let me show you one application which I've configured in one of my clusters. Actually, before that, I should also show you the Argo CD pipeline that I have set up. I have an Argo CD installation in one of the clusters. That cluster has a couple of applications running in it, and those applications are synced from one of the sources of truth that I've configured. This is the repo from which this particular instance of Argo CD is syncing those applications.
I'm not syncing the whole repo; I'm syncing a few folders which have YAML specs in them. In this particular demo we're showing the updates to the YAML spec directly, but the whole concept can be extended to manage Helm charts or alternate mechanisms of managing the specs of the resources. Let me also show you what's running inside the cluster. These are the pods running inside the cluster, which are part of a couple of applications. There are three Deployments, and they have a bunch of pods here, and the definitions of those Deployments come from the folders I've configured in Argo CD. If you look at any of the YAMLs, they're standard YAMLs; it's a test workload which drives the CPU utilization up toward the configured limit. Whatever limit I specify, this particular workload always drives utilization almost to that limit, which enables the Turbo engine to understand: okay, there is some need to raise the limit so that the constraint on this application resource can be resolved. And that is what the actionable insight here is. Before we go ahead and execute, let me also show you what the actionable insight is telling us. Currently, the limit is configured as 15 millicores, and the CPU utilization for that particular container is almost 100% of it. The change recommended by the engine is to increase the limit to 200 millicores, because according to the engine the current limit is very low. And there can be multiple such actions: they could be updates to limits, or updates recommended on requests.
So what I'm showing you is vertical scaling of this particular container; there could also be horizontal scaling recommendations, where the platform will say: let's increase the replicas on this particular workload. Now, to be able to appreciate the change that's about to happen: there's a particular commit here which was made on 29 March, quite some time ago. After executing this particular action, we should see a new commit appear here. What that means is that, per the recommendation from the engine, we are actually updating the limit within this resource YAML, the resource spec, to the limit suggested by the platform. If you don't mind; so again, Turbonomic has made an analytics decision to change a limit, and the integration we built leverages the definition of this Argo CD Application. We do a direct commit against the definition of where that source of truth is, and you've shown that the Git side, where you've got the repo, also provides the benefit of tracking these changes; you can see and track the changes there, which quite frankly is the best place to track changes, right? Right; thanks for adding that, Eva. So yes, as Eva mentioned, we also get the history of the changes over here, so we know which update has been made by which user; in our case it's the Turbo platform itself. This should be picked up by the Argo CD pipeline; the timeout configuration is a little long here, so I'll just refresh all the apps to pull the changes from the pipeline. We see one of the apps is being updated right now, and some of the pods should be updated; we can see these are the new pods being created with the updated limits.
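In repo terms, the committed change from executing this resize action is just an edit to the resources stanza of the tracked manifest. A sketch using the values from the demo (the file path and layout are hypothetical):

```yaml
# apps/demo/deployment.yaml (hypothetical path) -- before the action:
resources:
  limits:
    cpu: 15m    # container pinned at ~100% utilization of this limit

# after the action is executed and committed by the probe:
resources:
  limits:
    cpu: 200m   # limit raised per the engine's recommendation
```

Argo CD detects the new commit on the tracked branch and rolls the Deployment, which is why new pods appear with the updated limit.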
And if we do a describe on one of those pods, we should see that the updated limit is there; I should probably do `-o yaml`. Yep, there it is. So we can see that the limit is now updated on this pod. This is showcasing the concept of the pipeline, the mechanism that we're in the process of building, and what we want to put out there as one of the mechanisms which can be used to push changes back to the source of truth, with a tool like Turbo utilized to drive it. That's mainly what I wanted to show as part of the demo. I'll hand it back over.

Yeah, thanks, Irfan. We also want to advocate, when you think about these different scenarios and build out your definition of GitOps and your pipeline: think about resource specs as something that should be very easy to change. That's part of the reason we wanted Irfan to go through some of the details of the setup, because you saw that those specs were there and they were easy to change, right? Right. Okay. So then, if you don't mind, I'm going to come back and wrap it up. To summarize the concept Irfan demonstrated: first of all, it starts with the right decision, the right action. What we're advocating is to build out your GitOps and CD pipelines with the idea that when I have actions that affect the performance and resource management of my applications, I want to automate those things. But it does start with the right action, and that's what the Turbonomic analysis engine that Irfan described provides. The use cases we're driving: we want to help people leverage horizontal scaling of cloud native services, and drive those decisions through service level objectives.
But we also want that to correlate with resource management decisions, whether they're vertical scaling, continuous redistribution, or even cluster scaling, because these should all be correlated and analyzed together. Minimize the manual labor; that's why we wanted to really push the automation demonstration. With Kubernetes and cloud native applications, you've got the best environment for automation, so do it: automate the things that you can. We want to help DevOps and application people confidently build out more applications and deploy more services onto the platform, without needing to over-provision, and without getting into allocation models where you won't commit capacity because you're afraid of what's going to happen. We're going to help solve those things. So, thank you so much, Irfan, for your great demo. Dina, Marisa, back to you.

Yes, thank you all so much for joining us today. We had a great presentation; I hope you all agree. If you have any questions, feel free to drop them in the chat right now and we'll answer them for you. Otherwise, we'll hand it back to the Linux Foundation. There was a question posted a little while ago for which I tried to type the answer. Yeah, we can go over that one. The question was: what is the best way to do versioning for a monorepo, and how do we auto-increment for microservices and Helm charts using CI/CD? So, I posted the answer. The concept, as I understand it, is that an update to a Helm-chart-based deployment would be an update to the chart itself, and the update to the chart would be put as a commit onto the charts, which a tool like Argo CD would be syncing downstream to the environments, the Kubernetes environments.
At a given point in time, as a user, or via an automation tool like Turbo which can be used to automate these updates, the changes can be pushed, and they can also be versioned: after five changes, or after every week, they could be cut as a new release or something similar. That's how I'd recommend you manage your microservices that are based on Helm charts.

Yeah, there are definitely different techniques, like modifying the values.yaml. One thing I personally encourage when I talk to people who want to use Helm as the management mechanism: really make sure you're parameterizing your container specs. Treat them as variables, not as something that needs to be hard-coded. That's a concept we've even employed ourselves with our own application: our engine is actually a Kubernetes-based application managed by a Helm chart operator, so we've tried to walk our own talk, and with our application we can size and scale based on the customer's environment that we're actually managing. Just to add a little to Irfan's answer.

Thanks. Thank you both. It looks like that was our one question. So, Marisa, I'm going to hand it back to you.

Amazing. Thank you so much, Eva, Dina, and Irfan, for your time today, and thank you so much, everyone, for joining us. Just a quick reminder that this recording will be up on the Linux Foundation's YouTube page later today. We hope you'll join us for future webinars. Thank you so much again; have a wonderful day.
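A closing note on the Helm parameterization point from the Q&A: in chart terms, treating container specs as variables means the resources come from values.yaml rather than being hard-coded in the template, so an automated resize commit only has to touch one small file. A hypothetical sketch (chart layout and numbers are illustrative):

```yaml
# values.yaml -- the knobs an automated resize commit would touch
demo:
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
    limits:
      cpu: 250m
      memory: 256Mi
```

In the chart's deployment template, the container then pulls these in with something like `{{ toYaml .Values.demo.resources | indent 10 }}` instead of literal numbers, so every environment, and every automated commit, goes through the same single point of change.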