Awesome. Hey guys, hope everyone's having a wonderful KubeCon so far. Our talk today is going to be focused on the application operator persona, so we're going to talk a little bit about the importance of operations to managing cloud-native applications. My name's Sidanwa, sometimes colloquially referred to as Spud, mainly by Rhea. And you want to introduce yourself? Yeah, and we both work on the same team. This talk is going to be pretty abstract; it's not really going to dive into a lot of technological details. But if anybody has any questions throughout the presentation, just feel free to stop us. We're open to having a discussion at any point during the talk.

Okay, so a little bit about the operator. Operations is responsible for managing everything from infrastructure to applications. At the very beginning, it is the operator's responsibility to choose the infrastructure. Traditionally, when you're on-premise, this refers to your compute, networking, storage, and your security standards. The operator is responsible for choosing the technologies that are going to back these, and once that's done, for actually deploying and monitoring the infrastructure. So they're responsible for building out the tool chain to make sure that it's easy for them to deploy, and thereafter monitor, the infrastructure.

The third one, I have an asterisk around it, and this is one where there are generally two schools of thought. So how many of you have developers who operate the applications once they're deployed? I think generally the industry is heading towards this, where you have this DevOps role where the developer is empowered, through technologies like Kubernetes and some of the serverless offerings, to actually operate the applications.
But the other, more common one that we see, at least when I work with large enterprises, is that there's a very strict handoff between the developers and the operators. The developers are responsible only for writing the code and delivering business value, and then there's a completely separate set of people, sometimes referred to as the DevOps people, who are responsible for actually running that thing in production. Both schools of thought have merit, and it really depends on the enterprise and how it's structured. Typically the larger ones will have separate people doing all these separate duties, and the smaller companies will have the DevOps role.

Even within the operator role, when we talk about the word operator, there's a ton of roles that fit into this category. Traditionally there have been systems admins, who are responsible for administering the VMs and the servers when there needs to be an increase in capacity. There are the network engineers that we see very often in large enterprises, who are responsible for things like IP allocation, subnetting, where the applications are deployed, and what kind of network rules are applied. You have the IT systems admins, who work more with developers and make sure that the developers have working development machines and the right security access groups. You have SecOps, which is responsible for making sure that all the enterprise's security standards are passed down to the actual operators and the AppOps. You have the database admin, who is responsible for actually administering the database. Throughout this talk we're going to refer to all of these as infrastructure operations: those operators who are responsible for the three things that we're going to talk about again and again, which is setting up the compute, storage, and networking of your enterprise.
And really, all of these things you're doing to serve your developers. At the end of the day your business is only as successful as your developers are with respect to delivering business value, and all of these roles exist, or have existed at some point, to make sure the developers are empowered to actually deliver business value. So clearly the operational role, as you can see, has a lot of variance. There are a lot of different things an operator can do, and an operator who does role X at one enterprise might do something completely different at a different enterprise. If I'm an IT systems admin at a particular enterprise, I could move to a different one and the job would be completely different.

And one of the challenges with operations has always been operating across different environments. With the dawn of public cloud, you have this compute, networking, and storage happening in two different places. You'll have a completely separate stack on-prem, and then in public cloud certain things are managed for you, like the virtual network or the VMs and the availability of the VMs. But typically, when the operators are building out these tool chains, such as CI/CD, monitoring, and safe delivery pipelines, they have to duplicate this. When you have an on-premise setup and a public cloud setup, the tool chains at large enterprises will be duplicated, so they're forced to build separate sets of tools to actually manage things. And then Kubernetes came along in the cloud-native landscape, and it created a common control plane across these different infrastructures. It created a common API surface between on-premise and the public cloud, whereby operators can now build tool chains, such as CI/CD pipelines or Prometheus for monitoring, that work across both of these planes. And at the end of the day, your development team is empowered to just do what they do, which is picking the development patterns and the frameworks for their code.
As an infrastructure for DevOps teams, because networking and all those things are pretty complicated sometimes, which developers usually don't interfere with — you see that there are companies who provide managed Kubernetes platforms. Do you also have that experience?

Yeah, so Rhea and I both work on container compute, which is responsible for AKS, our managed Kubernetes offering. And even there, what we see is that customers still have to worry about this. If customers have an on-premise setup, or even if they don't, they'll still be doing things like bringing their own network, where they have a network that already has some applications and they want to use that same network for their Kubernetes cluster. So they still have to deal with a lot of the infrastructure concepts. Is that a question? Yeah.

And one of the things that I wanted to talk about is the idea that, with Kubernetes making applications more accessible — because that's really what it did: it came along, it abstracted away the infrastructure, and it created this allure that applications are easily accessible and developers can interact with this thing directly to deploy their apps — the operator role is kind of going away. But I would argue that Kubernetes and the ecosystem only make the operations role even more important, and create a more well-defined separation between infrastructure operations, which is those folks who are responsible for the lower-level things in your enterprise, like your on-premise setup, and the folks who are responsible for just operating the applications. If you have a common API surface on top of the infrastructure, you can create a separate role for the people who are responsible for the delivery of applications on top of Kubernetes, the monitoring of those applications, and their safe delivery. This makes the roles more streamlined and concrete.
So you'll have those folks who are responsible only for the infrastructure, and those who are responsible just for managing the applications. This division enables the developers to just focus on their role, which is delivering value for their end customers. And from the customers that we have spoken to, having this clear separation allows people to, A, hire for talent, which means you can hire infrastructure operators who are good at what they do, and you can hire Kubernetes admins who are good at administering Kubernetes clusters and managing applications on top of them. Having this clear separation will just allow you to move faster.

So I touched on this a little bit, and the idea that Kubernetes came along and created a division between infrastructure and applications is very true. It creates this API surface that makes it super easy to get started with containers and deploy applications on top of it. But some of the things that it makes easier can lure you in, and then it becomes very complex really quickly. The analogy that I like to use is math: you start off learning math, you learn one plus one equals two, and it's super easy. And then really quickly you get into really complex territory and it becomes a very big beast. For example, if you were to deploy a Hello World application on Kubernetes as a developer, the quick starts are very easy. You can get a Minikube cluster on your local machine, you can deploy a Hello World application, and you're up and running really quickly. But the day-two ops of actually running this thing — setting up an ingress, setting up autoscale policies to support your customers, and managing all of those — quickly become very cumbersome. And you don't want your developers doing that. You want a set of folks dedicated to application operations to actually take care of that task.
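To make the autoscaling piece of that day-two work concrete, here's a small sketch of the scaling decision the Kubernetes Horizontal Pod Autoscaler documents: desired replicas are the current replicas scaled by the ratio of the observed metric to the target, then clamped to configured bounds. The function name and bounds here are illustrative, not an actual Kubernetes API:

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric,
                     min_replicas=1, max_replicas=10):
    """HPA-style decision: desired = ceil(current * currentMetric / targetMetric),
    clamped between min_replicas and max_replicas."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# CPU is at 90% against a 60% target: scale 4 pods up to 6.
print(desired_replicas(4, 90, 60))  # 6
# CPU is at 20% against a 60% target: scale 4 pods down to 2.
print(desired_replicas(4, 20, 60))  # 2
```

Tuning the targets and bounds of policies like this, rather than writing the application code, is exactly the kind of work that lands on the application operator.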
So we think that there is room for a new role in the ecosystem that sits on top of Kubernetes, which is ultimately an infrastructure orchestrator. And that is the AppOps role, which is responsible for just delivering the applications, monitoring them, taking care of things like safe deployment, and then securing the traffic. As a developer, you're responsible for just building your code, and you don't really care about the fact that it runs on Kubernetes underneath the hood; the application operator would make that decision for you. As a developer, you don't really have to care about how safe deployment practices are done in your enterprise. You would write your code, iterate on it, upgrade it, and then the application operator would take care of deciding, okay, let's use a service mesh to do canary deployments, and we're going to follow the canary pattern to actually do safe upgrades. But again, as a developer, you wouldn't care about this; you're just caring about writing new code.

So, digging into those duties a lot more. The delivery of applications includes setting up and using those CI/CD pipelines to deliver the apps, and you can deliver to multiple environments by virtue of Kubernetes being the common API surface. You can set up and manage the monitoring agents on Kubernetes, and you'll get a common control point to actually monitor these applications; this also includes building out the alerting. Safe deployment includes performing upgrades and maintaining availability during upgrades, and this is where things like service meshes, with Istio and traffic management, actually come into play. The last one is securing traffic, and this actually falls into a wider category of just security in general: it's not just about securing your traffic, but also making sure that the data the application accesses is stored securely.
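As a rough illustration of the canary pattern the application operator would own, here's a sketch of staged traffic shifting with a rollback check at each stage. In a real setup the service mesh shifts the weights and the error rate comes from a monitoring system like Prometheus; the function names, stage weights, and threshold here are invented for illustration:

```python
def canary_rollout(error_rate_fn, stages=(5, 25, 50, 100), max_error_rate=0.01):
    """Shift traffic to the canary in stages; roll back if the observed
    error rate at any stage exceeds the threshold."""
    for weight in stages:
        # In real life: tell the mesh to route `weight`% of traffic to the
        # canary, wait, then query the monitoring system for its error rate.
        observed = error_rate_fn(weight)
        if observed > max_error_rate:
            return {"status": "rolled_back", "at_weight": weight}
    return {"status": "promoted", "at_weight": 100}

# A healthy canary: error rate stays at 0.1% at every stage.
print(canary_rollout(lambda w: 0.001))
# An unhealthy canary: errors spike once it takes 50% of traffic.
print(canary_rollout(lambda w: 0.2 if w >= 50 else 0.001))
```

The developer never sees any of this; they ship a new image, and the operator's rollout policy decides whether it gets promoted.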
But even on Kubernetes, the application operator, in order to accomplish all of these tasks, has many technologies at their disposal to actually get this done. These are just some of the ones that we found by doing a quick Google or Bing search. If you want to deliver applications, you need to go and not only pick but also learn something like Jenkins or Brigade, and there's a bunch of tools for you to get this job done. If you want to monitor your applications, there's Prometheus, Grafana, Datadog, Dynatrace, and the list just goes on. As an AppOps, you're responsible for choosing among these and picking the right one. Safe deployment and securing traffic fall into network and security, and with the dawn of service mesh last year, there have been a lot of technologies made available to actually get this job done.

The point being that in order to actually manage applications, the application operator needs to learn a lot. They need to learn all those technologies, and they need to actually set them up. And setting them up involves dealing with the Kubernetes cluster, which shouldn't be necessary: as an application operator, you just want to care about how the application is delivered and how it's managed, and you shouldn't need to care about the underlying infrastructure. The last one is that there can be different implementations and support across environments, so some of these might work well in one environment and not in others. And this leads to the next point of leaky abstractions: when you're trying to set up something like a service mesh on Kubernetes and you're trying to deploy Istio, you end up having to deal with a lot of YAML as an application operator, and this forces you into the nitty-gritty of Kubernetes, which shouldn't be necessary.
So to talk more about where the industry is headed and where we think this is being addressed, I'm going to hand it over to Rhea.

Thanks, Sidanwa. Here's an overview of where we are in this space today and what we've been doing, especially around Kubernetes. So we have Kubernetes on IaaS, and this isn't an exhaustive list of all of the technologies out there or all of the innovations — a lot of this is Azure-specific, but I tried to encompass a lot of other things too. So, Kubernetes on IaaS is a thing: we have a lot of customers, and hear from a lot of users, paving their own Kubernetes clusters on top of IaaS VMs in the cloud. They have to do a lot of the hard stuff with networking and storage and all of that; they have to pave it all themselves. Then we got into this wave about two or three years ago of managed Kubernetes, which is a huge thing, and a lot of us have created these managed Kubernetes services. Past that, about one and a half or two years ago, serverless Kubernetes became a thing. That was a hugely overloaded term that encompassed functions, containers as a service, pods as a service, containers that just spin up on the fly — that's kind of what we branded as serverless Kubernetes. It's an overloaded term that we use in marketing a lot: if you just walk down the sponsor booths, you'll probably see serverless Kubernetes or nodeless Kubernetes and all of that. That was kind of the next phase of innovation, where we're consistently trying to figure out how we abstract away more of the infrastructure for customers. That's where we started with the cloud: we wanted to abstract away bare metal and VMs on-prem, and then we wanted to abstract away what it means to put Kubernetes together. That's where managed services came along, and then serverless Kubernetes.
But we think there's something that's going to come after this. When we're thinking about what else we could abstract away, and why customers would want that — well, they don't actually care about managing a distributed system, right? Customers are just here to deploy their apps and see them running; they care about their business code. So while we continue to get there, there's probably something else that would hopefully abstract away maybe even Kubernetes itself, and hand the entire management of Kubernetes, or of the distributed system, off to someone else — possibly a cloud provider, possibly another operations team. There are a couple of things starting to get into that space. Rio from Rancher is a good example, where they're trying to have people just deploy applications — we'll get into it a little bit later — and also Knative on top of Kubernetes.

So, the roles within this landscape. Since Sidanwa basically brought up the new role of an application operator — maybe not so new a role — this application operator would sit in a space that's past Kubernetes. They wouldn't actually interact with Kubernetes itself; they wouldn't interact with the Kubernetes API. Only infrastructure operators would act within that space, because Kubernetes is too hard: you still have to manage a lot of stuff, and it's not really for developers to deploy applications, and not really even for application operators to deploy and run applications. Kubernetes just gives you the infrastructure for that.

All right, so Knative. They've branded themselves also as serverless Kubernetes; I would argue it's a little bit of a step further than that. They're giving you a bunch of building blocks for applications to run on top of it. For example, they have Serving, which will help you with functions, and they also have autoscaling — an autoscaling trait, or autoscaling component, within Knative.
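To give a feel for what that autoscaling building block does, here's a toy sketch of a Knative-style autoscaler that sizes on request concurrency and scales to zero after an idle grace period. The class, parameter names, and defaults are invented for illustration; this isn't Knative's actual API:

```python
class ScaleToZeroAutoscaler:
    """Toy concurrency-based autoscaler that can scale down to zero."""
    def __init__(self, target_concurrency=10, idle_seconds=60):
        self.target = target_concurrency      # in-flight requests per replica
        self.idle_seconds = idle_seconds      # grace period before zero
        self.last_request_at = None

    def desired_replicas(self, in_flight_requests, now):
        if in_flight_requests > 0:
            self.last_request_at = now
            # Enough replicas to keep per-replica concurrency at the target.
            return -(-in_flight_requests // self.target)  # ceiling division
        if self.last_request_at is None or now - self.last_request_at >= self.idle_seconds:
            return 0  # idle long enough: scale to zero
        return 1      # keep one warm replica during the grace window

a = ScaleToZeroAutoscaler()
print(a.desired_replicas(25, now=0.0))   # 3 replicas for 25 in-flight requests
print(a.desired_replicas(0, now=30.0))   # 1: still within the idle grace window
print(a.desired_replicas(0, now=120.0))  # 0: scaled to zero
```

The point is what the abstraction buys you: the application operator declares a target and a grace period, and never touches the replica math or the underlying Deployment.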
So if you want to scale from zero to X, you're able to do so by just installing Knative. And then there's Rio. Rio also uses Knative, but they've built a really nice UI through Rancher, and they have a bunch of different load-balancing and HTTP routing rules and canary deployments, so they're using a service mesh and deploying it for you under the hood. This is also getting closer to what we think of as managing applications: they're giving you all the features and all the tools you need to actually go and deploy your app directly on top of the infrastructure, without having to deploy all of this stuff yourself.

And then there's another problem that is creeping up in our community: we are creating a bunch of different distros. A lot of people are talking about how Kubernetes is kind of a new OS, a new operating system — people keep making this comparison, and it's everywhere. Basically, with CRDs, custom resource definitions, we're creating these new flavors of Kubernetes, because now every cluster can have all this extra functionality. But when you take a pod spec from, say, a Knative deployment and move it to, say, a deployment on Amazon, it's not going to work the same without you deploying and configuring that functionality first. And the same thing with Red Hat and their distribution, and then Rancher. So all of these — and even Microsoft, even other cloud providers — a lot of us are writing our own custom CRDs to give customers an easier time and more functionality on top of Kubernetes, but really what we're doing is kind of fragmenting the community. So this is a quote — it's just something that I wrote up, so I'm just going to read it: we promised flexibility and portability to users of Kubernetes, but then came the introduction of CRDs, custom resource definitions. They add extra functionality while still looking and feeling like Kubernetes, but this means workloads using these CRDs are no longer portable to any Kubernetes cluster.
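One way to see that portability concern concretely: scan a workload's manifests and flag any resource whose API group a plain Kubernetes cluster doesn't serve, meaning a CRD has to be installed before the workload will run. The manifests are shown as plain dicts to keep this self-contained, and the set of core API groups here is illustrative, not exhaustive:

```python
# A few API groups that any conformant Kubernetes cluster serves out of the
# box (illustrative subset, not the full list).
CORE_GROUPS = {"", "apps", "batch", "networking.k8s.io", "rbac.authorization.k8s.io"}

def portability_report(manifests):
    """Flag resources whose API group implies a CRD must be pre-installed."""
    needs_crds = []
    for m in manifests:
        # apiVersion is "group/version", or just "version" for the core group.
        group = m["apiVersion"].split("/")[0] if "/" in m["apiVersion"] else ""
        if group not in CORE_GROUPS:
            needs_crds.append(f'{m["kind"]} ({m["apiVersion"]})')
    return needs_crds

workload = [
    {"apiVersion": "apps/v1", "kind": "Deployment"},
    {"apiVersion": "serving.knative.dev/v1", "kind": "Service"},
]
print(portability_report(workload))  # ['Service (serving.knative.dev/v1)']
```

The Deployment moves to any cluster; the Knative Service only moves to clusters where that flavor's CRDs are already installed, which is exactly the fragmentation being described.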
They're portable only if you have the CRDs of that flavor already installed. So we're changing the landscape of Kubernetes; we're fragmenting it. And that's the not-so-secret killer of OSS communities. If we take an example from Linux, back in the day there were probably 12 to 15 different distributions, and now we only hear of three or four. You could argue that Kubernetes is taking that same route, and CRDs are most likely the way we're doing it. So customers are going to have a harder time as we continue to evolve and innovate in this space. What if we could use a common API, just like Sidanwa talked about, on top of the infrastructure layer? Then we could avoid this fragmentation, and we could continue building together as a community. This is just an open question; we don't exactly know what the future holds. But what we do know is that things like Rio and Knative and all these other implementations of CRDs that are popping up everywhere are probably not going to be the solution. We need something to tie all of this together. If you have any ideas or want to talk to us about it, we would be very open to doing that. So thank you. We have about five minutes for questions. Thanks.

Thank you guys. Any thoughts, questions? You can feel free to ask them now or come by after. People are very open to talking about this space, but no one's really done it yet. Yeah, and some of the customers that we work with, especially the really large ones, have built this layer on top of Kubernetes themselves so that their developers never have to interact with it. A lot of the large enterprises are trying to get away from exposing Kubernetes to their customers, and are building their own version of something like Rio, which they call a micro PaaS, but really it's just providing application functionality — things like canary deployments and autoscaling.
These are constructs that are application-oriented, and they just happen to use Kubernetes underneath the hood. So they're creating a UI surface that exposes just that, and doesn't really expose the HPA and all the other Kubernetes details, to actually just get the job done. We think this is going to be a really hot space, especially with things like Rio and Knative coming out; I think the next KubeCon, in San Diego, is just going to be focused a lot on announcements related to this space. Yeah, there's a mic.

I'm wondering, what's the difference, or the relationship, between the app operator and Helm?

So, the relationship between the app operator and Helm — I got it. Helm is a really good tool; I feel like a lot of us use it. I myself learned how to create applications only using Helm and Kubernetes. So Helm is really good. It was trying to be an application-operator-focused tool to help people deploy applications on top of Kubernetes, but you still have to deploy everything. You still have to manage your service mesh; you still have to manage all of the components that Helm is talking to. Helm just gives you an easier way to put a bunch of Kubernetes components together, deploy them in a repeatable way, and then upgrade that same YAML spec and upgrade the application. It's a little bit too loose, and it gives no opportunity for people to manage the infrastructure bits underneath it.

So Helm can deploy the application, but operators can manage it?

Exactly. The point being, Helm is really good for just the deployment, but after that you need something to actually manage and monitor the application, and that's where Helm falls short. It doesn't go as far as Rio and Knative, for example, which have gone further in helping people build applications and helping people not have to understand what the service mesh is and what exactly all that functionality provides.
They just say that they want this functionality — they want blue-green deployments — and then they would hopefully just get it from their infrastructure. Well, thank you guys. Thank you for listening.