Awesome. Thank you very much. I'm definitely excited to be here, and we're going to talk a little bit about AI today. The title of this talk is "Unleashing the Potential of Your AI Platform with a Crossplane Control Plane," and I'm always happy to talk about Crossplane and control planes and the power they enable behind the scenes. Just a little bit for starters: I'm at Upbound right now. A little bit about Upbound: we help organizations standardize on a single point of control and visibility for infrastructure and applications across multiple clouds and platforms. We're obviously the company behind Crossplane, our popular open source project, and Upbound is the commercial product we've built around Crossplane that really leverages the power of control planes at scale. A little bit about myself: I've got over a couple of decades of experience working in the platform, infrastructure, and reliability engineering space. I'm currently the head of engineering at Upbound, but I've had previous stints at companies like Airbnb, Uber, and Twilio; I'll tell a couple of stories about those companies a little bit later. I started out my career as a software engineer before getting into leadership and management, and I'm literally on my seventh startup. I've started a bunch of my own companies as well: some successes, lots of failures, but that's definitely the name of the game. Over and over, I've built, operated, and scaled so many platforms at this point that it's way too many to count. And just a random bit of trivia about me: I am an EDM producer and DJ. Maybe not as much as in the past, but you know, old DJs never die, they just start spinning in their bedrooms a lot more. So that's just a little bit about me. All right, let's get into the meat of this presentation.
And if you've ever seen me present before, you know I always start off by laying down the basics, and in this case we're going to talk a little bit about platforms before we jump into the specifics of an AI platform and what that actually entails. So what's a platform? A platform is an environment that abstracts the underlying complexity at which applications and infrastructure are consumed by your engineers. I definitely feel like "platform" is an overloaded word these days, which is why I'm very intentional with my definition of it. Basically, the platform is the thing that engineers use at your company to get work done on a day-to-day basis. You will have some interfaces into that platform; those can be APIs, they can be CLIs, or they can even be web user interfaces, etc. And then the platform has a bunch of things going on behind the scenes that you actually do not want to expose to your engineers or your developers. You really want to hide that complexity away: it makes it easier for them to do their jobs, it makes it easier for them to move much quicker, and it gives you the ability to change whatever you want behind the scenes, under the hood, essentially as much as you want. Generally, what a platform consists of is underlying infrastructure, applications that run on top of that infrastructure, and then workflows that coordinate everything in that infrastructure as well. A typical workflow might be the software development lifecycle, or a deployment workflow, etc., but you get the general idea. So I'm going to tell a couple of stories about platforms that I've built over the years, and I'm going to start with a story around a Kubernetes platform. Now, the names have been changed to protect the guilty.
So I won't reference the companies by name, but in this specific example I was working at a very large organization. We had tens of thousands of virtual machines, and we were undergoing a big migration to start leveraging containers and, essentially, Kubernetes. One of the things that happened as we started going through this migration was we realized that we weren't just introducing a new core container orchestration system to the company. This was actually an opportunity to leverage Kubernetes itself as a platform. So rather than just changing the way we bundled applications, we realized, wow, we can change the way we configure applications and set things up. We can simplify the workflows. It was at this point that we really started to see the advantages of leveraging Kubernetes as more than just an orchestration system: Kubernetes itself was actually a platform, and we could build all of the sophistication on top of it while hiding a lot of the underlying complexity from our engineers. In this specific case, when we rolled out the platform initially, we did not hide that underlying complexity. We actually exposed a lot of it directly to our end users and our engineers. There was obviously some frustration, because honestly, if you're an engineer at a large engineering organization who's tasked with shipping features at the end of the day, I'd argue: do you want to understand the intricacies of operating Kubernetes, dealing with YAML and all these sophisticated deployments and workflows and things like that? Or do you actually want someone else to develop a higher-level abstraction, so you don't have to learn all of those underlying concepts and you can really focus on your job? So there were definitely some big learnings here.
We went through several iterations of this platform before we settled on a simplified YAML config that hid all of the underlying complexity of actually running Kubernetes behind the scenes. One of the big advantages this allowed is that we were able to upgrade our Kubernetes infrastructure and change the way things worked inside of the platform without actually affecting what we exposed to our engineers and developers. And again, this was because we came up with a very simple YAML-based abstraction that was the main interface for those engineers. So that's one example of building a very large Kubernetes platform; it's obviously grown much larger since then. Another example of a platform, at another company, is what I refer to as an observability platform. In this case, the company had been around for a while, and at one point we did an inventory. We looked at all of the different alerting, monitoring, logging, exception reporting, telemetry, and incident management tools that we had, and we realized we had between 30 and 50, depending on your definition. This was crazy. We had all of these different tools that folks were interacting with, and at a minimum there were at least 10 different ways to configure how all of these tools worked together. The reality was we were essentially leaking all of these tools and abstractions onto our end users. They basically saw 50 fully different tools, and there was a lot of confusion. It was really hard to get started if you were a new engineer trying to understand all the various observability systems and tools we had behind the scenes. So in this case, we set out to unify all of these tools under a consistent platform, and we essentially followed the same approach that I just described in the Kubernetes example.
We started with a single abstraction that we then published to all of these engineers, and again, it was another one that was YAML based. Once we published this abstraction, our engineers started interacting with it to configure observability for their services. And then behind the scenes, we were actually able to unify and deprecate some of these systems; there's no point in having multiple monitoring or alerting systems, for example. We really did end up developing a lot more consistency. We reduced the number of tools and systems that we were using, and overall we just made it a much more pleasant experience to use observability at this company. Nowadays, you basically jump in, you bootstrap a new service, you edit a monitor.yaml and give it a little bit of information, and then boom, you have all of this preconfigured and running for you. And the very nice thing, which I think often gets overlooked with platforms, is that having really good abstractions on top of a platform does allow you to reduce the complexity that you're exposing to engineers. But it also means that it's a lot easier for you, as an infrastructure or platform team or engineer, to continue to iterate on the platform behind the scenes without forcing all of your users through costly migrations. We want them to require the smallest amount of context to get the largest amount of value; in my mind, that's the ultimate validation of consistency. So, with that said, you should really be building your platform using a control plane. I have built a lot of platforms over the years, and we just talked about a couple of them. The reality is I really do think we are in the control plane era.
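To make that monitor.yaml idea a little more concrete before moving on, here's a sketch of what such a file could look like. Every field name here is invented for illustration; the real abstraction was internal to that company. The point is how little context a service owner has to provide:

```yaml
# Hypothetical monitor.yaml -- all field names are illustrative.
# The platform expands this small file into alerts, dashboards,
# and log routing behind the scenes.
service: payments-api
owner: team-payments
alerts:
  error-rate:
    threshold: 1%      # page when more than 1% of requests fail
    window: 5m
  latency-p99:
    threshold: 500ms
dashboards: default    # generate the standard dashboard set
logs:
  retention: 30d
```

A dozen lines of declared intent, and the platform owns everything underneath, which is exactly what lets the platform team swap out tools behind the scenes.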
The control plane era was really popularized by the large cloud companies like AWS and Google. They all use control planes behind the scenes; that's basically the thing we interact with when we work with these large cloud providers. So why shouldn't you get the same benefit for your company as well? Let's talk a little bit about that. Very quickly, if you're not familiar with Crossplane, I'm going to go over some of the basic concepts right now. I really like to break Crossplane, and the Crossplane control plane, down into four main components. The first is that it gives you the ability to create a cloud abstraction layer, similar to a database abstraction layer. A lot of times when you're working with a system like a database, whether it's Postgres or MySQL, having an abstraction layer in front of it allows you to talk to different but similar components in the same, standardized way. That's essentially what Crossplane does with our provider ecosystem: it allows you to create abstraction layers for different cloud providers. Basically, anything with a public API can have a Crossplane provider created for it, and then all of that functionality essentially gets pulled into the Crossplane ecosystem as well. So that's the first building block. The second one is what we call compositions, or another way to think of it is composable custom abstractions. It basically allows you to take all of these things that you get from having a cloud abstraction layer and define them into your own interfaces, your own APIs.
And so what this means is I could take something like provider-aws as an abstraction, and then I could say, hey, I want to create the concept of something generic, like a bucket. And that bucket on AWS might actually mean an S3 bucket, on Google it might mean something different, and on Azure it might mean something different as well. So this ability to define all these custom abstractions, to bundle them all together, and to compose them into these higher-level concepts is a really, really powerful concept in Crossplane. Another big building block is that you can take all of this stuff that I just described and package it all together. Oftentimes I like to think of this as: Crossplane enables the app store for the cloud, and I 100% believe that that's true. You can go to the marketplace, grab a package, install it on a Crossplane control plane, and it will basically unbundle all of those components, start running and configuring them, and bring whatever you've defined to life. And obviously the last building block here is a control plane. A Crossplane control plane is basically the engine that enables all of the other things that I've just talked about. So that is Crossplane at a very, very high level. So why control planes? What's the point? Why use a control plane in this day and age? I talked about this a little bit with the big public cloud providers, but the reality is that the building blocks I just described actually result in a faster time to market when you're building out new things. The ability to go to the marketplace and grab a pre-existing package for something that's already configured is really no different than using a package manager on a Linux system, for example, right?
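Circling back to the generic bucket idea for a second: in Crossplane, that kind of custom abstraction is defined with a CompositeResourceDefinition (XRD). Here's a rough sketch, with a made-up API group and schema, just to show the shape of it:

```yaml
# Illustrative XRD for a cloud-agnostic bucket. The group, names,
# and fields here are invented; per-cloud Compositions would map
# this one API to S3, GCS, or Azure Blob resources.
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: xbuckets.platform.example.org
spec:
  group: platform.example.org
  names:
    kind: XBucket
    plural: xbuckets
  claimNames:
    kind: Bucket
    plural: buckets
  versions:
  - name: v1alpha1
    served: true
    referenceable: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              location:
                type: string   # e.g. "us" or "eu"; each Composition
                               # translates this into a real cloud region
            required:
            - location
```

The user only ever sees `Bucket` and `location`; which cloud actually backs it is a Composition detail.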
We don't all go and bundle the packages on a Linux system ourselves today; we use package repositories and leverage others' work. And that's one of the big advantages of jumping into control planes and the ecosystem enabled by Crossplane. The second advantage is the ability I mentioned to hide the underlying complexity of your infrastructure behind these custom abstractions. Again, I talked about two examples earlier, both the Kubernetes platform and the observability platform. The ability to create custom abstractions that hide the underlying complexity from your users means that you can keep improving the platform behind the scenes without changing the interface that you're exposing to engineers. One of the biggest advantages of this is avoiding what I call costly running-in-place migrations, where you go to improve something on the underlying platform but require your end users or developers to take on a bunch of additional work to actually get those benefits. And to be honest, that's not fair, right? What if the public cloud providers did that to you? What if AWS said, hey, every time you want to see improvements in S3, you've got to go through a migration? It'd be dead on arrival, and we should really think about our own platforms in much the same way. Another big advantage of using a control plane, specifically in the Crossplane example, is declarative APIs. This means you can define exactly what you want in your infrastructure, and a Crossplane control plane will ensure that your infrastructure always matches that state. One of the best ways to get consistency is by using a declarative API: you don't get anything that you didn't ask for, and if something magically shows up, Crossplane will reconcile and ensure that those resources are no longer running.
And the last big advantage is that Crossplane and our cloud abstraction layer really allow you to realize the multi-cloud dream. What's the multi-cloud dream? Essentially, being able to leverage functionality from a lot of different cloud providers. And by leveraging the abstraction layer, again, you're not exposing any of that to your end users. Your end users don't actually care where something is running, only that it is running. So this is one of the big enabling pieces of functionality with a Crossplane control plane: true vendor neutrality. I mentioned earlier that we are in the control plane era, and if you've been around the block like I have, you've definitely seen this coming for a while. Way back in the day, we were all doing scripting, and configuration was sort of imperative. Then we moved into the infrastructure-as-code era. I remember the first time I played around with Chef and Puppet, for example; it was awesome, and it was much better than the things we were doing at the time. But now I really do believe that we are in the control plane era. This has been popularized by the public cloud providers and essentially Kubernetes as well, and now we get advantages like declarative APIs, full self-service, automation, and a whole host of other features behind the scenes. So it's definitely awesome. I've given you a couple of basic building blocks there: I talked a little bit about what a platform is, and about Crossplane and some of the advantages you get from having a Crossplane-powered control plane. Now let's actually jump into the interesting part of this talk, which is: what is an AI platform, and how can I leverage a Crossplane control plane to build one? So, what is an AI platform?
It's very interesting, because when I sat down to start working on this talk, I knew a lot about AI; I knew a lot about the various tools and technologies involved. What I didn't realize is how much of a mess the AI landscape is, and it's gigantic. We have overloaded all of these tools and terms and services and approaches together, and it can be extremely confusing if you're just starting out and trying to understand the space. So one of the things I want to do here is lay down a primer on AI from my perspective, and this is extremely high level, so we can all be on the same page about exactly what we're talking about. The very simple model that I use today is to think of AI as essentially having three big components. It is a pipeline, right? It starts with a set of learning processes, and this can be machine learning or deep learning, for example. This is basically gathering data and processing it in a way that you can then feed it to a model. The second big part of the pipeline is modeling, and this is actually the crux of AI. When we talk about things like large language models, or all this crazy new video stuff that we're getting, modeling is essentially the brains of the AI. These are the algorithms and structures that process all of the data we just gathered in the learning phase and then predict patterns based on the data they've been trained on. So modeling is the second part of the pipeline. And then the last part is deploying. Training a model is one thing, but if we don't actually deploy it and use it, then what are we doing? It's just basically data that's sitting somewhere, going unused. So the third part of this pipeline is the deployment stage.
And this is basically the implementation of your model. For example, with ChatGPT, which I know is really popular right now, a chatbot, or ChatGPT itself, would be a deployment of the model; this is what allows you to actually interact with it. It also includes a lifecycle management phase, which means it's not just enough to deploy something: you've basically got to operate that thing forever. The lifecycle stage includes everything from deployment and monitoring to updating the things you're running, whether that's the models or the data itself via retraining, then ensuring that this keeps running, and, when you're done with it, that all the underlying resources, models, etc., are pruned as well. So it really is helpful to think about the AI landscape as these three big stages or pillars, with all of these things underneath. All right. So when we think about an AI platform, it is very similar to the platforms I just talked about. There is this notion of your end user that interacts with the AI platform through a set of predefined interfaces; again, these can be APIs, user interfaces, CLIs, whatever you want. So the end user interacts with your AI platform through these interfaces. But now what's in the platform has changed a little bit, because now we're specifically dealing with an AI pipeline: it consists of all these learning, modeling, and deploying tools and technology and infrastructure. So again, it's really useful to think of an AI platform as being no different than any other type of platform; it's just that the things you're using inside of the platform are specific AI tools and technologies and services.
And if I'm honest, an AI platform generally involves a lot more steps, if you will; the pipelines tend to be a little more complex than some of your basic platforms. So we're going to get into a little bit of that right now. I called out that you really need to think about the AI platform as just another pipeline, and that's true regardless of all the underlying tools and technologies: it's still a platform. We've talked about the three phases of learning, modeling, and deploying, so I want to get a little more specific now. The learning stage involves everything from data collection to data preparation. Then you go to the modeling stage; this is where you do model training and model evaluation. And then the last stage is the deploying stage, where you take that model, deploy it, monitor it, and then do all of the lifecycle management and optimizations as well. At the end of the day, you're still just building a platform. The workflow is just a little bit different. You say tomato, I say to-model, right? So, jumping into something a little more specific, let's do a deep dive into the learning stage, because I think folks often imagine that an AI platform is this crazy different thing that they've never actually worked with before. The reality is, if we look at stage one, the learning stage of an AI platform or pipeline, it's really no different than an existing data pipeline that you might already have, where you're collecting data from a bunch of places and then going through a preparation stage where you take that data and get it into a usable form to feed to a model much later on. Some of the technologies that might be involved here are things like databases, object stores, distributed queues, ETL systems, etc. At the end of the day, it's just a data pipeline, right?
And so this is a cautionary tale for all of you. If you're at a company and you're all really excited about AI and leveraging it internally, but you don't have your data story together, then you're going to fail right away. And I really want to double down on that statement, because, as the saying goes, garbage in, garbage out. Not having the data that you're going to use to train and feed these models in pristine shape means you're essentially just going to get garbage from the models. Even if you have not started deploying AI in your organization yet, you should still be investing the time to ensure that your data collection and data preparation processes, your pipelines, everything there, are in really, really good shape. Because again, if they're not, you're going to fail right out of the gate. So there are a few challenges with building an AI platform, especially when it comes to deploying and managing infrastructure. One of the things I've definitely said is that AI platforms can require a bit more orchestration. There are lots of components; a data pipeline is just one part of the AI platform, and the complexity extends a lot further than that. So I've bucketed a bunch of challenges that I've seen with deploying these platforms in the wild. One of the biggest ones is just maintaining consistency. AI platforms tend to leverage resources across multiple environments, potentially across multiple cloud providers, using different instance types and resources and different services. Just going to AWS, for example, and looking at some of their AI-related services, there are dozens of them, right? So maintaining consistency in an environment like that can be extremely, extremely challenging.
Sort of related to that is this notion of resource allocation and management. Who knows that we're in a GPU shortage right now? I'm sure you all do, especially if you're doing any type of AI work. This is definitely true with cloud providers as well. So when it comes to finding and deploying things like GPU instances, you may not actually be able to get these resources where you want them. You really need to be flexible, with technology that allows you to leverage resources from Google Cloud if they exist there, or from AWS if they're currently available there, or even from some other bespoke GPU provider. This has been a really big challenge recently, just because of the GPU shortage, and these resources are definitely at a premium. I talked a little bit about this third bucket already: complex dependencies and pipelines. These platforms are a mesh of things: databases, data stores, ETL components, different services, load balancers, APIs, user interfaces; the list just goes on and on. So there's really an orchestration problem with these AI platforms. Because you're taking all these different components and wiring or orchestrating them all together, you've got to have a really sophisticated system to be able to do that and to manage all of that complexity, not just at the moment of deployment, but over the entire lifecycle of the AI platform. A few more challenges: security and compliance. Guess what? It is the wild wild west in AI land right now. I don't really think anybody's thinking about this. And I pointed out that an AI platform basically starts with a data platform or a data pipeline. So guess what?
A lot of the data that you're sending into your AI pipeline is under the same restrictions that you might see with general data privacy laws and issues and things like that. The reality is we're all just kind of ignoring this right now and feeding these models whatever we want. But there is a security and compliance aspect to building an AI platform as well, and that's definitely a big challenge. I think we're going to see more and more on this subject over time, as these models and AI systems grow in scale and complexity. I talked a little bit about the ability to leverage resources from different cloud providers, but you also want to be able to do that in a way that ensures portability. Google resources and AWS resources, for example, don't work exactly the same. So managing the way that you deploy your platform, to leverage whichever is the most convenient, or the cheapest, or even just available, is extremely important. Otherwise, you're going to be leaking that complexity to your end users, and again, they're going to be frustrated. They don't really care where their models get trained, as long as their models get trained. The last aspect, semi-related, is scaling infrastructure. Again, we're in a GPU shortage; resources are at a premium right now. Sometimes demand is very spiky: training a model may require a bunch of just-in-time resources that are maybe 10x what you actually need in production over time, and managing this stuff is extremely challenging. So that's just a short list of AI platform challenges. Now let's talk about Crossplane and an AI platform. Let's basically take the two things I've just talked about: a Crossplane control plane.
Let's take an AI platform, and let's see how this all stitches together and how it really solves some of the problems and challenges I've just been talking about. So what do we get when we stitch all of this together? We essentially get a control-plane-powered AI platform. I mentioned before that you should not be building your platform without a control plane, and that is also true for AI platforms. I've broken this out into five major advantages. There are actually a lot more, but for the sake of time I want to focus on these, because I definitely think they're the biggest. The first is the ability to create custom APIs and interfaces; this reduces the complexity that you're exposing to your end users. Then there's the notion of declarative configurations: you ensure that the things you want are the only things actually running. Cloud provider abstractions enable portability, so you can move workloads around, you get high availability, and you can go to wherever the resources actually are. Then there's the ecosystem and integrations; this allows you to leverage the work that others in the ecosystem have already done, so you don't need to start building your AI platform from scratch. You can just grab a pre-existing package out in the wild and start from there, and this is a really big advantage when it comes to saving time, bootstrapping, and getting a faster time to market. And the last big advantage is this notion of reconciliation and orchestration. An AI platform is a complicated pipeline; we want to be able to operate it securely and efficiently, and leveraging a control plane to do that and manage it over its entire lifecycle is a huge advantage. So let's jump into this in a little more detail. We talked about custom APIs and interfaces.
AI platforms really require these abstractions to hide the underlying complexity from your engineers. At the end of the day, this creates a more intuitive experience. Not everyone is an infrastructure engineer, and a lot of the folks working with these AI systems may be data scientists, analysts, or other users at your organization. So the simpler we can make the things we expose to our end users, the better. Getting into a little more detail here: Crossplane has a concept called a composition. I talked about this a little bit earlier via composable resources. In this example, I might create a composition that's as simple as asking my users for three things. It says: A, what do you want to name this thing I'm going to deploy? B, where do you want to deploy it? And C, give me an idea of how many resources it should consume. I've been very intentional here, because in this example, that composition, and those three pieces of information it asks for, are all that I'm exposing to the end user. This is essentially a custom API in Crossplane. Now behind the scenes, Crossplane is actually doing a bunch of really interesting work. It knows, for example, that if I've asked to deploy something in "east", east actually maps to an AWS region, us-east-1 in this example. It knows that as part of the data processing step you want to store information in an S3 bucket, so it configures an S3 bucket and names it training-data. Then it configures a pool of 100 instances to do the model training, and it deploys an LLM on those instances. And again, all we said is, hey, we want a composition of this type, and it's inferring the rest behind the scenes.
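Concretely, the claim a user files against that kind of custom API could be as small as this. The API group, kind, and field names here are made up for illustration; the point is how little the user has to specify:

```yaml
# Hypothetical claim against a custom AI-training API.
# The user supplies only a name, a region, and a size; the
# Composition behind the API creates the S3 bucket, the
# training instance pool, and the LLM deployment on their behalf.
apiVersion: ai.platform.example.org/v1alpha1
kind: TrainingPipeline
metadata:
  name: my-llm
spec:
  region: east    # mapped behind the scenes to us-east-1
  size: large     # mapped to training and API-server pool sizes
```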
And then again, in this example we used a pool size of large, so it's also configuring a pool of API servers to handle requests once the model is actually deployed to production. So that's a very simple example of how you might leverage a composition and a custom API to hide the underlying complexity from your end users and give them really bite-sized chunks of things to deploy without them needing to know how it all works behind the scenes. The next advantage is the notion of declarative configurations. Again, this is simply: I only want the things that I've declared, and that's it. If any of you are familiar with the term "shadow IT", it's this notion that folks go out and manually deploy things into an environment, sidestepping your normal deployment tools and processes. Oftentimes this is someone taking a credit card, signing up for a SaaS service, and then linking that into your infrastructure. One of the nice things about having a control plane powered AI platform is this ability to define exactly what you want. In this example, we're saying: hey, I'm going to create a control plane called chatbot. It's going to get its configuration from a Git repository; I want you to make sure that configuration is in sync roughly every 15 seconds, and I want you to use the main branch of that repository. The great thing about this is that sometimes you may not want your users to always create instances of your underlying AI platform and resources themselves; maybe you want to configure that automatically on their behalf. And this ability to declaratively define something like a control plane actually extends across the board.
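A declarative control plane definition along those lines might look like the following sketch. The API group and the field names here (`source.git`, `pullInterval`) are assumptions for illustration only, since the exact schema depends on your Upbound version:

```yaml
# Illustrative sketch only: the group and field names are assumed,
# not quoted from a specific Upbound API version.
apiVersion: spaces.upbound.io/v1beta1
kind: ControlPlane
metadata:
  name: chatbot                  # the control plane is called "chatbot"
spec:
  source:
    git:
      url: https://github.com/example/chatbot-config.git  # hypothetical repo
      ref:
        branch: main             # track the main branch
      pullInterval: 15s          # re-sync configuration roughly every 15 seconds
```

The point is the shape, not the exact fields: the control plane itself is just another declared object, sourced and kept in sync from Git.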
I'm just using a control plane in this example, but it means it's essentially possible to reproduce your entire AI platform and configuration from a Git repository, and that's an extremely powerful concept. Next up we have cloud provider abstractions. In this example, I've really just shown Crossplane's declarative ProviderConfig and then a declarative managed resource configuration. This gives you an idea of how you can leverage these pre-existing cloud abstraction layers to make sure you're using the best of what the different cloud providers have to offer. In this example, we've started by installing a Crossplane provider on top of GCP's Vertex AI service. Great, we've now given Crossplane the ability to talk to that provider, and we can start declaring and creating resources on it. Here, I've created a resource called Dataset that essentially exposes the underlying Vertex AI resource. And it's really that simple. This ability to declaratively describe the things you want, and to access them across all these different cloud providers via a cloud provider abstraction, is a really powerful concept. It allows you to harness provider-specific tools and services while, again, hiding the underlying complexity from your end users via compositions. Next is something we've briefly touched on: ecosystem and integrations. Crossplane has a great ecosystem. Crossplane itself is a CNCF project — it's actually owned by the CNCF, it has a governance policy, and it's a truly vendor-neutral ecosystem.
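Concretely, those two pieces might look like this. The ProviderConfig shape follows the Upbound GCP provider family; the Dataset resource is a sketch, and the exact group and required fields for the Vertex AI provider may differ:

```yaml
# ProviderConfig: gives Crossplane the credentials to talk to GCP.
apiVersion: gcp.upbound.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  projectID: my-gcp-project          # hypothetical project ID
  credentials:
    source: Secret
    secretRef:
      namespace: crossplane-system
      name: gcp-creds
      key: credentials
---
# Managed resource: a Vertex AI dataset, declared like any other
# Kubernetes object. Group and fields are sketched, not exact.
apiVersion: vertexai.gcp.upbound.io/v1beta1
kind: Dataset
metadata:
  name: training-dataset
spec:
  forProvider:
    displayName: training-dataset
    region: us-east1
  providerConfigRef:
    name: default                    # use the ProviderConfig above
```

Once applied, Crossplane reconciles the Dataset against the real Vertex AI API just like Kubernetes reconciles a Deployment.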
And one of the advantages of a vendor-neutral ecosystem is that all the vendors come to play, and we see that in the form of all these different Crossplane providers that enable you to talk to different cloud providers, managed services, and vendors. This also extends to our configuration and package ecosystem. You can go to the Upbound Marketplace today, search for configurations, and find pre-existing things that folks have built. They're essentially packages you can take for free, import into your own control plane, and start leveraging that same functionality and capability right off the bat. I'm not joking when I say this is basically like an app store for the cloud, because it really is: you can grab these packages and start using the components right away, and we're actually going to give you one to try out at the end of this presentation. And last but not least: how do we keep all of this together? How do we make sense of the world? Lifecycle management of an AI platform is an extremely challenging problem. You've got all these different components that have to work together seamlessly for the life of your AI service. So you have this notion of a control plane continuously orchestrating the resources that you need — it deploys them and ensures they're always there — and continuously reconciling the state of the world to ensure that the things you've declared in your control-plane-managed environment are the only things that are ever there.
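Installing one of those marketplace packages into a control plane is itself a one-object declaration. The `Configuration` kind is standard Crossplane packaging; the package path and version below are placeholders, not a real listing:

```yaml
# Install a pre-built configuration package from a registry.
# The package reference is a hypothetical placeholder.
apiVersion: pkg.crossplane.io/v1
kind: Configuration
metadata:
  name: ai-platform-starter
spec:
  package: xpkg.upbound.io/examples/ai-platform-starter:v0.1.0
```

Applying this pulls the package, installs its custom APIs and compositions, and resolves any provider dependencies it declares.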
We refer to it as drift when the state of the world doesn't equal the state that you've defined or declared in a configuration, and Crossplane will continuously reconcile that state. This is great for your security story, because you're not running anything you didn't previously declare. It's also great for your efficiency story, because there aren't going to be all these hidden resources that folks have deployed and then essentially forgotten about — and with AI platforms and all their different components, that is something that is very, very easy to do. One of the things I'm actually curious about is what percentage of a cloud provider's revenue comes from wasted resources — resources that folks have spun up and then forgotten about. I have some guesses, and I actually think it's much larger than any of us realize. All right, so we've talked about all these different components, and now I want to introduce a concept. This is just one way you might think about creating a composition for your own custom AI platform, because this custom API abstraction is really the thing you're going to give to your end users. Again, it hides all of this underlying complexity, and it is a really, really powerful concept.
So in this example, we've created a composition called chat-vibe, and it's got a bunch of arguments you can pass. The first is the model you want to use: maybe I pass llm as the model argument and it configures an LLM-based platform, or I supply prediction as the model and it configures a time-series, prediction-oriented platform — you get the general idea. Again, this is a way to really hide the underlying complexity, because this is logic that will cause Crossplane to render the composition in different ways based on what you've passed it. We talked a little bit about capacity: hey, I have a rough idea of how many resources I think this will need — small, medium, or large — and that's really just a hint to the underlying infrastructure. Same thing with location. But now we get into the more interesting parts, because in this sample composition we've simplified things a lot. We've said: hey, I'm going to have this argument called dataTransform that lets me custom-configure the way I want to mutate the training data I'm using for this AI platform and model, and we've sketched a couple of potential values for it, such as etl. Same thing with the deployAs argument for this composition: I'm very easily saying that I want the thing that's actually running — the thing users of this model interact with — to be a REST API, or maybe a website, or maybe both, or maybe also an integration with something like TensorFlow behind the scenes, and then to use this custom service endpoint for retraining.
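Put together, a claim against that chat-vibe API might look like this sketch. All of the names and enum values are assumptions based on the arguments just described:

```yaml
# Hypothetical claim for the chat-vibe composition described above.
apiVersion: platform.example.org/v1alpha1
kind: ChatVibe
metadata:
  name: my-chatbot
spec:
  model: llm              # llm | prediction — selects how the composition renders
  capacity: large         # small | medium | large — a hint to the infrastructure
  location: east          # mapped to a concrete cloud region behind the scenes
  dataTransform: etl      # how training data is mutated before training
  deployAs: rest          # rest | website — the user-facing serving surface
  retrainEndpoint: https://retrain.example.org   # hypothetical custom retraining service
```

Six short fields are the entire user-facing surface; everything else is the composition's business.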
So again, compositions are a very, very powerful concept. I'm going to keep beating this drum, because I really do think it's a Crossplane control plane superpower: the ability to define these higher-level concepts and hide the underlying complexity from your end user. Now, that was a pretty lofty composition, so let's get a little more detailed and talk specifically about what that composition might actually render. Based on the arguments I just mentioned, the composition might go: okay, based on the arguments you described, I'm going to configure an S3 bucket to store data. Then I'm going to use a Kafka pipeline that prepares all the data in that S3 bucket for training. Then I'm going to deploy a set of AWS P3 GPU instances running a GPT-style LLM to do the model training and evaluation. Then I'm going to deploy that model behind an AWS Elastic Load Balancer, with OpenTelemetry and Grafana for monitoring, and hook it into a custom service for model optimization. Again, this is a good example of how you take all these different concepts in an AI platform and pipeline, stitch them together on top of the underlying services exposed by the cloud provider, and take advantage of them. In this chatbot platform example, we're also building all of this on top of a cluster-as-a-service primitive — essentially managed Kubernetes — so we get to manage all of these resources and everything we're deploying as part of a Kubernetes cluster. So now here's the fun part: this is actually a real-world example. If any of you are interested, you can go try out what we call the Cluster-as-a-Service ML (CaaS ML) configuration.
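Behind such a claim, the rendered side might be a classic Crossplane Composition like this sketch. The composite type is hypothetical, but Bucket and Instance are real kinds from the Upbound AWS provider family:

```yaml
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: chat-vibe-aws
spec:
  compositeTypeRef:
    apiVersion: platform.example.org/v1alpha1   # hypothetical composite type
    kind: XChatVibe
  resources:
    - name: training-data
      base:
        apiVersion: s3.aws.upbound.io/v1beta1
        kind: Bucket                    # S3 bucket that stores training data
        spec:
          forProvider:
            region: us-east-1
    - name: training-node
      base:
        apiVersion: ec2.aws.upbound.io/v1beta1
        kind: Instance                  # GPU node used for model training
        spec:
          forProvider:
            region: us-east-1
            instanceType: p3.2xlarge    # AWS P3 GPU instance, as in the example
```

In a full version, patches would map the claim's location and capacity fields onto the region, instance type, and instance count, rather than hard-coding them as shown here.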
This configuration is provided by Upbound, and what it basically does is deploy the Stable Diffusion model on top of an already-existing cluster-as-a-service configuration in Upbound. So again, we're building on pre-existing components: you don't have to go out and figure out how you're going to manage Kubernetes clusters — you just use that CaaS primitive and deploy a bunch of Stable Diffusion-related things on top of it. In addition, there are a bunch of components orchestrated together — again, one of the big powers of a control plane is its ability to continuously orchestrate and reconcile — and in this example we're using Upbound, Flux, JupyterHub, Ray, and Kubernetes all together to enable this use case. So this is really exciting. I definitely recommend you go to Upbound, create a control plane, launch this configuration on it, and see how easy it is to leverage the already existing ecosystem to get started on your AI journey. Very quickly, wrapping this all up: obviously, I mentioned that I work at Upbound. Upbound is really all about running control planes at scale, easily and efficiently. We are behind the open-source project Crossplane — we're stewards of the Crossplane project and are committed to its continued success — and we've created a commercial product, also called Upbound, that makes it really easy and efficient for you to run and manage all of these control planes at scale. We often like to say no one is ever going to deploy just a single control plane. You want control planes that scale easily as you add more resources for them to manage and define more things for them to do via compositions, and the Upbound product allows you to do that easily and effortlessly. Now, just a little more about the Upbound platform itself.
You can use Upbound today to build everything from internal developer platforms to your AI platform to your data pipelines. You get all of these advantages from leveraging the work we've done. At the very core, we're running Crossplane control planes — managing and scaling them reliably for you. We enable you to easily plug into Git or whatever other source control system you're using. We give you built-in management and monitoring capabilities, plus security and resiliency capabilities like RBAC and audit logs. And we allow you to scale your control planes to meet whatever you're going to throw at them: the more things you want a control plane to manage, the more work it has to do to continuously reconcile and orchestrate everything. And with that, we're essentially going to wrap things up and move to Q&A. Again, we've talked about some of the Upbound reference platforms — these are just a few of the things you can build and leverage with Upbound today. The advantage of using Upbound, a control plane, and Crossplane is a faster time to market, especially in the area of AI. You can bootstrap a data pipeline, you can leverage the CaaS ML example we just talked about, or you can do something as simple as managing Kubernetes clusters at scale. This is all enabled, and really easy to do, with the Upbound platform today. All right, with that said, we're now going to move to the Q&A portion. I'm checking, and it doesn't look like we have any questions yet, so I'll give it a few more minutes to see if any roll in. Otherwise, I'd love any comments or feedback — don't hesitate to reach out. Again, I'm @summary on Twitter and LinkedIn.
Happy to talk shop about control planes, Upbound, or your AI platform as well. All right, we do have one question: does the OTel data feed back into data collection systems? In this case, the answer is: it could. These are all things you configure in the underlying composition, and it's one of those cases where I can start with a very simple abstraction via a composition and add more capabilities and functionality behind the scenes, which the end user essentially gets over time. There's nothing precluding you from doing this. Cool, good question — thank you, Ranjee. All right, another question asks about Spaces, which is newly coming out. Great. The short version is that Spaces is a capability of the Upbound product: in the same way you can use a declarative API on a control plane, you can now do the same with groups of control planes. Spaces is an awesome concept that allows you to declaratively configure and manage control planes — from a Git repository if you want, and from your own on-prem environment if you want as well. Spaces are really nifty because you can deploy Spaces into your own Kubernetes cluster and then scale up your Crossplane control plane usage in that manner. Spaces, again, is another really popular concept. In a sense, it's an answer to a problem we actually created earlier: how do you easily manage all of these control planes at scale? We want the same declarative capabilities you get inside the control plane to extend outside of the control plane as well. So Spaces is really powerful, and you can run it on-prem — definitely reach out to us if you've got questions about that. And the questions are coming in. All right, awesome — you all are an engaged crowd. I love it.
Can you please show again and describe the screen with the three phases of infra management, where the middle was Terraform and Ansible? Yes, I can. This is what we refer to as the evolution of building platforms and managing applications and infrastructure. The idea is that we started out with scripting, then moved to the infrastructure-as-code era, which was great, but I think we all understand a lot of the pitfalls of managing infrastructure as code. One of the biggest is definitely drift: you define the state of your infrastructure using something like Terraform or Chef, but if you're not continuously running that process — and you may not actually run it for months — then when you finally do a terraform apply after months of no updates, suddenly the system wants to change hundreds or thousands of things. That's why we really think a control plane is superior: there's this notion of continuous reconciliation, where the control plane is constantly monitoring the state of everything you've asked it to manage and ensuring it always matches the desired state. Anyone who has used Kubernetes, run kubectl apply on the command line to edit a resource, saved that config, and then watched Kubernetes change the state of the world in real time understands the power of this reconciliation: what you say is always what you get. The next question is: can you use this to build embedded systems platforms? Great question. I'd actually like to understand a little more of the specifics there. I can't see any reason why you would not be able to, but understanding a little more of the detail would probably be helpful.
So feel free to reach out, Mohammed, if you want more info on that. And the last question: for data collection in an AI domain, do you see data lakes and Databricks-style MapReduce systems being more effective, or Snowflake, or a mix of both? That's a great question, Ranjeev. The reality is that's actually not up to me to decide — it's up to you, and that's one of the advantages of exposing these custom abstractions and compositions: you get to decide what's behind the scenes and under the hood. I really did try to drive home the point that step one of an AI platform involves essentially all the same challenges you had with data pipeline processing, and the needs of different organizations are going to differ. One of the nice things about Crossplane and control planes is that we're unopinionated — we allow you to stitch in whatever you want, because you're the best person to know what tools and services you want to leverage. All right, nice set of questions. Thank you to all who contributed, even if I didn't call out your name specifically — I definitely appreciate it. It's sometimes lonely up here as a presenter, just talking, so I really do value the interaction; any of you who know me in real life can probably attest to that. Since we don't have any more questions, I want to thank you all for attending. Again, don't be afraid to reach out — I'm @summary on pretty much Twitter and LinkedIn, or via email at upbound.io. With that said, I want to say thanks to the Linux Foundation, and I'm going to pass things back over to Candice. Thank you all. Thank you so much, Summary, for your time today, and thank you everyone for joining us. As a reminder, this recording will be on the Linux Foundation's YouTube page later today. We hope you join us for future webinars, and have a wonderful day.