Hey, thanks. Thanks, Kristina. Hi, everybody. Joe Taborek here from D2iQ, and with me, our CEO, Toby Knaub, a longtime innovator and thought leader in cloud native. And we're going to kick it off here right with the headline for today's webinar, right? So the market is moving very fast towards this notion of smart cloud native. And we realize we still run across people every day that are not familiar with the term. So we're going to start with the basics. So Toby, I'm going to start with a prompt and tee one up for you, which is: can you describe smart cloud native for those that have not yet experienced it? Absolutely. Thank you, Joe. Hi, everybody. Thanks for joining this webinar. Very excited to talk about what smart cloud native is. We see it as an evolution of cloud native applications. Essentially, smart cloud native applications are cloud native applications that have AI capabilities built in, in a way where that AI is core to what the application does. So you can think about it as a digital product, a digital experience that has AI capabilities and is also automatically improving, right? That's one of the key characteristics of a cloud native application: we can continuously update it, deploy it whenever we want to make updates to that application. And I think those smart cloud native applications will be key to a lot of tomorrow's winning products. And here are some examples, right? So we all use smartphones. We've used those for a couple of years. And they have tons of AI capabilities built in, right? They have a spell checker. There's usually a photos app that automatically sorts our photos using AI. There's a digital assistant that's built in. And all of those capabilities in the phone have AI running in the phone, but they also have a component running on the cloud, right? That makes this product work. And that's an example of a smart cloud native application, right? It's a cloud native app.
It runs in the cloud and it has those AI capabilities. And the AI is really core to what the thing does, right? Like if you take the AI out, it wouldn't be the same application. It wouldn't exist the same way, right? And so we're all familiar with that, right? Because we use smartphones all the time. And some other examples might sound very familiar, like self-driving cars; we talk a lot about those. They obviously have AI capabilities and they're connected to the cloud too, right? So that's another example of a smart cloud native app. And they also wouldn't exist without those AI components. So these are some of the things that we're already talking about every day, but there are other examples of smart cloud native apps in literally every single industry around the globe. Here's one that I found recently that I think is really exciting, which is actually putting AI into medical imaging devices, like CT scanners and MRI scanners and ultrasounds, right? Those machines have always been very compute heavy, right? They need a lot of compute capabilities to do 3D image reconstruction and things like that, but they're now starting to use AI to just make those products better. There's some research, for example, called fastMRI, where they're using AI to make MRI scans much, much faster, like four times faster, which means it's gonna get cheaper. It's a better experience for the patient because you're not in that machine for as long. And because the cost goes down, it's gonna be available to so many more folks, and it's really just gonna improve patient outcomes significantly. Other examples are smart cities, and I think we're gonna see a lot of this for the next generation of games too, right? As augmented reality and virtual reality come along, and games have had AI as a core part for a long time, we'll see that in the gaming industry too. And then also some places that, as consumers, we don't see as much, like research.
There are examples where it's been used for drug discovery and things like that. So I think smart cloud native applications will really be pervasive and revolutionize pretty much every industry around the globe. And I think tomorrow's winning products, the products that we'll get most excited about in whatever industry they're in, will all be powered by these smart cloud native applications. Now, I talked a little bit about the characteristics of a smart cloud native application in the beginning: they have AI at their core. It's a defining attribute of those. And then they have the same sort of characteristics that every cloud native application has, right? They're continuously deployed. That's important so I can add capabilities to that app all the time. They're dynamically managed by a cloud native platform. They tend to be pretty data intensive, because AI is data intensive, right? So they process a lot of data, and typically that data is coming in in real time. And because they process so much data, and the data is usually real time, those smart cloud native apps also tend to be deployed both on the cloud, where you do your machine learning training, your AI training, as well as on the edge, which is typically where the new data comes in, right? From phones, from cars, from manufacturing equipment. That's another use case for smart cloud native apps, smart manufacturing. That's where the new data comes in. So you need to architect those applications in a way where they can run both on the edge and on the cloud. So that's what smart cloud native applications are. So, Joe, you work with a lot of organizations, enterprises around the globe. Why should an organization transition to smart cloud native applications? Yeah, good question, Toby. Well, first, let's just review the benefits of cloud native, and then we'll talk about what this AI capability adds to that, right?
So I think many of the folks in our audience may be familiar with the benefits of cloud native generally, but to recap, right? You've got ease of management. All of a sudden you're able to release your applications and your capabilities much more quickly, so you have better release velocity. These applications are now highly resilient in this architecture, right? And in the cloud especially, hyperscale and self-healing infrastructure is now available to you, and they're portable. The applications and those workloads are portable, right? So the question really then becomes, okay, so what's the uplift, what's the added benefit of smart cloud native and these AI capabilities, right? So you talked a bit about data. We've got massive data growth and proliferation, right? They say data is the new oil, but are you leveraging the data that you have to provide your customers the experiences they want, right? At the end of the day, it's all about our customer experiences, and that's the differentiation that can be built by infusing AI into your business applications and these customer experiences that ultimately drive your revenue and service your customers, right? So what's powering this next wave of experiences is the combination of data, your business logic, right? Your traditional application, and AI, to get those experiences into the hands of your customers more quickly, and again, back to providing the differentiation for your business, right? And helping you compete. And for you operators out there, you combine that with the ability to deploy these applications to multi-cloud environments, including edge, in a consistent way with a platform like ours, for instance, amplifying your reach to more customers while you're minimizing your complexity and cost, right? As well as taking out the risk of getting to production or scaling that out to large workloads over time.
So the benefits really come back to differentiation, velocity and the agility that comes with smart cloud native, and the AI capabilities in that are absolutely key to the experiences of tomorrow. There's no doubt about that. And you and I, I think, agree that while smart cloud native is something really new, a relatively new use case for Kubernetes, we also think that organizations really need to act fast and transition to these smart cloud native applications. Why should folks act now versus wait a little bit until the market matures? Yeah, it's interesting. There are certainly some organizations that have not embarked on the journey, but the reality is it's not early, right? We don't need to wait until the market matures. You actually just gave a couple of examples, but these tailored experiences are all around us, right? The technology has already matured to a point where it's no longer a science experiment. Whether it's consumer experiences like Netflix recommending what you wanna watch or Amazon recommending what to buy, those have been here for years, right? Those are great examples of smart cloud native applications, and they've proven their value, right? You can see that in those organizations and their case studies: the increases in revenue, customer loyalty, stickiness of those customers, and so on. So you get to this point where you realize the leaders in your industry are already doing this, right? As an example, we've got a customer that's a cruise line that's running their passenger experiences on their cruise ships, right where those passengers are closest to the data.
And they've built these digital experiences that are incredibly responsive and tailored to their customers, these folks that are taking a cruise, which in turn drives additional revenue. Like while you're out to sea: do you want to buy this excursion, or add these packages onto your experience? And it increases not just revenue but, again, the lifetime value and the overall experience and loyalty from those customers. So the leaders in this industry, we've named a bunch already, are already doing it. If you're a bit later and you're chasing your competition, so think, you're building the next generation of electric vehicles, you may have a traditional 100-year-old business but you're chasing Tesla in this aspect of electrifying your lineup. You're aggregating and processing data from many sources in the vehicle all of a sudden, right? So cameras, radar, LiDAR, and you need onboard AI to make real-time decisions on all of that, but you also now need to provide these connected car experiences and insights to enhance your ultimate driving experience. So there are situations where I'm sure many of you are leaders in your industry, and many of you are, you know, the car company trying to catch Tesla, to use that analogy. So why now? Partly because if you're not already doing this, frankly, you're late and you're slipping behind your competition. And it's economic, in that the delayed or foregone revenue and increases in customer loyalty have not yet been obtained by your company, right? So really the time is now. The technology has certainly been here long enough to take advantage of, and the time really is now from that aspect, yeah. So, for those that have not yet experienced it: Toby, there's gotta be some challenges with this, right?
And even some of these mature companies, like the Netflixes and the Amazons and the cruise lines, they have been on this journey and they've experienced challenges. We work with many of the world's leading organizations, and so we've seen some of those challenges firsthand. But maybe you can spend a minute just talking about, it's not all motherhood and apple pie, as they say, right? There's gotta be some bumps in the road, there's gotta be a learning curve, you're dealing with new technologies, new processes and so on. So maybe spend a minute talking about what challenges our folks will experience on their journey to smart cloud native apps? Absolutely, yeah. And I think it's a good reminder that cloud native itself poses challenges for organizations that are just adopting it, right? That are new to it. Because it really isn't just a new technology, it's also a new philosophy, a new way to build apps and go about deploying them and managing them. So I've seen a lot of times that the biggest hurdle folks need to overcome is just that organizational change that they need to go through, upskilling their teams to learn this new way of doing things, deploying every day instead of once a year. So you have those challenges already just around cloud native. It's a fairly complex set of tools, you need to look at automating a lot of things and so forth. But on top of that, the smart cloud native applications, that AI piece that they introduce, poses a lot of additional challenges. And, spoiler alert, there are solutions to those challenges, but I'll highlight some of the largest ones here. So, first of all, the workloads that you're gonna be running to build a smart cloud native application are fairly complex.
Most folks, when they start out building a cloud native app, run some microservices; that's usually what people do first. They tend to be stateless microservices, at least in the beginning, that's where you start. And stateless microservices are fairly easy to manage in the whole spectrum of workloads out there. And then folks go to things like adding continuous integration and delivery capabilities, also fairly easy to manage in the whole spectrum of things. But AI workloads really are more complex. They're complex for a couple of reasons. Well, first of all, for many organizations, those are new types of workloads. They've never run those types of things in production before. And so in terms of their lifecycle, they are more operationally complex. You also need a lot of different workloads. To build an AI/ML capability, you need to build what's generally called a machine learning pipeline or an AI pipeline, much like a continuous integration and delivery pipeline. Meaning you have a whole set of tools that you need to deploy, and that really need to work well together, to allow your data scientists to, first of all, get access to the data, wherever it may be living; then have an environment where they can develop the models, usually a notebook, like Jupyter notebooks; then provide them with the infrastructure and the right tools to train their models, and then optimize them, tune them, deploy them, monitor them. So there's a whole set of tools, usually well over a dozen different workloads, that you need to manage on top of what you've been doing before, right? Running your microservices and running your data services. So typically dozens of workloads to build that end-to-end solution. So really complex workloads, it's a lot of them, and they have fairly complex life cycles. That's definitely a challenge, these new workloads that are introduced by AI. The next challenge is really around the data.
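To make the pipeline idea concrete, here is a minimal sketch of those stages chained together in Python. The function names and their toy bodies are illustrative stand-ins, not any specific pipeline tool; a real platform such as Kubeflow runs each of these stages as its own workload on the cluster.

```python
# Toy sketch of an ML pipeline's stages, assuming hypothetical
# stage functions. Each stage would be a separate workload in a
# real pipeline (data access, notebook, training, serving).
def ingest_data():
    return [1.0, 2.0, 3.0]  # stand-in for fetching training data

def train_model(data):
    # Stand-in for a real training job on cluster compute.
    return {"weights": sum(data) / len(data)}

def evaluate(model, data):
    # Stand-in for model validation before deployment.
    return abs(model["weights"] - sum(data) / len(data)) < 1e-9

def deploy(model):
    # Stand-in for rolling the model out as an online service.
    return {"endpoint": "/predict", "model": model}

data = ingest_data()
model = train_model(data)
assert evaluate(model, data)
service = deploy(model)
```

Each arrow in the chain (ingest, train, evaluate, deploy) maps to one or more of the dozen-plus tools Toby mentions, which is why the operational surface area grows so quickly.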
We talked about this in the beginning a little bit: these applications tend to be very data heavy, right? Which means in turn they consume a lot of resources. Training machine learning models in particular requires not just a lot of storage for the data sets that you're training on, but also a lot of compute to train those models, right? Typically you need expensive compute resources like GPUs or other machine-learning-optimized silicon to do that. And it's not just that you need a large amount of compute, but you also need to manage that compute elastically. The way these load patterns typically look is: you have your data scientists, they iterate on a model, now they wanna train it. And one thing that is kind of unique to AI training, if you compare it to building software, is that when you do a software build, you usually do a very small number of builds. Like, let's say you build for two or three different platforms, so you have two or three different builds. But when you train an AI model, you wanna do what's called hyperparameter optimization. So you're often building dozens or hundreds or even thousands of versions of a model. So very, very large compute requirements, and that demand is elastic, right? You may have, at certain times of day, a lot of data scientists building new models, and then it dies down and the load goes almost to zero. So you actually need a system that can help you manage those resources dynamically, right? Scaling up and down depending on demand, and scheduling those different training jobs in an optimized way, for a couple of reasons. First of all, again, those resources are expensive, so you wanna make sure you don't blow through your budget. But at the same time, you wanna allow data scientists to iterate quickly, right? You don't want them to wait for those builds forever. So you also wanna make sure nobody's blocked. So there's quite a bunch of challenges there around managing the compute and the storage for these workloads.
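The hyperparameter explosion described here is easy to see with a toy grid search. Everything below is illustrative: the parameter values are made up, and train is a stand-in for submitting a real training job to the cluster.

```python
from itertools import product

# Toy illustration of why hyperparameter optimization multiplies
# compute: a modest grid over three knobs already yields dozens
# of training jobs for a single model iteration.
learning_rates = [1e-2, 1e-3, 1e-4]
batch_sizes = [32, 64, 128]
layer_counts = [2, 4, 8, 16]

def train(lr, batch_size, layers):
    # Stand-in for submitting one training job; a real scheduler
    # would queue this onto GPU nodes and scale them elastically.
    return {"lr": lr, "batch_size": batch_size, "layers": layers}

jobs = [train(lr, bs, nl)
        for lr, bs, nl in product(learning_rates, batch_sizes, layer_counts)]
print(len(jobs))  # 3 * 3 * 4 = 36 training runs
```

Compare that with the two or three builds of a typical software release: even this small grid produces 36 jobs, and real searches routinely run hundreds, which is why elastic scheduling matters.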
And then related to that, as we're talking about data, security comes into play too. It's obviously something that should be a priority right out of the gate, as soon as you're starting to build an application for production, but data adds another layer of complexity there for security, right? How do you keep that data secure? There's customer data in there and things like that. So that's another consideration. And then we talked about how workloads are becoming more complex in smart cloud native apps, and about the data-heavy nature of them and the complexity that introduces. It also poses new challenges for the infrastructure underneath: how you run Kubernetes, how you run your infrastructure. And the reason is simply that a lot of these smart cloud native apps need to run in different places, right? We talked about the fact that most new data originates at the edge, but for training those models, we need the cloud. We need the scale of the cloud so we can train those models. So now you have to architect your infrastructure in a way where it can run in these different places, on the edge and in the cloud. And how you do that in a consistent way is another challenge. And to finish things out, I think the thing that's important to realize, maybe for the folks that have worked with machine learning and AI for a little while: one thing that's unique here about smart cloud native apps is that those AI components, you need to treat them as production services, as online services, right? We've done machine learning and data science for a long time, but in the past, typically those workloads were used for offline batch processes, or you run some kind of model on a piece of data and it spits out a report, and then you analyze that report sort of offline. But now, again, because those AI capabilities are part of a user-facing, customer-facing application, you need to treat those things as highly available online services, right?
You need to learn how to operate that way, with everything that comes along with that, right? Like how to make changes to it without taking it down, how do I manage the load around it, make sure it responds quickly, and things like that. So all those production considerations that you would have about any online service. And then lastly, a challenge is talent, right? I think everyone's feeling sort of the talent crunch; it's difficult to find talent around cloud native. Well, now, since we're talking about more complex workloads, that adds additional strain there, because you need that talent that knows how to do this. So those are some of the challenges. Yeah, for sure, thanks. There's actually a question that's really relevant. A moment ago you were talking about highly available online services, and here's the first question we have, and thanks again to the audience for throwing questions into the Q&A for us. The first one is right on that point. The attendee asks: how critical would a server or cloud outage be in affecting smart cloud native applications? If this is critical, what are some of the methods to prevent that risk? Can you talk a little bit about the architecture from that standpoint? Excellent, yeah, it's a great question. So right on that point, yeah, you need to architect right out of the gate for high availability, right? We have some best practices around how to do that for cloud native applications, right? And that's one of the things that cloud native architecture and Kubernetes is really good at. If you use the primitives in there, you will have high availability, right? If you lose one instance of the application, or your underlying server or cloud instance fails, Kubernetes will make sure that that instance gets deployed somewhere else and you still have enough capacity available. So you benefit from that if you architect those applications on top of Kubernetes using cloud native. That's a really good point.
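As a rough sketch of the primitive being described here, a Kubernetes Deployment manifest might look like the following; the names and the container image are placeholders. Asking for three replicas means that if a node or instance fails, Kubernetes reschedules the lost pod onto a healthy node to restore that count.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: smart-app          # placeholder name
spec:
  replicas: 3              # keep three copies running at all times
  selector:
    matchLabels:
      app: smart-app
  template:
    metadata:
      labels:
        app: smart-app
    spec:
      containers:
      - name: smart-app
        image: example/smart-app:1.0   # placeholder image
        ports:
        - containerPort: 8080
```

Combined with a Service to load-balance across the replicas, this is the basic building block behind the high availability Toby mentions.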
So yeah, what are some methods to prevent this? It's basically following the same sort of best practices around deploying any cloud native app. Yeah, great question. Yep, thanks for the question. All right, I talked a lot about challenges, and we kind of hinted at this in the beginning already: adopting cloud native, or smart cloud native in particular, is not just a new technology, it also requires organizational change. So Joe, can you talk a little bit about how roles are changing on teams, right? Like what's changing for developers, what's changing for operators and the other personas involved? Yeah, right on. So looking at adopting new tech or approaches, we always think about it in terms of not just the technology, right? But the people and the processes needed in order to be successful. So as IT leaders, you're thinking about people, process and technology in these fundamental shifts in the way that you design, develop, deploy and run a new application architecture like smart cloud native. So in this move, you're again seeing the need for all three, right? To adapt to new approaches in development, deployment and operation to be successful, right? So broadly speaking, right, the two common personas, if we just kind of boil it down to two: you've got the developers and the operators, teams responsible for developing applications and teams tasked with running them in production. And with the advent of DevSecOps and GitOps, these roles are already becoming blurry, right? And many of your organizations may already be down the path from a DevOps or DevSecOps standpoint. There's a lot more collaboration and a lot more communication required, frankly, here in this new world. And with the rise of MLOps, machine learning operations in the AI domain, solutions need to cater to additional personas like the data scientists or the data engineers.
And so you have even more necessity to break down some of those silos. Operators now are increasingly tasked with the objective of minimizing complexity for developers, right? Providing those developers with new and interesting services to help them get their jobs done, right? Like serverless and functions, so they can abstract away infrastructure constructs. And a common effect of building smart cloud native applications and adopting GitOps or DevSecOps practices is this disintegration of silos within the organization, right? Toby and I work with a 150-year-old traditional financial services company, right? Very mainstream in that regard. They've actually reorganized all of these teams from a highly matrixed kind of format, the way things used to be. And as they developed a smart cloud native approach, they broke down those silos, did a complete reorg and aligned their teams for higher agility, faster releases and so on. So you have a world where now developers continue that sense of ownership all the way through production. And you've got teams, frankly, meeting other teams inside their own four walls that they've never worked with this closely before. So the roles are changing, the walls are coming down, and it means really interesting things in terms of the way that we work together as humans, but also just the agility that that opportunity presents for the business. Maybe you can expand on a couple of those concepts, Toby. I know we're kind of sticking to smart cloud native, but think about GitOps in that regard, or DevSecOps. Your thoughts there? Yeah, I think GitOps is a really cool approach, a really cool concept that we always recommend to our customers.
For those folks who are not familiar with it: we had this phase in the past where sort of the best practices around DevOps were around infrastructure as code and things like that. And typically what that meant is that as an operator, as an operations person, you would automate your infrastructure in terms of runbooks, right? So you would write some code that executes certain steps to make a change to the infrastructure. But typically you run that as a human, or if it's executed by some software, it's still sort of these imperative steps that you need to execute. And if any of these steps fails, then the whole runbook fails and you need to go in and debug it and so forth. Now, what's great about GitOps is a couple of things. First of all, the interface that you use in order to make changes to your infrastructure is Git, the version control system that I think most of us use these days. So it's a very familiar tool. And so, to make changes to your infrastructure, you basically just check in changes to your configuration, right? Which is often YAML files that describe what you want your infrastructure to look like. So it's declarative configuration, which of course is one of the great things that Kubernetes introduced: those declarative APIs for describing what your infrastructure looks like. And so to make changes, you just check in a different version of your declarative spec. And in fact, recently, with the introduction of Cluster API, for example, we can extend the scope of what we can control with declarative specs, right? With Cluster API, we can declaratively define what our infrastructure clusters look like.
With some products, like Kubeflow, which is the basis of our Kaptain machine learning platform at D2iQ, you can take that all the way up to the machine learning workloads that I talked about as being so complex, right? So you can declaratively describe those too, and then use Git to make changes to them. Now, what happens in between? You might be wondering: well, I'm checking in those changes and I have those declarative APIs. Who makes the changes? Who executes those? Well, it's software, right? We have another key concept in Kubernetes, which is controllers: a piece of software that basically tries to reconcile the desired state that you declared, in your Git repository in this case, with the actual state of the cluster and the workloads, right? So you have software doing that instead of, again, a human executing a script with a couple of different steps, which makes it a lot less error prone, and you can build in things for recovering from failure automatically and things like that. What that leads to ultimately is better service outcomes, right? Higher availability, less downtime, all of those things, which is extremely important, again, because we're talking about some very complex workloads here and complex environments. The more complex those environments get, the more difficult it is to do things the old way using scripts, and GitOps and the declarative APIs really give you a way to automate that and build a system that's a lot less error prone, a lot more resilient. And I think what we could talk about at this point, too, is maybe the infrastructure versus the application, right? I just talked a lot about declarative APIs and how you make changes to the infrastructure. That's obviously what an operator does, the cluster operator.
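The controller pattern being described can be sketched in a few lines of Python. This is a toy model, using dictionaries in place of real cluster state; actual Kubernetes controllers watch the API server and create or delete resources to converge the two states, in a loop, forever.

```python
# Toy sketch of a controller's reconcile loop: compare the desired
# state (what you checked into Git) with the actual state and
# converge them. Real controllers act on the Kubernetes API, not
# on dictionaries.
def reconcile(desired, actual):
    # Create or resize anything that doesn't match the declaration.
    for name, replicas in desired.items():
        if actual.get(name) != replicas:
            actual[name] = replicas  # stand-in for adding/removing pods
    # Remove workloads that are no longer declared.
    for name in list(actual):
        if name not in desired:
            del actual[name]
    return actual

desired_state = {"web": 3, "model-server": 2}
actual_state = {"web": 1, "old-job": 1}
actual_state = reconcile(desired_state, actual_state)
print(actual_state)  # now matches desired_state
```

Because the loop always converges toward the declared state, a failed step simply gets retried on the next pass, which is where the resilience over imperative runbooks comes from.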
Well, we're talking about machine learning workloads here for smart cloud native apps. So there's another persona involved, right? Which is the machine learning engineer or the data scientist, right? That's building models. And you may be wondering: if I'm a data scientist, if I'm a machine learning engineer, I don't really want to deal with GitOps and I don't want to deal with declarative APIs, right? I don't want to write YAML. I know Python, I know my machine learning development frameworks. That's kind of all I want to do. So how can we provide this persona an environment that feels native to them, where they don't have to know everything about Kubernetes and learn all the ins and outs? One of the ways we solve that in our Kaptain product is using a Python SDK that allows you to interact with the underlying infrastructure. It allows you to easily train a model, deploy a model, tune the hyperparameters and so forth. And again, there's a lot that happens behind the scenes to make that happen. The cluster will probably autoscale up to have enough resources and things like that. But it's all abstracted away from the data scientists. They can just do that in their notebook environment, in a Jupyter notebook, using Python. So I think that's a really important tool, too, to create an environment that's friendly to a data scientist or a machine learning engineer. Hey Toby, we've got a couple of questions on this point, actually. So could we just spend another minute on the tech stack itself? There are a couple of questions that are pretty related here from the audience. One's around: hey, just go through kind of the stack, containers, Kubernetes, AI/ML, like is it always all of these pieces?
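To give a feel for what a notebook-native interface like that might look like, here is a hypothetical sketch. The Model class and its methods are invented for illustration and are not the actual Kaptain SDK; a real SDK would submit these calls to the cluster behind the scenes.

```python
# Hypothetical sketch only: this Model class is an illustrative
# stand-in for a data-scientist-facing SDK, not the real Kaptain API.
class Model:
    def __init__(self, framework, entry_point):
        self.framework = framework      # e.g. "tensorflow" or "pytorch"
        self.entry_point = entry_point  # the user's training script
        self.deployed = False

    def train(self, gpus=1):
        # In a real platform this would submit a training job; the
        # cluster would autoscale to provide the requested GPUs.
        return self

    def deploy(self, replicas=2):
        # Would roll the trained model out as an online service.
        self.deployed = True
        return self

# From inside a Jupyter notebook, the data scientist stays in Python:
model = Model(framework="tensorflow", entry_point="train.py")
model.train(gpus=4).deploy(replicas=3)
```

The point is the shape of the workflow: no YAML, no kubectl, just Python calls whose plumbing is handled by the platform.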
And the other one's very related: can you take me through kind of the conceptual technical stack to enable these smart cloud native applications? Can you just spend a minute on what the stack looks like, and the breadth and depth of what might be involved in that? Very good question. Yeah, so we'll start right at the bottom, right? Which is the hardware, the infrastructure. So we talked about those applications often running on edge and cloud, right? On the cloud, you rent your machines from your favorite cloud provider. And those machines typically run Linux, right? Most cloud native applications are running on Linux. You can do Windows too, but especially for machine learning, that typically happens on Linux. So you rent your machine there, you have Linux on top, your favorite Linux distro. On the edge, the hardware is kind of interesting. You see all kinds of things. Folks often deploy Arm-based, small-size boxes, talking about four cores, four to eight gigs of RAM, something like that. But depending on the use case, those things can be a little bigger too. Those can be hardened x86 servers, so they're a little bigger, maybe they have 16 cores, 32 cores, something like that. But because they're deployed on the edge, they typically don't sit in a data center that has AC and air filters to create a controlled environment. They often are deployed on the manufacturing floor, and so they need to be industrially hardened machines. So that's what the hardware looks like. And again, you'd be running Linux on top of that too. The next layer up is Kubernetes, right? You standardize on Kubernetes for these applications.
And then Kubernetes is your substrate, your abstraction layer that makes all the infrastructure underneath look the same wherever it may be running. On top of that, as Joe mentioned in the beginning, you have your business logic, your data applications, and then your AI applications. For your business logic and your microservices, you can build those in whatever programming language you want; you just put them in a container. That's the nice interface we have here: whatever programming language you use inside, stuff it in a container and we can run it on Kubernetes. The data services we often see involved are things like Kafka, because again, a lot of these applications are real-time, so you need a real-time data pipeline. And then databases that can scale to the amount of resources, the amount of data you're processing, like Cassandra, for example, or the equivalent cloud services. And then for the AI/ML pipelines, the workloads we see most commonly are Jupyter notebooks, which is a development environment for data scientists. The two most popular machine learning development frameworks are TensorFlow and PyTorch, so we see those a lot; both have Python interfaces. That's where people develop their models. And then, as you go further into the AI/ML pipeline, there are different tools involved for hyperparameter tuning, and each of those frameworks has a way to deploy its models; there's TF Serving for TensorFlow, for example. So that's typically the stack. There are many other pieces; we could talk for hours about what you should do for networking and storage and security, many other pieces that exist around this, but those are the core components.
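As a small illustration of the hyperparameter-tuning step mentioned above, here's a minimal exhaustive grid search in plain Python. Real pipelines would use a dedicated tuner and a real validation loss; the `toy_loss` objective and the search space below are made up purely for the example.

```python
import itertools

def grid_search(objective, space):
    """Try every combination in `space`, keep the one with the lowest score."""
    best_params, best_score = None, float("inf")
    keys = list(space)
    for values in itertools.product(*(space[k] for k in keys)):
        params = dict(zip(keys, values))
        score = objective(params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy stand-in for a validation loss; a real objective would train a
# model with these hyperparameters and evaluate it on held-out data.
def toy_loss(p):
    return abs(p["lr"] - 0.01) * 100 + abs(p["batch_size"] - 64) / 64

space = {"lr": [0.1, 0.01, 0.001], "batch_size": [16, 32, 64, 128]}
best, loss = grid_search(toy_loss, space)
# For this toy objective, best is {"lr": 0.01, "batch_size": 64} with loss 0.0
```

Grid search is the simplest strategy; in practice each trial is an expensive training run, which is exactly why these tuning jobs get farmed out to the cluster rather than run in the notebook.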
So if you're going to deploy a cloud native application for the first time, and you don't have one yet deployed in containers and running on Kubernetes, how many different technologies are we talking about here, Toby? Yeah, so for just a cloud native app, without any smart, without any AI, you're already talking over a dozen, probably 20 or so different open source technologies that need to come together to build a production-grade stack. That's a lot, right? That's a lot of different tools to think about, and we'll talk about some approaches to calm that chaos and how to go about it. But then you add the smart component and you're almost doubling that, I would say, adding another dozen or two workloads to make that happen. Yeah, it's an incredibly complex environment, but there's massive opportunity and upside to getting it right. So it's probably a good segue. We wanted to spend some time talking about what recommendations we would have, given time constraints, without doing a technical deep dive. Let's move into this topic around what recommendations you'd have for the teams out there, Toby. Maybe you hit some of the technology-centric stuff, and I could hit some of the people and process at a high level, but I'm cognizant we're looking at the clock, too. Yeah, absolutely. So I think one of the great things about cloud native, of course, is that it's all based on open source software. And why is that great? Because the open source community, especially the cloud native community, just innovates so fast. The speed of innovation is incredibly high. And we talked in the beginning about how important it is to adopt smart cloud native now, because others are already doing it.
And so if you don't act now, you're going to be behind. Well, if that's true, then one of the best ways to catch up if you haven't started yet, or to get ahead in smart cloud native and AI, is to also commit to open source for those AI pieces. So leverage the best-of-breed open source technologies for the entire stack. You start with Linux at the bottom, Kubernetes above it, also open source, and then those key technologies I talked about for AI: Jupyter, TensorFlow, PyTorch, and so forth, all open source. That's your guaranteed best way to be at the forefront of innovation. Then you need to choose a smart cloud native platform that helps you run all these things. We talked a lot about the challenges earlier, how complex those workloads are and so forth. So you need a platform that, first of all, can support these AI workloads, and that runs in all the environments we talked about: edge, multi-cloud, air-gapped environments if that's something you need to do; we see that a lot in manufacturing, for example. You need a platform that has a high degree of automation built in, because again, there's a lot of complexity, so you want to automate as much as possible to get resiliency and tame that complexity. You also want to look at a platform that can use AI and ML itself to help you operate better, one that gives you operational insights and makes recommendations to you, that acts as a sidekick to you as the human operator to help you run more successfully. And then, related to going with an open source strategy, you want to pick a platform that's built from best-of-breed open source components and aligns with where the community is headed, where the CNCF ecosystem and community are headed, because that's where the innovation is happening, and you want to take advantage of that. To do that, you need to find the right partner.
You want to find an organization that can help you support this, that integrates and tests all these pieces for you, and provides support and training and so forth. And to pick up a point I mentioned earlier, you need to architect for production from day zero. These AI workloads are different from what we've done in the past, so you really need to treat everything like a production system and apply all the best practices we know from running microservices and other cloud native workloads to these AI workloads as well. So that's what I would say around the technology choices and best practices. And Joe, maybe you can expand on how people should think about people and processes. Yeah, there are actually a couple of questions on this point, so instead of talking through it in depth, I'll address it by answering a couple of the audience's questions here. The first question is on processes: as applications get more complex, it's harder for ops teams to manage, and there's more to go wrong. Do you see or expect a big uptick in management time for the operations teams when smart cloud native apps are deployed? So this really does go to people, teams, structure, process, and so on. I would say the opportunity for complexity is real. Toby was just outlining that tech stack, and while he simplified it, he also said there are two or three dozen new and innovative technologies that need to come together, that need to be integrated and delivered as a platform. So the opportunity is certainly there for the operations teams to have to deal with all that complexity and integrate things themselves. Because remember, most of those three dozen technologies were not built to work with each other; they don't come pre-integrated out of the box. Hence the value proposition around D2IQ and others.
So the opportunity for complexity is certainly massive, but automation, on the other hand, is the key to the answer. You need to embrace a GitOps approach. You need to embrace a platform that automates all of that and simplifies your day two operations. And we actually see very large customers of ours, at very high scale, with lots of applications already deployed, operating with minimal two-, three-, or four-person operations teams. The reason they're able to do that is automation. Most of our customers love to start their journey by experimenting with these dozens of technologies, and you very quickly learn, as you start putting applications in production, that there's no way a team of single-digit size can manage all that without automation. So I'll answer it that way. The other question on topic was: how can I get my teams excited about this newer way of developing, deploying, and managing smart cloud native applications? How is the culture different from traditional IT app dev, deploy, and manage? I think that's an interesting question. It goes toward culture and a few other things. We talked earlier, or I did, about alignment internally and breaking down some of those silos. I also think there's an opportunity for us as technologists to talk to the business side of our organizations: hey, how can I help the company deliver its revenue ambitions? How do I help the company deliver the customer loyalty the business teams are trying to accomplish? In the end, that's what most sales, marketing, services, and digital executives in your organization are looking to accomplish. So there's a huge opportunity in having more traceability and line of sight into how you, as individual developers or operators, can directly map to your company's success in the marketplace.
So I think embracing that conversation and knocking down some of those silos frankly invigorates people. Whether you're a developer or an operator, understanding where you fit and the value you're providing to the company as a whole is an exciting thing. I think the second way you get teams excited is that you've got several dozen new technologies at the forefront of digital innovation today, technologies that developers and operators consider best in class. So gaining new skills, across all the individual teams from developers to operators, is a huge opportunity. We've seen most of our customers look at this as a way to fulfill a lot of their personal ambitions around staying current with technology, always having some time to play, and running the technologies that are on the forefront of innovation. That's a huge motivator for folks. So I would look at those two avenues as ways to get your teams excited about this new way, these new smart cloud native applications. Okay. So as we start moving toward a wrap-up here, and again, we really appreciate the questions and the time, there are probably a few other questions we'll try to get to in a race to the finish line. But again, our final point of view on smart cloud native is that the time is now. We believe these smart cloud native apps are digital experiences that by definition are powered by AI and are constantly evolving and improving. The cycle times this architecture allows reduce application releases from months to days or even hours. So there's a huge opportunity there, and these will be the defining attributes of tomorrow's winning digital products, in every industry.
Toby, wrap up from there, and then I'll talk a little bit about who D2IQ is. Any thoughts on that? Yeah, I think that was a good summary. Again, we believe that those types of apps will be part of the leading solutions in every single industry. So really, if you want to be a leader in your category, it's time to get started. Others are already doing it; chances are there's a player in your category that's already adopting this and building applications with AI built in. So the time is now. And since we're talking here in the context of the Linux Foundation and the Cloud Native Computing Foundation, you're already in the right place. Adopt open source, use open source to do this, because that really helps you innovate faster than going the alternative route. Yeah, great point. So a little bit about D2IQ here, folks. If you're not familiar with us, we have a long history. In fact, Toby is one of the three co-founders of the company, going back nine years, helping enterprises and governments around the world with their journey to cloud native and now smart cloud native. Our platform, called DKP, simplifies the challenges of Day 2 operations, and it provides the automation that solves those challenges around the complexity of these new tech stacks. So we help customers who are just getting started on that journey through the platform, and we also help them with our services teams, which offer help in the form of CNCF-certified training and professional services, so that your organization can ultimately become self-sufficient and expert in your own journey.
For those already down the path who have been doing this for a while, we are quite experienced in intermediate and advanced use cases, like running these AI-enabled smart applications in multi-cloud and at the edge, where it's frankly even more difficult than when you're getting started. And finally, we're complementary too: we partner with the likes of AWS, Microsoft, and NVIDIA for AI, and others, so that we can embrace and extend the investments you've already made in cloud and AI technologies. So we are certainly a way to accelerate your path and your journey toward smart cloud native applications, and we'd love to help you. Toby, take us home. Final takeaways and any calls to action? Absolutely, yeah. I imagine we have a couple of different folks here in the audience, different personas coming at this problem from different perspectives and wondering, how can I engage? How can I get involved in this? Depending on what your role is, there are very different ways to do that. And there are a lot of open source projects around this. If you're a data scientist, for example, look at and get involved in the open source projects that cater to data scientists, like Jupyter, TensorFlow, PyTorch, and so forth. If you're more on the operations side, get involved in open source projects that are more on the operations side, like Kubeflow, for example. If you're looking for a partner, if you've looked at the space and decided, well, I want to go the open source route, I want to embrace the best-of-breed open source technologies to get that speed of innovation, then find the right partner.
Our philosophy is really that we cater to people who have decided, I'm going to build my stack based on the best-of-breed open source technologies, but I need a partner to help me with that complexity, to help me test all the integrations and get a solution I can download and use that is fully tested, validated, integrated, and supported. So go check out our website; we talk a lot about this there. There were some questions in the chat about diagrams and how this all fits together; we have that on our website under Solutions. And finally, you can read our blog. We write a lot about smart cloud native apps on our blog. If you go to d2iq.com, we actually have a category, and there's a direct link on the slide right now that takes you to that cloud native category, or just go directly to our website, d2iq.com. And finally, we're hiring. So if you want to help build one of the best platforms for running these smart cloud native apps based on open source, check out our careers page. So I think that takes us to the end. We're right on time. Thank you all for listening and for the great questions. Appreciate it. Yeah, thanks, everyone. This has been fun. And Kristina, back over to you. Well, great. Thank you so much to Toby and Joe for their time today, and thank you to all the participants who joined us. As a reminder, this recording will be on the Linux Foundation YouTube page later today, and we hope you're able to join us for future webinars. Have a wonderful day. Thanks, everyone. Thank you.