Thank you very much, and thanks everybody for joining today's presentation. We're going to be talking about enabling business agility with open technology. Over the next half hour, I want to examine some of the challenges that businesses have with agility, look at what agility really is, and explore what the industry is doing — particularly the open source communities and the Cloud Native Computing Foundation — to enable companies to achieve a higher level of agility than has ever been possible before. When I started thinking and writing about this topic, one thing that came to mind is something most of you probably don't know: I'm a competitive obstacle course racer, and I found this analogy extremely pertinent to what we want to talk about today. As we go through a race — a trail race, an obstacle course — we need to respond to unknown and unfamiliar terrain and obstacles around each corner. Many things are put in our path to keep us from being successful. Our competitors are going through the same course, and it's a question of who can get through the quickest and come out ahead. To do so, we have to call on a variety of skills to overcome the challenges placed in front of us: the obvious ones of arms and legs, navigating the terrain and climbing through and over obstacles, but also sight, mental acuity, and balance. All of these play in concert, working together with split-second timing and accuracy, to make an obstacle course racer successful in getting to the end. And businesses today face many of these same kinds of challenges — challenging terrain and obstacles — every day.
A vast majority of them know they've got to figure out how to navigate quickly in a competitive environment to keep from being disrupted or shut out of customer opportunities. These obstacles can take the form of business requirements — regulators, international rules, GDPR — or of opportunities: new products, new channels, customer segments they want to enter, or new technologies. And then there are changing customer expectations. We use the term consumerization: customers are used to immediate response and immediate gratification. As they experience new ways of interacting with companies, they develop new expectations about response times, about where service will be provided, and about how they want to interact with you — and a business needs to be able to respond to that. So how can you enable the numerous roles in the company, and even the partner ecosystem, to work together to seamlessly deliver new capabilities at the speed of today's business? I would propose it's through adoption of the rapidly evolving cloud-native application and infrastructure model. Many people think that cloud-native simply describes applications that run in a public cloud, but the truth is that the phrase is not really about where an application runs. Rather, cloud-native describes a broad approach to delivering applications: how they are designed and built, how they are tested, released, and deployed into production environments, and how they are managed. Now, it's true that most cloud-native applications are deployed on cloud infrastructure, whether a private or public cloud, but the where is really irrelevant; it's the how that matters. At the heart of the cloud-native approach is delivering better software, faster, and at scale.
Your path toward delivering applications faster will ultimately be one that takes you to a more cloud-native approach. With that said, let's get into the why and how of cloud-native. The cloud-native approach is vastly different from a traditional waterfall development model, and it goes far beyond just being an agile development model. The first striking difference between cloud-native and traditional models is the application architecture. Monolithic is definitely not the way applications are built if you want to succeed in a cloud-native model. Cloud-native applications are instead built from many independent microservices — independent in that they can be deployed and run as standalone software services. They still ultimately come together to form a complete application, but now the application is organized in a loosely coupled way, and this is transformative and a key enabler of business agility. There's no more waiting for the slow guy: you can push each microservice through the delivery process independently. As soon as any microservice completes one phase, it can go on to the next — no need to wait to deliver all the functionality as a single release. This means new features can come out incrementally, microservice by microservice, allowing us to deliver new capabilities more rapidly. Also, since microservices are small, they require fewer people to code, test, and deploy than the complete application would, and they are therefore more efficient to deliver. Now it's possible to restructure our teams: we can put everyone needed to drive a microservice through its entire life cycle into a single DevOps team. These teams can stay quite small, and this changes the thinking and culture among the team by emphasizing end-to-end ownership of the software. In practical terms, it results in better integration and management of the work needed to move each microservice through its complete life cycle.
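To make that independence concrete, here is a minimal sketch of one standalone microservice — the service name, port, and inventory data are all invented for illustration. It exposes its own small HTTP API, so any other service (or team) can consume it over the network and release on its own schedule, rather than linking against it in-process:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class InventoryHandler(BaseHTTPRequestHandler):
    """Hypothetical 'inventory' microservice: a standalone process with its
    own HTTP endpoint, deployable independently of any other service."""

    STOCK = {"sku-1": 12, "sku-2": 0}  # toy in-memory data

    def do_GET(self):
        sku = self.path.strip("/")
        body = json.dumps({"sku": sku, "in_stock": self.STOCK.get(sku, 0)})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

    def log_message(self, *args):
        pass  # keep the example quiet

def serve(port):
    """Start the service on a background thread and return the server."""
    server = HTTPServer(("127.0.0.1", port), InventoryHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    srv = serve(8081)
    # A 'catalog' service (or any client) consumes the API over the network:
    with urllib.request.urlopen("http://127.0.0.1:8081/sku-1") as resp:
        print(json.loads(resp.read()))
    srv.shutdown()
```

The point is the boundary, not the implementation: because the only contract is the HTTP interface, the inventory team can redeploy this service — in a container, on a new host, with a rewritten internals — without the catalog team waiting on a joint release.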
Finally, and of critical importance, the process we are looking at here can be highly automated. In fact, end-to-end process orchestration is the most valuable of all cloud-native traits, and it too is enabled by microservices: their independence allows them to plug into an orchestration process. This kind of automation is as great a revolution in software delivery as the first factories were to manufacturing. Automation not only speeds delivery of any one microservice or application; the overall orchestration dramatically increases production capacity, efficiency, and quality, allowing many applications to be produced faster and more reliably at lower cost. The culmination of all of this is to reduce application delivery cycle time significantly — from many months or years to a few weeks or even hours — and to do it in a way that scales, so we can deliver numerous applications at great speed. There are many facets that contribute to how cloud-native applications get developed, delivered, and run. Analogous to the many skills of an agile runner, let's look at what each independent cloud-native domain brings to the table, and then we'll put them together to work in concert. Each row here highlights a different dimension of cloud-native. From left to right we see the historical progression as technologies and capabilities have evolved from the traditional model to cloud-native over time; when we get to the far right, we can say we've really achieved a cloud-native computing model. The good news is it doesn't all have to be done at one time — moving everything from waterfall, monolithic, physical data center infrastructure all the way to cloud-native computing. These steps can be chosen in increments that make sense for your environment, and each piece adds incremental value as you go through the process.
But ultimately, one plus one plus one equals more than the sum of its parts, so you want to get all of these working together at some point, with a vision and a roadmap for how you're going to get there. The first point here is that cloud-native is automated. You can think about the move from agile to DevOps as expanding the agile team and process to include operations people and processes as well. Agile teams that initially included only developers have steadily grown to integrate test and release teams, reaching further to incorporate ops in the next extension of the model. It's been an evolution, not a revolution. This is also where we introduce the idea of orchestration, because as we focus on the process, it's the orchestration that really brings it all together. Remember that we are dealing with more moving parts than at any time in our history — many microservices rather than a few monolithic applications — and the teams are small. So orchestration is essentially required to run this process repeatedly; if we want to deliver applications faster and at scale, orchestration is absolutely essential. Note, looking ahead, that the blue ship's wheel is the Kubernetes logo. Kubernetes is the leading orchestration platform for containers and one of the hottest open source technologies today, and it's a platform you're most likely to use sooner or later if you aren't already. We'll come back to this in a few minutes. As we discussed, the move to DevOps involves changes to our organizational structure and to the way we think about our responsibilities. People become part of the application team rather than independent functional teams. That drives shifts in perspective about responsibilities and priorities, which in turn impacts the culture of the organization.
And a cultural change can be a lot more challenging than just embracing new technologies. Despite that, this evolution is happening across all industries, and it's succeeding. If you're practicing agile development today, you've already proven that you can make that sort of change and take the first step along the journey. Cloud-native is also about componentization, and one of the key aspects of that architecture is the notion of microservices. Microservices are the key to enabling the incremental change that delivers shorter cycle times, allowing us to deliver incremental value instead of having to work at a monolithic level. By breaking the application up into smaller pieces, we can introduce new capabilities rapidly without having to wait for a complete validation of the entire system. A good approach to implementing microservices architectures is to start with a greenfield application. There's a lot to learn, and it's generally easier when you're unencumbered by the accumulated technical debt typical of applications that have been around for a while. Once you've developed expertise in distributed systems and the microservices architecture, and fine-tuned the design, development, deployment, and management of it, that's the time to take on some of the existing monolithic applications. Even then, you'll want to analyze them to determine which ones should be refactored and which are better off left alone — many may simply not be worth the effort to redesign. But if you look at them opportunistically, you should be able to identify sections of code that are good candidates for refactoring, or functions and services you want to add to an application. That's a good way to think about breaking it down: being needs-driven as you go through the process. The next aspect is about weight.
Cloud-native is lightweight and portable. Container growth was initially driven by developers. Containers are a way of packaging, distributing, and delivering code that makes it as portable as possible, and this worked out great for developers: containers are fast, lightweight, and easy to use. Developers can often deploy and manage them on their own, testing and validating on something as simple as a laptop before moving them to a larger IT ecosystem. Most compelling is that containers run consistently almost everywhere. Why? Because they include everything they need to run: the application code and all of the operating system dependencies that go along with it. So as they move from development to test to staging and production, they continue to work consistently across all of the environments. This virtually eliminates the historical "it works on my machine" problem, where an application fails in production, or users have a problem with it, but the developer can't reproduce the issue in their own environment. It's a great opportunity to solve that issue. Containers also make moving code around much easier, which increases developer productivity and streamlines the dev, test, stage, production workflow. They can take a good deal of stress out of software updates as well: you only need to update the specific service that is changing, delivering just that one container, rather than taking down an entire application and scheduling a weekend for a monolithic update. Finally, cloud-native is abstracted. As I've already mentioned, cloud-native is really an operational model, not a place. As the technologies have matured, containers have evolved to run on essentially any platform — not just in a cloud, and not even necessarily in a core data center.
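A hypothetical Dockerfile shows what "the code and all of its dependencies travel together" means in practice — the file names, base image, and versions here are invented for illustration, not taken from any specific project:

```dockerfile
# Illustrative image for a small Python service: the runtime, the OS-level
# libraries, the pinned application dependencies, and the code itself are
# all baked into one artifact.
FROM python:3.11-slim

WORKDIR /app

# Install pinned dependencies first so this layer caches between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Then add the application code.
COPY . .

# The same image runs unchanged on a laptop, in test, in staging, and in
# production -- which is what eliminates "it works on my machine".
CMD ["python", "service.py"]
```

Because the image is the unit of delivery, promoting a release from test to production means shipping the exact bits that were tested, not rebuilding on a different host.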
Container and cloud-native technology has stretched and federated all the way to the edge, enabling edge and IoT devices and applications to be delivered and supported in the same cloud-native model. This greatly increases productivity and flexibility, especially when you want to make incremental changes in far-flung distributed applications pushed out to the edge, where historically you would need an all-or-nothing, monolithic update that could be many gigabytes — very time-consuming and very bandwidth-consuming. Here you can continually deliver incremental capabilities, or update one specific service while the rest continues to operate, in a much lighter, much quicker way. So let's look at the maturity progression that adopting cloud-native takes you through; you'll come to realize all the benefits over time. Starting at the lower right-hand corner: as you move toward the cloud-native approach to application delivery, you improve your software lifecycle management processes. Through automation, we realize productivity and efficiency gains and increase the reliability of software releases. Those more reliable releases make us more comfortable with releasing software more frequently, as we do it repeatedly and see success. Moving to the lower left: by introducing new capabilities more frequently and incrementally to users, we speed up the feedback loop, allowing us to respond more quickly to feedback. This helps us build the right product for our customers — the one users really want and value. You can see which capabilities they value through A/B testing and other techniques, get a good assessment of what users want, and make changes in near real time. It also helps us identify problems more quickly.
And since releases only change things incrementally, the release cadence can be rapid. We can also quickly implement and release fixes, thereby raising the quality of the software. Then finally, bringing it home: now that we've provided the needed capabilities at high quality, that of course leads to improved customer satisfaction. And because we can achieve that goal rapidly and remain responsive to changing needs by continuously delivering applications faster, we can stay ahead of our competition. This is how we grow our business in the digital economy, even through challenging times. I promised we would get back to the orchestration piece, so let's take a look at that. I do want to clarify a key point: the difference between automation and orchestration. Sometimes we use those terms interchangeably, but in reality they're not the same. Automation refers to a single task, or a small number of tasks, performed in a relatively linear fashion, whereas orchestration arranges tasks to optimize a workflow. For example, orchestrating an application means not only deploying it, but also connecting it to the network so it can communicate with users and other applications, and dynamically adjusting in response to scale and response-time needs. Automation without orchestration is brittle and difficult to maintain. So here is a depiction of a common software development process. When we look at ways to deliver applications faster via automation and orchestration, this is the macro view of the processes we want to automate — and orchestrate, I should say. The green chevrons identify the lifecycle phases of dev and test, release, and deployment management. The first part is where the application development side comes into play: the application lifecycle runs through development into test before moving into production.
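The automation-versus-orchestration distinction can be sketched in a few lines of code. Everything here is illustrative — the function names and the latency threshold are invented, not from any real tool — but the shape matches the definition above: automation scripts a single task, while orchestration sequences several tasks end to end and reacts to observed conditions:

```python
# Illustrative tasks; in a real system these would call build, deploy,
# networking, and scaling tooling.
def build():
    return "image built"

def deploy():
    return "app deployed"

def wire_network():
    return "routes configured"

def scale(replicas):
    return f"scaled to {replicas}"

def automate():
    """Automation: one task (or a short, fixed, linear list of tasks)."""
    return [build()]

def orchestrate(observed_latency_ms, slo_ms=200):
    """Orchestration: an end-to-end workflow -- deploy, connect to the
    network, then adjust scale dynamically based on observed response time."""
    steps = [build(), deploy(), wire_network()]
    replicas = 2 if observed_latency_ms <= slo_ms else 4  # react to conditions
    steps.append(scale(replicas))
    return steps

if __name__ == "__main__":
    print(automate())                               # one brittle step alone
    print(orchestrate(observed_latency_ms=350))     # the whole workflow
```

The fragile part of pure automation is exactly what `automate()` leaves out: nothing connects the steps, and nothing responds when conditions change.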
And the responsibility there, as I said, really falls to application development. The second part of the application lifecycle takes place in the production environment, where the work of deploying and managing the application happens; the work done here generally falls under the umbrella of application operations. Applications also need infrastructure to run on, so underlying the applications — supporting the needs of the applications and the teams working on them — is the foundation of infrastructure. A lot of work goes on at this level as well, making sure the infrastructure is available to run the application. Agile development and early DevOps began to provide automation across the application domains, but the degree of velocity and agility was still very limited. Now, as cloud-native takes hold, the full ecosystem can participate to unlock further potential. So let's look at the cloud-native ecosystem that can be applied to automate and orchestrate these disparate systems. Going into detail on each phase and project is well beyond the scope and time we have today, but it's important to understand the breadth and completeness developing in the open source communities, and the collaboration that is ultimately enabling the numerous contributors across business, application development, and the various operations and security teams to work in concert with one another. Undeniably, at the heart of the cloud-native ecosystem, Kubernetes has emerged as the core container orchestration platform. Containers are a good way to bundle and run your applications, but in a production environment you need to manage the containers that run the applications and ensure there's no downtime. For example, if a container goes down, another container needs to be started; if performance begins to lag, additional services may need to be started or traffic rerouted. Kubernetes ensures this behavior is orchestrated at a system level.
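That self-healing behavior is driven declaratively. A minimal, illustrative Kubernetes Deployment manifest — the service name and image below are placeholders, not from any real cluster — states the desired number of replicas, and Kubernetes restarts or replaces containers to keep that state true:

```yaml
# Illustrative Deployment: declare desired state; the controller enforces it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inventory                 # hypothetical service name
spec:
  replicas: 3                     # if a pod dies, another is started
  selector:
    matchLabels:
      app: inventory
  template:
    metadata:
      labels:
        app: inventory
    spec:
      containers:
      - name: inventory
        image: registry.example.com/inventory:1.4.2   # placeholder image
        ports:
        - containerPort: 8080
        livenessProbe:            # a failing probe triggers a restart
          httpGet:
            path: /healthz
            port: 8080
```

Scaling is the same mechanism: change `replicas` (by hand or via an autoscaler) and the controller converges the cluster to the new desired state, rather than an operator scripting each start and stop.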
All of the resources that need to participate in a change — deployment and configuration alike — are driven from the application level all the way down through acquiring the infrastructure resources. Kubernetes provides you with a framework to run distributed systems resiliently: it takes care of scaling and failover for your application, provides deployment patterns, and much, much more. Infrastructure is also actively participating in the cloud-native revolution, from projects that enable infrastructure as code to cloud-native implementations of critical infrastructure runtimes, including compute, networking and service meshes, and persistent, business-critical storage. Kubernetes can actively orchestrate the allocation, scale, and availability of these infrastructure resources in the context of changing application needs, and this alleviates the burden of traditional system administration tasks with their rigid deployments and limited orchestration capabilities. So where do you start? All of these projects are out there and available for you to tap into and utilize — which is the beauty of open source. But most companies that try to go about it on their own, in what I call the DIY approach, ultimately run into more overhead and management — keeping up with what's happening in the open source community, which projects are ready to go, et cetera — than the effort is worth. That's where somebody like SUSE comes into play: a distribution that brings all of the pieces together and ensures you've got a set of packages that work in concert with each other and are supported and lifecycle-managed. Many open source projects will go through multiple iterations even in the timeframe of a quarter or two.
And determining which iterations are ready for prime time and production, and which need further work and stabilizing, is the work we put into the distributions every day. So I highly recommend you rely on a distribution vendor as your source for this technology, ensuring it is enterprise-ready and viable for the long term. We can also help you understand what skills you need to develop and support the program. SUSE is really your strategic partner to help you achieve this business agility transformation. And as many of you have probably read or heard, SUSE has announced our intent to acquire Rancher Labs. SUSE and Rancher will soon be coming together to bring you a truly world-class ecosystem, all the way from the Linux infrastructure foundation up through container platforms and Kubernetes to multi-cloud, hybrid, multi-cluster operations and management. Before I wrap up, I want to spend a minute on one of our customers — their challenges and their results. ApiOmat's mission is to provide enterprise companies with the agility to deliver new digital services faster. They do this by simplifying the development of front-end applications for any device, be it mobile, web, voice assistants, chatbots, and now even AR and VR. The key to ApiOmat's offering is the flexibility to support different IT environments and the capability to integrate easily with existing business applications, legacy systems, and cloud APIs. To speed up the time to value they wanted to deliver to their customers, ApiOmat decided to containerize its software. They selected SUSE's container platform to enable quick and easy deployment of ApiOmat in any environment, from bare-metal on-premise deployments to public cloud deployments to managed service environments.
And to offer simple solutions that are easy to set up in different environments, ApiOmat needed a more efficient way of rolling out to its clients' data centers, and to support private cloud on-premise deployments as well as public cloud implementations. So ApiOmat standardized its software to run in Linux containers, which, as we've already discussed, gives them the ability to deploy their applications in any of those environments without the extensive retesting across numerous platforms that they historically had to do. ApiOmat wanted a container management platform based on Kubernetes — a powerful open source solution. However, since Kubernetes can be challenging to install, operate, and maintain, they looked for a solution that would minimize the time and effort required to set up, operate, and maintain the Kubernetes environment. That's really where SUSE's container platform shone through: we run on a wide range of infrastructure, including cloud platforms, and a key benefit of the SUSE container platform over other container management solutions is its flexibility. ApiOmat customers can deploy their applications in any environment, be it on-premise bare metal, virtual machines, private cloud, or public cloud infrastructure. They have really been able to achieve greater ROI and faster agility, supporting more customers than they could prior to this transition, and it's going to result in growing success and more satisfaction for their customers. So with that, that's the end of my formal presentation, and I'd like to open it up for any questions the audience may have. Just as a reminder, if you have questions for Brent, please go ahead and add them to the Q&A box — you'll see it at the bottom of your screen. Thanks. I'm not seeing any questions. Brent, you must have done a very thorough job.
I must have answered them all right up front, which is a good thing. Yeah. Anyone have any questions? Well, if you don't have questions now and do have questions later, hold on to them — you can reach out to us at our website and we can always follow up, and in particular, as we close our Rancher acquisition, get more details on that. Looks like you've got a few now, Brent. Oh, yeah, I guess I spoke too soon. Let's see: "Do you require virtualization from SUSE, or is VMware acceptable?" This is one where we're wide open — we can support any number of virtualization environments. So yes, we can run on a bare-metal SUSE environment, or if you've got a VMware infrastructure, you can run a SUSE container environment on it; that's absolutely supported. And we plan to continue that open methodology going forward; we absolutely believe in a heterogeneous environment. Next: "Do you see those deployments more on-prem, public, or in a mixed hybrid? Do you see that mix changing in the next one, two, or five years?" That's a great question. I'm seeing more and more mixed environments. Most people will do their first deployment in a very homogeneous environment — that may be in the cloud or it may be on-prem. We've got a lot of customers going in both directions, and I don't know the exact distribution, but it can be done either way. I see much of the core data center, over the next two to five years, probably moving more and more to public clouds. But the thing that will always keep the balance in a very hybrid model is that, as edge emerges, edge is by definition an on-prem experience.
And so regardless of the technology, we see container technology moving to the edge and being deployable in lighter-weight implementations, such as Rancher's K3s, a very lightweight, edge-targeted solution. That allows a hybrid model where the edge can be the point of presence interacting with customers, gathering data in real time, filtering it, and making localized decisions; after the data has been filtered, it can be uploaded to the cloud for further analytics. So we see big momentum in this hybrid pattern, where the application is federated across on-prem resources in edge or micro data centers, connected to and interoperating with deployments in one of the major clouds. On the question about our container orchestrator: today we've got a platform called CaaS Platform — the Container as a Service Platform — and we will be streamlining that. We still haven't closed with Rancher, but once that closes in the coming weeks, we will put together an integrated roadmap and provide further updates on what the portfolio looks like. You can count on continuity from our current portfolio to the new one. Next, there's a question on how SUSE is agnostic in its choices — is there lock-in, and how do we minimize lock-in? I'm going to focus on minimizing lock-in. One of the things that's quite different in how SUSE makes its choices is that everything we do is 100% open source; we don't have any proprietary extensions on top of our products or code. The other thing we are very supportive of is a heterogeneous environment. I already touched on this with the VMware question: whether it's our container platform or our current container management — and this is an area that will absolutely continue with Rancher — we support a very heterogeneous environment.
So it doesn't have to be an all-or-nothing SUSE solution. If you need to intermix other vendor technology — say you want to use Microsoft AKS for one container environment, SUSE containers on-prem, and even a competitive container and Linux environment for another application — we can help manage that entire estate. Our tools are designed in a very open, very heterogeneous manner. On the Linux side, we manage a heterogeneous Linux environment: whether it's SUSE, Red Hat, or Ubuntu, we can do lifecycle management of all of them. On containers, whether it's our container platform or AKS, and when we bring on Rancher, we'll support a multi-container, multi-cloud environment there as well. So I hope that answers the question. "Are there plans to pursue AI/ML strategies with SUSE technology and partners?" Absolutely. We are building out more and more AI/ML capabilities. At the container level, we support acceleration technologies today, along with many of the most common AI and machine learning toolkits. Looking ahead, we've got some exciting things coming this fall: new products to help data scientists with the full implementation and management of machine learning pipelines. So we definitely plan to invest more heavily there, delivering both in a bare-metal scenario and in a containerized model, on-prem and in the cloud. "What was the deal size of Rancher?" That I cannot comment on — a nice question nonetheless. There's a question going back to one of the first topics: "Any recommendation regarding security in the public cloud?" I don't have a specific recommendation on technologies — we don't have a security portfolio in and of ourselves, and security is a very multifaceted area.
So I'm going to respond to that fairly loosely. We obviously focus very heavily on the security of our products, ensuring they're engineered to be as secure as possible, and then helping with the governance of that infrastructure and its lifecycle management — from a patch management standpoint, ensuring everything is up to date and compliant at the patch level and the configuration level. We can do that with our tools on-prem and in the cloud: ensuring that your Linux environment and your container environment meet your specifications and policies for configuration and patch compliance. We provide the tools for doing that. Beyond that — intrusion detection, further configuration management and governance, application configuration management and governance — we don't provide anything specifically, so it's a little beyond the scope of what I can answer today. I think that's all of the outstanding questions. Oops, looks like one more — sorry about that. No worries. Here's another one, and it's a great one: I mentioned that Kubernetes is difficult to deploy, manage, et cetera — are there plans to add layers to make this easier, or recommended suggestions? There are multiple approaches, and we're supporting several of them. I'll start with the one that abstracts Kubernetes away entirely: the opinionated model. We have a Cloud Foundry-based solution today called Cloud Application Platform that abstracts the developer from Kubernetes altogether. Everything is done behind the scenes, set up in workflows and policies in the Cloud Foundry environment — essentially an opinionated model that completely abstracts it away. It's extremely easy from a developer standpoint, and a small set of Kubernetes experts can then manage the back end and keep it operating smoothly.
The next level of abstraction is typically where a core team of DevOps and Kubernetes experts sets up a custom CI/CD pipeline for an organization. If they set that up and roll it out as a company standard, then developers are abstracted from each having to repeatedly build their own pipelines in their own ecosystems, and can pull in services provided by a centralized core services team. That's probably the model I see being adopted as the ultimate one. In the initial phases, it was either completely opinionated or completely cowboy — developers on their own. I think we're gravitating to the center, where a centralized set of experts builds a customizable CI/CD platform inside a company. That will move us beyond DevOps and into GitOps, and I think we'll see it evolve even further. For recommended training: if you go to SUSE's website, we've got quite a bit of training that we offer. We also have training partners, and the Linux Foundation has an excellent set of classes — you can become LF-certified on many of these technologies as well. The last question I've got here is: will we upload the slides after the presentation? Not the slides themselves, but the presentation is being recorded, and the recording can be accessed and reviewed in the LF channel. Very good — I thoroughly enjoyed the session this afternoon. Thank you for giving me the opportunity to present, and thanks, everybody, for attending the event; I greatly appreciate it. Thank you so much, Brent, and thanks, everyone, for joining us. Hope you have a wonderful rest of your day, and hope to see you next time. Thanks. Bye, everyone.