Thank you. Thank you all for joining this morning. I know it's early for some. My name is Kevin Martelli. I'm a principal and advanced technologies cloud leader at KPMG, overseeing a lot of our cloud innovation, MLOps, and pipeline work. So nice to meet everyone virtually today.

Thanks a lot, Kevin, and thanks everyone for joining the session today. My name is Abhinav Joshi. I'm a senior manager on the OpenShift product marketing team at Red Hat. I have over 20 years of industry experience in a lot of roles, on both the customer side as well as the vendor side. In my current role at Red Hat, my team and I focus on building out and evangelizing the value of Red Hat OpenShift, the industry-leading Kubernetes platform with integrated DevOps capabilities for cloud-native workloads. That also includes emerging technologies like data analytics, AI, ML, and so on. Back to you, Kevin. I'm looking forward to the talk today.

Thank you, Abhinav. So before we get started in today's talk track around MLOps, I wanted to give a brief background on KPMG. As many of you may know, KPMG is one of the big four consultancies and advisory firms. Within our advisory practice, we have a group of practitioners located in multiple countries who really focus on maximizing the value of data, AI, and emerging technologies. We do this both for our internal enablement and in support of our clients on their journey in this space. In addition, we have partnerships with many core technology partners, IBM, Red Hat, Microsoft, and Oracle to name a few. These ecosystems allow us to accelerate our clients' journeys in the emerging tech space.

So today, the goal of this talk track is MLOps. To show you what MLOps is and how KPMG solves for it internally, as well as with our clients, we're going to do a deep walkthrough, at both a business process level and a technical level, of our patented platform called KPMG Ignite. First, we'll give you a little background on what KPMG Ignite is. We'll talk through who is using Ignite and why, and then we'll drill into some specific challenges Ignite is used to solve. We'll also talk a little bit about use cases, and finally we'll get into the meat of the presentation around how Ignite solves for the problem of MLOps, both from a process perspective and a technology perspective.

KPMG Ignite was initially built to extract value out of unstructured data. We saw the need with unstructured data sets, predominantly contract documents, loan documents, emails, voice, and so on. Early in the days of machine learning there was a big need to extract value out of unstructured data, and that's really where KPMG Ignite got its start; over time it took in structured data as well. It's a platform built in a modular form, based on open source, with containerization such as OpenShift running it, and an open architecture. This gives it the flexibility to evolve over time and take in new tooling as the market demands. And finally, who's the platform for? Initially it was for data scientists and engineers, but you'll see a lot of hooks where business users can come into play and also integrate with the platform.
So Ignite solves many of the end-to-end challenges that enterprises face in taking POCs or pilots into production at the ML use case level. When we think about it from an ops perspective, though, where does MLOps fit in and what types of challenges does Ignite solve? It solves them across three areas: pre-deployment, deployment, and post-deployment. In the pre-deployment space, it provides a robust set of AI tools that lets data scientists run their experiments and enables risk management teams to see the outputs, the results, and the explainability of models. There's ready-to-scale training infrastructure, built on top of containerization, that allows you to scale out for your training needs; artifact and version management, with the ability to store the metrics associated with models; and finally, some new feature sets under development around explainability and surrogate or alternative models that can be used to get through your governance processes. In the deployment phase, Ignite solves for many things around the models as well: automatic infrastructure provisioning, heterogeneous model deployment environments, multi-tenancy, and again the hooks into your model risk management and governance processes. And finally, around post-deployment, some of the more important things here, I think, are the logging and alerting after the fact to keep track of the model, the scalability to scale the platform on demand based on specific needs, the integration into a CI/CD pipeline, and overall performance monitoring and health metrics.

So the concept of Ignite is to build use cases and/or solutions. Use cases and solutions are built by stringing together what we call components. You can think of a component as an individual piece of work or functionality used to achieve some type of solution or use case; a component is almost analogous to a microservice. A component can be something custom built, maybe something KPMG built where our IP lives, like the anomaly detection model here at the bottom. It may be an open source capability in the marketplace today, like Tesseract for OCR. Or it could be something that comes from a third party, a proprietary algorithm, maybe something for speech-to-text that you can get from a CSP. But at the end of the day, components are the smallest pieces of work that, strung together into a workflow, produce a solution. And all along the way, there's the ability for the human in the loop to evaluate the outputs coming out, to determine whether the models are in fact predicting accurately, and to make changes over time so retraining can feed back into the process. The human in the loop also extends to other groups in the organization: the governance team that needs to see the metrics that are part of the model, and the operational team responsible for deployment and the health of the system; we'll go into some of those details. But again, at a high level, solutions are made up of many components that are individually pieced together into a workflow that then produces an output.
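To make the component-and-workflow idea concrete, here is a minimal sketch in Python of components strung together into a linear workflow. This is an illustration only, not Ignite code; the component names and payload fields are hypothetical, and in Ignite each step would run as its own container rather than in-process.

```python
# Minimal illustrative sketch (not Ignite code): a workflow as an ordered
# list of components, each a small, single-purpose unit of work. Component
# names and payload fields are hypothetical; in Ignite each step would run
# as its own container with Kafka signalling the hand-offs.
from typing import Callable, Dict, List

Component = Callable[[Dict], Dict]  # takes a payload, returns it enriched

def ocr_document(payload: Dict) -> Dict:
    # Stand-in for e.g. a Tesseract-based OCR component.
    payload["text"] = "...extracted contract text..."
    return payload

def classify_document(payload: Dict) -> Dict:
    # Stand-in for a classification-model component.
    payload["label"] = "loan_agreement"
    return payload

def run_workflow(components: List[Component], payload: Dict) -> Dict:
    # Execute components in order; branching workflows would route the
    # payload through different component chains instead.
    for component in components:
        payload = component(payload)
    return payload

result = run_workflow([ocr_document, classify_document], {"doc_id": "c-001"})
print(result["label"])  # -> "loan_agreement"
```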
So Ignite itself solves for many different types of use cases. I'm not going to go into each individual use case that you see here on the slide, but as you can see, a lot of these use cases are around unstructured data. The first three are more around contractual terms and PDF documents. The last one is around KPMG intelligent interaction, which is the chatbot type of interaction. I'll take one use case here as an example: cognitive contract management. One of the challenges we see organizations facing is that they have a ton of contracts out there, and they struggle to get the right information out of those contracts to make the right business decisions. For instance: are vendors compliant with terms and conditions? Are payments being made on time? Any type of information you think is relevant to your contracting process. Ignite can help break the contract down into usable pieces that ultimately give you an answer, so you can determine the next steps. It's similar for LIBOR analytics: the LIBOR rate is going away shortly, and there's a lot of need to change these rates in documents. What are your alternatives? You have to understand the documents and figure out what you can replace the LIBOR rate with. This is another use case that breaks down into components: maybe OCRing a document, breaking it down into sections, using some rule-based inferences, maybe using classification models to determine what types of options are available.

And as we mentioned before, open source and Red Hat are really the foundational pieces of the Ignite platform. Open source was chosen because of its cutting-edge algorithms and its ability to scale massive models; I think we can all agree on the innovation that takes place there. So foundationally, open source was a core tenet of what the Ignite platform needed to be. Partnering with Red Hat, Red Hat provided the containerization environment, which gave us the ability to scale massively, consistency anywhere, enhanced security and controls, as well as a robust CI/CD pipeline. Put these two together as the foundation of the overall Ignite platform, and you get an achievable path to bring AI applications into production quickly. I think a lot of us have experienced creating a pilot or a POC successfully, then spending months and months getting that POC into production, because there are a lot of processes and controls that may not have been considered, specifically around the model and MLOps management. We're going to dive into the details of how Ignite solves for that. But before that, Abhinav will share a little bit of information about Red Hat.

Thanks a lot, Kevin. Before I talk about OpenShift, I wanted to talk about the fundamental value that containers, Kubernetes, and DevOps operational practices provide for AI/ML workflows. The technology and the operational best practices provide the much-needed agility, flexibility, portability, and scalability for data scientists to build, develop, iterate on, and share models with their peers in a seamless way, as well as with the software developers. The developers, in turn, get the capability to rapidly code software apps that are powered by machine learning and deep learning models. And the data scientists and the developers no longer have a dependency on IT ops for every infrastructure provisioning task. Next slide, Kevin.
To execute on AI workflows, you need a set of software tools, such as TensorFlow, Spark, PyTorch, Jupyter notebooks, and so on, plus the data services: data streaming technologies like Kafka, SQL and NoSQL databases, object storage, and so on. Then you need the end-to-end solution architecture. It may span on-premises, the cloud, and all the way out to the edge, for various needs such as security and compliance, data generation at the edge, data gravity, and so on. All these tools and access to the data sources should ideally be supported on a self-service hybrid cloud platform that can encapsulate all these infrastructure endpoints and provide consistency and scalability anywhere. The hybrid cloud platform should also integrate with hardware accelerators, such as NVIDIA GPUs, to speed up the data analytics, model training, and inferencing activities. Finally, the hybrid cloud platform should offer a consistent experience across on-premises, public clouds, and the edge, and be efficiently manageable in a unified way by IT operations. So OpenShift, built on containers, Kubernetes, and DevOps principles, is the industry-leading open hybrid cloud platform that has been chosen by many organizations, such as KPMG, to provide these capabilities and accelerate the AI/ML lifecycle. It has empowered data scientists, data engineers, and developers to be agile and very collaborative throughout the AI lifecycle, without depending too much on IT operations for individual activities. Kevin, can you go to the next slide, please?

Yeah, so OpenShift provides a lot more than the fundamental value you get with containers and Kubernetes. The first thing is that it simplifies the deployment, scaling, and lifecycle management of containerized AI/ML tools, such as the few examples you see on the screen here and a lot more, by automating the day-1 and day-2 operations tasks associated with these tools; this helps ensure high availability and faster time to value. The second key thing is integration with hardware accelerators, such as NVIDIA GPUs, using the Kubernetes operator concept, which ensures that modeling and inferencing tasks can seamlessly consume the GPUs directly from OpenShift, for the data scientists and data engineers. The third key thing is that OpenShift comes in self-managed as well as cloud-hosted options, which gives you a consistent way to perform day-1 and day-2 ops, be it on-prem, at the edge, or in the public cloud, while also providing the much-needed portability and consistency of the modeling and app-dev workflows for the data gathering, preparation, modeling, deployment, and inferencing tasks. And then, like I mentioned earlier, OpenShift comes with integrated DevOps capabilities, which helps extend the value of DevOps to the entire machine learning lifecycle and helps the collaboration between the teams. This helps ensure that models can be easily deployed into the app-dev processes and supports the rollout of ML-powered intelligent applications.
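As an illustration of that second point, here is a minimal sketch, using the Kubernetes Python client, of how a training pod can request a GPU once the NVIDIA GPU operator exposes the nvidia.com/gpu resource on the cluster. The image name, namespace, and command here are hypothetical.

```python
# Minimal sketch: requesting an NVIDIA GPU for a training pod through the
# Kubernetes Python client. The nvidia.com/gpu resource is what the NVIDIA
# GPU operator exposes; the image, namespace, and command are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() on-cluster

container = client.V1Container(
    name="trainer",
    image="registry.example.com/ignite/train-job:latest",  # hypothetical
    command=["python", "train.py"],
    resources=client.V1ResourceRequirements(
        limits={"nvidia.com/gpu": "1"},  # lands the pod on a GPU node
    ),
)

pod = client.V1Pod(
    api_version="v1",
    kind="Pod",
    metadata=client.V1ObjectMeta(name="gpu-train-job"),
    spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
)

client.CoreV1Api().create_namespaced_pod(namespace="ml-training", body=pod)
```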
And then finally, OpenShift is a fully integrated hybrid cloud platform. It includes all the key capabilities like monitoring, automation, the DevOps toolchain, pipelines, GitOps, and so on. And all of this is built on 100% open source technology, which helps drive innovation without lock-in. So back to you, Kevin, to dive deeper into the Ignite platform.

Thank you, Abhinav. Now that we've set up the lead-in, we're going to dive into some of the technical details of how we are solving for the entire MLOps lifecycle, as I mentioned, both from the KPMG perspective and for our clients. If we look at the KPMG Ignite platform itself, it's made up of many of what I would call infrastructure components; these are the things you'll see up here in the top box. These infrastructure components run continuously on the OpenShift platform and take in the use cases, or the components that string together to form a use case, from an execution perspective. I'll call out some of the components I think are relevant to an overall understanding of the platform, but for the rest of the presentation we're really going to spend most of our time on the model management capabilities of the platform itself.

As mentioned, if you look at the bottom layer, this sits right on top of Red Hat OpenShift from a containerization and orchestration perspective; we've just heard about the advantages of using OpenShift for these types of solutions. In addition, the full embedded CI/CD pipeline is there. We also have an abstraction layer called MinIO, which we're using for object storage. It's a technology layer that achieves the same level of object storage you would see from cloud providers, with quick access and read/write times.

At the component level, there are a few things that I think are important to call out. One is the workflow engine. As we mentioned a little earlier, workflows are nothing more than components strung together, and a component is a microservice that does one piece of work: it could be data ingestion, document classification, some type of data extraction. These components come together to form a workflow, and the workflow can branch or stay linear based on your use case. As these workflows are executed, the output of that particular solution is generated, and everything comes through the Ignite API, which is a RESTful service. So a request can come in through the API to execute a certain workflow, or the UIs we have can initiate a workflow through that same API. Some of the other important components, I think, are the model management pieces, which we'll drill into more later on: how do we store the model metrics, how do we deploy the models, how do we get explainability and surrogate models, how does all this come into play? So it's not only the technology enablement; it's the business process, the risk processes, and the business users integrating with the end-to-end process that make this successful, and we'll go into how that works. Then there's the message controller, which is Kafka. As components finish within your particular workflow, they notify Kafka that this component is done, start the next one. And the nice part about using the OpenShift platform is that each one of these containers can be spun up one to N times. So if you have a container that's a little bit heavier, maybe an OCR container, you might want to run 50 instances of it, whereas for something with less compute, maybe it's only a few instances of that container. Within the ecosystem, Kafka keeps track of the messaging and knows what container needs to go next.
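Here is a minimal sketch of that hand-off pattern, using the kafka-python package. It is illustrative only, not Ignite's actual implementation; the topic, field names, and object-store URI are hypothetical. Note how a consumer group lets many replicas of a heavy component share one topic's messages.

```python
# Minimal sketch of the Kafka hand-off pattern: each component publishes a
# "done" event when it finishes, and the next component's consumers pick it
# up. Topic and field names are hypothetical; uses the kafka-python package.
import json
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(
    bootstrap_servers="kafka:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# An OCR component announcing completion for one document.
producer.send("ignite.workflow.events", {
    "workflow_id": "wf-123",
    "component": "ocr",
    "status": "done",
    "output_uri": "s3://ignite/wf-123/ocr/doc-001.json",  # e.g. in MinIO
})
producer.flush()

# The downstream (e.g. classification) component consumes these events.
# Because consumers share a group_id, running 50 replicas of a heavy
# component simply spreads the same topic's messages across 50 pods.
consumer = KafkaConsumer(
    "ignite.workflow.events",
    bootstrap_servers="kafka:9092",
    group_id="classification-component",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for event in consumer:
    if event.value["component"] == "ocr" and event.value["status"] == "done":
        pass  # pull the OCR output and run classification on it
```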
As I mentioned, there are also code development environments through notebooks; logging and storage with Elasticsearch, which stores some of your model statistics after the fact; time-series databases like Prometheus, which may be relevant for some of your model monitoring processes; and ultimately ZooKeeper. But this is the component-level view of what the Ignite platform itself looks like.

Then there's one more view we'll show you of the overall Ignite platform before drilling into the MLOps pieces. As mentioned, everything comes through our authentication and API gateway, and there are two ways in. One is as an end user through an interface, which has a lot of interface controls and applications that we'll talk about. The other is through a RESTful service. When you send a particular workflow through the RESTful service, think of the workflow again as a stringing together of components; those components come in a particular order that defines how the workflow needs to execute. That workflow is stored as metadata in a Postgres database. Once it's stored there, it comes through and starts to actually execute and expand out onto the ecosystem. The Ignite component builder understands the workflow and the components associated with it, pulls them out of your Artifactory registry, and deploys those components onto the platform. And again, those components are completely dynamic based on how many times each component needs to be instantiated for that particular use case. If a component has a model, the model manager will retrieve the model that sits in MLflow and load it into that component. And again, it can launch that component one to N times based on how many documents or how much data needs to go through that model for prediction. Outside of the platform, you'll see it sits right on top of OpenShift; there are orchestration capabilities that may need to be put in place, and then there's the consumption of documents or input into the process, and the consumption of output.

Now, getting down into the meat of the overall topic and conversation this talk track is about: MLOps. When we think about MLOps, there is a lot of different functionality that needs to be built, such as model training, model serving, and model management. In addition, there are a lot of business processes that come into play, and we're going to drill into each of these content areas. If you think about model training, there needs to be a place for your data scientists to run training and experiments. There needs to be a place for your governance team to understand the model attributes being generated: are they aligned to expectations, is it a biased model, are you getting the right output for what the parameters need to be? Then there's the serving part: I need to serve this model into the ecosystem.
How do I serve it? How do I consume it? Can I serve one model, or can it be multiple models based on scale? And then finally, the management part of it: how do people come into management? How do I create training data? How do I evaluate the model results? How does my governance and risk team get comfort that the model is, in fact, appropriate? All these different pieces need to come into play to have your full end-to-end model lifecycle.

So if we think about model training, Ignite supports two mechanisms. The first is that you can bring your own model: maybe you built your model somewhere else and you want to bring it into the Ignite ecosystem. The second: maybe you're a data scientist who wants to build a model using the Ignite ecosystem. Ignite has JupyterHub, which instantiates a notebook for each individual developer, and a developer can then use that notebook to build those models. Those models have a wrapper layer that we call the Ignite Model Manager, which is a custom-built component. The model manager functions the same in both cases, whether you're bringing your own model in or using your own notebook in the platform to develop it. The Ignite Model Manager sends the relevant attributes into MLflow. The things you may want to capture in this metadata are very flexible. Some clients want to capture the training data, so there's metadata for where the training data was. If you're running classification models, you may want to capture the accuracy, the precision, the F1 scores; if you're running regression models, maybe the mean squared error. So based on what you're doing, what you want, and what your governance processes are, a lot of different metrics can be captured as part of your model. All these metrics and the model itself are, again, persisted to a persistent volume via MinIO, and they're also backed up into Artifactory.

Another part of the Ignite Model Manager, currently under development as an enhancement, addresses an ask we're seeing a lot from our clients: explainability. What is going on in the black box? I want confidence that the information coming out of this is in fact what we're expecting; how do our risk processes gain comfort on the way into production? So we see explainability becoming a pretty big topic, and we're adding capabilities and features around it. Another thing is model alternatives: if we're using a very complex deep learning model, can we replace it with something less complex and still get the same level of accuracy? These are all things we see the business asking for, so they can get these models moved into production quicker in a more governed process.
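To make the training-side flow concrete, here is a minimal sketch of the kind of logging a model-manager wrapper might do against MLflow's tracking and registry APIs: parameters, classification metrics, and the model artifact itself (persisted to the artifact store, e.g. MinIO). The tracking URI, experiment name, training-data path, and model name are all hypothetical, and the toy dataset stands in for real training data.

```python
# Minimal sketch of logging a training run to MLflow, as a model-manager
# wrapper might. Tracking URI, experiment, paths, and names are hypothetical.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, precision_score
from sklearn.model_selection import train_test_split

mlflow.set_tracking_uri("http://mlflow.ignite.svc:5000")
mlflow.set_experiment("contract-classification")

# Toy stand-in for real labeled training data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
    preds = model.predict(X_test)

    mlflow.log_param("n_estimators", 100)
    mlflow.log_param("training_data", "s3://ignite/training/contracts-v3")
    mlflow.log_metric("accuracy", accuracy_score(y_test, preds))
    mlflow.log_metric("precision", precision_score(y_test, preds))
    mlflow.log_metric("f1", f1_score(y_test, preds))

    # Registering the model gives governance teams a versioned artifact
    # to review; the artifact itself lands in the MinIO-backed store.
    mlflow.sklearn.log_model(model, "model",
                             registered_model_name="contract-classifier")
```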
The next area is model ops and model serving: how do these models get served into the Ignite platform? We saw the high-level flow on the prior page, but just to walk you through the use case: as we mentioned before, a workflow is made up of components, and components strung together in a workflow are a solution or a use case. So in this particular example, there will be some type of service, whether it's an application or a user, that initiates a workflow; it says, please run these components in these ways. The workflow comes through the Ignite platform, and this is an example of a demonstration workflow where we're doing OCR, template matching, machine translation, et cetera. This could be anything you want to do as part of your workflow: data extraction, an anomaly detection model, et cetera. These are all containers, and the part of the deployment process that I think makes the OpenShift platform and containerization very unique and quick is that these containers sit inside your Artifactory or registry, and only at the time of workflow execution do they get pulled down into the platform to run, where they scale based on the needs of that particular function.

So let's say you're going through your workflow. These are the components; your containers are being pulled out of your registry. You have an OCR component running, maybe one to N instances, then template matching, machine translation. All these components are running, notifying Kafka back and forth that they're complete, and the next component is picking up on that activity. Now you get to a point where you need to run a classification model. As part of this classification model run, the Ignite Model Manager integrates with the MinIO where your model is stored. It has the right JSON, it pulls the right version, and it pulls the model through and loads it into your component. And your component for this classification model may be one instantiation, one container, or maybe 100 containers that need to run. Based on that, you're loading the model in real time into your classification container, executing it, and passing the results to the next step; in this case, data visualization. So the main points here: you're getting real-time model loading during runtime, so it's picking up the most recent model, based on your parameterization of which model version you want, during the actual run of the use case, and it can scale the model based on the needs. And finally, it's outputting a lot of the model metrics as part of your runtime evaluation. Any metrics you want to capture can be stored as JSON, or in a time-series database like Prometheus depending on the types of models, but all these metrics are going to be stored, so over time you can review them to determine what types of changes you need to make.
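Here is a minimal sketch of the runtime-loading side, using MLflow's model registry API. It is illustrative, not Ignite's connector: the model name, version, environment variables, and service URL are hypothetical. The point is that each replica of the component resolves a pinned model version at start-up and then just serves predictions.

```python
# Minimal illustrative sketch: a component loads a pinned model version
# from the MLflow registry at container start-up and serves predictions.
# Model name, version, and tracking URI are hypothetical.
import os
import mlflow
import mlflow.pyfunc

mlflow.set_tracking_uri(os.environ.get("MLFLOW_TRACKING_URI",
                                       "http://mlflow.ignite.svc:5000"))

MODEL_NAME = os.environ.get("MODEL_NAME", "contract-classifier")
MODEL_VERSION = os.environ.get("MODEL_VERSION", "3")  # pinned by the workflow

# Loaded once per container; every replica (1 to N, depending on scale)
# loads the same pinned version, with artifacts fetched from MinIO.
model = mlflow.pyfunc.load_model(f"models:/{MODEL_NAME}/{MODEL_VERSION}")

def handle_batch(records):
    """Score one batch and pass the predictions to the next workflow step."""
    predictions = model.predict(records)
    # Runtime metrics (counts, score distributions, latencies) would be
    # emitted here as JSON, or to a time-series store such as Prometheus.
    return predictions
```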
Now, the next part here is what I think is one of the most important parts: the MLOps management. The MLOps management is the dashboarding area where many different types of business users can come in to evaluate the process end to end. We've talked through some of the ability to train models and serve models, but that's probably a very small piece of the puzzle; once you're past that and you want to get something into production, there's a lot more work that needs to happen around it. So what does this management console do? For one, it provides hooks for the business to create training data. Creating and labeling training data, as we all know, is a huge problem for businesses. There are different techniques for creating this labeled data (some companies like AWS will actually create it for you), but it remains a big problem, and there are hooks here for the business to create that training data.

Also, as we talked about earlier, there are a lot of challenges in the governance process. What we traditionally find is that governance teams take legacy ways of approving models, maybe past financial models done in different tools, and apply those same techniques to these more advanced models, which may not be trying to solve the same business problem. And sometimes the governance processes become a little more challenging. So it's good to understand the types of metrics and data points they want to capture; that's all part of your end-to-end MLOps lifecycle. In addition, we're seeing a lot of wants and needs from the governance team around how these models are working: can you provide explainability so they gain better insight, get more comfort in how these predictions are actually occurring, and gain confidence that there aren't biases in the algorithm or skew in the data? So explainability is becoming another very critical piece of how the governance team wants to work. There's also the evaluation of the training results. The business has the opportunity, during training, to say this is right or this is wrong and feed the corrected value back to the data scientist. And then over time, as you look at the model's predictions and the different scores and metrics you're capturing, it's important for those business teams to understand them too, so they can feed that back into the loop and get better, smarter models through the process.

This next slide talks a little bit about what many of us have probably seen as a traditional CI/CD pipeline, and probably not a whole lot here, I would say, is different from what you've seen before: the ability for developers to put their code into some type of repository like GitLab or Bitbucket, the ability to trigger builds through something like Jenkins, building it, maybe with Red Hat OpenShift, storing the container image, going through the right scanning processes, testing, et cetera. But one of the interesting points I wanted to call out, over here to the right, is the CD part. For the Ignite infrastructure, the CD part is pretty straightforward, as we talked about before: you have the ability to deploy Kafka or Postgres or other components into the ecosystem. But there's also a second kind of deployment, probably not your traditional CI/CD: none of the containers and none of the components that make up a use case are deployed inside Ignite until runtime, until someone calls that particular workflow. Once someone calls that workflow, the Ignite component builder reaches out to the Artifactory, or JFrog, where you're storing your images, and pulls the container; say it's pulling a classification model. Then, based on the workflow inputs, it knows how many of those containers it needs to instantiate onto the cluster. So again, it could be one to N containers running that model or component. The Ignite component builder is continuously deploying the images, or containers, that are part of a workflow to run on top of the environment.
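As a rough illustration of that deploy-at-runtime pattern, here is a minimal sketch using the Kubernetes Python client: when a workflow is invoked, a builder creates a Deployment for each component, scaled to whatever replica count the workflow calls for. This is a hypothetical sketch of the pattern, not the Ignite component builder itself; all names and the image URL are made up.

```python
# A rough sketch of the deploy-at-runtime pattern (hypothetical, not the
# actual Ignite component builder): when a workflow is invoked, create a
# Deployment per component, scaled to the replica count the workflow needs.
from kubernetes import client, config

def deploy_component(name: str, image: str, replicas: int,
                     namespace: str = "ignite") -> None:
    config.load_incluster_config()  # the builder itself runs on the cluster
    labels = {"app": name}
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name=name, labels=labels),
        spec=client.V1DeploymentSpec(
            replicas=replicas,  # e.g. 50 for heavy OCR, 2 for a light step
            selector=client.V1LabelSelector(match_labels=labels),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels=labels),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name=name, image=image)],
                ),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(
        namespace=namespace, body=deployment)

# Pulled from the registry and instantiated only when a workflow calls it:
deploy_component("ocr", "registry.example.com/ignite/ocr:1.4.2", replicas=50)
```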
So I think this is a pretty good aspect of the platform: we're not constantly running these different components, but only when they're required and when the workflows need them. Again, here you'll see some traditional tools for scanning, storing, and triggering the process, and how we move from the code repository to the registry into the environment.

Okay, so with that, I just wanted to touch on a few of the key takeaways that we found important as we went through this. The first, which we hit on a little already, is to make sure your business processes keep up with the technology needs. What I mean is that businesses today work against a lot of processes that help them move models into production. Many of these were designed for more traditional financial-services types of models, which aren't necessarily the same as the types of models we're building today. So it's important that as we build these new capabilities, we still have the right governance and controls in place, but those business processes need to evolve just like the technologies themselves are evolving; pushing legacy processes against newer ways of working causes a bit of a challenge. The second takeaway is containers, Kubernetes, and DevOps, powered by a hybrid cloud. What this means is that there are so many places we can build these advanced analytical models: on-prem, or in any number of CSPs. How do we keep an ecosystem where it's easy and quick to bring something from pilot to production? Through containerization, through OpenShift, you have a consistent way of deploying these models in any environment you're using, whether it's a cloud or on-prem. The third one, which I like as well, is to leverage an open and layered architecture. As we know, every day there's a new piece of technology coming out, a new tool we want to test and try. How do you keep your architecture flexible and open enough that you can plug and play these new capabilities and, at the same time, expand the architecture? You're never going to get everything right the first go-around; it's going to need to expand to take on new capabilities. So being flexible and open, so you can plug and play new components, as well as layered, so you can build upon the foundation, is really important. And the fourth is integrated training and upskilling of technical and business teams. These are newer technologies and newer ways to operate; it goes back to point number one, but train your teams and get them up to speed. And it's not just the technical side; it's really important how the business teams align to this as well. So with that, I don't have any other slides. I don't know if you have anything you want to add.

I think that was great, and there were a couple of questions in the chat. Let me read those out. This is a question from Praveen: do you have a security layer on the model saved in MinIO? Do you use specific formats or extensions to save the model? For example, the .pickle format is not as advisable, for security reasons, as against ONNX, et cetera.

It's a good question. I would say that's something we could probably enhance from a security standpoint. A lot of these are run as batch processing.
So there are controls on the outside, but RBAC controls specific to the model, and specifically to MLflow, are handled through the Ignite pipeline and through the model IDs stored inside MLflow; we're not using native MLflow capabilities from an RBAC perspective. But I will say it's probably not as secured as one would want in some enterprises. That's a good question.

Cool. Thanks a lot, Kevin. There is one more question about MLflow: is this the MLflow from Databricks?

Got the question. That's a really good question. So this is not the MLflow from Databricks; that's only available through the Databricks ecosystems on the cloud. One of the challenges we experienced with MLflow was the ability to serve models: out of the box it's not very robust at serving models into a production pipeline. You may want to use something like Kubeflow, or, as you've seen, we created a custom connector, which is what's leveraged to serve the model. So the two are not one and the same. Databricks did put some hardening around MLflow and does give you the flexibility to deploy those models to production via MLflow with the Databricks enhancements, but MLflow out of the box doesn't support deploying models easily in a scalable way.

Cool. Thanks, Kevin. I think those were the key questions I saw in the chat that weren't answered yet, so I think we're good from the Q&A perspective. Thanks a lot, all; I hope you learned something new. So Kevin, do you want to say anything to close things out?

No, I just really appreciate everyone spending the time here. And like I said, I think our journey around MLOps, AIOps, whatever we want to call it, has evolved considerably. First it was really just getting the model out there and embedding it into our code base, which really wasn't advantageous. Then we evolved into what we have today. By doing this, and doing it in other large organizations, we find it meets a lot of the needs. However, I see the biggest gap in this whole process as still being the explainability of AI: how well we can explain it to give comfort to the regulators and the governance teams to move some of these models forward. I see that as one of the biggest challenges still left to solve.