I'm Sherard Griffin, director of AI services at Red Hat. I hope you've enjoyed the conference so far. There have been some fascinating talks about all the interesting work going on in our open source communities with AI. Today I want to talk a little bit about data scientists at Red Hat and how we're better together. But before I dive into that, let's take a step back and look at why Red Hat decided to get into the AI industry and at the ways we can help our customers along those journeys. The first thing we saw when we started talking to customers about AI is that they needed to build and run AI workloads on open source infrastructure. It was key for them to leverage their existing commodity hardware rather than worry about specialized compute resources, and to do it in a way that maximized their existing investment. So we worked closely with them to do exactly that, bringing technologies like OpenShift, along with work we're doing with partners like NVIDIA, to make sure they have best-of-breed tools on open source infrastructure. We also saw that Red Hat itself needed AI to increase our own open source development and production efficiency. We've looked at things like analyzing build logs with anomaly detection to find interesting patterns or discover things that may not be right with the way we're building and developing the software. That has allowed us to increase efficiency, get products out to customers a little faster, and adapt to the market more quickly. The third thing we saw is that customers needed AI integrated into open source products and services so they could leverage an intelligent platform. Now what do I mean by that?
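To make the build-log idea concrete, here is a minimal sketch of frequency-based anomaly detection on log lines. This is a hypothetical illustration only, not Red Hat's actual pipeline; the log contents, function names, and threshold are invented for the example. The idea is simply that tokens rarely or never seen in historical builds are worth flagging.

```python
from collections import Counter

def token_frequencies(logs):
    """Count how often each token appears across historical build logs."""
    counts = Counter()
    for line in logs:
        counts.update(line.split())
    return counts

def anomalous_lines(log, history_counts, threshold=1):
    """Flag lines containing any token seen at most `threshold` times before."""
    flagged = []
    for line in log:
        if any(history_counts[tok] <= threshold for tok in line.split()):
            flagged.append(line)
    return flagged

# Hypothetical history of healthy builds and one new, suspicious build log.
history = [
    "compile module-a OK",
    "compile module-b OK",
    "link binary OK",
] * 10
new_log = ["compile module-a OK", "link binary SEGFAULT"]

# "SEGFAULT" never appears in the history, so that line is flagged.
print(anomalous_lines(new_log, token_frequencies(history)))
```

A real system would normalize timestamps and IDs out of the lines and use a statistical or learned model rather than raw token counts, but the flag-what-you-have-not-seen principle is the same.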
If you think of a lot of the platforms customers are using, like OpenShift and Red Hat Insights, where we're helping them manage their own infrastructure, AI allows those platforms to be smarter and to predict things before customers know about them. It's been great introducing those technologies, and at the end of the day customers benefit from these intelligent platforms because they can react to their environment in situations where humans perhaps aren't able to grasp all the incoming data and make decisions as quickly as machine learning can. Now how do we go about doing this? One of the foundational pieces of approaching the AI problem space was knowing it had to run on infrastructure that works in a myriad of different environments. If you look at the bottom of this chart, we knew it had to run on physical, virtual, private clouds, public clouds, hybrid, as well as the edge. That was the baseline: we needed to meet customers where they were. On top of that, we also needed to bring hardware accelerators into the mix, because some of the challenging machine learning initiatives customers were undertaking had to benefit from GPUs and FPGAs. And then there are the self-service capabilities of the hybrid cloud, which are key with technologies like OpenShift and RHEL. On top of all of that core infrastructure is where we started looking at how to work with the open source AI communities as well as our partners to provide the rest of the story. When you look at the chart showing a typical AI/ML initiative, it starts with setting the goals and preparing the data, runs through developing and training the model, and ends with deploying that model as a service and getting value and data back from the model's predictions.
Well, in order to do that, we had to bring a number of different tools and technologies into the mix, not only to let data scientists access the data in the way they need to, but also to let them use the tools they need: tools like TensorFlow, Jupyter Notebooks, Spark, and Python, plus all of our partner technologies, so they can solve their own challenging problems. And when we did that, we not only opened this up for customers to use, but we also started using it internally ourselves to bake all of that intelligence I mentioned on the previous slide into our technologies and our business processes. At the core of all of this, once we provided those tools, we realized what we were truly doing was democratizing access to the tools and democratizing the data for the data scientists. No longer are they burdened with having to know where all of the data resides. They have one platform that can run in all of these different data centers and cloud providers, and they don't have to carefully craft their machine learning models to run only on certain technologies. But the key part is that all of the access to the tools and the data is still governed by IT, in a way that gives data scientists their own self-service capabilities: they can spin up their tools and get access to their data without bogging down IT. IT can curate that process in the platform itself, and the data scientists have the freedom to make the choices they want. I talked about working with partners before. That was a huge initiative across Red Hat; we truly listened to the data scientists, and that still today is a driving factor for the partners we engage with. When we looked at how we needed to provide the tools, it wasn't in just one space.
We recognized that for a data scientist to use tools from beginning to end, from data ingestion all the way through to deploying their model, we had to work with partners that help them along that journey. Some of those partners focus on data governance and security, some on data processing, some on databases as a whole, and some on hardware accelerators. This is just a glimpse of the partners we've worked with to date, and there are many more to come. Now, I've talked about what we've done in the past; I want to talk about where we're going in the future. We're starting to transition from empowering data scientists with the hybrid cloud and democratizing the data to improving the data science experience across the hybrid cloud. That's very challenging, but we're hearing from customers about their journeys, and it really resonates with what we're trying to do in this space. We're looking at ways to optimize data governance across the hybrid cloud. That's an interesting problem because companies no longer store all of their data in one place; in fact, they no longer store it with one cloud provider. Everything is becoming fragmented, partly because of the need to be as close as possible to where the data is generated, and partly because enterprises are getting so big, and there are so many tools out there, that different organizations simply run their processes and generate data differently. But to get access to all of that data, it's key that we work with the data scientists to figure out how they're trying to bring that data together, and lots of efforts are going on right now to improve the services and technologies that break down those data silos. We're also working with partners specifically to decrease the maintenance burden of the machine learning tools they offer, through automation, intelligence, and additional services.
And this is key, because we don't want IT departments to be bogged down maintaining all of the tools that data scientists need. Imagine baking more intelligence into those tools: knowing when things aren't quite healthy, and having self-healing or self-diagnostic capabilities. Those are critical to having a platform that runs on its own. So by working hand in hand with our partners, we're providing better tools for data scientists and IT departments, tools that provide more intelligence about what's going on. The third thing I want to talk about is how we're improving the usability of machine learning tools by minimizing infrastructure management. When I think of this, I think of the job of data scientists. Ultimately, it's not their responsibility to maintain the infrastructure themselves. The ideal experience is that they go in and use their tools the way they need to, without caring where those tools are running. It doesn't matter whether it's on-prem or in the cloud, or whether it's Kubernetes, OpenShift, or some other technology; they just want a certain experience. So now we're looking at ways to abstract the infrastructure away from the tools themselves, so that data scientists don't have to worry about infrastructure management. There's some exciting work going on there, both in the platform itself and in ways we can provide a better managed experience for customers. Another area where we're innovating with data scientists is the need to bring AI to the edge. This is interesting because we want data scientists to have the capability to train at the core and then deploy at the edge.
This is very critical for workloads where customers have data centers all over the world. Traditionally, it's a challenge for data scientists to build a model in one place and deploy it into a vast ecosystem of clusters. By breaking down those silos and really understanding what a data scientist needs, we've started moving in a direction where we provide the right tools for these capabilities. It allows data scientists to manage the model life cycle: to monitor and manage all of the models they're deploying, version them, and roll back if they need to. All the things they traditionally do in one data center, they now have the capability of doing in many. You can follow along with that project at the link below; we call it our blueprint for industrial edge and industrial manufacturing. The last thing I'll talk about today is a really interesting project we have going on called Operate First, in conjunction with the Mass Open Cloud. If you're not familiar with the Mass Open Cloud, it's a public cloud that industry, together with a number of research institutions, has built so that anyone can go in and work collaboratively. Now, we've extended Red Hat's open source philosophy into the space of operations. Traditionally, MLOps is focused on developing the best practices for businesses to run AI successfully. By bringing that into open source, we have open source infrastructure running in an open public cloud, for anyone to look at and anyone to get involved in.
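The train-at-the-core, deploy-at-the-edge life cycle described above can be sketched as a tiny core-side registry that tracks model versions per edge site and supports rollback. This is a toy illustration under assumed names (`ModelRegistry`, the model and site names are all hypothetical), not a real Red Hat or Open Data Hub API.

```python
class ModelRegistry:
    """Toy core-side registry: versions per model, active version per edge site."""

    def __init__(self):
        self.versions = {}   # model name -> ordered list of version tags
        self.active = {}     # (model name, site) -> currently deployed version

    def register(self, name, version):
        """Record a newly trained model version at the core."""
        self.versions.setdefault(name, []).append(version)

    def deploy(self, name, version, site):
        """Mark a registered version as active at an edge site."""
        if version not in self.versions.get(name, []):
            raise ValueError(f"unknown version {version!r} for {name!r}")
        self.active[(name, site)] = version

    def rollback(self, name, site):
        """Reactivate the version registered immediately before the active one."""
        history = self.versions[name]
        idx = history.index(self.active[(name, site)])
        if idx == 0:
            raise ValueError("no earlier version to roll back to")
        self.active[(name, site)] = history[idx - 1]

# Train at the core, deploy the same version to many sites, roll one back.
registry = ModelRegistry()
registry.register("defect-detector", "v1")
registry.register("defect-detector", "v2")
for site in ("factory-eu", "factory-us"):
    registry.deploy("defect-detector", "v2", site)
registry.rollback("defect-detector", "factory-eu")
print(registry.active)  # factory-eu is back on v1; factory-us stays on v2
```

A production registry would also store artifacts, metrics, and approval state, but the core idea is the same: one source of truth for versions, with per-site deployment and rollback.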
And we're bringing machine learning to that environment so that the data scientists, the operations teams, the application developers, all of the stakeholders, even our partners, can work together in one open way to enrich and better the AI community. There are some fascinating things going on in that space as well. It's a great test bed for new technologies and new concepts that companies and open source communities are working on. It's also a great way for data scientists to provide a feedback loop on what they need, so the participating companies can listen and help create more technology to fulfill those needs. You can look at the URL below to see what's going on in that community. So those are just a few of the things that are happening, but it's really exciting. I'll end with this note: innovation, specifically in the AI space, happens when we work together. That's why we're really focusing on our open source communities, on how we can work together to take things to the next level, and on what data scientists need so we can help with that journey. If you want to find out more, please go to commons.openshift.org. I hope you've had a great time listening to the talks. You can find us on OpenShift Commons, and I hope to see you at the next OpenShift Commons Gathering. Thank you.