Okay, welcome back everyone. theCUBE's live coverage here in Boston, Massachusetts for Red Hat Summit 2023, with AnsibleFest folded in, big news around automation, mainstreaming Ansible, big story here. I'm John Furrier, host of theCUBE, with Rob Strechay breaking down the analysis. We have a famous CUBE alumni since 2012, an original gangster on theCUBE, Steve Watt, Senior Director of Global Software Engineering and the head of the Office of the CTO at Red Hat. Steve, great to see you. You've been on many times, but going back to 2012, that's some good pedigree right there. Glad to be back, glad to be back. The Office of the CTO sets the direction, and also works with the team and partners and customers to kind of frame the 20-mile stare, looking out at the industry and figuring out what's going to happen, then coming back into the present to make sure things are aligned. This show's pretty damn good. I mean, you guys did a good job, a lot of announcements bringing Ansible front and center with automation and Lightspeed, big news there. You got embeddings, you got event-driven stuff, you got AI all over the place, and then you got the blocking and tackling in OpenShift. Yep. Not bad. Yeah, yeah, and I was particularly pleased to see a number of the longer-term investments that we've made over the years get profiled here and come to life. So MicroShift was big news today, right? That was something we started two years ago. So it's just really satisfying; it's quite a grind, but it's eventually commercialized, it's in the product now. You know, AI and trust, huge. Edge is big. Open collaboration's been a topic we've been hearing about, open source is changing. Red Hat commercialized RHEL, that was a huge accomplishment back in the day. Matt Hicks yesterday said he thinks AI adoption will look a lot like the same kind of hockey stick that Linux had, but faster. Yeah. How do you look at that?
And as you're in the Office of the CTO, you're kind of corralling and herding the cats, if you will. But also, you've got a global ecosystem of commercial partners, AWS was here, Dell, others, as well as stakeholders in open source. So, balancing act, how do you handle it? What's the strategy? Well, I think we have principles around anything we develop, right? We have a way of doing things, whatever it is. You know, and so we have our upstream model, and we look at what's taken off in the upstream and we evaluate it on a number of different dimensions. I think one of the reasons people buy Red Hat software is they trust us to help them navigate what's going on upstream and pick the right thing, and then partner with them on using it. And so we have these principles, we're looking for projects, whether they're AI projects or software projects, where they have transparency, agency, good governance principles. And then we work with them, and if we have a good experience, we bring that to our customers. I think what's really interesting, and having that longer vision, one of the things that John and I have been talking about over at KubeCon and at Open Source Summit a couple weeks ago, is the fact that open source has won, right? I mean, I wouldn't say proprietary is dead, it has its little places, but I think open source has won. And the one place where I wonder how fast that will evolve, and I'm looking for your insight and your vision on this, is AI and the models and how people get there. You know, you had some great announcements around the infrastructure pieces of it. Where do you see this going? Yeah, I think this is really interesting. I think right now you see this sort of Cambrian explosion of just innovation in the AI space, but a lot of it's proprietary. I would say almost most of it, right?
There are a few outliers like Hugging Face, a community full of open data sets and open models, but it's primarily proprietary, and I think that's the next evolution, right? We've got to be able to leverage the collective brainpower of everyone else and make it accessible. And so I think to me that's actually an area where Red Hat's looking at getting more involved, in the Office of the CTO and in our emerging technologies group specifically, trying to accelerate the open large language models as well as the smaller models. Yeah, and I like the way that it was put about how it's focused models. I think that was the terminology. Yeah, domain-specific models, yeah. And we actually had, what was it, Banco, am I going to butcher the name of it? A customer on from Argentina, one of the banks using OpenShift Data Science and doing AI on top of it, using NLP and really starting to look at how they're taking that to the next level. Then we've had a couple on from Ansible looking at, okay, how do they adopt Lightspeed? And I think one of the things with the focused models is it's community-driven, I would almost call it a community-driven model, because the playbooks are going back and people are getting smarter off the community. Yeah, exactly. In fact, this was brought up in an analyst session with Matt Hicks just earlier today. Where do you see that? Because I think transparency is key. How does that play in with these focused or smaller models that you guys are working with? So, you know, these models are created from data, right? You've got to start with access to data and make that data available. So if you look at the Ansible Lightspeed announcements, or Project Wisdom, that's an IBM large language model that's then being trained on Ansible Galaxy playbooks. So like the best practices of how to achieve X or Y, right? That's the domain-specific training.
That corpus of playbooks doesn't exist without a community contributing them back, right? And so that's key to actually getting to something that's useful, and it's being produced out of a set of best practices from an open community, right? If it's proprietary, the amount of data you have to train your thing on is much smaller, as opposed to a large open community. How do you see the interaction between proprietary foundation models like OpenAI, Cohere, AI21 Labs, Anthropic, even Google AI? Because in open source recently, the Google memo that was leaked a couple of weeks ago showed that once Meta's code leaked, there's been a massive adoption of open source, almost a surge, almost a leveling up, and some people are comparing it to the iPhone-Android moment. Like you've now got a whole other set of stakeholders in open source. Right. And we're seeing interactions. So for example, we use open source with our proprietary data to call and prompt the proprietary models. So the relationship between these models is a data interaction. How do you see that as a technologist? Because it seems to me an architectural thing. Yeah. Well, I think it's just a point in time, the current state of the art right now, where you have a lot of proprietary models. These things take a ginormous amount of infrastructure and power to train. That's in some sense a bit of a control point right now, right? And then also they're trained using data sets that are largely proprietary, right? And so I think what we've got to get advanced at in the open source ecosystem is curating and gathering more data, right? And then also figuring out where you get access to the infrastructure to be able to train these and produce our own open models, and how we host them. So there's a number of problems that we're busy solving in open source, but I think it's more just a point in time.
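The corpus idea described above — community playbooks as the raw material a model learns from — can be sketched very loosely. This toy retrieval example is purely illustrative: the real Lightspeed service uses an IBM large language model, not keyword matching, and the tiny corpus and function names here are invented for the sake of the sketch.

```python
# Toy sketch: suggest a playbook snippet for a plain-English task by finding
# the best word-overlap match in a small "community corpus". Everything here
# (CORPUS contents, suggest()) is hypothetical; it only illustrates why a
# bigger open corpus gives better coverage than a small proprietary one.

CORPUS = {
    "install and start nginx": (
        "- name: Install nginx\n"
        "  ansible.builtin.package:\n"
        "    name: nginx\n"
        "    state: present"
    ),
    "create a unix user": (
        "- name: Create user\n"
        "  ansible.builtin.user:\n"
        "    name: '{{ user }}'"
    ),
}

def suggest(task):
    """Return the corpus snippet whose description best overlaps the task."""
    words = set(task.lower().split())
    def overlap(desc):
        return len(words & set(desc.split()))
    best = max(CORPUS, key=overlap)
    return CORPUS[best] if overlap(best) > 0 else None
```

The design point the sketch makes is Steve's: the matcher is only as good as the corpus behind it, which is exactly what an open community contributes.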
I think it will eventually flow like we saw with open source software, right? So, you know, Linux 20, 25 years ago wasn't at all what it looks like now, right? It just needs a bit more time to develop. I think what's also interesting, and you just briefly touched on it, and it's very fascinating, we were having a conversation with somebody from your team a couple of weeks ago talking about Kepler and some of the projects that are going on in open source, particularly around sustainability. On these large language models, we heard some estimate that it was $700,000 a day to run ChatGPT, and, you know, just the unfathomable amount of energy and carbon that's produced with that. You guys work on that in the Office of the CTO. How is that going? That's such a great point. You know, I saw a chart the other day about how, as these large language models evolve and the number of parameters they're able to calculate grows, eventually, I think after like two or three iterations, it exceeds the power output of planet Earth. So we have to solve this problem to make these things better and better and better. And so Kepler, which is a project that we talked about on the main stage tonight, is another project in the Office of the CTO. And this is a project that's basically about correlating power usage and consumption down to the application and the different components within the application. So you really sort of know what's consuming the most power, and then you're able to take that information and make smart scheduling decisions to have things run more optimally in other places. Maybe you schedule on an ARM cluster. It might be a little slower, but a whole lot more power efficient, and the math ends up in the correct place for the cost savings you want. But those two, basically training and hosting the models, are tied to being able to instrument and track from a sustainability standpoint.
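The two ideas just described — attributing a node's power draw down to individual workloads, then using that data to make energy-aware scheduling choices — can be sketched as a minimal model. This is not Kepler's actual model (Kepler uses eBPF probes and hardware counters); the proportional-share math and the node numbers below are invented assumptions for illustration.

```python
# Hypothetical sketch of Kepler-style power attribution and energy-aware
# placement. Assumption: a node's measured power is split across workloads
# in proportion to CPU time, and a job's energy cost on a node is
# watts_per_core * cpu_seconds * slowdown_factor.

def attribute_power(node_watts, cpu_seconds_by_workload):
    """Share a node's power draw across workloads by CPU-time fraction."""
    total = sum(cpu_seconds_by_workload.values())
    if total == 0:
        return {w: 0.0 for w in cpu_seconds_by_workload}
    return {w: node_watts * s / total
            for w, s in cpu_seconds_by_workload.items()}

def cheapest_node(job_cpu_seconds, nodes):
    """Pick the node with the lowest estimated energy (joules) for a job.
    nodes maps name -> (watts_per_core, slowdown_factor)."""
    def joules(node):
        watts, slowdown = nodes[node]
        # A slower but frugal node can still win on total energy.
        return watts * job_cpu_seconds * slowdown
    return min(nodes, key=joules)
```

With made-up numbers — an ARM node assumed twice as slow but four times more power efficient per core than x86 — the ARM node wins on total joules, which is exactly the "slower but the math ends up in the correct place" trade-off Steve describes.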
I will say, though, kind of another interesting space is that we're solving and addressing all of these things right now with classical computing, right? Another thing that's often not talked about with quantum computing is that it's also exponentially more power efficient for a whole lot of these use cases. So I think that's another interesting space we could see in the future. How do you feel about quantum right now, from the progress bar? It seems to be in the trough of disillusionment. Well, I mean, people are backing off, there's not a lot of hype, but there's definitely energy. It's an interesting thing. You know, I'm primarily exposed to it through IBM's research team and what they're doing with quantum. They look at it in different ways of tracking it, but the amount of qubits, the processing power they're adding, is wild, like how much progress they've made over time. So they're making a lot of progress on that. The biggest focus right now is reducing error rates. So while they can compute more, they're trying to get it to be more consistently accurate in the outcomes. So I think that's interesting, but actually where there's a whole lot more movement in the quantum space is post-quantum cryptography. You know, I was actually in a meeting with Arvind, the IBM CEO, and he made this great statement. He said, this is like Y2K all over again, in the sense that it was a big effort to discover and identify the scope of the problem and then remediate it. And so post-quantum cryptography is about the possibility that one day a quantum computer will be able to crack your current encryption. And you might think, oh, that's la-la land, but people can harvest now, decrypt later, right?
And so it's a really interesting space, and there's quite a lot of momentum going on there in quantum. So you talk about the Office of the CTO, we have a few minutes left. I want to make sure people understand its role. What does it do, what are the metrics, what do you guys work on? Yeah, sure, great question. So we're a pathfinding function for Red Hat, and so we're there to help define the future. Red Hat's a very collaborative group, so I wouldn't say we own the strategy, but we're responsible for getting everyone together to define it. And then we're also an engineering group that basically builds all the prototypes, to iterate using the empirical method and validate our hypotheses. And so some of this is emerging technologies, where we're actually building the prototypes. Other parts of this are Red Hat Research, with university partnerships, where they're figuring out how to get, say, RISC-V to run on FPGAs, or our open source program office, which is identifying which communities or foundations we should form, like in the case of RISC-V, or join in support of the projects. We have this playbook. But you're not a strategy-only group, it's not like you're just doing strategy and saying, hey, let's do it. Yeah, well that's an anti-pattern. Nobody likes the person wandering around with slides telling everyone else what to do, right? You guys get your hands dirty, you roll your sleeves up. Absolutely. You dig in, you collaborate, inside and outside of Red Hat. Correct, yep. And yeah, we have at any point in time probably 15 different prototypes under development across a wide range. Any good ones now you can share? Well, Kepler is what you saw today, right? That's one of them, it's not commercial yet. And then obviously Sigstore, which to me, if you look at Kubernetes as the nucleus of the cloud native ecosystem, Sigstore is the nucleus of the secure software supply chain ecosystem.
That's a big one. That was created and invented in our team at Red Hat. And so I think Kepler is our new hit. Sometimes I joke around that we're like a pop star, we have to have a hit every year to stay relevant. Got some TikTok videos going? Exactly, yeah, yeah. Gotta have a dance around it too. So Kepler is our hit in the making right now, and it's getting a lot of interest, and I think it's super relatable right now. How do people get involved? Do you do outreach, or is there a process for people to look under the covers? Is it all public, is it transparent? Yeah, so emerging technologies has a website called next.redhat.com. So you can go there and see what we're working on. We blog about it. But ultimately, there's no secret. It's all upstream, right? It's the standard Red Hat open source model, and we need that upstream validation because we don't want it to be pure blue sky. Yeah. We want to actually have this grounded in, oh, I found this useful too, from other people and other companies, and many hands make light work. So you're open for people to engage with you. Absolutely. Yeah, yeah, that's our model. I think it's super interesting, because some of the other things that you get involved with are edge and far edge, and how people and trust factor into that. To me, as we become way more distributed, way more multi-cloud, hybrid, compute everywhere, that seems to be a place that is fascinating, because even the exit signs in here are IoT devices that are creating data. Where does the data land? How does it get processed? Where are you guys going with that stuff? You know, there's a lot of work we're doing down at the operating system level in RHEL, being able to light up these far edge devices, and so that's the first step, right? We've got a far edge platform, MicroShift.
That's another one of our recent hits that's being commercialized right now, which is a really lightweight, small version of OpenShift that runs on a far edge device. And we're doing a lot of work in RISC-V, like making RISC-V real and then lighting it up inside RHEL. Can you quickly define what a far edge device is? Yeah, sure. So to us, it's basically a Raspberry Pi type thing, right? Something that's within the last network hop to an end consumer. It has power and some compute, but not a lot of heavyweight. Exactly, it's about a gig of RAM, one socket. It's probably an ARM processor or something even lower power, and, you know, maybe a couple of different peripherals. It may or may not have an accelerator. That's often the differentiator on what kind of far edge device it is. If it's got a GPU, that's very common, so you can do machine learning out on the far edge. Yeah, and you were talking about how you were working on some stuff with trust out at that far edge. Yeah, so this is an interesting one. Another recent project that we did with MIT is Keylime. And Keylime does integrity measurement. So imagine you've got a far edge device that's screwed into the sheetrock 10 feet above ground in a train station, and they're in train stations all over the states. Everything's on this little SD card, so in order to exploit the device, you can pull the power, pop the SD card out, stick in an exploited one, put the power back in, and it looks like the thing just went offline for a second, but meanwhile now you have control of the device. And so integrity measurement basically uses the TPM module in the device to do a hardware measurement and a software measurement. And then it stores that, and it's got an agent, and every time that thing comes online, we can check: is it the same? If not, fire an event, and do what you want with the event.
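The integrity-measurement flow just described — record a golden measurement, re-measure at every check-in, and fire an event on mismatch — can be sketched in a few lines. This is a simplified illustration, not Keylime's actual protocol: a real TPM extends PCR registers in hardware and the attestation is cryptographically signed, whereas the chained hash and class names below are invented stand-ins.

```python
# Hypothetical sketch of TPM-style integrity measurement and remote
# verification, loosely mimicking how measurements of boot components are
# chained together. Names (measure, Verifier, check_in) are made up.
import hashlib

def measure(boot_components):
    """Fold ordered boot components into one measurement, in the spirit of
    PCR extension: each component's hash is folded into a running hash."""
    h = hashlib.sha256()
    for blob in boot_components:
        h.update(hashlib.sha256(blob).digest())
    return h.hexdigest()

class Verifier:
    """Holds the golden (enrollment-time) measurement; devices report in."""
    def __init__(self, golden):
        self.golden = golden
        self.events = []  # integrity-failure events fired so far
    def check_in(self, device_id, measurement):
        ok = measurement == self.golden
        if not ok:
            self.events.append(("integrity-failure", device_id))
        return ok
```

Swapping the SD card changes a component blob, which changes the chained measurement, so the next check-in fails and an event fires — the same mechanism Steve describes for the train-station device.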
Steve, the last question for you as we wrap up. Thanks for coming on, you're super busy wrapping up day two. What is on your mind here at Red Hat Summit in Boston 2023? Is there anything from the show that you're going to take away, that's spawning some ideas you're going to bring into the Office of the CTO, that wasn't there before you came here today? I would say, you know, I was in our executive exchange yesterday, and just the amount of help people need on automation. To me, there's just a lot of low-hanging fruit for us. I think Ansible Wisdom specifically is just right in the sweet spot of what people are looking for right now, where they either don't have the skills, or they can't hire or find the skills, or they don't have the headcount to bring the skills in. So, short answer: artificial intelligence, and us making it easier and easier to create models and deploy models, is where we're going to be focusing. I was talking to Rob and the team when we were off camera, and I was like, the Ansible expertise is so awesome and that community is so tight. They're like the SREs of configuration. They're so talented, so if you can get that replicated with AI, then that will scale, because it seems to be like an art. Being an Ansible guru is like... I think what's so exciting about that, just real quick, on this idea, right? You can use Ansible Wisdom to generate a playbook for a failure event. You can use Ansible Event-Driven architecture so that when there's a failure event, you take that playbook that was created for you by artificial intelligence and respond and autonomously self-heal from the event. Like, think about the cost savings. It's like magic, you know? It's like magic, yeah. I tell you, Ansible, exciting to see them in the mainstream, good call folding it in.
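The self-healing loop Steve sketches — a failure event arrives, a rule maps it to a (possibly AI-generated) playbook, and the playbook runs without a human in the loop — can be outlined like this. It is only a sketch of the idea: real Event-Driven Ansible uses rulebooks and ansible-runner, not this code, and the class and event shapes here are invented.

```python
# Hypothetical sketch of event-driven remediation: register rules that map
# an event kind to a playbook, then dispatch incoming events. The "run" is
# just recorded, standing in for actually executing the playbook.

class Remediator:
    def __init__(self):
        self.rules = {}   # event kind -> playbook name
        self.ran = []     # (playbook, host) pairs we "executed"

    def register(self, event_kind, playbook):
        self.rules[event_kind] = playbook

    def handle(self, event):
        playbook = self.rules.get(event["kind"])
        if playbook is None:
            return None   # no rule matched: surface to a human instead
        self.ran.append((playbook, event["host"]))
        return playbook
```

The cost-savings point lives in `handle`: for known failure kinds the response is automatic, and only events with no matching rule fall through to a person.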
We did a lot of content here on theCUBE, because there's no separate event, but I think that's a sign that AI's next-gen, it's legit, it's here. It's here and it's useful. It's not just vaporware or, you know, a bad chatbot going "Hey, can I help you?" in customer support. Yeah. Steve Watt, Senior Director of Global Software Engineering and head of the Office of the CTO at Red Hat, CUBE alumni, thanks for sharing your insight. Let's keep in touch. I'm really interested in your projects, and it makes for good CUBE, thank you. Yeah, yeah, my pleasure. I always have fun here. All right, more CUBE coverage. With Rob Strechay, I'm John Furrier. We'll be right back with our next interview after this short break.