Welcome back to theCUBE's coverage of Red Hat Summit and AnsibleFest happening together, better together, here at Red Hat Summit 2023. We've got two great guests here. I'm John Furrier with Paul Gillin. Tom Anderson, Vice President and General Manager, Ansible. Good to see you, thanks for coming on. Good to see you, John. Richard Henshall, Head of Product Management, Ansible. First of all, welcome to theCUBE, and congratulations on essentially being the keynote this morning. So Ansible is really mainstreaming into Red Hat's core value proposition. It's always been there, but you guys have done great work, great community. We've been following it, obviously, with theCUBE. It's mainstream, the concepts of automation, the things that you guys have been working on. So how do you guys feel? I mean, you must feel pretty pumped. It feels great. I mean, it's really the culmination of about three years of work that we've put into transforming Ansible, the community project, into a real automation platform where customers can create, manage, and scale automation. Bringing that together with AI now to open up automation to the masses, if you will, to lower the bar to entry for automation, means that we can touch a lot more people in an organization. And also having event-driven Ansible announced at the same time means our ability to take cost out of day-two operations is a huge win for us. So I couldn't be more excited. Like I said, I had to pinch myself earlier when we were doing the keynote. Two big announcements regarding Ansible at this event. One is Project Lightspeed. Tell us what that is, and why is it important to the Ansible community? So I'll start, Rich, and maybe you jump in. Lightspeed is about a two-year project that we've been working on in conjunction with the IBM Research team.
They brought the expertise around large language models, foundation models. We brought the expertise around Ansible, obviously, and automation. Bringing those two things together to apply generative AI to a very specific use case, which is Ansible automation, and not just Ansible automation, but Ansible automation for the enterprise. How do we bring that generative AI, as opposed to general-purpose AI? How do we really train it and focus it on the needs of the enterprise? That has been huge. And we've invited the community to join with us. As you guys know, at AnsibleFest last fall, when we did the preview announcement, we invited the community to come in and participate with us on it. And just think about the principle of what AI is giving us at that point in time. You go, I've got a task that takes a certain amount of time, and I'm doing it manually, and I want to be able to shave bits of time off of that. And that sounds exactly the same as what we've been doing for the past 10 years with server builds, with configuration management, with network, with cloud, with what we've been doing with edge for the last couple of years. It's just, how do we get people to be more efficient? How do we help generate more efficiency and more value out of what people do, to give them more time to do more of what they're able to do? And I think the big difference now with the change that's happened with AI is moving away from that, hey, get rid of the person, replace them and do something else. It's, how do I take that person and supercharge them, right? A job that takes an hour now takes 30 minutes, then 15 minutes, which means I can do three or four of those jobs rather than just one. And pairing that with the ability to actually train something to specifically help with a set number of tasks, I think that's a really big part of it.
I know that Paul's got a question, but I want to just follow up real quick on that. So you're saying, and we've been saying on theCUBE, I want to see if you agree with this: humans plus AI is better than AI by itself. Yeah. You're on that side of it. Think of automation, right? Even to use Ansible well today, even as easy as Ansible is to adopt, I have to know what I want to do, right? So even with the AI piece, I still have to know what I want to do. It just helps me get there in a more efficient, faster fashion. So who is the audience, really? Is it developers, or is there sort of a citizen developer, citizen operations play? Today, Paul, it's taking the existing subject matter experts in our customers' environments, which are, by the way, few and far between. You guys all know about people talking about the skills gap in IT. We're not growing a lot of new SMEs. So it's making those existing SMEs a lot more productive. But it's also to bring new people into the automation world. Today, like Rich said, if I want to automate a storage subsystem with Ansible, I need to know the Ansible language to write a playbook to do it, and I need to know the storage subsystem and the interfaces for what I want to do. So I need to know both of those things. If we can lower the Ansible bar, and over time lower the bar to the domain expertise, then we'll start bringing more and more people into this family, so to speak. Can you share with our audience, because we did cover Project Wisdom, which I think you guys previewed last year with IBM Research. Loved that product. What's changed? I mean, it was a coalescing of a lot of things coming together. You mentioned large language models, foundation models, you've got computer vision, a lot of great AI. It hit the scene just at the right time. The world spun right in your direction. What did you guys have in motion there?
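To make that skills barrier concrete: the storage example Tom mentions would mean hand-writing something like the playbook below. This is an illustrative sketch, not Lightspeed output; the host group, device path, and mount point are assumptions.

```yaml
---
# Illustrative playbook an SME would otherwise write by hand.
# Host group, device path, and mount point are hypothetical.
- name: Provision a filesystem on a storage subsystem
  hosts: storage_nodes
  become: true
  tasks:
    - name: Create an ext4 filesystem on the data volume
      community.general.filesystem:
        fstype: ext4
        dev: /dev/sdb1

    - name: Mount the volume persistently
      ansible.posix.mount:
        path: /data
        src: /dev/sdb1
        fstype: ext4
        state: mounted
```

The point being made is that writing even this much requires knowing both the Ansible YAML and module vocabulary and the storage domain itself; Lightspeed's pitch is to draft this kind of content from a natural-language task description so the SME only has to review it.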
Because I remember I was asking questions like, when is it going to be available? You did a good job of setting expectations. What happened? Take us through, internally, what was the moment? Did things just snap together? Was there a moment where you said, okay, we're going full throttle, double down? You take this one first. Take us through when that happened, play by play. So we knew we had something before that whole movement happened, right? The first time we had a demo, and I think we might have had this discussion last time, IBM demoed it to us, and they said, here's spinning up a virtual machine in Azure. And then I said, ask it to do it in AWS. And it just twisted around a little bit, did this little spinning circle, and boom, a playbook came up for the alternative cloud. And that was the moment we were sort of like, that gives us the capability and the power. And then within the space of about three months, the whole world went crazy for generative AI. And when you look at that, it's fantastic what they do. I mean, my background is applied mathematics, long enough ago that I couldn't have imagined the stuff they're doing now, far more advanced than anything I remember. And you see what they're able to do, but there's still this thing of, am I getting what I actually need to run at enterprise scale, in a way that an enterprise needs it to be? And that is, automation has to be deterministic. It has to be known. I want to be able to generate it, but I need to know what I'm going to be able to run. And having us go through that motion and get to the point where we can say, specifically, it's getting consistent in its answers. We're making sure that it doesn't make those, what you might call intuitive leaps in the model, to do things that shouldn't be there. A great example: I was talking to a customer last month, and they were using a general-purpose model.
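The demo Rich describes, the same request retargeted from Azure to AWS, amounts to swapping one cloud collection's module for another. A hedged sketch of what those two generated playbooks could look like; the resource names, sizes, region, and image ID are all assumptions, not actual Lightspeed output:

```yaml
---
# Hypothetical "spin up a virtual machine in Azure" answer.
- name: Create a virtual machine in Azure
  hosts: localhost
  tasks:
    - name: Create the VM
      azure.azcollection.azure_rm_virtualmachine:
        resource_group: demo-rg
        name: demo-vm
        vm_size: Standard_B1s
        admin_username: azureuser
        image:
          publisher: Canonical
          offer: 0001-com-ubuntu-server-jammy
          sku: 22_04-lts
          version: latest

# Hypothetical "now do it in AWS" answer: same intent, different collection.
- name: Create a virtual machine in AWS
  hosts: localhost
  tasks:
    - name: Launch the EC2 instance
      amazon.aws.ec2_instance:
        name: demo-vm
        instance_type: t3.micro
        image_id: ami-0123456789abcdef0  # placeholder AMI
        region: us-east-1
```

The structural similarity is what made the demo compelling: the model only has to retarget the task from one collection's module and parameters to another's while the intent stays fixed.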
They tried to automate some F5, and it threw in a restart. You don't want to restart your network when you make a config change. At the same time, the actual Ansible module doesn't have a restart function. So it was wrong twice. But it looked right, it looked good, right? So it looks like it was the right thing. That's a hallucination, right? And so you want to be able to say, actually, how do I make sure that doesn't happen? And that's been the focus of the work we've been doing with the community, with the closed beta, with our customers that have been working with us, with IBM Research, with our internal specialists. Having that time and focus of manually sitting there, training the model, validating the model, getting that feedback into it, so that we can actually have that higher level of consistency and reliability in what content comes out of the system. Because without that trust, it doesn't go anywhere at all. So you're producing YAML code and playbooks using the generative AI interface, and checking them. I want to get to event-driven Ansible, because I think for some operations people, that's an even bigger announcement out of this event: Ansible now able to handle streams, with big implications for observability. Who are going to be the winners with event-driven Ansible? Yeah, I'll start and then you jump in. So event-driven Ansible, Lightspeed, and the AI stuff is sort of the sexy stuff that gets all the headlines. Event-driven Ansible is what's really going to help operationally, today, in our customers' environments.
And really, that idea of connecting your observability platforms, whether it's for your application or your network or your infrastructure, connecting that directly to automation in a way that, again, superpowers your existing teams, as Rich talked about, amplifying their ability and allowing them to take the mundane and the repetitive off their plate by having that automation remediate or take an action automatically, and freeing them up to go do innovation, which is what Matt talked about on the main stage today: focus on what's coming next that they don't know yet. So we're super excited about plugging this into our customers' environments. I mean, it is such an easy plug-in. As you say, it connects to all these different streams, whether they're Kafka streams or whether it's connected right to proprietary products like Dynatrace, Instana, or the other ones. It's coming out of the box with this whole ecosystem built around it, so our customers can take advantage of it right away. And I understand you have been testing this in IBM's IT operations. What kind of results have you seen there? Yes, the office of the CIO, I think that's what they're called, and there's a large number of them. This is the job they do on a day-to-day basis. They've been off doing this testing. They've been trying to prove, how does this give me efficiency? How does this take these things out? But the thing that's been nice about EDA is, whenever we talk to people that have, you know, 20 years of experience, right? I tried to do this in 2006, right? As a lowly engineer, I thought I had a great idea with whatever tools I had at the time, and they weren't consistent, and all the seniors just said, no, you can't do that. No, it's not possible. And now those same people are sat there, and we start talking about EDA, and suddenly their interest perks up.
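For readers new to event-driven Ansible, the "connect a stream to an action" idea described here is expressed as a rulebook: an event source, a condition, and an action. A minimal sketch using the Kafka event source from the ansible.eda collection; the broker address, topic, event field, and playbook path are assumptions:

```yaml
---
# Hypothetical rulebook: watch a Kafka topic of monitoring alerts
# and run a remediation playbook when a critical event arrives.
- name: Remediate critical alerts from the observability stream
  hosts: all
  sources:
    - ansible.eda.kafka:
        host: broker.example.com
        port: 9092
        topic: monitoring-alerts
  rules:
    - name: Restart the affected service on a critical alert
      condition: event.body.severity == "critical"
      action:
        run_playbook:
          name: playbooks/restart_service.yml
```

Launched with something like `ansible-rulebook --rulebook remediate.yml -i inventory`, the engine sits listening; when an event matches the condition, the remediation runs without a human on a bridge call, which is exactly the day-two cost Tom is describing.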
You know, it's not just talking about taking a bunch of scripts and converting them into Ansible playbooks. It's like, no, you mean I can trigger that from my system over there, and that network device can do this, and you see their brain spin up, and suddenly they start thinking. Oh, it's them solving their problems, it's them solving their challenges. It's not us suggesting it to them. We're just extending the capability that they've got. And that's been very organic, right? I mean, I think when we launched, five days later, the first plug-in turned up from one of our partners. Completely unsolicited, right? That was a nice surprise that we'd hoped for, but not quite expected. It's an interesting trend, Paul. We've been hearing about observability connecting into other things. We were at Open Source Summit in Vancouver. One of the hot conversations was software supply chain, SBOM, software bills of materials. And they were obsolete the minute they were built, because they weren't plugged into the observability. So this dynamic of observability data, I mean, isn't AI ultimately a data opportunity and challenge? 100%. I think you can see down the road, as we connect the generative AI stuff that we've done today with EDA, with event-driven Ansible, you start having content being dynamically created based on an event structure or a pattern, rather than even having to do task automation. We're not there today, but you can see a point not that far down the road where you start to see playbooks being recommended by AI based on an event type that may not even exist yet. And that's the conversation we've been having with our observability partners, and they got really excited by this and gave us some really interesting data. Like, one of them said, 70% of all changes are what cause outages.
Sorry, 70% of all outages are caused by changes. That's the right way around. AI would have made that sentence grammatically correct for me. If we had access to the change record, not only would we be able to observe the systems, we'd also be able to see what change happened against them. Well, if Ansible is the change system of record and they're the observability system of record, when we can start sending data back to them, which is our next level of development, they can start doing that correlation. So their AI capabilities around ML and doing all of that correlation of that data, that becomes an important aspect that they can focus on, and we're a source of that information to them. But we're also a source of action for them when they want to decide to take some course of action, as Tom said, with the playbook that's generated. You know, it's interesting, I've been trying to figure out a good mental model for AI, and humans plus AI makes us better than AI by itself. That's what we said earlier. I talked to a friend who's into chess. And he said there are more grandmasters in chess when they allow augmented AI, humans playing against humans with AI. So machine against machine, we know what that looks like. A person against a machine, we know what that looks like. But a player with AI? People who weren't competitive by themselves become more competitive with AI, which brings up the point: chess is kind of declarative, you know, you mentioned that. This is the value, it's the humans. This whole job loss thing is kind of a fake story. That's just fear-mongering. But there is a real human opportunity here to leverage the AI. What's your philosophy on that human role with AI? How do people harness it? Because they use words like training data.
I think of a pet when I hear training. It's like, I train my dog. So is AI going to be a pet? I mean, people have to embrace AI. Some people are a little bit skeptical. Some are jumping all in. Yeah. Look, let me make a point on that, because it's one of the areas that we focus on, which is obviously our customer base: enterprises, government agencies that are really cautious and careful about making sure that what they're doing in their environment is not going to put them at risk of some sort of issue, let's put it that way. And so the idea here that we give them enterprise guardrails to guide how their users use AI with Ansible in their environment is a very powerful message. Me just entering a question into an AI platform, if someone's listening to that question, they can immediately see what areas I'm working on. If I enter some question about configuring firewalls, they know I'm working on something with firewalls. That's already a pattern that someone could observe and say, hmm, Tom's bank is trying to reconfigure firewalls. Maybe that's somewhere I ought to go exploit. So our ability to deliver AI in a purposeful way to the enterprise, and to have them implement policies on how it's used and consumed, I think starts to address a lot of that human fear of what's going to happen in the environment. It was the same argument for automation in 2012, 2013, when we first started getting into it, when automation first became possible at a scale beyond what was available before. And everybody was hesitant to say, why would I do that? That will automate me out of a job. And those things didn't happen either. We're busier than we've ever been. There's more complexity than there's ever been. We're doing more work. There's more work to do than we've ever had to do.
And at the moment, we're still churning manually. We see how much burnout there is; it's a new trend from a wording perspective, but we've all known about the stress and everything that comes with it. So why wouldn't we use it to alleviate some of those things? And I understand there's a negative side for the people who are going to go to that place, but we also have to remember the positive that will come. Well, they were negative on the web too. Oh, that's just a toy. It's so slow. Yeah, well, it's not the real internet. What's the next frontier for AI as it applies to Ansible? I think what's going to get really interesting is when we move to bring EDA into the AI space. And then, you know, automation has to be deterministic. So, as I said before, we have to have that ability to say it's the same piece of automation once it gets run time and time again. Once I know what automation I've got and how many of them I have, then I can have the AI also make suggestions about which bit to run, which I think ties into the observability. It ties into that ability to say, I can react quicker, right? I may not react the entire way. I may just be able to react quicker than I could before. That mean time to recovery. I was talking to a customer this morning who said, you know, it takes six hours for us to resolve serious incident calls. Well, make it three hours, two hours, one hour. Maybe not even having to have the serious incident call, because we knew what we were doing. I think when we can combine those two things together further down the path, that's going to give us a really interesting opportunity. Tom, Richard, thanks for coming on theCUBE, and we should do something after the event. Get more content from Ansible's community out there. Maybe do it asynchronously on theCUBE remotely. But I really appreciate you guys coming on. I'll give you the final word.
Speak to the folks out there in your Ansible community about AnsibleFest this year. It's part of Red Hat Summit, so it's going on here, for the folks that didn't make it or didn't know about it. Give them a message about what's happening and what they can do to connect. Yeah, no problem. So AnsibleFest at Summit this year has been a great opportunity for us to bring in more people, to expose Ansible to more people. We had a contributor summit, a contributor day, the community day, on Monday, and it was triple the size it would normally be at a regular AnsibleFest. So this opportunity to bring more and more people into our community is a great thing. And as usual, we always invite people in the community to get involved in the new Ansible projects we're working on. Ansible Lightspeed, event-driven Ansible, those community projects are growing by leaps and bounds right now. So we welcome everybody to get involved. Great, congratulations on your success, continued success, and looking forward to chatting more later. Thanks, John, thanks very much. I'm John Furrier with Paul Gillin, theCUBE's coverage, day one of two days of coverage of Red Hat Summit and AnsibleFest here in Boston, Massachusetts. We'll be right back with our next guest after this short break.