Welcome back to theCUBE, our end-to-end coverage of Red Hat Summit and AnsibleFest here in Boston, Massachusetts. I'm Paul Gillin, here with Rob Strechay, and we're about to talk about one of my favorite topics, which is emerging technologies. With us is Erin Boyd, who is Red Hat's director of emerging technologies, distinguished engineer, software engineer, full-time geek. A pleasure to have you with us on theCUBE.

Thanks for having me.

Let's start by talking about what a distinguished engineer does.

Sure. So we have several distinguished engineers, obviously, at Red Hat, and within the office of the CTO we even have a distinguished engineer vitality program. Basically, a distinguished engineer is able both to see the 10,000-foot view and to go very deep on subjects. So we help contribute to portfolio architecture, different designs, talking with customers, and especially innovation, which is what we do in emerging tech.

Emerging technologies covers a very broad swath of topics. How do you acquire the expertise you need to dive deep into so many different areas?

Yeah, so surprisingly enough, we have very small teams dedicated to about eight different subjects, anywhere from computational infrastructure to security, to edge, sustainability, and AI. The idea is that we can work with upstream communities, customers, or even Red Hat Research to look at new ways to solve customer problems using open source technologies. Some companies have emerging technologies groups that are more like a research skunkworks, where none of the work ever sees product. Really, the point of what we do is that we're innovating within that space and lowering the risk for our customers by co-creating with them and then bringing those features into product in a rapid way.

So that's really working with those open source communities and helping them mature what they're doing?

Absolutely, absolutely. One of the technologies we recently graduated into product is Sigstore. You've probably heard of it. It's now the gold standard for container signing, and it came directly out of emerging technologies, from seeing what our customers were struggling with. We understand the security space, and so we helped grow and mature that project into a very large community to help solve one of those problems.

And I think they were talking earlier about Ansible, Backstage, and the Developer Hub. Is that another one that came through emerging technologies as well?

Yeah, Backstage was incubated in our office. It was an experiment looking at how we do open services. Services within the large cloud providers are mostly private; their secret sauce is how they do support. Red Hat would really love to crack that open and figure out how we, as a community, can solve those problems more collaboratively. So Backstage was one of the tools we were using to experiment with how this could work, how it could integrate into OpenShift, and how we could expand it out into what our customers need. A couple of people from our team contributed to the plugins that are being GA'd next month.

Now, unlike most technology companies, everything that you do ultimately is released as open source, correct?

Yes, absolutely.
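To make the container-signing point above concrete, here is a minimal sketch of the Sigstore workflow using the cosign CLI, driven from Python. It is illustrative only: it assumes cosign is installed on the PATH, uses key-based signing rather than keyless signing, and the image reference is a placeholder you would replace with an image you can push to.

```python
# Hypothetical sketch of signing and verifying a container image with Sigstore's
# cosign CLI. Assumes cosign is installed and IMAGE points at a registry you control.
import subprocess

IMAGE = "quay.io/example/my-app:1.0"  # placeholder image reference

def run(cmd):
    """Run one cosign command and raise if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Generate a key pair (cosign.key / cosign.pub in the current directory).
#    cosign prompts for a key password; COSIGN_PASSWORD can be set to avoid the prompt.
run(["cosign", "generate-key-pair"])

# 2. Sign the image; the signature is stored alongside it in the registry.
run(["cosign", "sign", "--key", "cosign.key", IMAGE])

# 3. Verify the signature before deploying.
run(["cosign", "verify", "--key", "cosign.pub", IMAGE])
```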
So what variables or complexities does that introduce to working with those different open source projects and those independent developers within the context of a Red Hat organization?

Yeah, so I always like to think of my team leads and the people who work in emerging tech as enterprise entrepreneurs. They're going out, learning about the technology, thinking of new ideas, and possibly creating new communities. But the point is that it has to be grounded in reality. So we also work with our customer open innovation team to understand the market, understand the opportunities, and really understand customer use cases. The balance is: is this interesting? Are we solving real problems? Is this something customers are going to want? And we come forward with a business case for how we then integrate it into products where it makes sense.

How do you bubble up from customers the projects that merit your attention and get the resources to be developed?

We prioritize like every other engineering group, based on critical need. So it really is important to us that we get feedback from our customers so we understand: is this an edge case that a single customer has, or is there a much broader market or problem to solve where we can help a lot of customers? As I said, we run pretty lean, we're less than 50 people, but we have a pretty fast velocity in what we can turn out as a POC with a customer to validate and then tech-transfer into production, which goes into product.

And what's the life cycle of that, from idea on? Is it a committee, does it come in through a customer, or are you more forward-looking? How does the next thing you're looking at come to your office?

Yeah, great question. Really anywhere. We're looking 12 to 18 months out, so we're more focused on long-term strategy, on technologies that are going to solve what's on the roadmap beyond what's in the current product portfolio. Ideas might stem from a conversation at KubeCon, from a community meetup, or from a couple of customer conversations, but we do a lot of experimentation to vet whether or not it's a viable technology that we should be using.

Let's take a current example. There's craziness over generative AI and ChatGPT and the like right now, and that has raised awareness of AI in general and the buzz level about AI. How do you penetrate that and decide what really makes sense to turn into an actual product?

Right. And I think people should be asking those questions. From our perspective, we started looking at generative AI late last year, especially with our partnership with IBM and IBM Watson and the capabilities there with Ansible Lightspeed. First of all, to understand: is this something that's on a hype cycle, as you see within emerging tech, or is this something that can really be viable for our customers, and in what sense? So we've taken a more conservative approach, looking at how this can augment and make more efficient what developers or IT operations already do, in a way that can be verified. I think Ansible has taken a great approach to doing that, making sure that as you use the model it's retraining itself, taking feedback from the user on whether or not a suggestion was useful and putting it back through. And I think this integration of AI into IT should be done hand in hand with the people who work in those jobs, to really make it smarter and very specific to each of the things it's trying to accomplish.
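The feedback loop described here, where a user's accept/reject decision on a generated suggestion is captured and fed back into later training, can be sketched generically. The code below is not the Ansible Lightspeed API; it is a hypothetical, minimal illustration of the human-in-the-loop pattern, with all names and the JSONL file invented for the example.

```python
# Hypothetical sketch of the feedback pattern described above: record whether a
# generated suggestion was accepted so the data can feed a later fine-tuning run.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class SuggestionFeedback:
    prompt: str                    # what the user asked for
    suggestion: str                # what the model produced
    accepted: bool                 # did the user keep it?
    edited_to: str | None = None   # the final text if the user changed it

def record_feedback(fb: SuggestionFeedback, path: str = "feedback.jsonl") -> None:
    """Append one feedback record to a JSONL file for a future training pass."""
    record = {"ts": datetime.now(timezone.utc).isoformat(), **asdict(fb)}
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: the user rejected the generated task and wrote their own.
record_feedback(SuggestionFeedback(
    prompt="install and start nginx",
    suggestion="- name: Install nginx\n  ansible.builtin.yum:\n    name: nginx",
    accepted=False,
    edited_to="- name: Install nginx\n  ansible.builtin.dnf:\n    name: nginx\n    state: present",
))
```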
I think that was a really good point in what Chris was talking about: from a model perspective, being focused versus being very large and general. And is that something that incubated within your group and moved into Ansible? I think you mentioned IBM Research. How did that come about?

We've worked with IBM Research in the AI space, most recently with CodeFlare and Ray integration into OpenShift, but the other piece we're actually looking at right now in emerging tech is small models. How do we actually run these models where the data is being captured? How do we run them on the edge? The holy grail: how do we run them on a CPU? So while there is a use case for these LLMs, we really believe there's also a use case for inference and AI at the edge. And the life cycle of that, to your point, is: how are we protecting the data? What is the provenance? How are we running it there without having to move data around, and with the network costs considered? So looking towards the future, we think models will actually shrink from here on out, so that they can run on more commodity-level hardware at the edge and take advantage of what 5G has provided us.

That actually hits on something very interesting, because you mentioned that one of the pillars is sustainability. We were at Open Source Summit in Vancouver a couple of weeks back, and one of the topics we broached was the discussion around energy and open source and LF Energy, and I know Red Hat is contributing back into that. From a sustainability perspective, the models being big definitely causes problems; we're hearing it's $700,000 a day to run ChatGPT and things like that. So is that something you're focusing on when you say you're looking at how to bring down the size of the models and move towards inference?

Absolutely. One of the projects we started in emerging tech is called Kepler. We've recently contributed it to the CNCF sandbox, and Huamin Chen in my office is one of the people behind it. The idea was that we need to be measuring how much energy we're using. Just like the original metrics around ChatGPT took a long time to come out showing how expensive it actually was, behind that expense is a lot of energy being used. So we're really taking into consideration how we standardize the reporting of these metrics down to the kernel level, and then how we measure that, improve it, and schedule in a way that makes better use of the hardware. What we hope is that maybe you have a small model that doesn't need a GPU, and this goes into our computational infrastructure work, where the data center is reimagined. You then have pools of resources instead of having to tie yourself to a specific server to run on a specific GPU. Instead, let's look at the workload, let's look at the possibilities of what the infrastructure provides, and Kepler has some insight into that. It uses machine-learning models right now to be able to inform the scheduling, and it will expand beyond that.
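As a rough illustration of the kind of energy reporting Kepler enables, the sketch below queries a Prometheus endpoint for per-namespace container energy. The Prometheus URL is a placeholder, and the metric and label names (`kepler_container_joules_total`, `container_namespace`) are assumptions based on Kepler's naming conventions; check what your deployed version actually exports.

```python
# Rough sketch, assuming Kepler is deployed and scraped by a Prometheus instance
# reachable at PROM_URL. Metric and label names are assumptions; verify them
# against the metrics your Kepler version exports.
import requests

PROM_URL = "http://prometheus.example.local:9090"  # placeholder endpoint

# Joules attributed to each container, summed per namespace.
query = "sum by (container_namespace) (kepler_container_joules_total)"

resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": query}, timeout=10)
resp.raise_for_status()

for result in resp.json()["data"]["result"]:
    ns = result["metric"].get("container_namespace", "<unknown>")
    joules = float(result["value"][1])
    print(f"{ns}: {joules:.1f} J")
```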
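The earlier point about running small models where the data is captured, on the edge and even on a CPU, can be illustrated with a minimal CPU-only inference sketch using ONNX Runtime. The model path and input shape below are placeholders; the relevant detail is restricting execution to the CPU provider so no GPU is needed on the device.

```python
# Minimal sketch of CPU-only inference with a small model at the edge.
# Assumes an ONNX model file exists locally; path and input shape are placeholders.
# pip install onnxruntime numpy
import numpy as np
import onnxruntime as ort

# Restrict execution to the CPU provider: no GPU required on the edge device.
session = ort.InferenceSession("small-model.onnx",
                               providers=["CPUExecutionProvider"])

input_meta = session.get_inputs()[0]
print("model expects:", input_meta.name, input_meta.shape)

# Fabricate an input of plausible shape for demonstration; real code would feed
# sensor or camera data captured on the device itself.
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_meta.name: dummy})
print("output shape:", outputs[0].shape)
```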
I want to go back to something you mentioned earlier: customer co-creation, which is a fairly powerful concept. How does that work?

The idea is that we have field chiefs, field architects who go into customers and talk about what their issues are. We start at a very remedial level: what are your pain points? Understanding the pain points of today also gives us some insight into what the possible pain points of the future could be. It's seeing around corners, as we say. So we know, for instance, that because of the chip shortage we're going to have to be smarter about the way we use energy, smarter about the way we build data centers, and we'll have to consider that there are going to be different architectures we may need to run on because of all those things. So it's seeing the problem, but then also seeing beyond it: because you're having this problem, and we go with this technology, what are the other things we see might happen? A great space is edge, where we've done MicroShift, now branded as Red Hat Device Edge. Looking ahead, what if we have millions of devices to manage, how do we do that? So co-creation with customers is about understanding how they will use this, how it affects their business, and what can lower their costs and make them run better. That co-creation around innovation comes from really deeply understanding what the customer is trying to achieve.

Do you have customers actually giving you feedback on prototypes that you create for them?

Absolutely, yes. We work in lockstep, working together, figuring things out, coming back to the drawing board if we need to. That's really the great story behind MicroShift: we worked very closely with Lockheed on that prototype, on the POC, up until it was productized as Red Hat Device Edge, and they gave us feedback every step of the way.

So give us a little scoop here, we've got only a couple of minutes left. What's in your labs right now that you think we might see as Red Hat products in the next year or two?

So Kepler, which is still new and isn't officially a product, I do think will become a product within the next year. I think we will have more solutions around AI at the edge; we don't have a name for it yet, but I can see us releasing something over the next year. And then, a year is a long time; there are a lot of things I'd like to see. I think we're going to have more collaboration with our customers in terms of infrastructure: how can we utilize infrastructure more efficiently, more sustainably? Can we leverage things like RISC-V? Can we do more things at the edge? I think we'll see a lot more growth in the computational infrastructure space, things like CXL.

Well, Erin Boyd, I can't wait to see what comes out of your office. Hopefully it will be here next year, and some of these dreams will come true. Thanks for sharing with us what you do and all the innovation under the covers at Red Hat.

Of course, thanks for having me.

Paul Gillin with Rob Strechay here at Red Hat Summit 2023. We'll be back right after a short break.