Hi, this is your host, Swapnil Bhartiya. Welcome to TFiR Let's Talk. Today we have with us, once again, Arun Gupta, vice president and general manager of Open Ecosystem at Intel. Arun, it's great to have you back on the show.

I'm really happy to be here.

It's my pleasure to host you again. We saw each other at KubeCon, and during KubeCon, no matter whether I was talking to a vendor or an ecosystem player, GenAI was a big topic. Even outside of KubeCon, we saw all that OpenAI drama, which unfolded and folded itself. A lot of the discussion is around GenAI and LLMs. I want to hear from you, first of all, what kind of discussions you saw at KubeCon, and then we'll talk about what role Intel is playing in this space.

If you remember, Priyanka's keynote at KubeCon started with the idea that Cloud Native is the scaffolding that really allows GenAI to launch successfully. And the reason that note was struck is because I remember, at the TED AI conference that happened a couple of months ago, I was leading a hackathon, and we were asking people, hey, how are you running your AI workloads? Are you running them on-prem? And how are you running them on-prem, Kubernetes? And they were telling us, what do you mean? How would I run it otherwise, without Kubernetes? And that was sort of the aha moment: Cloud Native really enabling these workloads is sort of the Linux moment of Kubernetes, that this is the de facto, default compute layer. You know, all that auto scaling, all that recovery of pods, et cetera, nobody has to worry about it. It's available in all the hyperscalers, all of the good things. And that's what allows data scientists to really focus on their core competency of building those models, while somebody else takes care of converting those models into containers, running them, scaling them, all the MLOps, all the good things.
So that was one part of the discussion that happened heavily. The other part was that there is an AI/ML working group in CNCF, and a lot of good things came out of that discussion as well. There were constant discussions around what Cloud Native brings to AI developers and what AI developers are looking for in Cloud Native. So a lot of deep conversations are happening on those sides. And again, it's early stages, rapidly moving, lots of good stuff happening. And then within the board itself, we had a strategy meeting where I led a session along with Taylor Dolezal, who leads the end user working group. There we had a discussion around what Cloud Native can offer to AI developers, and what we have to offer the data scientists building these models.

And again, if we think in terms of the audience, there are people who are training, there are people who are fine-tuning, and then there are people who are inferencing. If you look at those three levels and the number of developers at each layer, it's a pyramid: narrow at the top, wide at the bottom. Training is a handful of big companies, because when you're building these foundation models, only a handful of companies have that level of resources. Fine-tuning is a bit more, but RAG, fine-tuning, and inferencing together are a lot, lot more. So those were the discussions that happened: how could Cloud Native really support that audience?

Since we were talking about KubeCon, CNCF, and the Linux Foundation: I was at the Open Source Summit in Spain, but I was not at Tokyo; you were there. And now Brian Behlendorf is leading as an AI strategist. So can you also talk about what GenAI means from the perspective of open and open source?

Multiple ways.
I mean, first of all, I was at the Open Source Summit Tokyo a couple of days ago. Really good discussions around AI, and I was part of OpenSSF Day, the Open Source Security Foundation day. The discussion was, what does it mean to secure AI? What does it mean to have confidential AI? Those were some of the discussions that were happening. And Sandra Radesky, who is an Associate Director at CISA, was there; she is the lead for vulnerability management, and she was talking about how we manage those vulnerabilities. Those vulnerabilities become critical given that somebody is going to be relying upon that information in a very serious manner: doctors and lawyers and all kinds of people are relying upon that information. So that was a part of the discussion that was very good. So I think that's where a bit of the discussion is happening.

Then, from a broader LF perspective, if you think about it, there is the PyTorch Foundation, and there is the LF AI & Data Foundation. Those are sort of the two primary places where a lot of the AI-related work is happening. In LF AI & Data, they have a few dozen projects where the innovation is happening. PyTorch, of course, is part of the PyTorch Foundation, and that transition from Meta over to the LF is happening so that it truly becomes community governance, essentially allowing customers to participate in it more actively.

From Intel's perspective, we are a governing board member for both PyTorch and LF AI & Data. And if you think about it, the governing board is mostly responsible for the administrative and financial elements, but Intel has a long legacy of contributing to open source projects. We contribute to 300-plus open source projects, and the primary reason we contribute is because our customers consume our product, the silicon, whether it's in a hyperscaler or a network or an edge device or a client, by using these open source projects.
So a customer goes to Best Buy, they pick up the latest Dell or Lenovo or whatever laptop, they download the latest upstream PyTorch, and if the laptop is based on our latest chip, they expect PyTorch to run seamlessly on it. That's the reason Intel contributes to PyTorch and TensorFlow and scikit-learn and a large variety of projects: so that when customers download these projects on a brand-new laptop, or try to run them in a hyperscaler, say an AWS C7i or M7i instance, it just works out of the box. That's been Intel's philosophy all along. We are the number three contributor to PyTorch and TensorFlow for exactly that reason.

And one of the joys of contributing to PyTorch is that, in addition to Meta, Intel is the only maintainer on the CPU module. What that means is that Intel continues to contribute patches that make Intel CPUs work best for PyTorch, but open source is not a very selfish world; you've got to help others. Being a maintainer means we get to review requests from Arm, AMD, and other CPU vendors, and we and Meta are the only maintainers there who can merge those requests. So it really is rising-tide-lifts-all-boats kind of work: lifting the entire industry, leveraging the thought leadership that we have, making that GenAI platform stronger and better for everybody, creating a more equitable place.

Since we are talking about AI and GenAI, I do want to ask this question. There is a lot of concern about the safety of AI and the misuse of AI. In most cases, we see a lot of effort in Europe; because I used to live in Europe, I know they have a totally different approach. I mean, the CRA is putting a lot of burden on developers. Here, a few weeks or a month ago, the Biden administration came up with the executive order on AI; we did a show with Brian Behlendorf to understand it. From your perspective, what are the threats that you see, threats or not?
You know, I'm more worried about people than AI, but realistically, yes, we do need regulations so that organizations can safely consume AI. There's a podcast called Hard Fork, by The New York Times, that I love listening to. They release every Friday, and I listened to this morning's episode on my run. They were talking about how, if you look at Google, Google is now powered by Bard, and Bard is now powered by Gemini, which is sort of their latest LLM that they have released. Now, LLMs are all known to hallucinate. They found that all LLMs have flaws, but for Gemini in particular, they were talking about how serious the hallucinations are. Now imagine people doing a search on Google and then, as they're doing their research, depending a lot upon that research being authentic, being true. So really cleaning up those hallucinations is important. So that whole element of responsible AI, ethical AI, having those regulations, the US AI Safety Institute that the executive order talked about, where all the big companies building these large foundation models need to report their pen testing, red teaming, purple teaming, blue teaming, all of those results, to the government, is super important. And how are you testing it?

And then, on Gemini particularly: within Gemini, Google has, I believe, three different models, Ultra, Pro, and Nano, and they're saying they don't want to release Ultra yet. They were talking about how these models are scored on something called MMLU, Massive Multitask Language Understanding, a benchmark, and Ultra is able to score about 90% on it, while humans score about 88, 89%. So it's actually scoring better than humans. So the criticality of responsibility and ethics is that much more important. And in that sense, we're very excited.
For example, a couple of days ago, working with IBM, Meta, and a large set of companies, we announced this new AI Alliance, where we are saying, hey, these large foundation models need to be done in open source, because open source is what builds transparency, is what builds trust, is what brings a diverse set of people together so that, together, they can make sure a model is not doing something it's not supposed to, as opposed to being inside the confines of walled gardens, which OpenAI is today.

When it comes to AI and generative AI, talking about open source is not as easy as talking about the LAMP stack, where, hey, these are the four components and they can all be open source. So how complicated is talking about open source in the context of AI and GenAI? Also, we can throw the word open everywhere, but open doesn't necessarily mean open source. At the same time, there's nothing wrong with having an open ecosystem either. So can you talk about that?

When we think about open source, there is a simple definition, right? Open source is four degrees of freedom: how you use it, how you study it, how you modify it, how you distribute it. When we talk about open AI, not OpenAI the technology company, but openness in AI, it's a spectrum. There is the source as a part of it, then there is the data, then there is the model, then there are the weights, then there is the training. How are you putting all this together? And then there's an API part of it. So think about it: the only thing open in OpenAI, the company, is the API. Everything else about it is closed.

With the most recent announcement of Purple Llama by Meta that came out yesterday, which is pretty brand new, and the AI Alliance that was announced a couple of days ago, again pretty brand new, I think the impetus is that most people will not have the resources to create these large foundation models.
These large foundation models are primarily going to be created by large enterprises, and that's where the relevance of the AI Alliance is that much more important: we're going to work together, we're going to create those open standards, we're going to build that transparency, we're going to build that trust with the customer, we're going to put the right governance in there. So that's part of it.

The other part is that we are working with OSI, the Open Source Initiative, which provides the definition of what open source is, right? I had a discussion with Stefano Maffulli, who leads the OSI, about how they are defining open source AI. And at least in 0.0.3, a very early version of how they are trying to define it, it really is going to be more like a spectrum: yeah, our source is available; yeah, our model is available; the model is available and you can fine-tune it, but we're not going to share the weights or how we have done the training around it. So I think it's really going to be a spectrum across the board. I don't know what the future holds, but there could be levels of openness. And personally, how I see this is: as long as we are clear on what those levels of openness are, then when a model comes out, you could say, oh, I'm level one open, or level two open, as opposed to just slapping "open" on top of it and customers getting confused trying to figure out what that open means. I think that's going to be the key. It's all about communication, setting the expectations right, and adhering to a common, agreed-upon standard, as opposed to coming up with your own standard every time.

Earlier, when we started talking about AI and Kubernetes, you said that Kubernetes is at the same level of technology as the Linux kernel. And it's not that we are looking for, hey, what's next? No, the Linux kernel has become the foundation of our technologies, and in the same way, Kubernetes is becoming that.
Now, when we talk about generative AI, do you see that it is at the same scale, where we can say, hey, this is the same kind of thing as the kernel or Kubernetes? Or do you see these as blips on the radar, with the big thing still to come? What I'm trying to get at is: AI has been around for so long, but the interest has been rekindled with generative AI, you know, ChatGPT. So I want to understand it not just from the perspective of hype or chatbots, but as a technology that will transform industries.

I think that is already happening, if you think about it, right? Pretty much all the boardroom discussions, startups across every industry: medical, tech, legal, transport, the movie industry; there was a strike by the writers in Hollywood. So all across the board, people are feeling threatened. I would say there are two camps: one that is feeling threatened, and the other that is embracing it. And I'm more on the line that, you know what, technology is here to make our jobs easier. It's not going away. It's such a pervasive part of what we are doing, and of how we are using it, that it's better to embrace it, recognize how we can be a bit more effective, and define the right guardrails that allow us to continue operating in a safe manner. I think that's the critical element we've got to think about. As for how pervasive GenAI is going to be, it's too early to call it a Linux moment yet, I think, but it is definitely making things a lot more effective.

And since you're talking about the two camps, one that is worried and one that is embracing it: of course, even silently, most people are embracing it, even those who are apprehensive about it. We do hear a lot about how AI is going to replace all the jobs, but the reality is different.
The invention of the wheel did take away the jobs of the people who used to carry stuff on their backs, but it also created the whole automobile industry, you know, EVs, airplanes. So one technology may take away some jobs, but it will create many more. Photoshop is a good example: it did not eliminate the job of the photographer; it actually enabled more people to create a piece of work. So when it comes to GenAI and AI, and the whole threat of it taking over all the jobs, what do you see from your perspective as what is happening in reality?

I have been around the industry for a while. You know, I grew up in India, and I remember, gosh, back in the 1986 timeframe, when the then prime minister of India was trying to bring computers into India, people were like, oh, computers are going to take our jobs away. Well, they automated the menial tasks. The workers of the Indian industry just had to retune themselves, relearn, and be more efficient. I've seen that story play out multiple times: when computers came, when the internet came, and now GenAI is coming. So there is always that fear, that inertia, or that innovator's dilemma, right? I know this thing really well, and now they're asking me to change; I'm not going to change, because I don't know how good I'm going to be in that new area. But it's going to impact every industry. Ready or not, here it comes, whether you are a writer, a song producer, or a developer.

This morning I had a webinar with a friend of mine, and in about 40 minutes we built a ChatGPT clone from the ground up. He was talking about how fast the technology is evolving. He was saying that about six or nine months ago, when he was trying to build that, it would have required him to write almost 2,000-plus lines of code. Now he could build that ChatGPT clone in 60 lines of code.
So it is evolving rapidly, and the people who embrace it faster, quicker, and more effectively are going to be the winners, and they are the ones who are going to make it more successful. So my recommendation, my request to people, would be: it is here to stay; embrace it, see how you can make the best use of it, and make it work for you.

Open source, GenAI, and Intel. You folks have been doing open source for a long time; it's kind of part of the DNA, but Intel is a hardware company, so it becomes a bit tricky and mixed there. And looking forward, the world is changing. We don't live in the same world of big, giant data centers. We are living in the cloud, but we're also looking at edge computing, which is not just tiny IoT devices but more like smaller data centers near users, with the whole 5G network and private 5G networks, you know, when the government also released some spectrum for democratized 5G. We are talking about 6G now. Talk about the role of hardware, open source, GenAI, and Intel in the world that we are going to live in.

Intel's approach for AI in general is bringing AI everywhere. And, you know, with 100 million Xeon instances installed around the world, that's sort of the power that we bring across the board. We look at it as four different layers, essentially. One, there is the AI-specific layer: how do we accelerate that workload? That is sort of the foundational silicon hardware layer. We have, of course, our Xeons. We've got the Intel Arc GPUs. We've got Intel Gaudi. We've got our GPUs all over the place. We've got oneAPI. We recently launched the Unified Acceleration Foundation, which is several companies coming together to provide a unified layer; oneAPI was contributed to UXL, really providing a unified API which is vendor agnostic, because we believe in an open, equitable, horizontal ecosystem. So that's one.
The second layer on top of that is the cloud: how do we simplify the AI infrastructure, whether it's data center AI systems or the AI PC that was announced earlier this year? How do we make sure we can give solutions to customers on the cloud side or the client side? So end to end, and in between, of course, we have the network edge and everything in place as well. On top of that comes streamlining the AI workflow: how do we make sure the training and the fine-tuning and the inference and the deployment can happen in a streamlined manner, and how do we keep it very open, productive, and accessible? And then finally, on top of that, is how we unlock that AI continuum, right? We work with our different verticals, whether it is FSI, healthcare, retail, or automotive. We work with this wide range of partners and customers to make sure that they understand the advantage of Intel and leverage it. So it's a multi-layered approach, which we are very excited about, and I think Intel is in a unique position, all the way from the silicon to the application, to provide for our customers and have them be successful.

The last question before we wrap this up goes back to the people question: we talk about cultural change, cultural shift. We started talking about the whole DevOps, shift-left movement. When it comes to embracing AI and GenAI, first of all, if you look at expertise, data scientists, we don't have that many people there. We have to lower the barrier of entry. Of course, there are a lot of tools so that developers don't get overwhelmed. I mean, Kubernetes, as we were saying initially, is itself very complicated, so we are trying to make it easy, but that complexity is not going to go away.
So from an AI and GenAI perspective, we can look at either the ecosystem's efforts or Intel's efforts to lower the barrier of entry, so that more and more customers can start leveraging these technologies instead of having to worry about making big investments in them. That's going to be really the key here. You know, you could build a rocket, but if it requires a rocket scientist to operate it, only a handful of people can fly it. But if you build a rocket where you can say, click here and go to Mars, a lot more people will go to Mars. So that's really how Intel has been thinking: how do we lower the barrier? What is it that we need to do in our hardware?

Let me give a simple example. In PyTorch, when we contribute, we don't want to introduce anything that is cryptic. Because people are building a lot of GenAI models, and PyTorch is the foundation layer for that, when we are contributing to PyTorch, we make sure that we are using the natural programming model of PyTorch. That way, the developers who are building these GenAI applications can use the natural data types, and it's very intuitive for them to build their applications. And when you're using the natural data types, it automatically unlocks the underlying accelerator for you; you don't have to do anything different for that. So in that sense, again, working with our partners, the idea is: how do we simplify it? How do we create an AI kit which gives you a simple blueprint to get started building those GenAI applications? So really, looking at the hardware layer, the software layer in between, and building those blueprints at the top with our partners and customers, asking what we can do to lower the barrier to entry so that more and more people can embrace this, is completely aligned with Intel's mission of bringing AI everywhere.
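To make that "natural data types" point concrete, here is a minimal sketch (my illustration, not Intel's actual patches) of what coding against PyTorch's standard programming model looks like. It uses only the stock `torch.autocast` API: the developer writes ordinary eager-mode PyTorch, and whatever accelerated low-precision kernels the installed backend provides (for example, bfloat16 paths on recent CPUs) are selected automatically.

```python
import torch

def matmul_bf16(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # The standard autocast context asks PyTorch to run eligible ops
    # (matmul among them) in bfloat16. If the underlying hardware has
    # accelerated bf16 kernels, they are used; the calling code does
    # not change, and no vendor-specific extension is imported.
    with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        return a @ b

a = torch.randn(4, 8)
b = torch.randn(8, 3)
out = matmul_bf16(a, b)
print(out.shape)  # torch.Size([4, 3])
print(out.dtype)  # torch.bfloat16
```

The point is the absence of anything cryptic: the same few lines run on any backend, which is the "natural programming model" being described.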
Thank you so much for taking time out today to talk about this larger, evolving journey of AI and GenAI. As you said, we don't know yet whether it's at the same scale as the Linux kernel or Kubernetes. We will see; we'll know soon, but the hype is real. Thank you for all those great insights, and I would love to check in with you again.

Thank you. Thank you so much for having me here.