Welcome back everyone to theCUBE's live coverage here in Las Vegas. I'm John Furrier, your host of theCUBE, with Dave Vellante, my co-host and co-founder of theCUBE Research. Got two great guests here: Gavin Day, who's the executive vice president at SAS, and Chris Devise, general manager, Americas Technology Leadership, platforms, and ISVs at Intel, which makes all the great compute that we need. Chris, great to see you. Gavin, thanks for coming back on. Thank you. Great to be here, guys. Appreciate it. So, obviously, Bryan, your CTO, came on; we just chatted. You can't get enough of compute. It's like Star Trek: Scotty, I need more power from engineering. So compute is going to drive all the data analysis. We've seen that, by the way, from quantum on the high end and research down to CPUs, XPUs, GPUs; all kinds of different versions of compute are coming. Talk about your relationship with Intel. Why is it so important for SAS? Yeah, I mean, this relationship's been 25 years in the making for us, right? And it starts with co-engineering. It starts with joint customers. It starts with joint go-to-market. And from there, you mentioned that there's an absolute need for compute, but there's also an absolute need for performance. And then also, how are we going to help our customers? Because the cloud spend is running out the door right now, right? And now we're partnering with people like Intel to get efficiency and performance. And it's been a great ride. Chris, Intel's built on partnerships. Your entire business is about getting that technology in the hands of partners. Yeah, absolutely. That just sums up our relationship with SAS. Some of our biggest customers are common, and it's so much better when they can see the TCO opportunity of both of us working together, because they know it's co-engineered, and they know how to co-implement it and get the best and easiest result for the best TCO for their use.
You're one of the few companies that was founded before SAS. And you guys both were founded in the mainframe era, so x86 didn't exist when you guys were founded. So help us understand the roots of the partnership. It goes back 25 years. Take us back: where did it start, and where's it gone? Yeah, I can tell you from our perspective that our CEO, Jim Goodnight, is very interested in the performance of compute and always has been, and had relationships with executives at Intel, and through that, started talking about the needs that SAS had from a compute and a design perspective. And that led into, like I said, joint R&D, so designers from their side, engineers from our side, working together so that we can bring the best solution forward. And one stat that's pretty cool is that over this period of time, upwards of 90% of our customers are running SAS on Intel chips. So they're a huge market leader for us. What are some of the co-engineering things that you guys have done? Can you share a little bit about some of the benefits that come from that partnership on the SAS level? I mean, first of all, we're designing these offerings for our customers. And when we look at optimizing SAS specifically to run on Intel chips, we don't do that alone, right? And then we take that out a little bit further and optimize with our cloud partners to run our software in AWS or Azure on optimized Intel chips. And you're obviously servicing a lot of highly regulated industries, and I'm interested in whether there's anything you're doing specifically around security; are there any security features that are advantageous for SAS customers? Yeah, one of the capabilities we've worked on together is Trust Domain Extensions, or TDX. Just think of it simply: it's like putting a moat around your castle.
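The "moat around the castle" idea behind confidential computing features like TDX is that data is only ever in the clear inside the trust boundary. A toy sketch of that concept, purely illustrative; the class, the XOR "sealing," and the key handling here are invented for the example and have nothing to do with the real hardware mechanism, which isolates and encrypts guest memory:

```python
# Toy illustration of a trust-domain boundary: the key never leaves the
# domain object, and plaintext only exists inside run_inside(). This is a
# conceptual sketch, NOT how Intel TDX works and not real cryptography.
import hashlib

class ToyTrustDomain:
    def __init__(self, secret: bytes):
        self._key = secret  # stays inside the "castle"

    def seal(self, data: bytes) -> bytes:
        # XOR with a derived keystream -- toy only, not secure encryption
        stream = hashlib.sha256(self._key).digest()
        return bytes(b ^ stream[i % 32] for i, b in enumerate(data))

    def run_inside(self, sealed: bytes, fn):
        # plaintext exists only within this call, inside the boundary
        return fn(self.seal(sealed))  # XOR twice restores the plaintext

domain = ToyTrustDomain(b"tenant-key")
sealed = domain.seal(b"customer data")
# outside the moat, only ciphertext is visible
print(domain.run_inside(sealed, lambda p: p.decode()))  # customer data
```

The point of the pattern is the asymmetry: everything outside the boundary sees only sealed bytes, while workloads admitted inside operate on the real data.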
So it's that extra layer of security in the hardware, and SAS Viya is completely validated and optimized to use this capability where it's available, in the cloud or on-prem. You know, one of the things we always see at these events: we're in a market transition. Obviously, gen AI; everyone's seen that. And the GPUs, everyone is hoarding them; they're trying to go grab as many as they can. But in the old days, maybe just go back 10 years, the word one-off was a bad word. That's a one-off; we don't want to do a one-off. One-off meant it wasn't optimized, and even purpose-built was like, purpose-built better be good. Now we're in a world where everything's a one-off in generative AI. Technically, these things are being built out, new things are being generated, so that's a phenomenon. And then you have existing workloads that were known and deterministic before generative AI. So you now have this collision of deterministic workloads with non-deterministic features. Does that change the paradigm on how you handle where to put the workloads and what compute to assign to them? How do you think about that? Yeah, I think with some of the SAS Viya features that have just been announced, you can use all the standard great SAS features that have been around forever. And now generative AI has become the big workload everybody wants to look at. So we've worked together to fully optimize around technology we have in our Xeon chips called Advanced Matrix Extensions. It's basically a matrix multiplier, with a big cache in front of a matrix-multiplier function for each core. So think of it as your gen AI accelerator right next to each core, just like SAS is offering your traditional predictive analytics workloads using statistics right next to generative AI. Each of our cores has that capability in it.
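The matrix-multiplier-per-core idea Chris describes accelerates blocked (tiled) matrix multiplication, where work proceeds tile by tile so each tile stays hot in a small fast buffer. A rough pure-Python sketch of that access pattern, illustrative only and obviously nothing like the hardware implementation:

```python
# Blocked (tiled) matrix multiply -- the pattern that per-core matrix
# units like AMX accelerate in hardware. Illustrative pure Python.
def matmul_tiled(a, b, tile=2):
    n, k, m = len(a), len(b), len(b[0])
    c = [[0.0] * m for _ in range(n)]
    for i0 in range(0, n, tile):
        for j0 in range(0, m, tile):
            for k0 in range(0, k, tile):
                # multiply one pair of tiles, accumulate into output tile
                for i in range(i0, min(i0 + tile, n)):
                    for j in range(j0, min(j0 + tile, m)):
                        s = 0.0
                        for kk in range(k0, min(k0 + tile, k)):
                            s += a[i][kk] * b[kk][j]
                        c[i][j] += s
    return c

print(matmul_tiled([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19.0, 22.0], [43.0, 50.0]] -- same result as a plain matrix multiply
```

The result is identical to a naive multiply; the win in hardware comes from doing each tile's multiply-accumulate in dedicated registers close to the core, which is why it matters for the matrix-heavy inner loops of gen AI inference.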
So you get the best power, the best TCO, the best efficiency, but what it really means for the end user is: I don't care where my workload or my data is; if it's running on Intel chips, I can take advantage of it and work smoothly and easily. Okay, please. The workload, and where it runs, is an interesting conversation right now, because in the early days of cloud, lift and shift was the model: pick it up off-prem, move it into the cloud. All of a sudden that's not performant, it's costing customers too much, and now we're having very real conversations with our customers about where they're running what workloads and why. And I think part of that is absolutely due to the cloud spend that we're all seeing, right? I see that bill every month here at SAS, and I know what we spend across our cloud providers, and so CIOs want to be thoughtful now, right? And it's a huge part of our message to be able to go to them and say, this was engineered jointly between SAS and Intel, and these are the benefits that you're getting from it. I'm smiling because Dave and I tend to take on topics that are kind of cutting-edge on the front lines, but also boring topics that no one's talking about that are super important, like governance and TCO, right? You mentioned TCO multiple times. These are two very important concepts that are really front and center, because costs are everything if you don't have a TCO model that works. And by the way, half the workloads right now, we're seeing and reporting in our research, don't need the high-end gear, right? So can you guys talk about the TCO equation? And then there's a mix-and-match factor: if you need to throttle something up super high, you go with something there, but the general market, that's a bunch of one-offs; generative AI needs general-purpose computing because it's compute hungry for the SAS application. You guys crunch data, right?
So generally, I don't know if general purpose has become a bad word, but there's a market, and cost matters. Yeah, one of the things that we're working on is having SAS Viya be smart enough to guide our customers on where they should run certain workloads, because of either the economic or the time impact, and as we get better at that with our customers, we're seeing performance improve and TCO go down. TCO is important to us because SAS will use all the compute you want to give us. So we have customers saying, hey, I'm spending this much on SAS, and potentially a fraction of it is the licensing cost; the rest is all the power under the hood. So TCO is top of mind. What are some of the TCO variables? They're probably the same; energy's probably a big part. What does the TCO calculation look like? Because they're going to eat all the compute they can get, and they're always hungry; the appetite's huge. Yeah, so power, and time: how long does it take to run the model? That's where performance comes in. And then when you start looking at generative AI models, you need a different set of hardware to train a large-parameter multimodal model. Everybody thinks of ChatGPT; you need dedicated GPU accelerators or AI accelerators to do that. But for most enterprises running AI workloads, if you're doing generative AI, the compute requirements become much smaller. A lot of people are talking small language models, or using RAG or all these other techniques to take their own data, their own IP, and make sure it stays in the confines of their walls and doesn't go train some other model that everyone gets to use. They can do that very efficiently, and they get results quickly. So low spend, high performance, great results, easy for the customer to implement. You mentioned Xeon before; is that really where the focus of the partnership is?
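The variables named here, power, time-to-result, and licensing versus everything "under the hood," can be sketched as a simple back-of-the-envelope model. All the rates, numbers, and field names below are illustrative assumptions, not SAS or Intel figures:

```python
# Back-of-the-envelope TCO sketch for an analytics workload.
# All rates and numbers are illustrative assumptions, not vendor figures.
def workload_tco(hours, nodes, node_cost_per_hour, watts_per_node,
                 power_cost_per_kwh, license_cost):
    compute = hours * nodes * node_cost_per_hour
    energy = hours * nodes * (watts_per_node / 1000.0) * power_cost_per_kwh
    return {"compute": compute, "energy": energy, "license": license_cost,
            "total": compute + energy + license_cost}

# Faster hardware that finishes in half the time can lower total cost even
# at a higher hourly rate -- the "performance is TCO" point made above.
slow = workload_tco(hours=100, nodes=8, node_cost_per_hour=3.0,
                    watts_per_node=500, power_cost_per_kwh=0.12,
                    license_cost=10_000)
fast = workload_tco(hours=50, nodes=8, node_cost_per_hour=4.0,
                    watts_per_node=600, power_cost_per_kwh=0.12,
                    license_cost=10_000)
print(slow["total"], fast["total"])
```

Note how the license line is fixed while compute and energy scale with runtime, which matches the observation that licensing can be a fraction of what customers actually spend.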
Because of course there's a lot of discussion in the industry about CPUs and GPUs and NPUs and XPUs, all the silicon diversity. Is it the case that the general-purpose computing paradigm works well in your world, or are those alternative silicon designs coming into play? So the 25-year history Gavin's talking about has been largely CPUs, because that's been our biggest business; that's been our biggest relationship together. And as we've enhanced the CPUs, we've done more, and it's been from the compiler level up. A very, very strong, deep technical relationship. Well, we've just introduced discrete accelerators. We just announced the Gaudi 3 accelerator, and we're working with SAS to look at how we integrate that with our products. So there are alternatives for TCO, for the customers who need that additional compute power if they're going to be training large multimodal models. I like the example from earlier, where you talked about Viya figuring out where to place the workload, because I think we're going to have a distributed compute environment as well. What we're watching right now is that you can have a spectrum, a portfolio of compute power, if you will; not power power. You say, okay, I've got this over here, I've got this different configuration over on this side of the cluster, I'll run that over there. So maybe reasoning goes over here, or I want prompt response here, IO basically. And the user doesn't care, right? It shouldn't care, right? The CIO cares about the economics of it, right? We want that, and that translates from compute to storage, because one of the, I'll say, errors we saw early on as the cloud expanded was: I need a whole bunch of CPU power, so I'm going to put this on bigger machines. And it came with a whole bunch of IO and a whole bunch of memory that the customer was spending money on that they didn't need. So that part is absolutely important for us.
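A placement decision like the one described, choosing between a CPU pool and an accelerator pool based on economic or time impact, can be sketched as a tiny scoring heuristic. The tiers, rates, and speedups below are invented for illustration; this is not how SAS Viya actually decides:

```python
# Toy workload-placement heuristic: pick the compute tier with the lowest
# estimated cost that still meets the deadline. Tiers, rates, and speedups
# are invented for illustration only.
TIERS = {
    "cpu":         {"cost_per_hour": 2.0,  "speedup": 1.0},
    "accelerator": {"cost_per_hour": 20.0, "speedup": 8.0},
}

def place(baseline_hours, deadline_hours):
    best = None
    for name, t in TIERS.items():
        runtime = baseline_hours / t["speedup"]
        if runtime > deadline_hours:
            continue  # misses the deadline, skip this tier
        cost = runtime * t["cost_per_hour"]
        if best is None or cost < best[1]:
            best = (name, cost)
    return best  # (tier name, estimated cost), or None if nothing fits

print(place(baseline_hours=10, deadline_hours=12))  # loose deadline: cheaper CPU tier wins
print(place(baseline_hours=10, deadline_hours=2))   # tight deadline: only the accelerator fits
```

The design point is that the "right" placement flips with the deadline, which is why the user shouldn't have to care but the platform, and the CIO's bill, very much do.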
And what do your customers' environments look like with Intel? Is it generally, okay, you're running X processors on Intel Xeon? What processors are you guys supporting? And do they buy clustered systems? Are they racking and stacking? What does your customer environment look like? It's all of the above, right? We have deep partnerships not only with Intel but with Dell and a whole bunch of other companies like that, so we're going where our customers are. All of them know how to size and do the architecture for SAS. We've done that work over decades, or as we go into optimized compute within the machine types within Azure. One of the things you've heard this week is, we believe we have to be able to say yes to where our customers want to run, how they want to run, the languages they want to use, et cetera. And all generations of Intel, right? Absolutely. So it helps them get the most out of their existing investment. And if they choose to make a new investment in newer hardware, they'll get the absolute most out of that new investment. Yeah, and to Dell, by the way, they love that story, because they get the whole AI factory thing going on, which is booming for them, because look, AI clusters are just a bunch of servers. Okay, and the interconnect? That's Ethernet. Yep, I mean. Sounds like a data center to me. Exactly right. I've been in this industry a while, and from an infrastructure standpoint, customers want three things: they want it to be rock solid, they want it to be lightning fast, and they want it to be dirt cheap. So run your software on that, and I'm happy. That's absolutely right. Yeah, let me do my stuff. No one says, I want a slower processor. Nobody says that. Or slower and more expensive. No one's calling me asking for that. And that's the TCO equation. Nobody wants to go backwards. I'm telling you, the stuff that's sexy is TCO and governance.
Those are the two factors right now, the hottest things that we talk about. Everything comes back down to: if you don't get the governance right on your AI, you're pretty much screwed downstream. And then obviously on the performance side, you've got to have the right cost structure, because the cost envelope's driven by power constraints. And also, why am I spending all this money for gear I don't need? Yeah. The workload doesn't need it. Yeah, absolutely. And just to add on to that whole governance part we were talking about: SAS has uniquely put all the models together in one place so you can make sure your data is enhancing your business, and you can use it in a way that's easy to improve your business. So you can have traditional models with new gen AI right next to them. And then underneath that, whatever hardware you're running, you're going to get the most out of it because of our long history of working together. Chris, that is a great point. I want to double-click on that. We were talking earlier about how that is exactly why latency is huge in models. If your data can't get delivered to the compute, it's not factored in, and then the whole model could crash. It's all about data availability, right? I mean, this is just basic networking: move it from point A to point B. It's stored here, and it's got to move over there to be served. Yeah. If you don't hit the latency number, your AI doesn't really work. Yeah. And there's the performance and the chip side of it with Intel, but also, we know data has gravity, and we're trying to talk to our customers about not moving data around nearly as much as they have, either on-prem or within the cloud, because there's your cost and performance issue. If your governance architecture is wrong, or the overhead of it is huge, it crashes the value of AI. Absolutely right. Raj was saying on the stage this morning, about moving the data: you don't have to move the data.
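The data-gravity point, don't ship the data when moving it costs more than the compute saves, can be framed as a simple transfer-time estimate. The bandwidths, sizes, and times below are assumptions for illustration:

```python
# Sketch of the data-gravity tradeoff: does shipping the data to a faster
# remote cluster actually beat computing where the data already sits?
# Bandwidths, sizes, and times are illustrative assumptions.
def remote_wins(data_gb, link_gbps, local_compute_s, remote_compute_s):
    transfer_s = (data_gb * 8) / link_gbps  # seconds to move the data
    return transfer_s + remote_compute_s < local_compute_s

# 500 GB over a 10 Gbps link is 400 seconds of transfer; a faster remote
# cluster only helps if the compute savings exceed that.
print(remote_wins(500, 10, local_compute_s=600, remote_compute_s=100))  # True: 400 + 100 < 600
print(remote_wins(500, 10, local_compute_s=450, remote_compute_s=100))  # False: 500 > 450
```

The same arithmetic is why keeping compute near the data, rather than lifting and shifting it around the cloud, often wins on both latency and cost.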
SingleStore, right? That's his whole rap. We're taking a position in our research right now; we haven't published anything yet, but we're starting to take the position that you should be moving data around a lot faster, and we think there are ways, with new architectures and switches in new configurations, where you can benefit from good new networking techniques for massive data movement that doesn't cost an arm and a leg. I like the derivative of the Einstein quote; it's not his quote, but I apply it: move as much data as you need to, but no more. No, there's truth to that, because you run into the laws of physics and the speed-of-light problem, and there really is a cost. There's a power cost to moving the data around. So looking at your compute setup and where the data is, data gravity does become important. If you want to do it at the edge, how much compute do you need at the edge? If you want to do it at the near edge, or at the data center, all those factors come into play, and SAS had a great case study on stage around Georgia-Pacific. They went through each of those examples of how they do compute and why that data needed to stay local in some of those cases, so there was real-time access to that data. That was a home run on day one; if you haven't seen it, check it out. But speaking of physics, did you hear Pat sort of brought forth his three laws? Pat Gelsinger. Pat Gelsinger; we first heard this years ago on theCUBE when he was the CEO of VMware, right? The laws of physics, the laws of economics, and the law of the land. He's now applying that to Intel's business. He's recycling that, but it's good. All three of them? Immutable. His other line is, if you're not out on the right wave, you're going to be driftwood. You're too far out front, you're driftwood. You don't want to miss the wave, but if you get too far out front, you're driftwood. Yeah, you're absolutely right.
You have timing, everything. Welcome to this business, right? So what's next for the partnership? Obviously we're in a compute-hungry landscape, and distributed computing architectures now seem to be the AI thing; that's what's been validated. What's next for the partnership for you guys? Yeah, I see us continuing to work with Intel technology, so more of what we've been doing: latest, greatest processors supported. But as we've talked about Gaudi 3, we're very interested in working together on that. As SAS enhances and expands its capabilities in SAS Viya, we'd like to do the same with our hardware portfolio, and our Gaudi 3 was rolled out with one and a half times the performance of the current best product out in the market, and it's 1.4 times more power efficient. So there's a lot of opportunity for SAS customers to take advantage of that and have some choice in their options, no matter what they're doing with their data. And that's what we want to provide: ease of use, TCO, and choice. Yeah, your comment on driftwood is really true, right? One of the things that we do with all of our partnerships is make sure we center the customer in the middle. Do we have a customer that's bringing us a problem that we can go solve with Intel? The partnership's going to continue to be anchored there, because we don't want to solve problems that no one's asking us to solve. We need to solve the things that are keeping our customers awake. Awesome. Final question, just to wrap up. First of all, thanks for doing this. Final question: what is the most important story in this AI world? From your perspective, wearing your practitioner hat or industry hat; you don't have to wear the company hat if you don't want to, if it's too controversial. What is the most important story in this AI conversation that people aren't talking about that they should be talking about? I think from our perspective, it is absolutely our customers and the market being measured.
And what I mean by that is, I believe we are getting ready to have the summer of disappointment on AI. People are investing large sums of money, right? They're doing a bunch of awesome science projects, but is it bringing them value worth the investment they're making? And we talked about it earlier this week on stage, about being very measured: making sure we're using generative AI where it makes sense and where it helps creativity, but not always trying to solve every problem we see with AI. So overhyped; it's pretty much overhyped. Everyone's talking about the hype, not the meat and potatoes, chopping wood, carrying water. I think a lot of the hype is warranted, right? This is a transformative technology, and I believe everyone in the market believes that, but when all you have is a hammer, all you see is a bunch of nails, right? We don't have to try to solve every problem with an AI model or a generative AI model. Let's use it where it helps. That's exactly it. I'm right there with you, Gavin; it's getting the TCO right. And I love the French rugby team example that you had today, not only because I played rugby in college, but also because they used all the traditional analytics, and then the generative AI right next to it, to augment how you're thinking about the analytics. And when I've talked to the people getting the best results with AI today, it's that kind of assist alongside the other tools and how you're thinking about them. For example, everybody thinks it's going to replace writing code. Most software developers spend only a little bit of their time writing code; the majority of the time, they're architecting or thinking about what's going to happen. And nobody likes writing comments. So it offloads some of those tasks to make them better at their core job, the whole software architecture.
So in this case with the French rugby team, it showed them how to use all these other tools they have, traditional tools; it made their team better with the generative AI working together. So I think it's the how-do-I-use-it question. And it was done in a cost-effective way, because it was done with their existing tools. Yeah, productivity and a good ROI on that. Yeah, absolutely. Yeah, awesome. Guys, thanks so much for coming on. Thanks for coming on. Chris, appreciate your time and Intel's; we love what you do. We need more power, as they said in Star Trek: Captain Kirk, Scotty, I've got to have more power. We're here for you. The crystals aren't working. Thank you so much. Thanks, guys. Thank you, guys. Thank you. All right, we'll be right back with more coverage. I'm John Furrier with theCUBE, with Dave Vellante. We'll be right back.