Hello, welcome to theCUBE. We are live here in Las Vegas with SAS Innovate 2024. I'm John Furrier, your host of theCUBE, with Dave Vellante and Rob Strechay, a CUBE analyst at theCUBE Research. It's two days of wall-to-wall coverage of the Innovate show here. We are rocking and rolling. How are you doing, boys? Dave, keynote review. Rob, great to see you. Good day. So we got in yesterday, got in early. Rob, we got in early; Dave got in right in time for all the action. During the partner sessions yesterday, we had a good chance to talk to some of the folks, and at this point we just came back from the keynote. So we're going to analyze the keynote, but also look at what's happening in the ecosystem here. Their partners and customers are all here, and they're seeing real progress. Dr. Goodnight gave a demo, he's a legend, and he actually referenced his first line of code in 1980. This is a private company, Dave. We were here last year. Rob, they have a lot of data. They have an install base. They've got a lot of customers in all the industry verticals, and GenAI, as we said last year, was a gift to these guys, because it gives them a new opportunity to essentially go to the next level on the interface side and leverage the data with their customers. Yeah, I think one of the most powerful comments, one we talk about a lot here on theCUBE, is garbage in, garbage out. And the whole purpose of SAS is to get the data right. And I think everybody talks about GenAI because it's the new shiny thing, but they've been doing AI using the modeling techniques within SAS all the way back. I did it 30 years ago. Yeah, I was going to say, I was a SAS programmer 30 years ago. That's what we were talking about. It was SAS, a little bit of SPSS, but there was no R back then. You were a SAS coder. Yeah, yeah, I mean. We have a SAS coder in the house. You know, I love the semicolon, because everything ended in a semicolon. And in fact, they have a really great sticker
here that says my love language ends in a semicolon, which is kind of the winky-eye thing. So it's funny. All right, so it's good that you have that DNA. Dave, you're looking at the marketplace. Again, this is a private company and they could go public. Dr. Goodnight, will they go public or stay private? They've sort of threatened it before, and I think they will eventually go public, but they're about a $3.2 billion company. They've been around since the mid to late 1970s, okay? So they've had to navigate through a lot of different waves. I mean, they came in at the mainframe era. They had to go through the client-server era. They had to deal with the PC era. And then, you know, then cloud, and that's what Viya is all about, their cloud-native platform. And now they're injecting Gen AI into virtually everything. And of course they compete with the likes of Databricks and, you know, the big hyperscalers. Snowflake is kind of catching up to that whole space. They also had them on the partner list too. Yeah, so they do partner with them. Absolutely. There's a little competition there. Let's get into the keynote now. I'll get your reaction, Dave. You were tweeting feverishly. Rob, you had some tweets up there. The topic here is the future of data and AI. That's the theme. I love the chart: data plus AI plus decisions plus outcomes equals learning rate. And I like how they added learning rate, so it's not just about outcomes. Those are on the left-hand side of the equation; on the other side of the equals sign is learning rate. So one learns, but the word "rate" implies acceleration, movement, that it's never done. It's always generating. And with this new category of generative AI, it's a huge part of the equation. So a nice subtle point there. The other thing I noticed is that they now have the three pillars locked and loaded: productivity, performance, and trust. I was just talking with Reggie, who handles that whole trust department. He'll be on theCUBE. Huge part of it. Obviously, Viya Workbench.
And the big news is that they're introducing SAS Models. Okay, SAS has tons of industry verticals, from fraud detection to a whole bunch of others across all the industries. They're going to be selling Lego blocks of models. That's my word, not theirs. And to me, what was most impressive, what I liked that was kind of like squinting through the dots that were connecting, is this concept of stored prompts. Okay, which I thought was an excellent illustration of where this is going. It's not just about generating an outcome and response, prompt, return, answer, question. It's consistently solving the quality problem. So I think what we see with stored prompts, Rob, is that directional power law of specialty models that we published over a year ago coming to fruition. And the value is going to be in using other things, stored prompts, other modules, to make the data better so that the ultimate decision and the response, whether it's reasoning or an outcome, is key. This is a key point. Yeah, I think where it really goes to is the persona that they've always been really tight with, the data science crew. And you develop in SAS, and I thought what was nice is they talked about Python and R coming soon. And I think that gets at all of the people in the data science community that really want to do these things and do them at a level that helps drive these models. And I think the key to their models is that it's some of their IP in those models. So it's not like you have to understand or worry about where it came from. And I think that's part of the trust and the governance that they're really looking to help organizations with. Brian Harris used the example of: we go to a meeting and there's a lot of dumb questions in the meeting. So if you can map your questions into multiple stored, governed answers, we can give you not only better answers, we can give you better questions to get to better answers.
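SAS hasn't published the mechanics behind stored prompts, but the idea of mapping a user's question onto a curated, governed prompt can be sketched roughly. Everything below, the registry, `render_stored_prompt`, and the prompt text, is hypothetical illustration, not a SAS API:

```python
# Hypothetical sketch of a "stored prompt" registry: curated, governed prompt
# templates are stored centrally so every user gets a vetted, consistent prompt
# instead of writing an ad-hoc one. All names here are illustrative, not SAS APIs.

STORED_PROMPTS = {
    "fraud_triage": (
        "You are a fraud analyst. Given the transaction below, classify the "
        "risk as LOW, MEDIUM, or HIGH and cite the signals you used.\n"
        "Transaction: {transaction}"
    ),
    "claims_summary": (
        "Summarize this insurance claim in three bullet points for an "
        "adjuster.\nClaim: {claim}"
    ),
}

def render_stored_prompt(name: str, **fields) -> str:
    """Look up a governed prompt by name and fill in the caller's data."""
    template = STORED_PROMPTS.get(name)
    if template is None:
        raise KeyError(f"No stored prompt named {name!r}")
    return template.format(**fields)

prompt = render_stored_prompt("fraud_triage", transaction="$9,400 wire, new payee")
print(prompt.splitlines()[0])  # the governed instruction, identical for every user
```

The point of the pattern is the one Rob and Dave make above: the quality and governance live in the stored template, so two different users asking the same rough question get the same vetted framing.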
So that was kind of interesting. And the other thing that I want to ask Brian about when he comes on, and nobody's really doing this today, I think Databricks is hinting at it and Amazon's hinting at it a little bit, is being able to dynamically choose models. Not like the model garden, that's one thing, but actually having the system dynamically choose the model for you. I know, for instance, Amazon retail, their supply chain, they use different models for different parts of the supply chain, but nobody has figured out, and maybe you've got some visibility on this, how to dynamically execute models for the specific use case or even parts of that workflow. That is going to be, I think, the next big thing beyond the stored, governed discussion that we had today to improve quality. I think to that point, there are a lot of people saying, hey, we'll make it easy for you to switch models, but you still need to do that testing and data quality. And I think the big message was around synthetic data and how they're helping create that synthetic data, or using Gen AI to create synthetic images, like Georgia-Pacific was on stage with these odd-looking trees that they acknowledged were odd-looking trees. But again, it's looking at how you create cases where you can train on safe data that is capable of getting to the answers repeatedly. And I think we've always heard that about data quality. Again, garbage in, garbage out, and that was a really big thing. I think that was kind of under the covers for the whole thing. I mean, they had the package, and then they had the four areas: Gen AI, the copilot, Data Maker, which is the synthetic data piece, and then the Workbench, all working together. And I like how they ended it, Dave. They said the future of data science development, that's how they're positioning it, as AI-powered developers, with a little red cube. We're the blue cube; good to see a cube out there. But I like how they're positioning that.
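Going back to Dave's dynamic model selection point: since, as he says, nobody is shipping this yet, the shape of it can only be sketched hypothetically. The registry, model names, and task fields below are all made up for illustration:

```python
# Hypothetical sketch of dynamic model selection: instead of a user picking from
# a model garden, a router inspects each task in a workflow and dispatches it to
# the model best suited for that step. Model names are illustrative.

from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_call: float

# Registry of candidate models, keyed by the kind of work each is good at.
REGISTRY = {
    "forecast": Model("demand-forecaster-v2", cost_per_call=0.002),
    "classify": Model("fraud-classifier-small", cost_per_call=0.0005),
    "generate": Model("general-llm-large", cost_per_call=0.03),
}

def route(task: dict) -> Model:
    """Pick a model from the task's declared type, falling back to the big LLM."""
    return REGISTRY.get(task.get("type"), REGISTRY["generate"])

pipeline = [
    {"type": "forecast", "payload": "weekly pulp demand"},
    {"type": "classify", "payload": "wire transfer #881"},
    {"type": "summarize", "payload": "shift report"},   # unknown type -> fallback
]

for task in pipeline:
    model = route(task)
    print(f"{task['payload']!r} -> {model.name}")
```

The hard part Dave is pointing at is what a real router would key on: not a hand-declared task type, but the system inferring, per step of the workflow, which model is cheapest and best suited, which is why nobody has it in production yet.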
I want to get you guys' reaction to that, because my takeaway was, wow, the future of data science development. Interesting choice of words, guys, because we've been seeing on theCUBE that the data science world is shrinking in terms of core people. You don't have to be a data scientist to be in the game with Gen AI. You can be a user. And the democratization wave that's coming in, Rob, is very clear in the data. And I know we've got some research going on there at theCUBE Research on this point, but we're seeing the trend being: okay, you don't have to have a whole bunch of data scientists; you can now have regular people doing data. And I think this plays into SAS's hand, but they're saying this is the future of data science development. What about just data in general? Now, the title of the show is the future of data and AI. So I think what I wanted to see more of from SAS was the democratization angle. Obviously, everyone in the room is pretty much a data scientist in the SAS category. So what's your reaction to that, guys? Is this just playing to the crowd, or is that something more? I thought they got at some of that, right? With the productivity, with the Viya Copilot and where they're going, enabling people to basically build their own chatbots in there and actually add their data into the Viya Copilot and Workbench copilot so that they can be more productive. Because I think it is a productivity issue. And again, having been on that side and done coding in there, it's fairly straightforward if you understand heavy data science and how all the algorithms work. And then, okay, so to your point, pick a model. So here, suggest me some models. It suggests some models. You can switch between models. I thought they were going to go further and say, okay, now combine those two models.
When they looked at CO2 emissions across the entire US and different cities, I think there are still places they can go. And I think they will go there with this to make it easier for data scientists, because that is such a finite resource. What do you think? Here's the way I think about it. And Brian actually mentioned it, I liked his little riff: remember Hadoop? You know, we're not really MapReducing anymore. And remember data lakes? Well, you know. I was going to throw in, hey, what about the data mart? So the way I think about it is, you know, the Gen AI moment, you see it and you go, oh my God. But then applying it actually has to be evolutionary. You can't just, you know, apply this stuff overnight and drive your business. So when you think about the use cases they were talking about, you know, fraud detection, document vision, I mean, we've been doing that for decades, right? But the way I think about it is it just keeps getting better and better and better. So we're getting more intelligent with all this AI. People are afraid that, you know, we're going to lose jobs. And yeah, that may happen, but I don't think anybody's going to tap this AI and then say, oh, I want to go back to the way it used to be, I want to be dumber. That doesn't happen. So I think what you're seeing is this really progressive evolution of technology, but it's always the people, the processes, and the business models that are the blockers to integrating it, and it takes some time to do that. And then once it's absorbed, everything gets better. And nobody ever wants to go back. Well, I think that's a great point, but I would also add that I think they get a lift from Gen AI, because the interface modernization of Gen AI gives SAS an opportunity, Rob, to take the best of what they built their legacy business on and put a front end on it. That's not just a fresh coat of paint. I mean, it's a whole new category of user interface.
User expectations. The generative piece, we heard Jensen Huang say this in a video when we had our chat with him, and we heard it this morning from Georgia-Pacific. I don't know if you caught this. He said, yeah, we do a lot of stuff in SageMaker. Okay, because a lot of AI can be done with things like SageMaker. But what they're doing with the Gen AI is they're making it an orchestrator, right? To your exact point, John, it's the interface now that's simplified and, of course, becomes natural language. So it's a combination of legacy AI, if I can use that term, and Gen AI being the new abstraction layer. Well, I mean, Gen AI is generating, okay? The other AI was part of subsystems, machine learning, other ops. And Rob, this is really the key point: you're not going to throw that stuff away. In fact, if you add the stored prompts to it, I mean, everything gets better for the worker, in this case the data scientist or the developer or anyone who's managing data. So as Gen AI and the interface comes in, this is where I think they missed the democratization angle. Maybe they're going to add it next year or whatever. But when they bolt on that user interface change, you don't have to be a data scientist. You can abstract away all the other stuff with stored prompts. You're going to have stored stuff. I think you still need some data scientists. At the data quality level, you still need data scientists. And I think it'll be not exactly that persona going away, but to your point, when you're doing model evolution and you understand what the output needs to look like, maybe you can have people with less data science skill, people who are newer to the industry. So to your point about democratization, it's an interesting point you're bringing up, but think about the market. I'd love to get your thoughts on this. You look at Microsoft, which I would position, and I'm overstating it, as having the one model to rule them all.
It's the OpenAI approach. And so what they're trying to own is the low-code, no-code space. And so I almost think it was maybe by design, because that's maybe too simple. Not that these guys can't do no-code and low-code, and Google, you guys were just at Google Next, you saw a lot of that, but there's still a lot of serious coding that you can do, and these copilots can make these data scientists smarter, take away some of the heavy lifting, and do some things that you're not going to be able to do with low-code and no-code. What do you think about that? I think it's the time to value of the data that's going to accelerate. And I think from that perspective, it's democratizing the data and getting it into the right places at the right time, faster. And the quality of the data should also improve, assuming that you're training the models properly. And I think that's what they were talking about with that iteration process. So again, the data engineering part of it, the data science part of it, still needs to be there when you get to the algorithmic aspect. But again, you'll have other people involved in those pipelines. I mean, to me, there's no doubt in my mind that under the covers of the infrastructure there is going to be a lot of AI. In security, we have the saying: security for AI and AI for security. The same thing goes in industries like this. You've got AI for SAS and AI for SAS infrastructure. And look at Georgia-Pacific and the examples they had of how they're using the use cases. And again, Brian Harris had real meat on the bone, real production use cases. He called up production workloads, and all of those were based on Amazon Web Services infrastructure. So again, back to who wins here. Amazon's got the models, but they're running the infrastructure. On top of that, SAS has the intelligent decisioning engine, or whatever they called it, that next layer up. That's their core layer.
And the next layer on top of that is the modules: real-time data, knowledge bank, and process models. And then the Gen AI layer on top. And then the interface on the very top. This is kind of a real-world production workload running in the AI stack. So to me, that's all SAS. There's nothing to do with OpenAI or Anthropic. They might use some stuff within an Amazon VPC, but Rob, this is a use case. I thought it was a fantastic stack for them to show off, and being able to see, to your point, that the intelligent decisioning is really important. And that's where that QC comes in, in that entire stack, once you've actually deployed the models. And they talked about it at the edge. They talked about it from an inference standpoint, how fast, especially for Georgia-Pacific, they need to be able to make these decisions when they're flying drones over fields where they're trying to harvest stuff. I thought that was a really good illustration of how they were using lightweight models at the edge for inference. Well, the thing that impressed me, I mean, they built a modern AI platform. Now, the challenge for a company like SAS that's been around for 40-plus years is they've got the old stuff and they've got to balance the transition to the new stuff. It's just the way the business is. Dell has to deal with this, HPE has to deal with this, IBM has to deal with this. But they're dealing with their own stuff, right? So when they abstract it, with any kind of old SAS legacy code or whatever interface, it doesn't really matter if you don't see it. It's still fast. So I think this is the opportunity that Dell has too. And we're talking to Dell about that. You can have all your stuff under the covers and the interface is here. And that to me was the big aha, because what matters is the data, the code. So they say, okay, we want some Gen AI, let's do it.
Now, what was interesting here, on that point, to take it to the next level: we've been saying on theCUBE, and on our last podcast we went in deep on this, I did this on my NYSE report, the question is, do you build your own AI or do you use AI? Now, there was an interesting point in the presentation by Brian Harris, the CTO, where he called out the industry perspective and said there are builders of AI, and then there's buyers of solutions. And then he said subscribers of models. So a little bit different twist on our thinking of: you can use AI, and then build your own or not. And sometimes you don't need to build your own AI. He called out industries that need to do that, like pharma and drug discovery, and some that don't need it. Some can just use it off the shelf. So I think they're on point there. And then this idea of subscribing to models, that's the first time I've seen it from an enterprise player. We've seen it with OpenAI with their little model marketplace. But this idea that they're going to have baked models is a pretty genius move in my mind. So I want to make a nuanced point. I may even be wrong about this, but I don't think I am. It's not just putting a veneer on the legacy products. Like we saw so often with cloud washing: people would take their on-prem stack, wrap it in Kubernetes, stick it in the cloud, essentially host it in the cloud, and say, oh yeah, we've got cloud. They built a cloud-native platform with Viya. It goes back to probably 2016, 2017, I want to say. So they're building their AI stack on top of that modern cloud-native platform. So that's to me an advantage. The example I would use would be MongoDB Atlas. That's a new stack that was cloud native. A lot of companies will frankly just half-ass it and throw a little wrapper on it. So my sense is that these guys are doing true organic innovation. It's called SAS Innovate, and they are innovating. And there were a lot of good announcements today.
We're going to hear more in a moment. I think to that point, though, look at the speed at which Workbench came up, and that you could pick how many cores, you could even put in GPUs. They're speaking platform engineering and abstracting that away, because data scientists don't want to think about that stuff. And I thought it was a really slick way that they're doing their cloud platform and making it easy for people who are used to using Workbench locally to come up and use it through the browser. And how about Brian's quantum rap? I mean, it was kind of interesting that he spent so much time there. I remember last year at Cisco Live, you were sort of surprised that Liz Centoni spent so much time on quantum, but here he said it's one to two years away from its Gen AI moment. Well, we know that NVIDIA was poo-pooing quantum. We're going to ask Brian about that. I'm going to pull the quote from Jensen's speech. Yeah, I mean, I think quantum is a science project that has use cases in areas of production. That's the opportunity. The rest of it's all kind of getting there. Now, one of the challenges is networking, that's a big one; photonics is another challenging area. So I think the networking of quantum is going to be a problem, but there are going to be use cases. I'm not as bullish on quantum as others are, as you know. I thought what he said, though, was interesting, because he was also talking about hybrid models, using classical computing and quantum computing together to get at different insights. That's probably where they're at the pointy end of the spear, based on the personas who have used SAS over the years. They're at these companies: deep research, super data science people who are probably tapping into some of the quantum techniques that are out there. We were at IBM last November at the Thomas J. Watson Research Center, and of course IBM's deep into quantum.
We got the tour, and I'm not an expert there, but there were several analysts there who were really deep into this stuff. And what I learned was that there are like six or seven, maybe even ten, competing philosophies on how to do quantum, and each has its benefits and its trade-offs. Some use a ton of energy, some are really expensive, some aren't that accurate or aren't that stable because the qubits croak. And so I think maybe you're right, John; one to two years, I thought, was quite aggressive. Well, I think what's impressive about this keynote, and we're going to unpack it more at their media conference that we're headed to now, and in our interviews, is that you see the consumer side of it with the chips, the hyperscalers, all the buyers of the GPUs and products. The enterprise has always been kind of, where's the beef? And what you're seeing now is things like retrieval-augmented generation, or RAG, and things where customers have their own data. And I think what we're seeing here is that SAS has such a great install base, and what they're showing and bringing to the table is the fact that you're seeing enterprise use cases right now where they can manage their data. With retrieval-augmented generation, you can do some RAG, you learn about prompting, and that's all with your own data. And again, I think this is going to propel the power law that we've been talking about. You'll see a lot more use cases where it's my proprietary workflows, my proprietary data, and an enterprise using generative AI and building, mixing, and matching that. So that's going to be very interesting. It's like a hybrid build, buy, and subscribe model, Rob. So we're going to watch that. We'll add that to our repertoire and our research. Of course, we've got two days of wall-to-wall coverage. I'm John Furrier with theCUBE, with Dave Vellante and Rob Strechay, kicking off day one of SAS Innovate 2024. We'll be right back after this break.
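For readers unfamiliar with the RAG pattern mentioned above, here is a minimal sketch: retrieve the most relevant in-house passages, then prepend them to the prompt so the model answers from proprietary data. The documents and the toy keyword retriever below are purely illustrative; a real deployment would use a vector database:

```python
# Minimal sketch of retrieval-augmented generation (RAG) over an enterprise's
# own documents: retrieve relevant passages, then build a prompt that tells the
# model to answer only from that context. The retriever here is a toy keyword
# scorer standing in for a real vector store; all data is made up.

DOCUMENTS = [
    "Q3 pulp output at the Savannah mill rose 4 percent quarter over quarter.",
    "Drone surveys flagged beetle damage in tract 12 of the Georgia forest.",
    "Fraud losses in the retail card portfolio fell after the model refresh.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by count of words shared with the query (toy retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]

def build_prompt(query: str) -> str:
    """Assemble the augmented prompt: retrieved context first, question last."""
    context = "\n".join(retrieve(query, DOCUMENTS))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What did the drone surveys find in the forest"))
```

The key property, and the reason RAG matters for an install base like SAS's, is that the proprietary data stays in the retrieval layer under the customer's control; only the handful of retrieved passages ever reach the model.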