Hello, and welcome back to the SuperCloud live in-studio performance here in Palo Alto. I'm John Furrier, host of theCUBE, with Dave Vellante and team. The focus of SuperCloud 4, our fourth installment, is on generative AI. Our next guest could not make it into the studio live, so we decided to bring him in remote: Brian Harris, the executive vice president and CTO of SAS, a CUBE alumni. SAS just had their event, which theCUBE was at. Brian, great to see you. And by the way, congratulations on a great event, the SAS Championship Pro-Am and then the pro tournament you guys were involved in. Congratulations on that. Great tie-in, golf and tech go together. Great to see you. Same here, John. Thanks for having me on. It was an incredible event, and it was actually great to have you out there. You know, you hit a couple amazing shots out there. I saw it firsthand, so great job. And what an incredible experience to play with some of the pros like Padraig Harrington and Justin Leonard, I mean, just amazing. So we're coming off that high and getting back to business to have a strong Q4. Awesome. We love to watch those pro athletes, but you know, theCUBE's all about tech athletes. Great to have you on. You're CTO and EVP of SAS, going through a major transformation, huge installed base of customers, storied history, private company, great culture. We talked about that a lot at your event. Generative AI is upending the industry. It's changing the landscape in real time. The hype is out there. The reality is matching the hype. You're in the middle of it. You're re-architecting the solutions in real time, which actually means it's going to be in addition to the existing services, not a lot of rip and replace, not killing the old to bring in the new. It's going to be a lot of both, adding on top. So I want to get your thoughts. As cloud goes next level, how do you see organizations deploying generative AI? What are some of the low-hanging fruit use cases? 
What are they looking at? Clearly experimentation is there. Production, we're not seeing many conversations there yet. But how do you see organizations deploying gen AI? Let me put it into a technology bucket and then the business side. First, from a technology perspective, we believe this is a feature, not a product, meaning that you've got to integrate generative AI throughout a much more robust AI platform, because at the end of the day, when a generative AI prompt gives you a response, you need to back it up with facts and supporting data, especially if an answer is not intuitive. So you need to have the entire lineage of an answer if it's not an obvious answer being provided back from the generative AI experience. I think that's important to know, and that's why, as a company that's been doing this longer than most in the market, we believe we can add this on in a very natural way to our entire product portfolio to extend generative AI to real use cases and industry solutions for our customers. So we're very excited about that. And then let's tackle it from the business side of the house. We see generative AI really playing a big part in this simulation and digital twin experience that we're going to see in the market over the next five years. The disruption in the world, it seems like every single year we have incredible disruptions we're all navigating as businesses. And really, what the C-level, the board level, everyone wants to know is: I want to ask what-if questions and get answers quickly about how do I react? How do I position the business? How do I build resiliency in the business? Enabling people to ask natural human questions about the business against data and systems, this is really, to me, a huge game-changing capability that allows businesses to just get access to answers faster. 
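Harris's point that a generated answer needs "the entire lineage" of facts behind it is essentially retrieval plus citation. Here is a minimal sketch of that pattern; the function names, the stubbed answer, and the toy documents are all hypothetical illustrations, not SAS's implementation:

```python
# Minimal sketch of "answer plus lineage": every generated response carries
# the source records that support it, so a non-obvious answer can be audited.
# All names and data here are hypothetical illustrations.

def retrieve_support(question: str, documents: dict[str, str]) -> list[str]:
    """Return IDs of documents sharing terms with the question (toy retrieval)."""
    terms = set(question.lower().split())
    return [doc_id for doc_id, text in documents.items()
            if terms & set(text.lower().split())]

def answer_with_lineage(question: str, documents: dict[str, str]) -> dict:
    """Pair a (stubbed) generated answer with the documents that back it."""
    sources = retrieve_support(question, documents)
    return {
        "answer": f"Draft answer to: {question}",  # stand-in for a model call
        "lineage": sources,                        # the facts behind the answer
    }

docs = {
    "q3-report": "revenue grew eight percent in the third quarter",
    "hr-policy": "remote work requires manager approval",
}
result = answer_with_lineage("how much did revenue grow", docs)
print(result["lineage"])
```

A production system would replace the keyword match with embedding-based retrieval and the stub with a real model call, but the contract stays the same: no answer leaves the system without its supporting sources attached.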
So we think that with that simulation and digital twin, you're going to sit there and interact, ask questions of your business, and based on those questions you're going to see systems be orchestrated behind the scenes to really answer them for you, which is going to reduce barriers to entry on interactions with data in the company, unlike anything we've ever seen before in the industry. And what's great too is some of the use cases out there around vector databases showing that you can actually do things differently. I was talking to a startup that was doing that, and they're just now getting to the distributed nature of it. So you have the whole startup and open source community kind of growing into it fast from a new perspective. Again, it's generational. Then you have existing businesses that you guys have a lot of presence in and have been doing a lot of AI in over the years. And you made a comment at your Explore event I want to get your reaction to now. I asked you a question. I said, which industries have the most uptake or affinity toward generative AI? You said it'll impact all industries, but you made a comment that was interesting, and I want to get your thoughts on it again. You said the regulated ones actually are set up for this. And I asked why; you said, because there's not a lot of ambiguity. What do you mean by that? Explain what that means. I think this is a really important point. I mean, I think the regulated industries are obviously low-hanging fruit, right? 
I'll talk about knowledge bases in a minute, but wherever there's regulatory pressure, there are incentives for compliance. If you're investigating fraud, or if you're in regulatory for banking and you've got to deal with risk, what you really want is that when the auditors come through, you can ensure that you have compliance in the way you're processing data and making decisions about the business for customers, right? And what a lot of these organizations have are policies on how you're supposed to do that. So the first thing you can think about is, how can you bring this augmented intelligence into workflows that allows a fraud analyst, or a fraud leader, to ensure that their staff is executing their investigations in a consistent way, or that, say, a police investigation is being conducted in a consistent way, according to the processes and regulations they're being asked to adhere to. And so immediately, the knowledge bases that are already there, whether in Word documents, PDFs, or maybe even in databases or content management systems, these become the vernacular, right, for, as I think you mentioned before, small language models that we can bring into generative AI workflows. They really ensure that, okay, I can actually get tasks recommended to me on what my next best action is within a regulated business, and that next best action is aligned to the compliance requirements your business has to adhere to. And you actually saw some of this. We demoed it at Explore with Visual Investigator, how we allow police officers or investigators to see how to take an investigation that was driven by a generative AI workflow. That's where I'm coming from when I say regulated industries. 
There's a built-in compliance function that allows that to be used as knowledge management to drive workflows that are more consistent and defensible to a regulator. The next question I want to ask you is, which areas do you see in the next few years that have the greatest impact for business? But I want to couch it with what we saw in the pandemic with cloud, right? When the pandemic hit, if you were in the cloud, you had a tailwind during the pandemic. If you didn't, you were catching up and maybe were flat-footed. With AI, we're seeing that you can do AI wrapper apps, no problem, AI-native and other stuff. But if you're in data and you're doing data right, labeling, making the investment in compliance and data management, you're positioned well for AI. Do you agree with that statement? Yeah, a hundred percent. I actually look at it as almost like a pyramid of pressure coming down. So think about this idea I was mentioning around digital twins and simulation, right? Look, if I have a problem in the business and suddenly I want to understand what happens if the supply of natural gas increases by 8%, the question I want to ask is, well, what is the impact on my business? That's really the high-level question that a C-level person, or even a VP, or people even further down in the organization, would want to ask. Today, that question probably has a very complicated answer that has to go through the entire organization. In the future, that may feel like a generative AI experience through a chat session that is orchestrating a ton of work behind the scenes, seamlessly rendered back as a response from generative AI. What does that mean? Well, to produce that outcome, there need to be these specialized use cases that understand the vernacular of that industry. So we've got to look at the industry use cases and constrain the language that's being used to have those conversations in that industry. 
And then we need to say, okay, well then what are the knowledge bases that are used to actually have that conversation? What are the natural knowledge bases that organizations have been using over the years to capture what is important in the organization, and what are the facts and answers? In the intelligence world, it's finished intel. What are the finished documents used to communicate through the business? Now, to do that well, you have to have great data management. That's why I believe there's a thesis out there that says generative AI is going to incentivize a whole resurgence in data management, because you cannot have a solid generative AI experience without incredibly good data management capabilities under the hood. You don't want to restrict the data; you want to have it built in from day one with the right policies and compliance built in, and let it go. Correct, yeah. And think about today, how many organizations have knowledge scattered throughout the organization that, if they could put it in the right tech stack, could benefit from generative AI experiences. So what I'm saying is there's a huge opportunity to go in and really incentivize that work, with incredibly huge returns for the organizations doing it. You know, that brings up the next kind of thread, which is customers preparing for this next wave, their workforce capabilities. In the enterprise, you mentioned data everywhere, scattered. It reminds me of the old data problem of enterprise search. If you look at all the successes today with large language models, the proprietary ones, whether it's OpenAI or Anthropic and others, chatbots, copilots, and search are big impacts. So it seems to be that with vector databases and other technologies, where you have embeddings and these new kinds of capabilities with data stores, they're sitting next to each other. 
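The embeddings-next-to-data-stores idea mentioned here boils down to vector similarity: content and queries are turned into vectors, and retrieval returns the nearest ones. A toy sketch, where made-up three-dimensional vectors stand in for real learned embeddings and the document names are hypothetical:

```python
import math

# Toy sketch of vector retrieval: documents and queries are represented as
# vectors, and the "nearest" document wins. Real systems use learned
# embeddings and an approximate-nearest-neighbor index; these 3-d vectors
# are hypothetical stand-ins.

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def nearest(query_vec: list[float], store: dict[str, list[float]]) -> str:
    """Return the document ID whose embedding is most similar to the query."""
    return max(store, key=lambda doc_id: cosine(query_vec, store[doc_id]))

store = {
    "fraud-policy": [0.9, 0.1, 0.0],
    "golf-recap":   [0.0, 0.2, 0.9],
}
print(nearest([0.8, 0.2, 0.1], store))  # → fraud-policy
```

This is why the vector store "sits next to" the data store: the embeddings only index meaning, while the original records stay wherever they already live.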
They're changing the retrieval aspect of data. This is becoming much more of an open data model. You're seeing open source with Parquet and Iceberg making data warehouses change, a little democratization of data warehouses. This is a technical change. So the next question, how customers are preparing their workforce to take advantage of generative AI capability, is an interesting one, because it's up and down the stack. It's not just the workers. It's under the hood, it's IT. It's how you buy, which chips you buy, how you organize your data, how your APIs are constructed, what your security posture is. All these things are impacted. How do customers prepare for this? Well, there are probably a couple of dimensions to this question, but I'm always a believer in starting with: what are the business questions you're looking to seek more efficient answers to? You've got to start there and go backwards, right? We all saw the failed data lake strategies, where people just put data into a pile in IT, a data lake, and then wonder why they can't get value out of it. I've said it's like your home: you don't put everything up in the attic and then go look for it, right? You organize your house according to your access patterns, to how you function in the house. Enterprises need to do the same thing. They need to start with, first and foremost, the key questions they're looking to answer at any given time throughout the business, then work backwards and let the tech stack be derived from those questions. That ultimately becomes optimizations. And really, the generative aspect of this takes the impedance mismatch that we saw for years in the UIs and interfaces we had to build to communicate an answer back to an exec or any other decision maker. 
It now becomes this human language experience. So start with those questions first that you're trying to answer, and let that drive down through the organization. But we also need to make sure that we up the game on the responsibility of these capabilities in the organization. You've heard us talk about responsible innovation as a big topic at SAS, and it's because there are risks to this technology. There can be errors with it if you don't treat it with care. So we've focused a lot on understanding first: do you understand the technology? Training the workforce to understand, at a high level, what this technology is doing. Because if you understand that, then you understand where the risks are. And we want to make sure our people are conversant, even people outside the tech part of the organization, so they understand at a high level how the technology is used and where the risks are, so that everyone can identify when a risk is emerging and escalate it if needed. It's really, really important, because the part of the generative AI story that is not always told is that you can scale bad information quickly, and we don't want that to happen. So it's really important that we uplift the knowledge about this technology across all functional units in the business. Dave Vellante loves when I bring up guardrails. I don't actually like that term personally, but I have to bring it up here because you mentioned governance. What guardrails and governance principles are companies putting in place to ensure responsible use of data and AI specifically? Because that's a good point. People want to let it go and run a little bit, understand it. You don't want to constrain it to the point where it's not innovative; at the same time, you don't want it to go off the rails, so to speak. 
So what are the guardrails and governance principles you see companies putting in place to ensure responsible AI? Well, for us there are two main points, and they're part of a larger generative AI policy we have. We actually released this to the company, and it talks about human centricity, right? Generative AI is to help us be better. People have values, we have ambitions, and we need to make sure that with our technology we're staying true to the values we have as an organization and to what we're trying to help our customers with. That's why we talk about human centricity, because generative AI studies data, studies the past, to understand how to predict the present or the future. It doesn't understand our ambitions or what our values are as an organization, so we have to overlay that onto the execution of these new technologies in organizations. Then security is critical when we're implementing this new technology stack. Part of the generative AI story is that large language models study large amounts of data to inform the generative AI experience, and obviously that's in the public domain. When we start talking about combining internal data sets inside organizations with the embeddings of large language models from the public domain, security is paramount; it has to be addressed. So we want to talk about security. We also want to make sure that any answer provided through a generative AI experience is explainable, transparent, and fair to the outcomes we're trying to achieve. We take this incredibly seriously. We are, in my opinion, one of the leaders in the world on this conversation. We are being asked by nations, by heads of state, to engage and help them understand how we're applying this, both internally at SAS and in how we're helping our customers do it as well. 
So a very, very important part of the puzzle. We don't have a lot of time, but we should follow up on the whole knowledge graph discussion, since you brought some of those topics up. I want to just jump in quickly. This whole idea of neural networks, vector retrieval, all of this is cool stuff. And you mentioned knowledge is scattered all over the enterprise. One of the trends coming out of this event from the experts speaking here is that we're moving to a world of walled gardens of data sets. You go back 20 years, that was a bad word, when everything was open and free. But when you're dealing with data, you've got pre-existing conditions. You've got data warehouses, you've got old-school models. With neural networks and graphs, it's okay to have maybe a pocket of data that's proprietary. Interesting words: walled garden, proprietary data. That's intellectual property. So you don't want to mix it, but you can integrate it with other data sets. We're moving to a world that looks more and more like an AI system, where it's okay to have pockets of data distributed but managed properly. And that's not a bad thing; it's a feature, not a bug, some say. So what's your take on this? Because this brings up the notion of how you organize your knowledge graph or knowledge system. Is it a rewrite, is it a new system? Education certainly has LMSs, which is old-school thinking, but now you've got graphs. What are your thoughts on this? Well, I think the RAG architectures are still really in early innings. I think they're great. They work to an extent, but there are some scale limitations on certain types of questions you want to answer. The fundamental thing is that the large language models, the LLMs, have studied public domain information to provide human-like responses to the data they've studied, right? And there's inferencing and all that great stuff. But at the fundamental level, they're pretty dumb, right? 
It's really a probability machine for the next word, given the state of the context it was given. The trick is you have to be able to take the learnings of that language and map them to the probabilistic models inside your organization, to do predictive things like forecasting in the quantitative space. So if you've got transactions for retail in a store, or if you're looking at telemetry data in a manufacturing plant, or if you're looking at, say, manufacturing defects off the line in solar panels, you're going to want to marry this generative experience with the predictive modeling capabilities we know today, with traditional machine learning, and the fact that we now say "traditional" and "machine learning" in the same sentence is hilarious in itself. But the reality is there is going to have to be this walled garden, because really the large language models are bootstrapping the ability to have conversant experiences with software, but all that data is private in the organizations. So really, you want the outside, public domain knowledge to inform one way, into the organization, where appropriate. And where companies want to feed back out for the greater good, they can do that, but they need to be in control of it, right? They need to know when data should go back out of the organization. So there's this idea of data sovereignty that keeps coming up. It's not that people just want to go and hide data; it's that the customer or the organization needs control over when that data is exposed to a shared large language model process and when it needs to be contained and isolated away from one. That really is, in my opinion, the big architectural pattern everyone's trying to figure out right now: how to apply large language models in their organization. 
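The data sovereignty control described here, deciding when a record may reach a shared large language model versus staying contained and isolated, can be sketched as a simple routing policy. The classification tags and the rule below are assumptions for illustration, not any real product's scheme:

```python
# Sketch of a data-sovereignty gate: before any record is sent to a shared
# large-language-model endpoint, a policy decides whether it may leave the
# organization. The tags and the policy here are hypothetical assumptions.

SHAREABLE_TAGS = {"public", "marketing"}  # assumed policy, not a standard

def route_record(record: dict) -> str:
    """Return 'shared-llm' if the record may leave, else 'local-model'."""
    if record.get("classification") in SHAREABLE_TAGS:
        return "shared-llm"
    # Anything untagged or sensitive stays contained, per the sovereignty rule.
    return "local-model"

print(route_record({"id": 1, "classification": "public"}))    # → shared-llm
print(route_record({"id": 2, "classification": "customer"}))  # → local-model
```

The key design choice matches the point in the conversation: the default is containment, and data only flows outward when the organization has explicitly decided it can.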
And so you're of the school, then, that models will talk to each other, APIs will be the lingua franca, data is going to be in there. We'll see a lot of SQL, a lot of machine-to-machine. Oh yeah, we're not going to go and boil the ocean here. Everyone talked about how we were going to MapReduce the world, and that basically died and burned a few years ago, thankfully, right? So we're just not going to do that. By the time you get done with that, you'll have a whole new technology to work with. So we've got to be pragmatic. We need to be able to say, how do I leverage my existing investments in SQL databases, existing data lakes, Delta Lake, whatever you're doing? That all has to be integrated with these conversational generative AI experiences. And that's the repeatable architectural pattern people are trying to figure out how to scale. That is really the art right now, the state-of-the-art research everyone's trying to get to: how do I do that in a way that's repeatable across industries, in a way that I can have products that are transparent, explainable, fair, that ensure they're trustworthy, but that can also speak the vernacular of that industry? That's really what SAS is focusing on. We'll have those architectural conversations. It is early innings, and it's evolving in real time. Next-gen cloud is here. The role of data is changing, and you see it big time. I think generative AI has highlighted the fact that it generates stuff, and models will work together. There will be a few large language models, but there'll be a power law. All of this is to be unpacked, and we'll continue that. Brian, in the final minute we have left, what's going on at SAS for you right now? What's on your agenda? What's your focus for the next year? 
Well, for us, for me, we are in the heat of implementing generative AI use cases with customers, and they're coming in left and right. I'm meeting with customers left and right on how they want to apply this technology, and I'm focusing on real game-changing outcomes for the business. It's really about what I just said: how do we combine their existing investments with SAS? We're doing modeling on the banking side of the house for CCAR calculations; how do we bring those calculations into generative AI experiences? I've got life sciences companies asking the same from their perspective. So for us, it's really about how we make this generative AI thing a real, tangible business return, a 10X return for these businesses. The world's experiencing inflation, and as I said before, what's associated with inflation is inefficiency. We believe AI is what takes out those inflationary pressures in the market, so we can help customers run faster, leaner, and more efficiently, and we can start seeing prices and inflationary pressures coming down in the world. That's what we're trying to do. So you guys as a company have a lot of data; obviously analytics and data drive organizations. AI is a dream scenario for you guys. You're well positioned, and that's going to give you a tailwind, certainly, in the business. For other companies, what advice would you give folks out there who have a data strategy but might not be as immersed in data as you guys are with your customers? How should they be thinking about leaning into this wave? Just quick best-practice advice for your peers. You've got to get your data straight, right? If you're going to start to use generative AI capabilities, or any AI, you have to have your data strategy understood. It doesn't need to be perfect. You just need to have a strategy and have the discipline to go forward. 
And it can take multiple years. That's okay. It's not like you're ever done with your data strategy. It's a consistent, persistent effort in the organization. And those that honestly pursue it properly with their efforts and their focus will ultimately win, because these enhanced AI capabilities that are unlocked when you have a good data strategy are so game-changing to the business. And that starts with data. So you've got to get the data strategy right. That's why, again, data management is such a high-focus area of value for us; even when we talk generative AI, data management is right there with it. Brian Harris, EVP, CTO of SAS. I can't agree with you more. It's a perfect storm. If you're involved in data, this is a tailwind. People should learn from the pandemic: if you got into the cloud before it hit, you rode through it well and came out better on the other side. Same here with AI. Still, I'd say we're not even in the first inning; it's pre-game for AI. So if folks can get in there, that's what we're learning here on the show: get in with your data first, and not just the normal data stuff; make it central so it can scale. Brian, this is your key point. Thanks for sharing. Appreciate it. John, thank you so much. Hey, Brian Harris, EVP, CTO of SAS. They've been a data company from day one, serving a lot of customers. Now AI is changing the game. This is a competitive advantage if companies can get generative AI right. It'll change how they build applications, software, infrastructure, and how they change the workforce. This is theCUBE's live coverage here. We are in Palo Alto for our in-studio SuperCloud 4 event. We'll be right back with more after this short break.