Welcome back, everyone, to theCUBE's live coverage of SAS Explore in Las Vegas. I'm John Furrier, with Dave Vellante, co-hosts of theCUBE. We've got two great guests here: Bryan Harris, CTO of SAS, and Reggie Townsend, Vice President of the SAS Data Ethics Practice. You got it. Gentlemen, thanks for coming on theCUBE. Really appreciate your time.

Thank you for having us.

Great keynote you delivered this morning. AI gives you guys at SAS a great tailwind: well-known software practices, a huge customer base, a lot of practice in industry verticals. Now in comes the AI gift. You have to look at this and think, this is right in our wheelhouse, skating to where the puck is going. It's right there for you. This is an interesting time. Share what it's like for you right now. What were some of the conversations? How did that play out for you? And share a little of the keynote we just heard.

Well, it's important to note that for the last 25 years, SAS has been deploying neural networks in production with customers. So in some ways we feel like, welcome to the party. We've been doing this for a long time, and we're extremely proud of that history and, frankly, of the results we drive for businesses; we're mission critical to so many businesses today. So for us, seeing the market recognize the broader appreciation of what AI can do, it's now about productivity and speed. How do we make it as easy as possible for the broadest audience to take on AI, integrate it into their businesses, and do that at a price point that's cost effective, so they can either drive top-line revenue or improve their margins, or both? For us, it's about how we capture that momentum and translate it into our software to enable companies to become incredibly mature with AI.
So when the AI shot heard around the world comes out, how did that affect you and your customers? Maybe not at all, because you've been doing it for a while. But Reggie, I would imagine it maybe accelerated the conversations. I'd love to understand the before and the after.

Yeah, 100%. So I like to say that last November, everybody learned how to spell AI, right?

That's when it was invented, right?

Yeah, so that's good. But to Bryan's point, we've been in this space for quite some time, attempting to do it as best we can in a very responsible way. So we like to say we're about responsible innovation. Some things seem like they're happening really quickly, but as Bryan just alluded to, a lot of what we're talking about now has been around for a really long time. So the question for us now is, how do we, as you said, capture this moment, but at the same time prove to the world that AI can be done in a responsible fashion and an equitable fashion, in a way that honors human agency and equity and all the rest of it? We think we're on to that.

I like how you call it the AI life cycle. It gives the feeling that it's not just one and done, not a shiny new toy. Also, what was clear in the keynote was that you had a practical view, yet you were optimistic and enthusiastic about the realities of where it's going to go. That's what customers really want to hear. They're enthusiastic, but the confidence has to be there.

Yeah, and I think you've got to have a lot of capability to have confidence. The three things I often talk about with our software are productivity, performance, and trust, right? If you're going to be productive, it's about efficiency. I use this phrase: right now the world is dealing with inflationary pressures. Well, the other I-word that comes with inflation is inefficiency.
And so the world is seeking out technology to drive inefficiencies out of their businesses to reduce the inflationary pressures. Our software serves that purpose through AI and analytics. The life cycle is about how we enable companies to learn as fast as they can against the data environment they're actually analyzing. If they can learn faster than their competitors and gain insights out of the data faster than their competitors, then they're winning. They're going to have a competitive advantage. They're going to make better decisions earlier and faster. But you can't go fast without trusting what you're doing, and that's why it's so important when we talk about trust.

We heard end-to-end governance mentioned in the keynote, I think multiple times, and also a developer-centric view on both sides of your new announcements: you had the Workbench and the App Factory, which seems to be a nice addition to a platform, abstracting away the complexity of the data out there. Now, I want to dig into this, because we've seen everyone try to do data analytics for a decade. The promise of big data and democratization has been there, but with foundation models you don't have to recreate things. You've got different kinds of training and inference capabilities that really bring more productivity to both the platform and developers. This is a unique situation. Can you share your opinion on that?

Yeah, so I think you've got to look at it first from the standpoint that one of the big areas for us is that we work in industries, and we focus on industries where we have to deliver a very effective outcome: in healthcare, in finance, in banking, in manufacturing. So when you walk that back, it's about, all right, who are the people building this stuff? You've got to look at the fact that there are developers, there are data scientists, and there are statisticians.
If you look at the US market, the Census Bureau says there are about 500,000 data scientists and statisticians out there. If you look at the number of developers, it's 4.4 million in the US. So what we want to do is engage that developer community and help them find paths toward AI, so they can participate in the larger growth story of this entire market. And by providing SAS Viya Workbench, we're giving them what we think is an incredibly efficient cloud development environment. Log in, get in there, choose your language of choice: Python, SAS, or R. You can execute any language you want, and then you can fully integrate that into our entire AI and analytics life cycle.

Then on the other side, say you've built models and you want to get them into production. Usually you want to put them into a purpose-built application. So how do we bring a productivity story to the developer who's got to build an app? That's not trivial; that's a pretty significant tech stack. So we enable you to automate the entire process of building a React tool chain with TypeScript and a Postgres database and everything on top of that, and you can bring your model straight into it and get it out into production as fast as possible.

You announced a billion-dollar industry investment. You could have called it a billion-dollar AI investment and just tried to ride the hype, but it shows in many ways that it's very much related to the AI activity. I wonder if you could talk about how you're allocating that in terms of industry specificity and societal impacts. How are you thinking about that?

Well, I'll tackle some of it and pass it off to Reggie for the other side. First and foremost, we've got to look at it all the way through. It's a total value chain conversation. I know it's business speak, but it really is about that: what's our go-to-market strategy on it? What is the overall pre-sales enablement strategy?
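As an aside, the Workbench-to-App-Factory handoff Bryan describes, build a model in your language of choice and then drop it into a generated application, can be sketched generically. Nothing below is SAS's actual API; it's a plain-Python illustration of the train-serialize-score pattern, and all names in it are hypothetical.

```python
# Hypothetical sketch of the "build in a workbench, deploy in an app" handoff.
# None of this is SAS's actual API; it is a generic illustration in plain Python.
import json
import statistics

# 1) "Workbench" side: fit a trivial model, a usage score that is just the
#    standardized distance from the mean of historical usage.
history = [12.0, 15.0, 14.0, 10.0, 13.0, 16.0, 11.0]
model = {"mean": statistics.mean(history), "stdev": statistics.stdev(history)}

# 2) Serialize the fitted parameters so a separate application can load them.
artifact = json.dumps(model)

# 3) "App Factory" side: a purpose-built app loads the artifact and scores
#    new records without retraining.
def score(artifact_json: str, usage: float) -> float:
    params = json.loads(artifact_json)
    return (usage - params["mean"]) / params["stdev"]

print(round(score(artifact, 18.0), 2))  # → 2.31
```

The point of the serialization step is that the application side never needs the training environment, only the fitted parameters, which is the same separation the two products draw.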
How do we then take that, tackle it, and bake it down into an overall R&D strategy for the technology? And then, layered into all of that, is the part Reggie is really focused on, which is how we ensure we do all of that in a way that's responsible to that specific industry, which is something you can talk endlessly about.

Right, so the first thing we've got to recognize is that wherever we apply our technology, in every single instance, it's a socio-technical issue. AI gets applied in context, and we can't lose sight of that. So if we've got solutions focused on healthcare, we have to think about them in the context of healthcare. If they're being deployed in financial services, then in the context of financial services, and so on. Wherever our technology shows up, we've got to make sure we're bringing along with it the necessary capabilities within each of those contexts, given the levels of risk that are appropriate within those contexts, in the right way. That's what our team focuses on: making sure that from a principles perspective we are showing up, people, process, technology, you name it, within those contexts.

One of the follow-ups on that is that people want trust, and they want to know their data is protected, because in these verticals, in these industries, that's their IP now, their intellectual property. And so leakage is huge. I think you mentioned that in the keynote. Explainability is another one. You also mentioned supply chain. It feels like we're entering a new era of data supply chain security and management.

I mean, there's so much to unpack in that. Right now, what we're contemplating is that we've been dealing with supply chain disruption from a goods perspective. I think we're now dealing with supply chain disruption from a human perspective.
The skill sets needed right now to tackle this new technology, with AI and generative AI and prompt engineering, these are new roles. The number of new roles and titles in organizations is changing rapidly, and we need organizations to scale up quickly. That's why we focus so much on how we educate people, even internally at companies, on all these components of AI. Because to do this responsibly, you have to be versed in it and thoughtful about it. What does it mean at the data level to treat it properly? What does it mean at the modeling level to treat it properly? What happens when you make decisions and you get an answer that's not intuitive and it challenges you? How do you deal with that?

Reggie, I want to go to you on the creative class that's emerging. Humans are so important in augmentation with AI. In fact, we've riffed on this on theCUBE in the past. The chess world went through this years ago: computers playing against humans, then humans plus computers playing against humans. Humans plus AI is greater than AI alone. We've been talking about that on theCUBE. What is this going to do from a national, societal impact standpoint? Because if this continues, the creative energy that will be unleashed will be unbelievable. New capabilities will emerge. A new creative class could emerge in the tech scene, enabled by AI. How do you see that? And how do you manage the dialogue around the guardrails and the doomsday scenarios and AI is going to, I mean, there's a lot of conversation around this.

Yes, there's a ton of conversation around it. And it's important to note that there's a ton of hype around it. One of the things we want to do is be a little more pragmatic about it. We want to leave the existential, end-of-the-world conversation to others and focus on the middle of the bell curve, which is where most folks live, right?
We want, to Bryan's point, to make sure we are bringing productivity, performance, and trust to the conversation so that most people can get their business done, get their lives done, et cetera. Now, should we have some conversation about potential existential threats? Sure. But we don't need everyone having that conversation. So as it relates to industry, we want to make sure we're on the front side of it, to your point about data. We want to make sure we're having conversations with folks so they appreciate where their data lives within that overall AI life cycle. And importantly, AI being a life cycle, the data is the lifeblood of that life cycle.

I love that. But you're having the hard conversations, I'm sure you are. What are they like with customers, in terms of what roles go away, what roles emerge, and how the existing roles that are going away can feed, through training or transformation, into those emerging roles? What are those conversations like?

Well, I think it's important that we don't give way to what we call techno-centrism, to suggest that technology is always better than a human. There are some cases where I don't want to self-serve at a restaurant using a QR code, as an example. I'd much rather talk to a server who can tell me a little about what the menu offers, right? So I think we have to be really careful there. We've been trying to counsel our customers as well as our employees on how to have those kinds of conversations. We don't have to automate everything. Now, to your point, there will be some conversations we have to have around displacement and those sorts of things. I think there's a place for us to play as a provider of the technology, but there's also a place for, say, government to play, and we're trying to make sure our platform is flexible and pliable enough to accommodate any of those particular adjustments that might come from a regulatory perspective.
And I presume you would agree that public policy can't just be there to protect the past from the future. But at the same time, you're right, the user experience has to be considered as well. You go to an airport now and you see all these kiosks, and I get it. Is it the best thing for the customer experience? Maybe, maybe not, but it obviously goes to the bottom line.

I think what's happening is we're being forced to rationalize what it means to be human. When the barrier to adoption of this stuff is so thin, so low, we're now asking, what is our value? And I think businesses are going to pressure-test that in many circumstances. One thing we always talk about is that modeling is studying the past to predict the present or the future. It does not know about our aspirations. It does not know about our goals or our societal values. We have to infuse those into the modeling process. Humans need to be informing the modeling process about where we want to go, so that the models help us achieve our goals, not get in the way of them.

Do you think AI will become, at some point in our lifetimes, a true learning system?

I mean, it's doing it now. You can put computer vision on a YouTube video, watch something run, and then bootstrap that into a model that actually creates a dimensional object that can run. That's pretty amazing. Now, it needs to be focused, though. It's not wide open. It doesn't have the consciousness we have, but on a per-task basis it certainly can learn, 100%. And I think it's a matter of how we build these integrated systems, which comes back a bit to the simulation stuff we're talking about.

I think the human angle is huge. What does it mean to be a human? Also, it changes the UX of applications.
So this comes back full circle. We've been living through digital transformation for a decade. Now AI just drops in like this, with all the goodness, and it's going to change things for sure. We know that.

100%.

What changes? And what changes for the customer? To get technical, under the hood there are a bunch of databases, a lot of systems in place, legacy systems, brownfields and greenfields everywhere. There's no free lunch out of that, by the way. That's still there. So there are two strategies: you bolt on AI, or you build an abstraction layer and make it native in the application. How do you see that evolving in these systems? Because you're laying out a vision that says, hey, Workbench is an interface into an existing set of systems, a fleet of servers and all kinds of stuff out there, legacy hard stuff you don't want to be taking down and rewiring. And then App Factory, to make it native in applications. That makes a lot of sense. Am I oversimplifying?

No, no, that's a great way of simplifying it. I think what we're saying is, there are going to be cloud-native AI natives, or there's a whole concept of generative AI natives being bandied about now, we'll just throw all the buzzwords out there. There's a bunch of natives coming up who are saying, hey, we can start without all that legacy. They're going to be disruptive players in the market, and you're going to have to deal with that. So it's really important for companies with legacy infrastructure that have maybe been dragging their feet on moving to the cloud and dealing with AI: you have to do this. Because there are going to be folks starting with none of that, and they're going to be disruptive businesses, like some of these other large businesses we've seen.

So new brands are emerging. You think they'll come out of the woodwork?

Yeah, yeah, I think it's going to be very, very important. There's always someone coming up, right?
Yeah, but what's happening now is the pace at which something can come up.

Well, the word disruption is a big deal right now, because the rate of change seems so fast that it's disorienting. And so it's really, really important that companies and leadership think about, all right, how do we protect this business from being disrupted? And that means you've got to be fast. You've got to be productive. You've got to be efficient.

What's the one thing you'd say to people watching who are sitting there thinking, hey, I've been sitting on the fence? What's different now, with AI and foundation models and all this goodness that wasn't available before, that means they have to move now? What's the one thing they can take advantage of? What's the low-hanging fruit?

Well, first of all, the cloud computing paradigm is what has enabled us to ultimately achieve these new outcomes, no question. So cloud compute has been a big part of that. But the paradigm change is that the barriers to consuming information, and consuming the right information, are lowering. So you have to think about yourself competitively. You are competing on data with the rest of your industry, and whoever unlocks value fast enough to make the best decision the fastest is going to win. That's it.

The App Factory really resonated with both John and me. What is the vision for App Factory? How does it affect your TAM? Is it to be the app store of data apps? Is it something different?

For us, I believe it greatly expands the total addressable market. Both Workbench and App Factory do, because we believe we're driving efficiencies on both ends of the curve for AI. On one end, getting in, the entry point is lower and easier; on the other, getting something real into production is now easier with App Factory.
And everything in the middle is the sausage-making manufacturing process of AI, and we have a great AI life cycle that is very seamless and completely integrated. But now, with App Factory, we're going to enable people to build purpose-built, AI-driven applications so quickly that it's going to be very disruptive out there in the market.

Your bookends are, I think you call them, the creators, or?

The builders of AI and the consumers of AI.

Okay, and how does that affect your world?

Well, clearly, if folks are out building applications, we want them to build them in a trustworthy way. So we want to provide a trustworthy platform for people to build responsible AI applications on top of. I want to make sure that as we build out App Factory, it's full of apps folks can rely on. So if you're building models, we want to be able to assess the health of a model over the course of time. One of the other things we showed during that same presentation was the model card. This is about being able to say, here's how you govern and monitor the health of the model over time, because we recognize that models degrade, and so on. So I want to make sure we're building an application factory full of trustworthy applications that developers can feel confident grabbing and utilizing.

And just real quick on that: Reggie is involved, integrated, in our development process. His team engages with our product managers to look at how we're approaching the problem, taking the market perspective of what's expected of us as a company from a responsible innovation perspective and infusing that into the product management requirements set. So it's very active participation in the technology stack, as well as on the governance and ethics side of the house, and the outreach as well.
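The model-health idea Reggie raises, watching a deployed model degrade over time, is often tracked with a drift statistic. SAS's actual model-card metrics are their own; the sketch below is just one common, generic measure, the population stability index (PSI) between the score distribution at training time and in production, written in plain Python with made-up sample data.

```python
# Generic sketch of one model-health check a model card might surface:
# population stability index (PSI) between training-time and production scores.
# Illustrative only; not SAS's actual model-card implementation.
import math

def psi(expected: list, actual: list, bins: int = 4) -> float:
    """Population stability index between two score samples."""
    lo = min(expected + actual)
    hi = max(expected + actual)
    width = (hi - lo) / bins or 1.0
    def frac(sample, b):
        count = sum(
            1 for x in sample
            if (lo + b * width <= x < lo + (b + 1) * width)
            or (b == bins - 1 and x == hi)  # top bin is closed at the max
        )
        # Small floor keeps the log defined when a bin is empty.
        return max(count / len(sample), 1e-6)
    return sum(
        (frac(actual, b) - frac(expected, b))
        * math.log(frac(actual, b) / frac(expected, b))
        for b in range(bins)
    )

train_scores = [0.1, 0.2, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
prod_scores  = [0.5, 0.6, 0.6, 0.7, 0.8, 0.8, 0.9, 0.9]  # shifted distribution

# A common rule of thumb: PSI above 0.25 suggests the model is seeing a
# different population than it was trained on and may be degrading.
print(psi(train_scores, prod_scores) > 0.25)  # → True
```

Governance then becomes a policy question: what PSI threshold, checked how often, triggers retraining or retirement, which is exactly the kind of decision a model card is meant to make visible.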
You guys think a lot about data quality, which obviously affects you. And when you think about things like synthetic data generation and LLMs: we were on theCUBE with a friend of theCUBE, an AI expert, who said entropy is winning, there's all this randomness. Are you seeing that? How do you deal with that sort of increasing randomness?

Well, one of the things we've done with simulated, synthetic data is that we've taken generative adversarial networks and extended them. We have a patent for this, to create exceptionally statistically congruent data that reflects the complexity of the real world. We were doing this three years ago, by the way, as an internal research project. The idea is that there are times when creating the data, getting access to it, or actually collecting and processing it is just not possible. So if we can synthetically generate it, and it's accurate and reflects our world, we can actually improve data privacy. We can also deal with a major issue in AI, which is rare events. If you're trying to do something like fraud detection, fraud is a rare event in the corpus of all the data. So we can create more rare events, which then improves the robustness of the models we create. Synthetic data generation, we believe, is a huge part of the overall generative AI story, because it's going to allow for more robust models. There's no better example of this than autonomous driving. The improvement of AI models should not be a function of how many people get killed on the road by autonomous vehicles. That is unacceptable, right? If we have the ability to create environments, worlds, that are so similar to the real world, then we build better models and prevent harm.

Are you an optimist?

Can I piggyback on that for one quick second?
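As a sidebar on the rare-event point: the simplest version of the idea is to fit a distribution to the few real rare-event rows and sample extra synthetic ones. The toy below does that with independent per-feature Gaussians in plain Python; a real generator like the extended GANs Bryan describes would also capture correlations and much richer structure. All data and names here are made up for illustration.

```python
# Toy stand-in for synthetic rare-event generation: fraud rows are rare, so fit
# simple per-feature Gaussians to the few real examples and sample extra ones.
# Illustrative only; not the GAN-based approach described in the interview.
import random
import statistics

random.seed(42)  # reproducible sketch

# Hypothetical feature rows (amount, velocity) for the rare "fraud" class.
fraud_rows = [(900.0, 7.0), (1100.0, 9.0), (1050.0, 8.0), (950.0, 6.0)]

def synthesize(rows, n):
    """Sample n synthetic rows from per-feature Gaussians fit to `rows`."""
    cols = list(zip(*rows))
    params = [(statistics.mean(c), statistics.stdev(c)) for c in cols]
    return [tuple(random.gauss(mu, sd) for mu, sd in params) for _ in range(n)]

synthetic = synthesize(fraud_rows, 20)
print(len(synthetic))  # 20 extra fraud-like rows to balance training data
```

Because the synthetic rows are drawn from the fitted distribution rather than copied, no real customer record appears in the augmented training set, which is where the privacy benefit mentioned above comes from.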
I just want folks who are listening to imagine this for a second. We all probably have loved ones who have dealt with some rare disease or something along those lines. Well, we work with a lot of the big pharmas, and many of them use SAS. One of the things they have to do in clinical trials is identify not just physicians who are prepared to take their therapeutics and use them, but also patients within the physician's proximity who have this rare disease. So imagine being able to use synthetic data to generate the profile of a patient without having to go down to Buenos Aires, or to Singapore, or wherever the case might be, to find them. Just the power of that synthetic data to help us bring therapies to market much more quickly could be huge.

Well, the impact is great, but the trust piece also plays beautifully into that. I have to ask you, because I know we're almost out of time, but I want to get it out there: you're doing a lot of work on policy, in the company, with all the devs, but also on a national level. What advice do you give companies out there trying to have a framework or a policy without sounding like they're just mailing it in?

Yeah. Many are mailing it in.

That's a great point, yes. So the first thing is, start with a set of principles. We are going to see regulatory activity throughout the world. We see things going on in the EU right now, we're having conversations here in the US, there's work in Brazil, you name it, literally all around the world. A lot of it is human-centric, a lot of it is principle-driven, and that's great. We like to say, however, that this compliance and regulatory environment is the floor. Our principles are the ceiling.
So yes, we're going to abide by every single law that exists in the areas where we do business, but we want to make sure that in those markets we continue to stay true to who we are, which again is focused on human-centricity, inclusivity, privacy and security, robustness. That's our thing, right? So the first thing I tell companies is, let's start there. The next step is, how do you bring those principles into practice? And that's where a lot of companies are struggling right now. We've done a pretty good job, I believe, of establishing our governance regime, if you will. We refer to it as the quads: we're focused on matters associated with oversight; the platform that Bryan spoke to earlier; making sure we've got risk controls in place; and, importantly, our culture, making sure that all of our people within the company are fluent on matters associated with trustworthy AI. So we've started there, and we've done a ton of work there.

If folks want a resource, one of the places I would point them to is NIST, the U.S. National Institute of Standards and Technology, which released the AI Risk Management Framework. Is it perfect? No. But is it a good set of instructions, a guidepost, for where folks can get started to actualize and practice some of the principles we espouse? Yes. So I would encourage folks to go there, and if they have questions, to reach out to us; we're happy to have those conversations. We were fast followers of some of the others already in this space, and now there are a ton of people behind us fast-following our lead. So we're happy to share as much as we can.

The hard part of those frameworks is operationalizing them, and that's where you can come in and help. What do you want from governments, and do you think you'll get it?

So obviously consistency is important, right?
We're a global organization, so we want to make sure there is global consistency. AI as a life cycle is also a huge ecosystem, and there are a lot of handoffs between us and other organizations, and between us and other technology capabilities. I always give the example of electricity. We generate electricity roughly the same way no matter where you go, but here in the US we have a certain plug, and if I go to the UK I need an adapter. So there will be some adapters, if you will, as it relates to AI. But to the extent that we can get uniformity and consistency out of these major economic blocs, or governments, the better off I think we'll be.

Reggie, you're doing great work, and it's still early, so it's super important. I love that you're digging into the product as well as the societal impact. Bryan, let's wrap up the segment with you. Great keynote across the board, great demos. We didn't do the demos enough justice; they really highlighted that advantage. For the people watching, what's the most important thing they should take away from the keynote this morning? What's the main message? If they didn't watch it, why should they go watch it? What's the summary, in the last minute we have?

First of all, what people don't know is that SAS is innovating at lightning speed. That's a fact. And I think what people don't know is that we are embracing all the programming languages out there. It's really important that we're not just putting words to this. With Workbench, that's a follow-through on our commitment to saying we want you to actually be able to leverage Python or SAS. Our language competes with the other languages, sure. But the libraries we're building for Python and R are sometimes 10, 20, 30, 40 times faster than the libraries out there in the open source world. So we're adding value in these spaces, right?
So I think our commitment to programming languages and open source is an important marker, because I want developers to understand that we are here to help them: help them be productive, go faster, lower their costs, and ultimately help them get AI into production. We want data scientists and developers to feel like they can be heroes with our software in their organizations. And App Factory is that last mile on top of our AI and analytics life cycle that allows them to take the model they create with our software, deploy it into production for their business, and generate returns.

More models, more model management.

That's right. AI is a new asset inside the organization that has to be managed.

Bryan and Reggie, thanks for coming on theCUBE. We appreciate your time. We're here live on the show floor at SAS Explore, John Furrier and Dave Vellante, digging into all the action, from generative AI to business models, technical models, and foundation models, all here on theCUBE. Thanks for watching.