Hi everyone, welcome to a special presentation of SuperCloud 4, our focus on generative AI. I'm John Furrier, host of the CUBE. We are here in beautiful Cary, North Carolina at the SAS Studios, and impressive studios they are. I'm here with Reggie Townsend, Vice President of Data Ethics at SAS and a CUBE alumni. Reggie, thanks for spending the time with me today. Thanks for having me. First of all, this is an amazing studio you guys have here, an amazing campus here at SAS. The team does a tremendous job, so thank you for that. So I want to pick up our conversation from SAS Explore at the ARIA a few weeks ago. AI and AI ethics is what you do, and it's been the center of the conversation for the whole year, even now with the hype cycle still ratcheting up fast and hard. The substance is matching the hype, and the industry is leaning in; it was part of the whole program at Explore, but it's gone beyond the tech industry. It's hit the living rooms of every home out there. It's in the world, and everyone's talking about AI. Is AI going to be good? Is it going to be bad? What's the role of the government? What's the private-public partnership? We teased a little bit of this out in our conversation at Explore, but it's getting bigger, and DC has been having a lot of discussions around how do you regulate it, or not. What's the role of tech? There's something that you're leading here at SAS. What's the current status post-Explore? What have you been working on? Post-Explore, what have I been working on? Making sure that we get ourselves in a good position to deal with some generative AI of our own, not just the things that we're creating but the things that we are also looking to consume, quite frankly. Putting the right policies in place, we hope. Just like a lot of folks, we're all experiencing this together for the first time, and in the absence of, you know, strict regulation, which some could argue is a good thing.
Others would argue of course not, depending on where you are in the globe. In the absence of that, you've got to put good policy and practices in place for your organization to ensure that you're not just leveraging the upside of the capabilities but also mitigating some of the potential liabilities and risks associated with them, and trying to strike that balance is what we've been up to. You guys had a great keynote on that. Before I get into some of the questions around how you see things, the role of SAS, and the new role that you're in: what is your job? Take a minute to explain your role as Vice President of Data Ethics. What does it entail? I know there's a lot of outreach that you're doing with government, and you're trying to put these guardrails in place as you figure that out, but what is your day-to-day role here? Yeah, so day-to-day I have responsible innovation for the company, effectively. From a data ethics perspective, that entails all things data and how it gets used. As a matter of shorthand, I like to say my job is to make sure that wherever our software shows up, it doesn't hurt people. So what we've done over the last couple of years is put a governance model in place where we focus on matters associated with oversight, particularly as it relates to AI, or trustworthy AI. We focus on the controls necessary for AI, so all of the risk management activities required. We're looking at the global regulatory environment and how that's adjusting over time, so we need to make sure that we're compliant based on the countries that we sell in.
We are actively working to build features into our platform so that our customers who inherit or purchase our platform will have all of the responsible AI features they'll need in order to do things like mitigate bias, govern data, and govern models, so we're actively involved in that with our development team. And then the last piece we focus on is culture: building a culturally fluent organization so that folks are able to have conversations about trustworthy AI with a degree of credibility and confidence, and also embody what we like to refer to as ethical inquiry in their day-to-day activities. It's interesting, you've got words like ethics, AI, innovation. It feels like this is a new position that's emerging, and you're kind of leading it. It's still early innings, by the way; I don't think the game's even started yet, it's so early on. It's not just policy, because the innovation is here and because digital transformation has AI behind it. AI has become a real accelerant for applications, and you mentioned some of the things there. Is the government going to be able to grok the speed of the industry? Because the innovation side moves so fast. When I hear governance I hear policy, I hear slow, I hear glacier-speed slow, and governments tend not to be that fast in terms of adoption. The private sector is, and you see that with cybersecurity too; cyber has more privately led initiatives, with government involved. What's your take on this? Because the issue is not just policy and being involved and doing basic education. The innovation is a real issue because it's fast. Yeah, 100 percent, let's unpack that, because I love this conversation.
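To make the bias-mitigation idea Reggie mentions concrete: a minimal sketch of the classic four-fifths disparate-impact check, one of the simplest fairness metrics such platform features build on. The data and threshold here are hypothetical illustrations, not SAS's actual implementation:

```python
def disparate_impact_ratio(decisions):
    """decisions: list of (group_label, approved_bool) pairs.

    Returns the min/max ratio of per-group approval rates. Values
    below 0.8 are a common (four-fifths rule) flag for potential bias.
    """
    totals, approvals = {}, {}
    for group, approved in decisions:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    rates = [approvals[g] / totals[g] for g in totals]
    return min(rates) / max(rates)

# Hypothetical loan decisions: group A approved 8/10, group B 4/10.
sample = ([("A", True)] * 8 + [("A", False)] * 2 +
          [("B", True)] * 4 + [("B", False)] * 6)
print(disparate_impact_ratio(sample))  # 0.5 -> flagged under the four-fifths rule
```

A real governance pipeline would run checks like this continuously on model outputs, not just once at build time.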
I like to say that AI is a socio-technical capability, much like any technology, and so my job is to sit at this intersection of policy, governance, the actual technology capability itself, and how it intersects with people. Right? Yeah, governance does slow us down. I would argue that taking a beat from time to time is a good idea. How many of us get to drive as fast as our cars will allow us, as an example? We have speed limits for a reason, and they are to keep us from killing ourselves and other people. We can find a number of parallels, and so I push back against the notion that fast is always good. I know the popular sentiment, fail fast, and yeah. I never liked that term, by the way. I hate that term. Nobody likes to fail. No one likes failure. Right. But you learn. You learn, and I understand the sentiment behind not dragging out a failure, because that feels like drudgery and agony. I get that sentiment. However, I think if it's taken to its absolute, where it's just let me go fast and break things, that's not good for anybody. Right? And so I do think it's okay for us to say the word governance out loud and not shudder. Now, we all live in a broader society, in our communities and our nations on the globe. And as a society we have to have some operating principles that we all abide by, else we have chaos. Granted, we have to push the limits from time to time; that's how you get progress and change. But the notion that innovation has to be, as I said, absolutist and almost reckless is, I think, a false notion. I think most innovators that I speak with, anyway, attempt to be as responsible with their innovations as they possibly can.
It's funny you mentioned move fast and break things; that was the famous Mark Zuckerberg quote at Facebook, which, by the way, he walked back and changed to move fast and innovate, because the original really sends the wrong message. And the famous Andy Grove, who I thought had the innovation formula right, had an expression that we use on the CUBE all the time: let chaos reign, then rein in the chaos. And I think you're kind of teasing that out with the guardrails and speed limit conversation. There is a formula you can apply to innovation: let it run a little bit. Sure. Watch it. Sure. Reel it in. Yeah. There are levels of risk that one can anticipate, and based on that anticipated risk you can enlarge or narrow your guardrails. There are ways to mitigate the impact. Right. And so I think in the broader social conversation, in the public dialogue, sometimes that gets lost, almost as if there's this thinking that innovators don't have an appreciation for nuance, and I think people are highly nuanced; that's been my experience anyway. And so one of the things that we're really focused on here is to make sure that we broaden the aperture so that we can deal with some of that nuance, on a technology basis, on a policy implementation basis, on you name it. You mentioned aperture. I want to talk about the role of government and the role of open source. So two areas that are highly active conversations right now with AI are open source and global access to the technology: is AI our Manhattan Project, or is AI going to enable freedom and peace around the world? This is a really active conversation, and I'd love to get your thoughts; it's still just now forming, and you're starting to see people talk about it, because AI can be a competitive advantage for a company, but also for a government, or for adversaries with, say, cyber attacks or whatnot.
So it's kind of a broad question, but it's two areas: open source has been a great developer playground, and right now the activity and the enthusiasm are so high, and yet on the global side there are still open questions. What does that look like? How do we approach it? What are some of the things we can do with other nations to share? Yeah, let me respond to your question this way, John. I think it's important for folks who are listening to this to understand what I mean when I say AI. AI to me is not a thing. It's not a product. It's a process. It's a life cycle. It is really about the acquisition and aggregation of data, how that data then is used in models that are used for automated decision making, and then how those models get deployed out in the world, how they trigger a decision, either as an input to another system or an input to a human who takes an action. Right. And so when we think about AI in that regard, the question I think you're really leaning on is the AI application, in any given context. So you're right, AI as an application can be used in a business setting for competitive advantage. We could take those same sorts of automated decision-making capabilities and use them in a military context: have drones that are automatically triggered based on a certain catalyst. In any scenario, whether it be business, military, health care, you name it, we have to have humans involved who are determining process and procedure early on, well before the first line of code is written, so that we have an understanding of how we want to deploy the AI. So it would be a horribly bad idea to say we're just going to create a bunch of AI drones to go fight wars for us. However, it may be a perfectly fine idea to say I want AI to, I don't know, turn on my coffee maker in the morning. Think about the levels of risk there.
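Reggie's point, that humans should decide process and risk posture before the first line of code is written, can be sketched as a toy risk-tier gate. The tiers and use cases below are illustrative assumptions, loosely echoing risk-based regulatory thinking, not any specific policy:

```python
# Illustrative risk tiers, decided up front before any model is built.
# The drone and coffee-maker examples mirror the conversation above.
RISK_TIERS = {
    "coffee-maker-automation": "minimal",
    "loan-decisioning": "high",
    "autonomous-weapons": "unacceptable",
}

def deployment_policy(use_case):
    # Unknown use cases default to the cautious tier on purpose.
    tier = RISK_TIERS.get(use_case, "high")
    if tier == "unacceptable":
        return "do not build"
    if tier == "high":
        return "human-in-the-loop required"
    return "automate with monitoring"

print(deployment_policy("coffee-maker-automation"))  # automate with monitoring
print(deployment_policy("autonomous-weapons"))       # do not build
```

The design choice worth noting is the default: anything not explicitly classified falls into the high-risk bucket, so new applications get human oversight until someone deliberately decides otherwise.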
And also, to your point, AI is not a product; it's an input into, like you said, a lifecycle, so AI can enable something to happen. And by the way, you can actually put AI into drones; the drone isn't AI. AI is an ingredient. That's right. It goes into something; the product is the application, whether it's a drone or a coffee maker or software. That's right. It's AI-enabled. That's right. Yeah. And all too often today, the conversation about AI in many people's minds is large language models. It's GPT, that's AI. It's like, wait a second, that is a form of AI, but it is not all of AI. My CUBE partner calls it the AI heard around the world, that moment when ChatGPT came out and kind of woke up the average person. Yeah. Wow, that's legit, because you can see it, you can feel it, and we are in a generational shift. And that's a good segue into my next question, which is: it's clear that AI matches things like the PC revolution, the web and internet, and then mobile. These are key inflection points where the applications changed, the user experience changed, and hence the user expectations changed. Right. So you're starting to see this generation emerging, and it's almost as if the fashion just changed. Oh my god, everyone's wearing new clothes; it's the AI clothes. So younger developers are more engaged, applications are going to look different, and the general consensus, like with the web, is that everything's going to be AI-enabled at some level. This is still early innings. I mean, if you believe that to be true, then we're not even starting the game yet. So this is why this conversation around ethics and governance matters, because what you guys showed at your Explore conference was: if you get governance right, AI scales; if you don't get it right, it kind of breaks down, because you're not factoring in the data management piece, which feeds the AI. Yeah.
This is a nuanced point, but this is why this idea of ethical inquiry becomes so important. Right. So let me break down where you just went with that. Think about it ethically, if you can: I recently started serving with the British Commonwealth, doing some work for the 33 small states. Just follow me: these are nation states, oftentimes with populations in the tens of thousands, not millions, that don't have the infrastructure to support, say, a data center, that don't have the skills capacity to support building large language models. So how do those sorts of people participate in the digital economy of the future? Chances are, unless we are very intentional about pulling them in, they won't. Right. So the capability of being able to derive code by talking, as opposed to having to know how to write Python, is hugely important; it's very necessary to bring folks like them into the digital economy. So ethically speaking, I think this is a huge game changer, because what we're effectively doing, as you pointed out, is going into a new era of computing. Now, it has its implications. Our previous computing experience was one that was more precision-based, and our new computing experience will be one that's more probabilistic, and we can get into the distinction if you like. But in either case, where governance shows up is the consent on the data that is being used to develop the models, not only how the models are created and deployed but who gets a say in the algorithmic techniques. Right. Who gets to determine what we're optimizing for, those sorts of things. So yes, it's broadening a set of opportunities for people who might not otherwise have had them.
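The precision-based versus probabilistic distinction Reggie raises can be shown with a toy contrast: a classical function always returns the same answer, while a generative model samples from a probability distribution over outputs. The logits below are made-up numbers, and this softmax-sampling sketch stands in for what a real language model does at far larger scale:

```python
import math
import random

def precise(x):
    # Classical computing: same input, same output, every time.
    return x * x

def sample_next_token(logits, rng, temperature=1.0):
    # Generative computing: the same prompt yields a *distribution*
    # over next tokens, and the system samples from it.
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for numeric stability
    weights = [math.exp(s - m) for s in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]  # softmax probabilities
    r, cum = rng.random(), 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

rng = random.Random(0)
print(precise(4), precise(4))  # 16 16 -- always identical
samples = [sample_next_token([2.0, 1.0, 0.1], rng) for _ in range(5)]
print(samples)  # sampled token indices, weighted toward index 0
```

This is the heart of the governance challenge: a precision system can be verified against a spec, while a probabilistic one can only be characterized statistically.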
Yes, it creates some implications that we all need to be aware of, and yes, we've got to have proper governance and a good understanding of what that truly means. And by the way, it doesn't just mean the feds showing up and saying no you can't, yes you can. Right. It is about, as I described, that ethical inquiry that goes into determining access to data, who gets to manage it, how models are developed, how they get deployed. I mean, that's a whole nerd conversation around the data supply chain and software supply chain, on and on, which are key topics that you guys talk about at your conferences. On the customer side, Reggie, I want to get your thoughts. You mentioned the work you're doing and others. The enthusiasm is high; check, no problem, that checks the box. Confidence is getting there; that's to me the next transition beyond the enthusiasm stage. What are some of your customers thinking right now? What are they enthused about most, and where do you see the confidence landing? I mean, it's going to land somewhere, and it's probably going to land in a low-hanging-fruit use case, where the customers go from enthusiastic to: I'm in, I'm leaning in and implementing, where they are confident in not only the AI but the overall AI products that have been rendered. In my conversations, most of them are confident now at the stage of exploration and experimentation. They are a little less confident, or maybe a little more cautious, as it relates to long-term deployment and to their customers being impacted. So there is the internal view of this, which is: I can try to build some things on my own that affect only myself. That's one thing; again, lower risk. But when I start going outbound, I have to make a different risk calculation, if you will.
I was just with some financial services customers two weeks ago, and they're dealing with that very thing: okay, as an internal IT shop we can use it for maybe some cybersecurity sorts of things, maybe we can do some things differently as it relates to how we're aggregating data, and those sorts of things. But we're going to slow down when it comes to products that are going to impact our people, whether they be chatbots, whether they be triggering loan decisions, and all those sorts of things, and understandably so, particularly when we remember that a lot of these companies, particularly in financial services, have been using AI for years in their back offices. But now, with the generative conversation, there's much more enthusiasm, I guess is the word, about how it can be used as part of a capability set as they go outbound and touch their customers. I think there is a bit of wait-and-see around generative, largely because of some of the legal dynamics there, as well as the technical dynamics with hallucinations, accuracy, and what have you. But the legal side is really concerning right now for most folks, and you don't want to build on a foundation, and they call them foundation models, only to see that foundation crumble next year. If there are cracks in the foundation, it's not going to be good. It's funny, you mentioned computation earlier; I didn't say compute, but you said the math, the computation. Compute is going to be a big part of the next-gen cloud, and with generative AI it is generating something. AI in the back office, as you mentioned, machine learning, unsupervised and supervised, has been around for a long time.
But generative AI creates an experience with the data, so there are two issues. One is hallucinations: as you mentioned, everyone sees that ChatGPT is not perfect, that it's not the oracle of knowledge, it's just crawling the web. The big trend is that with these vector databases and these embeddings, you can actually bring compute to the data and make data work anywhere, so that's going to bring the edge into play. And so what we're seeing is companies having their own language models. We put out research that showed a power law: the big proprietary LLMs like ChatGPT from OpenAI, Anthropic, AI21 Labs, and others, and then an evolution of small language models where there's proprietary information. So we see an era where integration is going to happen with data. So here, ethics, trustability, where do they come from? And proprietary intellectual property from a company has to work with others; this is going to be a whole other level. What's your vision on this trend, and how can customers start setting up their practice now to understand that their data will be out there in the wild and will have to interact with public models? I mean, clearly APIs will be a big part of that, but this is a whole other level. Yeah, so there's what I'll call primarily a US-centric approach and a European approach; I'll just use those two as polar opposites right now. The US-centric approach has been: let's go slow on the regs, let's appreciate where the tech is headed, and then let's use common case law to build the necessary precedents to regulate it legally. On the opposite side, you've got the European view of: let's put a regulatory regime in place based on what we expect out of the technology in the next 10, 15, 20 years.
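The vector-database-and-embeddings idea John references comes down to nearest-neighbor search over embedding vectors. A minimal sketch, with invented three-dimensional "embeddings" and document names; real systems use model-generated vectors with hundreds or thousands of dimensions and approximate indexes:

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity: dot product over the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm = (math.sqrt(sum(x * x for x in a)) *
            math.sqrt(sum(y * y for y in b)))
    return dot / norm

# Toy corpus: document id -> embedding vector (hypothetical values).
corpus = {
    "loan-policy": [0.9, 0.1, 0.0],
    "hr-handbook": [0.1, 0.9, 0.2],
    "board-minutes": [0.0, 0.2, 0.9],
}

def retrieve(query_embedding, corpus):
    # A vector database does exactly this, just at scale.
    return max(corpus,
               key=lambda doc: cosine_similarity(query_embedding, corpus[doc]))

print(retrieve([0.8, 0.2, 0.1], corpus))  # loan-policy
```

This is also where the proprietary-data concern lands: the corpus of embeddings is derived from company data, so governing who builds, queries, and shares that index is part of the ethics conversation.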
I want to set that up for my response simply to say: if you really dig into the European stuff, of course the EU AI Act gets a lot of press, but they've got a lot of other acts that orbit around that one, mainly the Data Act. And they talk about this digital debt in Europe, because of sovereign clouds and country interests. Now here's the thing, though: when you really start digging into it, they realize that data is the lifeblood of AI, and they also realize that the hyperscalers are all US companies. So think about this: I've got all these citizens with all this data that's now being accumulated in a cloud somewhere else, potentially. Of course the hyperscalers say, no, we have a region in the area. Yeah, they say that, but it's really failover, right? And so where they go with the Data Act ultimately, and this is my language, is: we want to create some level of leverage such that our citizens aren't just at the mercy of someone else. At the mercy, exactly right. And so where they're headed, I think, is to create a regime where you and I, if we were citizens there, would have an opportunity to participate in the monetization of our data. Right. And for companies, we need to protect our data. Exactly right. And right now that model doesn't exist; literally the architecture to support it doesn't exist, but you can kind of see where the ball is bouncing on that one. In the US we don't have such a plan. I do think, however, we may end up getting pushed and dragged in that direction ultimately. I mean, the early hearings are pretty much setting up for at least some guardrails, while letting the ball bounce a little bit faster and looser. Well, I think it's going to come down to this notion of personal data sovereignty at the end of the day. I think if you
can imagine an AI-optimized world where maybe you don't have to work as many days during the week and we can still achieve our growth goals and all the rest of it: how do you pay your mortgage if you're not earning as much as you used to? Well, you've got to be able to participate in the economy in some way, shape, or form, and maybe this personal data sovereignty gives you the ability to do that. Again, we're probably talking easily a decade or more out before anything like this even starts to hit the pavement, but I do think it's a conversation worth provoking, because those at greatest risk of harm, those who are most vulnerable today, are those who aren't participating on the front end of this technology, and we've got a lot of people there. And so when people say, you do AI and it's scary, and all those sorts of things, yeah, there's the doomsday dystopia conversation, which I'm less concerned about. What I'm much more concerned about in the short term is one's ability to participate in the economy in the next five to ten years. Yeah, and I think one of the things coming out of AI is that it's hopefully going to give humans more opportunities to spend time on things that are creative or knowledge-worker based. When you look at AI today, there are pretty much only three great use cases: chatbots, some sort of co-pilot assistant augmenting a human, and predictive magic, and all of that is going to get better. Sure, that's going to get better. And the thing is, and I've been saying this on the CUBE, and we'd love to get your thoughts on it, there'll be a new creative class emerging in the tech scene. We've had a creative class, but in technology, how much creativity have we really had? Yeah. I mean, it can get better, and you bring up this whole societal thing. If IT can become less about pushing buttons and turning knobs, that's going to really give more
creativity for being a better cybersecurity analyst, yeah, a better application developer, a better worker, or an entrepreneur starting a company. Yeah, that's interesting. I'm not sure where my mind goes when you talk about a creative class. Think about capabilities like YouTube, capabilities like podcasts, where, yeah, they unleashed a lot of opportunity for people to be creative in those spaces, but we got a ton of mediocrity. Yeah, right. And so we'll see a lot of that, and we're starting to see some already with AI; everybody's got their Stable Diffusion pics and all that sort of thing. But what we'll do, I think, is find the creative geniuses that we otherwise would not have found, because they wouldn't have been able to come through the mainstream channels, very much like you get YouTube superstars who turn out to be good actors or whatever. AI-assisted humans. I mean, as we've been saying on the CUBE, AI can scale data; we have data in our heads that can't scale, and that's intellect. So I liken it to the cloud days. I remember when Amazon started: when I was starting a company, I had two choices. Spend about fifty thousand dollars on gear and co-locate it in some colo, which is more cost, more money, setting it up, provisioning it, or put my credit card down and push my code from my laptop to the cloud and get a prototype up and running. So the friction was reduced; I could put something out there, get some funding, boom, I start a company. That's how Dropbox and Airbnb started, that whole generation. I think now with AI there are new tools, from the dorm room to the boardroom; there's activity, and you'll see a ton of it. You'll see a brain explosion, potentially: kids like my kid who's into music, and I'm like, go get into the AI stuff, because that's where this thing is headed. And so you start to see a lot more creativity in terms of the use of the raw
talent through the use of the tools. So I think, and again, this is very much like we saw with the last computing era, like we saw with the internet, these eras tend to repeat in terms of the way we interact. But in the end, I think what they all do, John, is create, I won't say an entirely level playing field, but they certainly begin to level the playing field as you democratize capabilities for more people to participate. Well, I'm excited by AI. I think it generates a skill set that isn't taught much in schools, and I think everyone can freely participate and level up to the jobs that are needed. My final question for you, and I really appreciate you taking the time here in your home studios, it's gorgeous, I love it here in Cary, North Carolina: SuperCloud 4 is about multiple environments, multi-cloud environments. Now, how can it be global? You've got not only multiple clouds, you're talking about multiple geographies and continents with regions. So you have Amazon Web Services, Azure, Google Cloud, Oracle, Alibaba; you've got all kinds of cloud infrastructures. It's going to provide a challenge for companies, what we call super cloud, but with generative AI, how do you think about that? What's your thought process on how to start thinking about the next 10, 20 years for my environment to participate and have that super cloud capability? Because we haven't even talked about the edge that's coming, humans with wearables and devices, and even the industrial edge. Global, multiple clouds, AI-powered; I mean, that's going to be a data challenge at the end of the day. It is, and there are probably some people who can speak to that better than I can from a technology standpoint, but the first thing that comes to mind for me is really the need to modularize. We've had experiments in the past where we've put all of our tech needs into one bucket; I won't name any
you know, big companies that happen to do that at enterprise scale. But the whole idea here is, if you invest all in one bucket, be it one of the hyperscalers or not, that creates a single point of failure for you. So I think the ability to remain nimble and flexible is hugely important for the resilience of any organization from a technology architecture perspective. I think that means you've got to be able to modularize: if you need to handle AI tool sets and disperse those across multiple clouds, that's got to be the play. If you've got one solution that's targeted for one geography, okay, great, maybe just land that all on one cloud. But I think understanding the gravity of data, understanding the need to be modular with the technology so that you remain nimble, and, oh by the way, don't get locked in on some of the cost dynamics associated with these, is just hugely important. Yeah, and then the governance piece is so critical, making that data more frictionless to share. Reggie, thanks for spending the time here in your home studio; it's like a home game for you. Thank you, man, great to see you. I'm John Furrier with the CUBE here in Cary, North Carolina at the beautiful SAS Studios for this special presentation. We'll have more conversations about data ethics, data reliability, and data governance; this is the key to speeding up AI in a responsible and scalable way. Thanks for watching.