Welcome back everyone, it's theCUBE's live coverage here in Las Vegas. I'm John Furrier, host of theCUBE, with my co-host Dave Vellante, head of CUBE Research. Kimberly Nevala is here, strategic advisor at SAS, podcast extraordinaire here on theCUBE, with a podcast at SAS as well as other endeavors. Thanks for coming on.

Thank you for having me.

So I love this topic conversation. We're going to riff on AI's influence on the human experience, pondering AI. That's the topic. You do a lot of podcasting and talking to leaders. Let's first talk about the podcast, give us the quick coordinates: where do you find it, how many episodes are you in, how far along are you?

Yeah, the podcast is the best job I never wanted. I think I expressed my skepticism about it when it first started, but it's the best job. You guys know this, right? You get to talk to a lot of people about a lot of things and ask them a lot of questions, and it's fabulous. So we're at about 48, 49 episodes now. We were doing this somewhat episodically, or seasonally I should say, for the first two years, and we're just about to transition to bi-weekly. And we're about to dip our toe into video, so this will be a good test run, maybe.

It's just audio with a little video. It's like a multimodal podcast.

Yeah, SAS will bring it into their AI machine and get all the insights. We'll have machines doing it for us.

We'll do our best.

So what are you working on now? Give us a taste of the topics you're covering and what you're doing.

Yeah, so we really do, as you said, talk to a very broad swath of folks and topics. And I know you here today are talking to a lot of folks about the technology. So some of the more recent conversations that I found very interesting, or some of the themes, are, number one, what really is the impact in the near to medium term of AI and AI augmentation on human work, on skills progression, on opportunity? I continue to be really interested in how the language we're using to talk about AI, the language we use to describe it and define it, impacts our perceptions and then our ability to apply it properly. So that's also a theme that's gotten threaded through a lot of our episodes, whether we're talking to sociologists or folks that are dealing with industry applications. And of course, gen AI has been a perfect storm. So there's a lot of learning there.

The excitement and the enthusiasm is high, but is the confidence getting there, right? I mean, you have people who are like, I'm pro AI, I love AI, I can't get enough of it. Some are like, whoa, whoa, slow down, stop, slow it down. And then you've got the international piece. Dave, we've been seeing people debating: slow it down, rein it in, or let chaos reign and then rein in the chaos, as Andy Grove would want to say. So what's your view on that? Because it's different. How do you see that playing out? Is it more pro AI, or what's the sentiment? What are you seeing for the sentiment? Because there's two schools: you don't want to stop the innovation, and at the same time you don't want it to go out of control.

Yeah, and I know you folks are going to be talking to Miriam Vogel and Reggie Townsend about this, the push-pull, or maybe the tension, that's happening right now as we see a vast evolution and explosion on the regulatory front. And that question of, does regulation stifle innovation? My perspective is it doesn't. Regulation doesn't stifle innovation, regulation stifles harms.
So I think, to some extent, asking do we need to sort of stop AI, put it back in the bag... we can't do that, it's unrealistic. But I also think talking about AI just as "do we stop AI at large" is in some ways nonsensical, because AI is a portfolio. Like anything else, it's a portfolio of tools, and each of those tools has its own capabilities and its own limitations. So for me it's really more about, are we being mindful about when and where we are applying the right tool to the right application and putting the guardrails around it?

You mentioned language, using language. Do you have an example of where you feel like the language that we use for AI is not correct, or it's misplaced, or maybe not aligned with reality?

Yeah, I think so. As you said, I do think that when we are loose with the language we use to describe these applications, or, just in our excitement, hype them up, we set expectations that we cannot meet. We are either setting unrealistic expectations about the systems' capabilities and/or about their limitations. And this has a lot of potential detrimental effects, both broadly in public and society, but also for how an organization adopts AI. So one of my least favorite phrases right now is when someone will say OpenAI's ChatGPT, or an LLM of its ilk, has access to all of the knowledge ever created. All human knowledge that's ever been created, we have access to that. And that is categorically false.

So we know that's not true.

We know that's not true, but the problem is that the corollary, the unspoken corollary that happens for a lot of folks, is: therefore it must know more than I do, and therefore it must also be right. So when we're putting things into a standalone LLM or ChatGPT and something comes out, it lowers our inhibitions and it causes us to maybe question our own instincts, our own knowledge, and that's problematic. It's particularly problematic when we are expecting these things to give us advice in very discrete applications, whether it's healthcare diagnosis or social services or whatever that might be. So...

It does a good job of writing it, though. The false information.

It sounds like... I feel like it's education. Exactly.

Do you think that AI has to have access to all the world's information for something like AGI to happen, or will AGI just be smart enough to figure it out, like Einstein with the theory of relativity?

So I will leave some of this to the folks that are more technically adept than I am, but I don't think so. I'm not sure that this is the path by which we get to AGI. I think we also have to have a broader conversation about what AGI really means and looks like. I mean, what is it that we actually want that to accomplish for us, and not to accomplish? But one of the things I think is interesting, and back to the note we just made about language: when we look at something like a large language model and we have this expectation that it's going to spit out an answer that is perfect, it also sets this expectation that there's very little work required, even today, to be able to implement that system in a productive way, right? And today, again, all these aspects... language, for instance, is one piece of the puzzle, right? And so we need to be much more cognizant of the fact that it's not just going to be a matter of taking the current technologies and giving them access to all of the knowledge we have, even if that were possible, right?
Because even if it had access to all of the knowledge that was ever created, and had perfect knowledge of it, these are text synthesizers. They're not knowledge management systems. So if you're in a situation where truth is paramount and facts are paramount, you are going to have to build other guardrails and use other components in concert with that. So...

I think that's why the models that SAS is selling were impressive to me, because it's the first time I saw someone say, hey, we're going to have lightweight models that you could use in situations to cross-connect other data sets to get truth, because LLMs, even the top ones, aren't going to know everything, and there's hallucination. Dave calls it Swiss cheese. And now the vector embeddings are doing so well linguistically: the language that we speak or write, the ability to convert that into math, changes the whole retrieval game. That's why we're seeing RAG, retrieval augmented generation, booming, because, hey, you can use AI with your data. Like for us, with our vector embeddings, we do a lot of speech-to-text. So we use a lot of jargon: serverless, Amazon, AWS, SAS, you know, lingo, jargon. And the AI picks it up beautifully and matches conversations that you'd never type into a search engine. What did Kimberly say on theCUBE, and what did Bryan Harris say? I want to search on that. Keywords can't make that leap. The math goes, wow, they talked about the same thing, so these two videos are near each other. I find that illuminating, because that's the beginning of what we're starting to see with how AI is going to work. It's going to make things better.

So the question is work, right? Back to your thesis about changing the work environment. So, okay, work's going to get better, smarter, faster. That should elevate the game of the human. That's what we're saying. We believe that.

It will. It surely will.

And in our last segment, we just had Mike on from SAS, who runs customer intelligence and all the ad stuff. We were riffing that, hey, four-day work week. No, you proposed... you mentioned someone.

I mentioned Steve Cohen, who predicted a four-day work week. So on the fifth day, the machines do the work.

So, okay. Well, I get that, we debated that, but...

You're saying that it's coming.

No, for me it's an eight-day work week still. But okay, now that brings up the question. These are provocative, intoxicating questions: okay, what will be the role of the workplace, the workforce, expectations, roles? Does it change role ambiguity? What does the output, the performance, look like? I think everything flips upside down. And so I think there's going to be a real disruptive enabler coming in the workplace. What's your view on that? Because it seems like it's an opportunity, but if not watched, it could be weird.

Yeah, I think two thoughts. One, we have to be careful not to assume that the productivity or efficiency gains, for instance, from using AI to augment human work are guaranteed, right? And that they're necessarily going to be experienced equally by everybody. There is work involved in making that happen. So that would be the first thing, and we can come back and talk about experience there. And, secondarily, improving productivity and efficiency is always going to be good for the bottom line. It's going to be good for business.
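(An illustrative aside on the embedding-based retrieval John describes above: the sketch below is not from the conversation or any SAS product; it's a minimal toy in Python. The embed() helper is a hashed bag-of-words stand-in so the example runs on its own; a real system would swap in a trained embedding model, but the rank-by-cosine-similarity step is the part that "turns language into math" for retrieval.)

```python
# Toy sketch: turn text into vectors, then rank transcript snippets by
# cosine similarity to a query instead of by keyword overlap.
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy embedding: hash lowercased tokens into a fixed-size unit vector.
    Stand-in for a trained embedding model."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity; inputs are already unit-normalized."""
    return float(np.dot(a, b))

# Hypothetical snippets standing in for a video/transcript library.
snippets = [
    "Kimberly on guardrails, trust, and applying the right AI tool",
    "Bryan Harris on analytics models, serverless, and AWS jargon",
    "Logistics clip: badge pickup and conference schedule",
]

query = "what did Kimberly and Bryan Harris say about the same AI topics"
query_vec = embed(query)

# Snippets whose vectors land near the query are "talking about the same thing".
for snippet in sorted(snippets, key=lambda s: similarity(embed(s), query_vec), reverse=True):
    print(f"{similarity(embed(snippet), query_vec):.3f}  {snippet}")
```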
But if those are our only priorities and we just stop there, I think this will be detrimental for human workers, and it could actually have a detrimental effect on business innovation long-term as well.

Yeah, you mentioned Reggie earlier, Reggie Townsend, he runs the trust pillar. SAS has, as we reported in the opening segment, productivity, performance, trust, and responsibility. That's a huge part of the trust equation. I want to bring that back to the workforce. Will the rules of engagement in the workforce change if you now have a combination of at-home, hybrid, and office, plus AI? Do you see any data out there, conversations that are happening around what the trust factor is? And, oh, the honor system, I guess it's the honor system. Hey, I hope you're working. You know, and then California has a rule where you can't text... your boss can't text employees.

No, there's a lot. It's a bill.

It was a bill that was submitted that says you cannot text employees. Because we have a lot of California employees. I'm texting at all hours. You're going to be in jail.

You're going to be in jail. You're in trouble.

It's a societal signal. It's like, hey, we have to start thinking about work-life balance. Now, some entrepreneurs say there is no balance in entrepreneurship. That's a whole other topic, I don't want to go there. But staying on the work, the trust piece: are there rules of engagement emerging? Are people talking about this, or is it still too early?

I think it may still be too early in some ways. We know that it's important. We definitely see in different applications that a worker's ability to apply these tools, even if they're using them in a recommendation-type situation, is highly variable. It's highly variable depending on their level of experience. In some applications, it's the less experienced workers who benefit. So there was a study from, I think, Boston Consulting Group about making a chatbot trained on their domain knowledge available to their business analysts. Yeah, I'm sure you guys have seen that, probably have talked about it, right? And they found that, yes, it raised productivity overall, but who really benefited were the sort of less experienced analysts. In fact, the more experienced analysts saw a decrease in productivity.

Just today, I saw something go by, and you might say, oh, well, this is a good example of, if we augment in all these ways, all of a sudden we're gonna get really productive and efficient, we need fewer people, you need less experience, you can go farther in your job. I saw a report about radiologists using this as a diagnostic tool, and what they found was the results were mixed. There was no correlation between the radiologist's level of experience, or their accuracy prior to using the tool, and whether they were able to use the tool effectively or efficiently or not. And one thing they found that I thought was fascinating was that everybody, every radiologist across the cohort, was more prone to accepting an erroneous conclusion from the machine.

Really?

Right? And so when we talk about trust too, it's about understanding how the human perceives and interacts with it, and the tool's capabilities and limitations, and making sure that we're designing processes and services that take both of those into account. We cannot just, you know, look at the process and say, at step B, insert AI to provide the output, and then continue with your regular process.
You have to redesign the business process, the business service. I don't think this will have a whole lot of impact, to be quite honest, on whether we're more productive at home, or hybrid, or not. I think this is a tool like any other. I think it does impact, though, how we give people opportunities for improvement. How do we make sure that people have an opportunity to develop the baseline knowledge and skills, right, just that foundational knowledge that then allows us, as we move forward, when we suddenly have, you know, time to spend...

And it's also a huge moving target. I mean, it wasn't that long ago, it was post-iPhone, I believe I'm correct on this, that robots couldn't climb stairs, right? I mean, things are changing so fast. With GPT-5 versus GPT-4, who knows what that's gonna be, but I'm sure it's gonna be a massive uptick in capability.

Sure, maybe.

Oh, I'm very confident that it'll be a massive uptick in capability. Now, how that gets applied... I mean, I think it'll be able to take tests better, it'll be able to do better math. You know, I think there's no question about that, but how does that affect new ways to work?

Well, you said earlier, well, maybe we'll have a four-day work week. Or we are prone to people saying, this is gonna be great, because it's going to do all of the jobs that we need to do, or that we think are not great jobs, and therefore, you're welcome, you don't have to do this thing that was maybe not the best job. But also, you know, hey, you're welcome, we've taken away this undesirable job entirely, go find your bliss. Well, I don't know about the last time you had some enforced vacation, but my sister just had surgery, and she had all these big plans for all the fun things she was gonna do, and within four hours of, you know, being on enforced medical restrictions, she wanted to go back to the job she doesn't even like, right? Because, you know, we're not good at that. And so I think this idea that we want to get rid of human labor, that looks at labor, that talks about labor as a cost, versus humans as a valuable resource, humans who have skills that we should invest in, and humans for whom, again, we have to provide pathways to develop certain types of skills and foundational knowledge... because otherwise we're not going to have anyone who can come up with the next innovation and application, and that's really important.

Well, RPA is instructive in a way, but AI is going to go way past that. In other words, you talk to anybody who's used RPA, and they'll say, oh, it took away all these mundane jobs, I'm so much happier. So to your point, that's fine, AI is going to do that. But if AI all of a sudden really does completely replace you, that's going to be very, very disruptive, and I personally believe that that's going to happen. And so we have to start thinking about, all right, what does that mean for the workforce? How do we retrain people? What are our skills?

I think my point was that I'm fascinated by the impact on the infrastructure, recognizing you're walking into the building, doors open, all kinds of self-driving cars, autonomous physical hardware, IoT, that Jason would love to talk more about. But it's really the operational impact, which is efficiency and productivity, but then how it shapes leadership. How do you manage? Is AI going to change the game for what an executive looks like?
Because if all the heavy lifting is done, or I'm prompting for suggestions, that opens up creativity. It gives someone who's not creative more creative ideas, ideas for a dinner party, or ideas for a product launch. So you have new ways to democratize. Again, we're back to democratization. I think we're going to see huge change in what leadership looks like, and in operations.

So I think the new leadership challenge, though, is not how do we manage the business when all of these tasks and roles, but probably more tasks than roles, have been migrated to AI. It's now, how do I operate the business in such a way that I look at my humans as a resource, and I provide them opportunities for development, and I'm actively engaging and thinking about, if AI is doing these things, what is the next job that I can apply this person to do? And there's a really simple example, and I have gone back and tried to find the reference, so this might be a bit apocryphal, but I think it's a good exemplar and I hope I read it right. It was talking about IKEA using basic chatbots to automate a lot of the really rote customer service elements, right? And we're talking about being able to take on the work of thousands of fairly low-paid, low-skilled... I mean, I think that's the worst job in the world, so to call that low-skilled is... these folks have a lot of tolerance for a lot of things. God bless them, I mean, honestly, I couldn't do it. So I don't know that we respect that work enough. But the point being, instead of saying, okay, we now have this thing that can really take down the amount of work and therefore we don't need these people anymore, they took that whole cohort and trained them to be customer designers. So now, we are a free design service for our customers who want to come in and redesign their room, home decor, et cetera, et cetera. And that opened up a whole new business. I think, if my memory serves, it was like a billion dollars or some outrageous amount of revenue and profit. That's a good example of how, as leaders moving forward, our challenge is not to figure out how to just operate a machine, it's to figure out how to find and create opportunities to use humans in a way that then creates new business opportunities.

Yeah, that's awesome. Kimberly, great to have you on here at theCUBE. You've got the pod going on, it's got lift, you've got some topics. What's your favorite topic or guest you've had on so far? I know everyone asks me that, so I had to ask you.

I don't even think I'm gonna be able to. I can't remember who I interviewed yesterday. No, honestly, I'm really bad about that. I think my favorite guest is always the last one I spoke to, which this time just happens to be Kate Moran and Sarah Gibbons from Nielsen Norman Group. I'm really bad about this because every time you have this conversation, and I don't know if you have this experience, I walk away just thinking differently about something, with so many questions. And so they'll say to me, pick one snippet for promo, and I come back with like six, and they're like, no, it needs to be one 30-second snippet, and I'm like, I can't do it.

And you learn a lot too, so it's hard to say one's better than the other, because you learn on all of them. They build on each other.
They do, and I've been really pleased and really humbled that a lot of our guests recommend other guests to come on and speak with us. And we're able to talk, I think, around... we're not talking about the technology in hardcore technical terms, but about all these things that Cassie talked about this morning, the soft skills or the soft considerations for making sure that the technology is actually good.

They want to come back, because they had a good experience. Final question for you, or last word.

Sure.

What's the future of the pod? What's the goal? Is the title Pondering AI? Is that the name of the pod?

It is Pondering AI.

Okay, you're almost up to 50 episodes. What's the goal? What's your objective? What's the outcome?

I think the objective... you asked me at the start of this, do you consider yourself a futurist, right? And I said, no, I really don't, although people who hear me pontificate about things might argue with that. I think, honestly, to some extent, the goal of the pod, especially for business, whether you're a business leader, an executive, or a data scientist just coming up, is to allow you to be a sort of futurist in your own field. So you can think about not just the technology, but the technology situated in the social context, situated in the corporate context, all of those components, so that we can start to look ahead. And if we help even one person do that, then I'm completely satisfied.

It's really a great service. Podcasts are a great format for riffing, great for conversations, great for getting data and facts. Real good value for users. Thanks for coming on.

Thank you so much. I appreciate it.

I'm John Furrier with Dave Vellante. Day one of two days of wall-to-wall coverage in Las Vegas for Innovate24. We'll be right back after this short break.