Good morning, Cloud Community, and welcome back to fabulous Las Vegas, Nevada. We're here at the start of day two of Google Cloud Next, theCUBE's three-day coverage this week. Absolutely fantastic. My name's Savannah Peterson, joined by analyst, superstar, and first-pitch thrower Rob Strechay. Good morning, Rob. It's great to be here with you. At least you got my name right, you know? But no, it's great. I think what's been awesome, and we talked about this a lot yesterday, is just the energy of the 30,000 people here. But also, Google really has embraced its partners. And I think, again, we have another one with us right now. It just shows that getting into AI, getting into data takes a village, but there are people here to help you, and I think that's awesome. And a lot of people are partnering together. It's why we've got fabulous folks from PwC, so Sarab and Scott, thank you so much for being here. How's the show going for you guys? It's been fantastic. We get to talk about AI all day. I was going to say, yeah, you get- It's my favorite topic, AI and engineering and how we can build stuff, and so it's fantastic. A ton of our clients are here, so we have a lot of relationships that we have the opportunity to build, and it's just a great, energizing show to get to spend time with our partners and with our clients. Yeah, and one of the big things that I always ask, because getting into AI, we have our own LLM, but we're uber geeks, right? So we look at it. How do you help customers- Speak for yourself, Rob. Well, yeah, well. I'm kidding. I wear the hat proudly, right? The nerdiness is probably right here. But how do you really help customers get into their first gen AI solution? What is the gateway gen AI solution, and what are some of the use cases that you see?
So I can talk about some of the things that we're seeing in the industry out there as it relates to gen AI solutions. I think companies have successfully been able to execute use cases, and those use cases are showing that there is an art of the possible in terms of what you can do with gen AI large language models, and Google has amazing models in its Vertex AI Model Garden, and companies are embedding those into multiple solutions. But what companies are struggling with right now is how do you take those use cases, and how do you really scale them into value-based solutions that they can drive across their enterprise? In terms of that first set of use cases, we're seeing it across the board, in marketing, in customer service, in product, in engineering, in operations, in procurement, in finance, but it's the scaling that becomes challenging for these companies. I think that, and also, it's how do you find the actual value out of those use cases, right? So the implementation effort, the energy and time and money that you put into building out that solution, what are you getting out of it for real, versus what is the hype that's not real and doesn't actually create business value for your organization? I mean, it's got to hit the P&L eventually. We all love a shiny MVP or a show car when it comes to this type of technology, but if you're not actualizing that, if it's not actually improving the business, it's really just a distraction at that point, and a cost center, so I can imagine you're really helping people navigate that. You've created an AI factory to help your customers. Can you tell me about that? Sarab, I'll go to you first.
Yeah, so we tried to do this on ourselves first, saying we want to be client zero, and we said, here are our business models, here are our businesses, and this is what we want to do to reimagine the way we function in the future. And we said for us to centralize all of this and have a seamless delivery model that's actually scaling these solutions, we need a CoE, a center of excellence, and we just called that CoE the AI factory, which basically consists of an overarching governance model with a governance team, and then delivery pods that are continuously delivering these solutions and deploying them at scale within our business. That AI factory team has come up with a couple thousand use cases that we think are likely ones that ourselves and our clients would find value in, so they have a long backlog of opportunities to explore. We've invested in this AI factory for, I want to say, seven or eight years, so it's not all been gen AI; it's been AI since the beginning of that being a really important capability, and that's enabled us to be really quick to come to market with solutions for our clients, because we had invested, we had explored, we had already been thinking about it, and like Sarab said, we're doing it to ourselves. So if you look at the other businesses that PwC has, like I represent engineering, he represents our data science and analytics team, but if you look at the businesses that we have in tax and accounting and auditing, there's a lot of process-oriented workflow there that's ripe for automation, and AI and generative AI solutions are core to that, so those are some of the big areas where we're applying it first for ourselves. For engineering, we're applying it to our software development life cycle and trying to take advantage of every realistic use case in the SDLC. So the AI factory's a critical enabler that makes it possible for us to do that and then scale it out to ourselves.
In fact, we have a partnership with a company called ETR that surveys tech spending intentions quarterly, and what we've seen is that, to your point of being on the engineering side, that's one of the gateway use cases for companies, along with marketing, along with some of the others. The one I want is tax, because doing my taxes this year was a nightmare, but- Oh, I'm here for it, amen. So I look at this and go, it's really helping from a transformation perspective, especially within the workforce. What are you seeing around the transformation of the workforce, how it's happening and where it's going? And that's really the crux of it for us, being in the business that we are in, consulting our clients on how to operate their businesses more effectively, so the workforce transformation is the front end of whatever we end up getting to do with generative AI and engineering and building products for our clients. So to me, it goes back to that scaling problem, though. Workforce transformation is not a nice topic when you're automating jobs that are in a business process today. So the conversation about how you create value, and how you structure your workforce, and how you talk about that outcome of an automation or an improvement in the process, is a huge conversation to figure out how you position that in an organization. Are you able to unlock people to generate more value? Are you really cutting staff? And especially with the kind of slow economy right now, it's a pretty sensitive topic when it comes to how organizations best run their business and how we as a society take care of our people. It's a big topic for us, but I think it comes down to: you have to be able to unlock value somehow if you're going to invest in that kind of a transformation. So where does it come from?
That's a really important point, and I think we love to talk about the shiny things when it comes to gen AI, but not the ethics and the governance. Sarab, I want to ask you, because you're probably talking to a lot of different customers about this. How are you approaching conversations around ethical AI, or even the messaging within orgs as they roll out different AI programs? So we've identified nine different factors that companies need to look at when they're looking at risk around AI, and that's not just limited to bias or model accuracy. It also encompasses things like regulatory risk. Huge right now. The whole regulatory market is so immature and underdeveloped, and everyone's kind of over here doing different stuff. Yeah, anyway. Yeah, usage risk, like how are people going to use it? Going back to Scott's point, right? You're doing your work in a certain way right now. Tomorrow that's going to change. So how are you going to harness the usage and the power of the actual tool to do what you're supposed to do much better, faster, and in a more efficient way in the future? So there is a risk around adoption that you're dealing with when it comes to large enterprises, when people don't understand how to use it. There's risk associated with the way that you're picking the models. Which LLM are you going to pick? Cost. Costs are going up. There is risk. There's risk. And so when we talk about scaling solutions, we actually put a lens of ROI on every solution, saying, hey, it's going to take you six months to build this, it's going to be 300 people that are going to use this in the organization, your estimated cost is going to be X, so what's the business value? If it's less than X, why are you even going down that path? It doesn't make sense. So there's a- You don't want to solve a $5 million problem with a $6 million solution. Exactly. Yeah. I think that, I mean, again, goes back to the ROI needs to be there.
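The ROI lens described here, estimated cost X versus projected business value, amounts to a back-of-the-envelope screen. The sketch below is a hypothetical illustration of that idea, not PwC's actual model; the `roi_screen` helper and all figures are invented for the example.

```python
# Hypothetical back-of-the-envelope ROI screen for a gen AI use case,
# in the spirit of the lens described above: compare estimated build
# cost against projected business value. All numbers are placeholders.

def roi_screen(build_cost: float, users: int, value_per_user: float) -> dict:
    """Return projected value, net benefit, and a go/no-go flag."""
    projected_value = users * value_per_user        # e.g. annual value unlocked
    net_benefit = projected_value - build_cost
    return {
        "projected_value": projected_value,
        "net_benefit": net_benefit,
        # Don't solve a $5M problem with a $6M solution:
        "worth_pursuing": net_benefit > 0,
    }

# Six months of build effort, 300 users in the organization:
result = roi_screen(build_cost=6_000_000, users=300, value_per_user=15_000)
print(result)
```

With these made-up numbers, the projected value (300 users at $15,000 each) falls short of the build cost, so the screen says not to go down that path.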
And I think to your point on what use cases people start with, I mean, do you really need a chatbot that's going to do X, Y, or Z? Or is it really about helping your customer success people to service your customers better? And I think that's a lot of the conversations we're having this week. But one of the things you both have kind of touched on a little bit is that the regulatory environment is changing as well. I mean, you have the AI Act that just passed in the EU. You have a number of different things coming down from a governance perspective. That has to have an impact. I mean, PwC works globally. That has to impact how you look at deploying these solutions. Give us a little color on that from that perspective. Yeah, absolutely. And especially as a regulated business, right? We're acutely aware of the regulatory risks associated with the things that we do, right? We advise our clients in many ways on how to manage their regulatory environment. It is incredibly challenging, because government regulation basically lags so far behind what's actually happening, what's actually possible. So a certain part of responsible AI or ethical AI is to think about what the right answer is and not wait for the government to make it a regulation, right? Like, what is a responsible way to behave with this kind of powerful technology? And as a consultancy, that's something we're advising our clients on all the time. Like, this is something that's responsible, this is something that would not be responsible, right? We have to take that position and basically take the high road ourselves and make sure that we're ahead of those regulatory outcomes, whatever they end up being. I mean, that's why we keep telling our clients to have a robust governance model in place that balances risk versus ROI.
Because if you don't have that, you're not going to be able to prioritize which solutions you want to focus on, and you're not going to be able to generate real value out of it. I think it's important to say that no is also not an answer. It's not an option to say we're just going to block people from access to all of these large language models, and we're going to make sure nobody uses it at the firewall; that's not going to work, right? So you have to have a governance model that's actively engaged and enabling people to take advantage of the technology that's sitting there in front of them, without doing things that are irresponsible, right? I love that. I think that is one of the keys that I'll take away: no is not an option, right? I mean, that to me is so true, yeah. Yeah, it really is true. And how are your customers responding to this? Are they receptive? Are they sponges right now? Are people nervous? So what we've observed is that different customers are at different stages of evolution as it relates to adoption and actual value realization. Can you imagine, yeah. Yeah, there are some customers who are basically in infancy, where they're playing around with or experimenting with use cases. There are some who are really scaling it in a pretty big way and actually generating value. You know, our goal is to ensure that every customer that we work with is able to derive value out of these solutions in the best possible way. So we advise them on governance, we advise them on usage, we advise them on AI factory development, we advise them on business model reinvention, all of those components, leveraging the power of generative AI. I think one key point there, that we were talking about on the way over, is that a lot of boards have an OKR or KPI or whatever on generative AI adoption in their organization, with no rhyme or reason to it, right?
Like, adoption isn't really that important, but they realize it's an important enough topic that they need to move, they need to do something. So I think it's good that there's a motive to go explore it. We have clients, though, that are, like he said, at the infancy stage of it, and we have clients that are pretty advanced, and we also have, I've had a few clients that are taking, I'll call it a top-down or outside-in approach, and they're putting it on us, right? They're looking for us to cost less to perform the services, because they're expecting us to have automated our own processes better. Like, make us faster, cost less, give us a 10% discount because it's going to be less level of effort for this project, right? So, right. The other thing I'd like to add to that is, you know, AI has been around for years. It's just that generative AI is now being used by people who don't have a data science background; that's all the hype that got created. I think companies should think about governance, risk, and responsibility regardless of whether it's a generative AI solution, a machine learning solution, or a deep learning solution. It's really data, right? It's data governance; that's the core of it. Yeah, I think you actually just hit on something that triggered something in my brain here. How are you seeing organizations organize? Because to me, it's not just a data problem, it is an engineering problem. And all in vogue last year was: we're going to have a chief data officer, and they're going to solve our AI problem, and they're going to come up with the use cases. Are you seeing people kind of step back from that, because it is not just a data problem, it's an engineering and a software issue as well? How are you seeing your clients organize themselves? I'll go next. It's a little bit all over the place. I think that's a bit of the, where are they on the spectrum of adoption?
I mean, I think on the concept that we're talking about with governance, and having properly organized governance, you really have to pick a place to centralize some of the decisions about how you're going to do things. And that gives you the right place to do data strategy and governance the right way, and to put, I don't want to use the word controls, but put some emphasis on the right way to do it. Same thing with the engineering end of it, right? Through our AI factory work, we've come up with a certain architecture that works really well for LLMs and building components on top of them that are usable for different use cases, so that you get some scale with what you can deploy. You get a couple of use cases going and you get a snowball effect, right? So getting that architecture right, getting the governance right, I think you have to kind of centralize it before you can democratize it, right? And I know everybody wants to democratize the data and democratize the tools, but you have to build that foundation before you can scale it. That's what I would like to see. I'm not sure I'm seeing that everywhere, but... I really agree with that, though. I mean, John and I actually joke a lot on the show about democratization and what that actually means in practice, and there are so many things that would have to happen before this is actually democratized, if we're talking about giving people access to stuff, and you do need a core group that can decide. I mean, these organizations you're working with are massive, so if folks aren't all on board, there's just no way to do it. I'm curious, since you see so many different types of applications, and I realize there's probably some confidential stuff in here, so obviously share what you can. But are there any use cases that you're really excited about personally?
Like when you talk to someone about it, or this customer, you're just like, oh my gosh, I can't wait to see when that is real? I think both of us are very excited about the application in software engineering: from the inception of creating user stories based on requirements, all the way to getting the test cases automated, and then even all the way through CI/CD deployment. So I'm very bullish about that, because I'm a software engineer by background, so I am biased, but- You're allowed, I was asking your opinion. This is not a moment of data-driven decision making. Yeah, I'm in the same boat, and like we said, we have started to deploy these tools on ourselves in our software development life cycle, and there's a real, measurable benefit, not only in time saved to do work, but in the quality of the code that's produced, and the quality of the delivered application is noticeably higher, without putting statistics on it on the show. But there are some real savings and real benefits there, and I'm as excited about the quality benefit as I am about the time-to-complete-a-task benefit, because the biggest problem we have in software engineering is quality at the end, right? So it's a huge win for us to get that benefit, so that's the one I'm excited about. I will say, though, that I think there are three things that need to come together for all of these benefits to be realized. You need to understand the patterns that you're dealing with, and when I say patterns, I mean LLMs, vector databases, embeddings, and how you're really chunking and embedding your data sets. When you have common patterns across the organization, you need to identify reusability, and that's the job of a chief data officer, which we were talking about, right? But with that, there are so many other personas. Personas are the second thing that I think is super important.
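For readers unfamiliar with the first of those patterns, chunking and embedding a data set for retrieval looks roughly like the sketch below. This is a toy stand-in, not anything from the speakers: the bag-of-words `embed` substitutes for a real embedding model, and a real system would persist the vectors to a vector database (for example, a service like Vertex AI Vector Search).

```python
# Minimal sketch of the chunk-and-embed pattern named above.
# The "embedding" here is a toy word-count stand-in for a dense vector.

from collections import Counter

def chunk(text: str, size: int = 40, overlap: int = 10) -> list[str]:
    """Split text into overlapping character windows."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(chunk_text: str) -> Counter:
    """Toy embedding: word counts stand in for a model-produced vector."""
    return Counter(chunk_text.lower().split())

doc = "Generative AI solutions need governance, reusable patterns, and clear ROI."
index = [(c, embed(c)) for c in chunk(doc)]
```

Consecutive chunks overlap so that a sentence split at a chunk boundary still lands intact in at least one chunk, which is the usual motivation for overlapping windows.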
All personas, from business users to data officers, to legal counsel, finance, all of these personas need to come together to actually deploy a solution that works well within an enterprise. And then the last piece is controls, I think, which is synonymous with governance, which we were talking about. But yeah, I mean, we're both very, very excited about the applicability in software engineering. I think there's a change management element in there too, because, I mean, engineers, we think we're good at engineering, right? But if somebody tells us we need to do it a different way, we're no better at changing than the next business user in their process, right? So we have seen places where half-hearted attempts to bring tooling into the process, without really changing the process around it and changing the role that people expect to play in that process, can basically create a counterproductive situation, right? Where it actually hurts the outcome that you're getting. So you don't want to deploy, for example, a backlog generator for your software engineering team without changing the role of the product owner in that system, because now you've got a backlog that nobody's responsible for; you're not going to get the right outcome, right? So you can't just throw the tool in there, you've got to go change the roles in the process to get it to work. The people still matter. Yes. And we've actually created a solution on Google Cloud that we've been walking people through; it's in our booth as well. We're going to have to go check it out then. Yeah. That's the most watched demo in the booth. So what is it, how does it work? It basically, well, I wouldn't say quantifies, it basically rolls out an entire life cycle of software development, from requirements gathering into user story generation, into acceptance criteria documentation, into test case generation, code generation, test scripting automation, all the way to the end.
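That requirements-to-test-scripts life cycle can be pictured as a staged pipeline. The sketch below is a hedged illustration, not PwC's actual solution: every stage is a stub that just labels the artifact it receives, where a real system would make a stage-specific LLM call on Google Cloud at each step.

```python
# Hypothetical staged-pipeline sketch of the SDLC flow described above.
# Each stage is a stub; a real implementation would prompt a model.

from typing import Callable

Stage = Callable[[str], str]

def make_stage(name: str) -> Stage:
    def run(artifact: str) -> str:
        # Stub: a real stage would send `artifact` to an LLM with a
        # stage-specific prompt and return the generated artifact.
        return f"{name}({artifact})"
    return run

PIPELINE: list[Stage] = [
    make_stage("user_stories"),
    make_stage("acceptance_criteria"),
    make_stage("test_cases"),
    make_stage("code"),
    make_stage("test_scripts"),
]

def run_pipeline(requirements: str) -> str:
    """Feed the requirements through every stage in order."""
    artifact = requirements
    for stage in PIPELINE:
        artifact = stage(artifact)
    return artifact
```

The point of the structure is that each stage's output feeds the next, which is what makes the end-to-end flow semi-automated rather than a set of disconnected tools.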
And it also gives you an estimate in terms of how many people are required for building that specific user story, and how you're going to allocate your capacity to do it in a pod structure. So it's a semi-automated solution, but I think it's pretty unique. That is, yeah, that's great. Well, we're definitely going to have to come watch it. Join everyone else. Last question for you as we wrap. When we have the opportunity to talk to you at this desk at the next Google Cloud Next, a year from now, what do you hope to be able to say? Is it going to be a year from now or seven months from now? I'm hoping it's a year. I honestly almost just said, or eight months from now, or whenever. Who knows? Who knows? Google decides. Maybe they'll shorten it to six months. Who knows when it's going to be. Anyway, we'll have to figure that out. What do you hope you can say that you can't say today? Scott, I'll start with you. Oh. I hope we'll be able to say that we've figured out how to truly create business value out of this technology, and I kind of hope Google will be able to say more than speeds-and-feeds kind of stuff about the technology, right? Where's the business impact that really got created? I think that's going to be key, and I'm less worried about the copilots-in-the-applications kind of area and more focused on the real hardcore business processes, where we've built applications to enable people, and now we're going to make those applications better, and we're going to make the processes better and create value. I hope that's the story that we're talking about a year from now. What about you, Sarab? I'll be happy if I have a few clients that come and tell everyone that PwC helped us reimagine our entire business model and reinvent it leveraging the power of generative AI and AI on Google Cloud. That's all there is to it. I mean, I'm very focused on a finite set of- Being focused on the customers is a very good place to get focused.
I'm sure you can share with us, and we'll get that. Less secrets, more fun. I love it. Sarab, Scott, thank you so much for being here. It's fantastic to have you on the show. Rob, always a pleasure. And thank all of you for tuning in, wherever you might be on this beautiful day. We're here in Las Vegas, Nevada, at Google Cloud Next. My name's Savannah Peterson. You're watching theCUBE, the leading source for enterprise tech news.