Good morning, cloud community, and welcome back to Google Cloud Next. It's day three here in beautiful Las Vegas, Nevada, and we've been covering all of the days here on theCUBE. My name's Savannah Peterson, joined by my fabulous co-hosts, John Furrier and Rebecca Knight. Both of you have just been absolutely smashing it this week. It's been a pleasure. How are you feeling? I'm feeling great. As you said, it's day three, the final day of the conference. There's still a lot of buzz and excitement on the floor here, so I'm feeling good. We've got a great day of guests ahead of us. Yeah, absolutely. John, you feeling hydrated? Great, so good. Yeah, I'm ready to go. Good to do five days of this. Yeah, baby. We love this. And our fabulous guest, the CTO of Box, thank you so much for joining us. Thanks for having me. We love having you back on theCUBE. You're a pro now, a proper alum. How's the week been for you? It's been great. Lots of good announcements from Google. Always good to see all the different companies and all the things they're offering, so it's been great all around. Did you have a favorite announcement? Of course, we at Box deal in unstructured content, so any time you see these model announcements, like Gemini 1.5, we get excited. In particular, the token window size, a million tokens, and the multimodal aspect of it. Those are super exciting for us. What does that mean for Box customers, then? So, whenever you're dealing with this kind of AI on your content, companies have their marketing files, their contracts, their video, their images, and all of this AI, in the last year or two, has started to be able to operate directly on the content.
It used to be, in the world of ML or some of the older AI, that you had to structure your data first, and then you saw things like the big data revolution around how you did that. But now you can actually have AI understand things the way that humans would. Yeah. And that changes what people can do with their content overall. So for us, this is great, because that's what we do. We have 100,000 enterprise customers, we store hundreds of billions of these files, we let people do anything they want to do around them, and now AI is a big part of that. So with the new, bigger token windows, we can do things where, let's say, you had a very long and complicated set of content. One of the things that was hard before was: I have these two contracts, what's different about them? It used to be that the AI models could only look at them a little bit at a time; they couldn't see all of them together. With the new bigger token window sizes, you can go through and spot things: this clause seems riskier in this one, or this long marketing material has a different tone than that one, and in this area you can change these things. So the AI basically got smarter in being able to handle more context, and that's really powerful for a lot of use cases in the enterprise. Yeah, it absolutely is. My gosh, I'm just thinking about every time a contract gets marked up by a lawyer: you don't want to reread everything, you just want that work eliminated. Such a simple business case, but so impactful. Yeah, definitely. Especially at scale. Absolutely. What else?
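The contract-comparison use case described above can be sketched as a simple prompt builder. This is a hedged illustration, not Box's or Google's actual API: the function names are hypothetical, and the four-characters-per-token estimate is a rough heuristic (the real tokenizer differs), used only to check that the combined documents plausibly fit a one-million-token window.

```python
# Sketch: comparing two full contracts in one request, assuming a model
# with a ~1M-token context window (e.g. Gemini 1.5). All names here are
# illustrative; the token estimate is a crude ~4-chars-per-token heuristic.

TOKEN_WINDOW = 1_000_000

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: about 4 characters per token."""
    return max(1, len(text) // 4)

def build_comparison_prompt(contract_a: str, contract_b: str) -> str:
    """Build a single prompt asking the model to diff two contracts.

    Raises ValueError if the combined text likely exceeds the window,
    which is exactly what made this hard with smaller context sizes.
    """
    prompt = (
        "Compare the following two contracts. List clauses that differ, "
        "and flag any clause in Contract B that looks riskier than its "
        "counterpart in Contract A.\n\n"
        f"--- Contract A ---\n{contract_a}\n\n"
        f"--- Contract B ---\n{contract_b}\n"
    )
    if estimate_tokens(prompt) > TOKEN_WINDOW:
        raise ValueError("Contracts likely exceed the model's context window")
    return prompt

prompt = build_comparison_prompt(
    "Term: 12 months. Fee: $10k.",
    "Term: 12 months. Fee: $15k, non-refundable.",
)
```

With the older, smaller windows, the same function would have had to chunk each contract and compare fragments pairwise, losing the cross-document view the speaker describes.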
So I was just going to ask you about your clients. Do they have the understanding that they can now work with unstructured data in this way, or is this something you are helping them see, saying, there's so much more we can do here? I think it's definitely a little bit of both. Of course, every one of our customers we talk to wants to know about AI, and they want to understand what is going to happen next. There's still definitely a world, in many enterprise companies, where they're not using it across the company at the scale we think they could, because they're still learning different aspects of what's possible. And in particular, one of the scariest things about AI for many enterprises is: how does data security work in all of this? For instance, let's say the AI in your company has access to your employee information, or to an upcoming financial report that's confidential. What happens if somebody else in the company says, I want to know about the earnings for next quarter? So this idea of security and permissions around AI is really critical, and it has been holding a lot of customers back from even embracing it, and it's certainly a challenge if they try to build things themselves. This is why at Box we provide that kind of capability, so we're able to help customers understand how to use it in a secure and safe way. And one of the things we've been saying on our analysis segments is, Google's got the full package coming to the table here, up and down the stack: performance, smarter software, more intelligent data, with BigQuery, vectors built in, all that good stuff. The question everyone's asking is, how do I operate this?
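The permissions point above, that the confidential earnings report must never reach the model on behalf of someone who can't read it, comes down to filtering by access control before any content enters the AI's context. A minimal hypothetical sketch, assuming a simple per-user ACL shape that is purely illustrative and not Box's actual permission model:

```python
# Hypothetical sketch of permission-aware AI: filter documents by the
# requesting user's access BEFORE anything reaches the model, so a
# question about next quarter's earnings can only be answered from
# documents the asker is already allowed to read. The ACL shape and
# names are illustrative, not Box's actual API.

from dataclasses import dataclass, field

@dataclass
class Document:
    name: str
    text: str
    allowed_users: set = field(default_factory=set)

def accessible_context(user: str, docs: list) -> list:
    """Return only the documents this user is permitted to see."""
    return [d for d in docs if user in d.allowed_users]

docs = [
    Document("q3-earnings.pdf", "Confidential: revenue up 12%", {"cfo"}),
    Document("brand-guide.pdf", "Logo usage rules", {"cfo", "intern"}),
]

# The intern's AI context never contains the confidential report.
intern_ctx = accessible_context("intern", docs)
```

The key design choice is that the filter runs before retrieval and prompt assembly, so the model cannot leak what it was never given.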
Now, Box is a leader, you're in the ecosystem, Google's got Workspace, they have applications. Does this ecosystem have the formula to accommodate the integration? Because customers have Box, they have Google, so the cloud has to have an ecosystem. What's your feeling on how to operate at cloud scale with the ecosystem? Do they have the package? Yeah, so in general, when customers ask these kinds of questions, one of the things that a lot of customers are doing, and should do right now, is pick their platforms. Who are they going to trust to give them AI capabilities, not only right now but also in the future? And of course, at the infrastructure level and the AI model level, Google's one of the best. But on top of that, we believe you should look to get fundamental capabilities, in the areas you care about, from vendors that you think are going to do this well. So for instance, AI on content is what we do at Box. For AI on email, there are a bunch of other vendors. For AI across, let's say, your structured content, there are other vendors that are really excellent out there. So we see the world evolving where companies are not just going to use the AI capabilities of an infrastructure provider like Google, which we think they will, but in specialized areas that are complex, they'll also use different platforms. We're an unstructured content platform, so this is where we focus our efforts. Awesome. And the other conversation that's come up, and this is more technical, I'd like to jump into the weeds a little bit, if you don't mind. MLOps and AIOps: pre-generative AI was a big part of operating things. You saw Kubernetes, containers at scale, orchestrating workloads.
Now you've got MLOps changing with Gen AI, because the data's changing, the role of data, the software's more intelligent, like I mentioned before. What is the new definition of MLOps, or how do you operate the language models and the multimodal models? Because cross-modal reasoning is happening. Yeah, I think it is an interesting evolution that we're seeing, from the world of MLOps, which was very structured-data oriented, into the world now of AI and the new large language models. And it's interesting because they're similar concepts in many ways, but at the same time, the way that operating it works is just different. So for instance, one of the challenges we often worry about is, let's say somebody is interacting with their content, they're asking a question, they want to know about it. How do you know that what the AI is doing, which is very free-form, very general purpose, is right or wrong? That's one of the challenges you have to face and try to figure out. In the world of MLOps, or in the world of historical ML, you kind of knew: you had a training set, you had a set of ground truth. That's not always possible when you have these very free-form interactions. So one of the capabilities you're seeing emerge from some of the AIOps vendors, which we think is great, is this idea of trying to manage not just the more academic ML capabilities, but how to observe and manage quality overall. So for instance, one thing that we do is, if you want to know how the models are doing, you can ask a different model: how did this model do? And you typically want to use different models, different qualities of them.
And that is something that helps you figure out whether or not the answers are good, because for us, in the enterprise space, it's not like you can have a human read these; this is confidential customer data. So if you have another ML model evaluate that, it gives you a sense of whether or not you're helping your customers get the quality answers they want. But where is the human in that loop, though? Well, for us, the human would, let's say, have their content in different forms. They either need to structure it so they can get some key info out of a contract, or they have a complex question about a complex document. So the simplest type of use case would be: can you help me find this info in this document, or across these documents? Then they get the answer, and we cite it, and, you know, most companies should do that: always tell you why the AI thinks what it thinks. So the humans get an answer and they can check it themselves, but as a vendor, we always want to ensure quality overall. This is where the world of AIOps comes in, and the different techniques you can use to make sure you're giving your users and your customers the right answer. Because in the world of AI, of course, you have to worry about whether the AI is actually understanding the person and giving them the right answer. That's always a challenge with the large language models. I'm curious, we've had a lot of conversations about everyone applying Gen AI at scale, ish. It's kind of a proof of concept moment, with 2024 perhaps being the year that we really make AI real. How do you prioritize, within your organization, which AI applications you're going to go after first?
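The "ask a different model how this model did" idea can be sketched as a judge function. This is a hedged illustration: the judge here is a plain callable stand-in, where in practice it would be a separate (often stronger) LLM, since, as noted above, confidential data rules out routine human review. The prompt wording and names are assumptions, not a specific vendor's API.

```python
# Sketch of model-evaluates-model quality checking: a "judge" model scores
# whether an answer is grounded in the source passage. `judge` is any
# callable taking a prompt string and returning the judge model's reply.

def judge_answer(question: str, answer: str, source: str, judge) -> bool:
    """Return True if the judge deems the answer grounded in the source."""
    verdict = judge(
        f"Question: {question}\nAnswer: {answer}\nSource: {source}\n"
        "Reply GROUNDED if the answer is supported by the source, "
        "otherwise UNGROUNDED."
    )
    return verdict.strip().upper().startswith("GROUNDED")

# Toy stand-in judge: flags answers containing words absent from the source.
def toy_judge(prompt: str) -> str:
    answer = prompt.split("Answer: ")[1].split("\n")[0]
    source = prompt.split("Source: ")[1].split("\n")[0]
    ok = all(word in source for word in answer.split())
    return "GROUNDED" if ok else "UNGROUNDED"

ok = judge_answer("Fee?", "10k", "The fee is 10k per year.", toy_judge)
bad = judge_answer("Fee?", "99k", "The fee is 10k per year.", toy_judge)
```

Running such a check over a sample of production answers gives the aggregate quality signal described here, without a human ever reading customer content.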
It's a good question, because historically, in my experience, by the time you got an interesting-looking demo, you were about 80% of the way done with the software development, so you were not that far from being able to release something production-ready. With Gen AI, it's kind of weirdly backwards: you can usually get something that functions really well very quickly. And I think this has fooled people, because they see these great demos and think, oh, I must be so close. But the thing a lot of companies learned over the last year is that you have to spend an awful lot of time making sure you deal with the edge cases: this concept of hallucination, this concept of making sure you have quality responses overall. If you don't worry about those things, then you have really cool demos and you release these cool things, but when people start to use them, they say, this doesn't actually work the way I want it to. And this will probably always be true: we don't believe the AI models will ever be 100% accurate, but you can get them to be much better if you do the prompt engineering, if you do retrieval augmented generation, and you guide them to exactly what the customers want overall. So for us, this is an ongoing march, where we're going to see use case after use case, capability after capability, incorporated through the same pattern: test it out, see if it works, get some feedback, and then go and test the quality in production. What's the biggest shift you're seeing coming out of the show that's on your radar now that wasn't coming in? Is it the scale of cross-modal integration and reasoning? I mean, some of the things we're seeing, the mix between unstructured data and real-time assembly. I think the world of multimodal AI is going to be a big deal.
It's funny, because we even talk about large language models, and the language part is in there, but now the large language models have image recognition capability. We as humans do this all the time: if you're reading something and there's a picture there, you see both at the same time, you understand both. But today, most of the AI has been on just the text. So merging those together, like a person would, I think is the next big thing you'll start to see, not just cool demos, but people using them effectively in production. Images, of course, are where there are a lot of really good production-class models, but then audio, video, and other things, so that you can get to the point where the AI understands things and can interact well beyond just the text. That's awesome. And I think that's coming out clearly. I mean, look at some of the tools. It's just getting easier. The question is, what's going to change for developers? If you get to look at this, besides some of the code assistant stuff, what's going to be great? Where's the change in the workflow of, say, a developer? Obviously, security's baked in. What's your view there for the enterprise developer out there? I think, for the typical enterprise developer, AI can help you a lot, like you're saying, with the code assist tools, and we've seen a lot of productivity benefits even at Box. But you also really have to keep up with these latest trends, because if you stop for even three months or six months, new models are released all the time, new capabilities are coming out, and the quality overall just keeps increasing. If you hadn't thought through what a bigger token window could do for you, that's a big challenge. So I think one of the challenges of being a developer in the AI field right now is to constantly be figuring out the changing situation.
It's different: databases don't change that often, operating systems don't change that often. Nowadays, every few months you're getting not only a new technology, but a whole new set of capabilities that you never had before. Well, I want to ask you about that, because you are describing this landscape where the pace of innovation and change is dizzying. And that was one of the questions guests have been asking here at this table. Enterprises want to innovate quickly; they wanted to do it yesterday. So are the vendors, the people who are able to help them, getting them to where they want to be? Can they do that? Are the customers trying to do things too fast? I think the key will be, and I do draw some analogies to the early days of mobile, but particularly cloud, that at some point you have these capabilities available to you as a service, in this quality, production-class way. So you can start to use those, or, one thing we're constantly talking to our customers about, as we've discussed before: should they rely on another vendor to provide it for them? Because anybody can take the off-the-shelf AI models and begin to use them for productive reasons. But at some point you start finding yourself recreating a bunch of capabilities that somebody else is offering you. And we see this all the time, because one of the obvious things you can do with AI on your unstructured content is ask questions of your content, right? For us, that's a key capability that we just provide. But it's tricky, it's hard. How do you get it to work on spreadsheets, which is different from images, which is different from other types of files? How do you split it up? How do you do the vector embeddings? How do you pick the database? How do you do retrieval augmented generation?
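The steps just listed, splitting content, embedding it, storing it, and doing retrieval augmented generation, can be sketched end to end in miniature. This is a hedged toy example: a bag-of-words count stands in for a real embedding model, and a sorted list stands in for a vector database; every name here is illustrative.

```python
# Minimal RAG sketch: split content into chunks, "embed" them, retrieve
# the closest chunks to a question, and assemble a grounded prompt.
# A toy bag-of-words embedding stands in for a real embedding model;
# a sorted list stands in for a vector database.

import math
from collections import Counter

def split_chunks(text: str, size: int = 8) -> list:
    """Split text into fixed-size word chunks (real splitters are smarter)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    """Toy embedding: lowercase word counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, chunks: list, k: int = 1) -> list:
    """Return the k chunks most similar to the question."""
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

text = ("The renewal fee is 10k per year. " * 3 +
        "Logo files live in the brand folder. " * 3)
chunks = split_chunks(text, size=8)
context = retrieve("what is the renewal fee", chunks)
prompt = ("Answer from the context only.\n" + "\n".join(context) +
          "\nQ: what is the renewal fee")
```

Getting each of these stages right per file type (spreadsheets versus images versus documents) is exactly the buildup of work the speaker argues most companies should not recreate themselves.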
And some companies are going down this path where they are starting to implement that themselves, and they can, and for different reasons it helps them. But for us, that's what we spend all our time doing. So vendors throughout here are offering you these solutions, and we think you should always consider whether you should use them. For Box, unstructured content is what we do. All right, last question for you. Since you're a CUBE alum, the next time you sit down at this desk, what do you hope to be able to say that you can't say today? I think, for me, what I'm interested in is when you start to see not only early use cases, not only early talk of these capabilities, multimodal, large context windows, but really big productivity and streamlined-business-process benefits that are working. I hope that the next time we come and tell you about all the big benefits, they're almost boring, because there's a new thing coming out. So for me, the realization of a lot of the promise of AI is really what's going to happen between now and the next time we meet. An excellent framing. Yeah. And this is fulfilling the promise of AI that we've all been expecting over the last six months or a year. I love that. It's going to be actualized. We're going to make it real. Ben, thank you so much for being on the show. Rebecca, John, always a fabulous time. And thank all of you for tuning in, wherever you are on this beautiful earth. We're here in Las Vegas, Nevada. My name's Savannah Peterson. You're watching theCUBE, the leading source for enterprise tech news.