Welcome back to theCUBE's coverage here in New York City. I'm John Furrier, host of theCUBE. Wall-to-wall coverage as part of MongoDB's local events. This is the inaugural kickoff of a 20-plus-city tour where they're going out to the street, where the developers are, and bringing it back to the people. And we've got two great guests here: Andrew Davidson, Senior Vice President of Products at MongoDB, and Stephen Orban, VP of Migration at Google Cloud. Guys, great to see you. Stephen, great to see you again. CUBE alumni here. John, always a pleasure. Thanks for having me. Thanks for coming back on. All right, let's get into it. You guys got some news, congratulations. Google and MongoDB working together. Andrew, what's the news? A ton of good news for you, John. Good to be here. You know, I'll tell you, for me to be here in person with you and with the whole community, talking to developers, understanding what's resonating. A lot of news is resonating with developers. I think the most exciting thing of the day is Atlas Vector Search, launching it to public preview. You know, it's a great time with generative AI, which of course we're excited to talk about with our Google partnership. But also the public announcement of our private preview for our stream processing capability, bringing that wonderful richness for developers of the power of the document model to data in motion, and just unifying and making it easier to build elegant real-time applications. Also the general availability of our Relational Migrator, lowering the activation energy required to modernize off those legacy relational databases and move to a modern posture on MongoDB. And so many other announcements too. I mean, I'm so excited about it all. I'm seeing great resonance out there. Well, you get the keys to the kingdom of products because, you know, you got to see the customers, you got to look into engineering, and you got to be happy.
MongoDB owns the developer experience, whether in the dorm room or the boardroom. You got the same motion: I'm writing software. And as you get better and bigger, you're impacting more critical systems. You got to think in systems. You got to start designing stuff. And then when you get into the open source and scale like you guys have, now you're impacting the enterprise with the data platform. It's not new. It's been around for years. It's just a total trajectory up to now the gen AI, which is like the tailwind of epic proportions. Totally. And you guys are perfectly positioned for this. Yeah, no, I like the way you described it as the dorm room and the boardroom. You know, I was talking to some customer executives here today and just pointing out how there's always this tension: do you focus on the sort of buttoned-up enterprise developer in the bank? Or do you focus on that hoodie-wearing developer in the dorm? And the good news is they're essentially the same person. Maybe the enterprise developer isn't going to be using what the hoodie-wearing developer is building with right now. But they're sure as heck going to be doing so in their hackathons, and they're having those hackathons. That's how they're staying ahead of the curve. And so the key priority for us is to find out: what's the commonality? What's the superset for both of those types of personas? That's the top priority. And that's our North Star. It always has been, from the very beginning, from the developer to the platform. Let's talk about the relationship with Google. Obviously, Stephen, your history: you've run big networks, you've set up operations, you know the developer market. Right now we're in a world of developers that's trying to figure out how to build these generative AI apps. They're data apps. Data is a critical part of the application, not just some department organizing content and building dashboards. You have a key ingredient to the application, but it's also got to run.
So you've got to learn how to build them, get it right, and then run them. You've got to get good enough to run, but you also need the playbook, like MongoDB's, to run it. This is the generative AI opportunity right now. Every single alpha developer who sees opportunity is going to go run fast and create value fast. You got to set that up. What is your gen AI strategy in the ecosystem? So we're super excited, first of all, about the partnership we have with MongoDB, and kind of like you said with the developer in the hoodie or the enterprise developer. Well, my wife asked me, do you want pizza or chicken wings? And I said, why choose? You should just be able to kind of have both. So we're super excited about the partnership that we've been building on for a long time, frankly, but now with some of the announcements we made today around MongoDB deploying some of our generative AI capabilities within their systems, it really sort of makes it even more accessible for developers of all ranges. So just to highlight a couple of the use cases we're working on together: the obvious one that everybody thinks about is chatbots and customer service. So now when customers want to ask MongoDB a question, they're going to be able to have a chatbot-like experience on the website that, again, has a lot of the Google Cloud Vertex AI capabilities underneath. I think one of the things that is also really kind of helping this be more approachable technology for developers is how MongoDB is also developing a way to take natural human language and translate that into MongoDB queries using some of our Vertex AI capabilities. And MongoDB is already a very easy-to-use database that developers love, but sometimes when they want to create a complex query that they may not have had to write before, having a natural language interface to do that is super special.
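The natural-language-to-query idea described here can be sketched roughly like this. Everything below is a hypothetical illustration, not MongoDB's actual integration: the function names are invented, and a stubbed string stands in for the real Vertex AI model call. The pattern is simply prompt the model for a JSON filter, then parse its reply into something `find()` can accept:

```python
import json

def build_prompt(collection_fields, question):
    """Compose a prompt asking a model to emit a MongoDB filter as JSON.

    Hypothetical helper; the real MongoDB/Vertex AI wiring is internal.
    """
    return (
        "You translate questions into MongoDB query filters.\n"
        f"Collection fields: {', '.join(collection_fields)}\n"
        f"Question: {question}\n"
        "Respond with a single JSON object usable as a find() filter."
    )

def parse_filter(model_response):
    """Parse the model's reply into a dict a driver's find() would accept."""
    return json.loads(model_response)

# A stubbed model reply stands in for an actual Vertex AI call here.
stub_reply = '{"status": "active", "premium": {"$gt": 500}}'
query_filter = parse_filter(stub_reply)
print(query_filter)  # -> {'status': 'active', 'premium': {'$gt': 500}}
```

In practice the parsed dict would be passed straight to a driver call like `collection.find(query_filter)`, which is what makes the "natural language in, query out" loop feel native.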
So from our perspective, we're committed to creating the most open and innovative ecosystem that includes partnerships at every layer of the stack, which I can talk about a little more as the conversation progresses, but having partners like MongoDB at the forefront of that is amazing. I'd love to first congratulate you on the news. The word language has come up a lot: large language models, training languages. Before we get into that, can you take a minute just to explain what the Vertex product is for developers, what's the positioning of it, why would they be interested in using it? Happy to do that. So at Google, we think of our generative AI strategy in two main camps. We have consumer use cases, which we have our Bard product for, and that's for consumers, our Gmail customers, for example, who want help planning a trip, or writing a song, or writing some sort of essay, or help with sort of a consumer-based question. And then for the enterprise use case, all of our enterprise customers really want to be able to take advantage of these generative AI capabilities, but be able to continue to have control and security around their data and their own intellectual property. So what our Vertex AI platform allows these developers and companies to do is to take, think of them as copies almost, of our large language models, and then tune and adapt them with their own data and their own inferences, and then make them available as APIs inside their own products. So that's essentially what MongoDB did, as an example, to create the translation layer between natural language and their queries, and that's what all of our enterprise customers are asking for, because they want to be able to use these large language models, but do it atop their own data, in a way where that data is not shared back with the model owner.
Yeah, and that's good for their IP rights, and the vector search stuff that you guys are working on ties in, because it's native to MongoDB's platform. Is that right? Yeah, so if you want to build one of these expert-system chatbot type use cases, you're going to have to have that large language model in the mix, so you can take advantage of Vertex for that. You're also going to have to have these inference endpoints where you summarize your own custom knowledge with numeric vectors, and vector search allows you to use those numeric vectors to find relevant meaning in what the user is prompting you for, and to feed that into the large language model to get a cogent response back. It's this wonderful loop, and at its core, it's not just vector search, it's vectors that allow you to quickly find the appropriate operational data. So we think it's just such an obvious expansion to have your operational data store also support vectors. It's a perfectly rational thing to add to our developer data platform, and those software applications are going to be able to store so much metadata about this. What kinds of responses are people getting? Who's doing what? Is it something that they wanted to see? Is it useful? Is it not? And this allows you to continue to have that loop of training. And by the way, I just want to call out, for us to be able to partner so deeply with Google on this, I mean, talking about the real first movers in this space, just think about some of the seminal moments where we all realized what was coming, when DeepMind demonstrated that the machine was going to win at Go. We didn't think that was coming for decades. I mean, each step of the way, I know Google's doing things that we haven't even seen yet, that I can't even imagine, that Stephen can't talk about. And so it's exciting to be part of it. And they know data, they understand data, they understand open source, they understand software. They have a lot of language stuff. I mean, the feeling is mutual, for sure.
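The retrieval loop being described, embed the user's prompt, use vector similarity to find the most relevant operational data, then feed that into the LLM, can be sketched in a few lines. This is a toy illustration under stated assumptions: the two-dimensional embeddings and document texts are made up, and a real system would use a managed index like Atlas Vector Search rather than a brute-force scan:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec, docs, k=2):
    """docs: list of (text, embedding) pairs. Return the k most similar texts."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# Tiny fake corpus; in practice these embeddings come from a model endpoint.
docs = [
    ("refund policy", [0.9, 0.1]),
    ("shipping times", [0.2, 0.8]),
    ("return window", [0.8, 0.3]),
]

# The matched snippets become the context stuffed into the LLM prompt.
context = top_k([1.0, 0.2], docs, k=2)
print(context)  # -> ['refund policy', 'return window']
```

That last step, handing `context` to the model alongside the user's question, is the "wonderful loop" in the conversation: the vectors do the finding, the LLM does the talking.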
Andrew, we appreciate you being such an early-adopting first mover in a lot of these places. But for me personally, it's just a super exciting place to be. The reason I joined Google is I could see how far ahead they were in analytics and AI in particular. And you don't have to look too far back: in 2017, several Google scientists collaborated on the "Attention Is All You Need" paper. Look it up if you haven't read it already, but that's where the transformer model science was invented, which is now underpinning all of these generative AI models that we see today. No doubt Google's got a treasure trove of brains, talent, expertise, data, and code. And now the next question, as we get kind of our toe in the water with AI: hype aside, it's a real, legit trend. People are going to start integrating. This is a big topic here at the event, integrations. The integration with you guys, you got the vector database with Vertex. That's going to be a big deal, integrating. You know a little bit about that. How do people integrate as they think about what might be down the road, as they start investing in building out AI apps, ML and AI, and running it with AI ops? What are the integration touchpoints? What do you guys think? How do you frame that? How do people think about going down this new Cambrian explosion? Yeah, I'll take a shot at that one. To me, there's another way to ask that question. You can flip it on its head and say: what are the appropriate abstractions that you're going to compose with and combine together to take advantage of this next boom, this Cambrian explosion that's happening? And I would argue we've been moving towards this moment for years, that you want to be able to have something that services all of your operational data. You want a platform for all of your operational data. And that's what we conceptualize ourselves as with this developer data platform.
We want to be at this layer of the stack that's really focused on the data, the data that the software interacts with. Guess what, the software stack, that's where all the integration points happen. New frameworks, new stacks. You know, we announced partnerships with LangChain and Hugging Face and others today, all of whom allow us, with an open source framework, to plug and play with Vertex and others. And so I think we're just at the beginning here. We're going to see new trends, new frameworks, new stacks emerge. And it's absolutely the name of the game here that it's going to be open and composable. So Hugging Face is the embedding model piece of it? Hugging Face is the place in which you can find off-the-shelf models to play with and operationalize them in any number of services, yes. LangChain is a popular framework for kind of doing that loop of generative AI, large language model, writing your code, prompting it all the way through. There are other players, LlamaIndex and many others. All kinds of LLM ops coming quickly. 100%. I mean, one of the most fun parts about working in this space at this moment in time is that the use cases are so diverse and there are so many of them. One of the most challenging parts about working in this space at this time is that the use cases are so broad and sometimes people have trouble wrapping their heads around them. So, you know, we spend a lot of time with our partners to try to help make those use cases much more real for our customers by putting them in the context of other tools like MongoDB. And you're seeing the signals in the market. You guys have an incentive for 25K in credits. So there's a lot of evangelizing, getting people understanding the playbook, which is: get building. Yeah. Okay. And don't worry about the other stuff. Just get in there and let the universe take care of itself, because this is where people make mistakes in these early markets. They overthink it. Yeah.
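The composability point, frameworks like LangChain letting you chain retrieval, prompting, and a model into one loop, can be illustrated with a minimal hand-rolled chain. This is not LangChain's actual API, just a hypothetical sketch of the composition pattern it popularized, with a stub standing in for the model call:

```python
def chain(*steps):
    """Compose steps left to right: the output of each feeds the next.

    Illustrative only; real frameworks add retries, streaming, tracing, etc.
    """
    def run(x):
        for step in steps:
            x = step(x)
        return x
    return run

# Hypothetical stages. retrieve() would normally hit a vector store,
# and fake_llm() stands in for a hosted model endpoint such as Vertex AI.
def retrieve(question):
    return {"question": question, "context": "Atlas supports vector search."}

def prompt(d):
    return f"Context: {d['context']}\nQ: {d['question']}\nA:"

def fake_llm(p):
    return "stubbed answer"

pipeline = chain(retrieve, prompt, fake_llm)
print(pipeline("What does Atlas support?"))  # -> stubbed answer
```

The appeal of this shape is exactly what's said above: each stage is an abstraction you can swap, so plugging a different model or data store into the loop is a one-line change.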
Yeah, I mean, not just get building, but get in there and share what you're doing. This is a massive new movement. People are learning how to build in new ways and they need to share that with other people. You know, I was speaking to a customer of ours that's pretty far along using generative AI, actually generative AI accelerated a roadmap they were already on. And they had described building an entire domain-specific language to be able to express, essentially, the software agents that they deploy. And without that language, how could they test new versions and be confident that it wouldn't potentially do something problematic or unexpected? So you realize we have to develop entirely new paradigms for testing and thinking about this. It's just the beginning. Well guys, thanks for coming on. I do think there are going to be these new use cases that might emerge. My final question is kind of out there; you can decide how you want to answer it. I believe there are going to be some use cases, unthought of before, that are now gettable with some of the configuration changes and the enablement coming with AI. Stuff that could come out of left field, insights that were never around before, or product-market-fit opportunities might emerge from developers playing over here and all of a sudden, boom, something happens. What are some of the things that you guys think about that might emerge? Because I just see that the value creation with AI is happening so fast. Yeah. I think we're here today already, John. I think we're seeing that unravel right before our eyes right now. I'll give you an example that I'm super excited about. You have a lot of companies who have, for a long period of time in their particular industry domain, been collecting lots and lots of different data.
So let's say you have a legal database or a news database or a financial database that has a lot of context-specific information that a lot of these companies have had to write their own very complicated query languages around to kind of trawl through all that data. Now what we're seeing with things like enterprise search is those are able to not just be indexed very quickly using sort of search capabilities, but also summarized. So you could imagine, say you're in insurance, you're collecting a bunch of insurance information and claims data, and somebody says, show me all the policies I could potentially buy that would cover me for over half a million dollars. Not only could you get all the policy documents with search, but you could summarize, in natural language, why one policy might be better than another, for a human who maybe isn't an expert and doesn't have the time to go trawling through the 100 policy documents that might make up that summarization. And I think we're going to see that across every different industry, and there's going to be some extraordinary transformation. It's a different kind of data extraction. For example, someone could say, what does Stephen mean by his comment on theCUBE? You can't type that into a search engine. It has to go through a language model via our large transcript. So domain-specific data. If you have data, that's the value, right? Totally, and I think oftentimes we can over-rotate, in a sense, into data, but where's the data born? It's born in software, or in our case here, on this wonderful program. And so, if we look at empowering new experiences, you're asking, what's coming? What's coming is, we're just at the beginning. Right now we're seeing these relatively slow, human-centric chatbot experiences, for example. That's cool, let's just start.
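The insurance example, search first, then summarize, has two distinct steps worth making concrete. Below is a toy sketch: the policy records are invented, and the `summarize` function is a stub for the LLM summarization step described above (a real system would hand the matched documents to a model for the natural-language comparison):

```python
# Hypothetical sample data standing in for indexed policy documents.
policies = [
    {"name": "Gold", "coverage": 750_000, "premium": 120},
    {"name": "Silver", "coverage": 400_000, "premium": 80},
    {"name": "Platinum", "coverage": 1_000_000, "premium": 200},
]

def matching_policies(policies, min_coverage):
    """Search step: keep documents whose coverage clears the threshold."""
    return [p for p in policies if p["coverage"] > min_coverage]

def summarize(matched):
    """Stub for the LLM summarization step; a real model would compare
    the matched policies in natural language for a non-expert reader."""
    names = ", ".join(p["name"] for p in matched)
    return f"{len(matched)} policies cover the requested amount: {names}"

hits = matching_policies(policies, 500_000)
print(summarize(hits))  # -> 2 policies cover the requested amount: Gold, Platinum
```

The point of separating the two functions is the same point made in the conversation: search narrows 100 documents to a handful, and the language model turns that handful into an answer a non-expert can act on.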
But imagine the layered software-on-software, machine-to-machine, sophisticated applications that are coming. The versatility, scale, and performance requirements of all of those, it's going to be extremely exciting. You know, we're going to have to end because we're running low on time, but this has been outstanding. Dave Vellante and I debate this on our podcast all the time, and we're going to do it again tomorrow. Every inflection point, someone's disrupted out of the business. Mainframes got booted by minicomputers, minicomputers by the PC. But now we're in a market where agility is an equal opportunity employer. You don't have to be an old guard to be disrupted. You can be back in the game like a startup. So there could be a young Steve Jobs, a 20-something-year-old building AI apps, the next Bill Gates, now in the industry. Yeah, the barrier to entry. And still the big guys can win, too. Yeah, the barrier to entry for disruptive, transformational tech is so low right now. What an exciting time to be alive. You can pivot if you're big, and you can disrupt if you're small. So the dilemma: is it a disruptive enabler or a sustaining enabler? Interesting question for another time. Stephen, great to see you. Thanks for coming on. You guys are great. Andrew, thanks so much. With the head of product, and we've got Google on theCUBE here, breaking it down: the future of AI is going to be about the data, domain data, building the app. Software is the key, and running it on the cloud is going to happen. It's theCUBE coverage. We'll be back with more after this short break.