Welcome back to theCUBE's live coverage here in New York City. I'm John Furrier, your host. Packed house here at the kickoff of MongoDB's .local 26-city tour, starting here in New York, so you'll see it throughout the year, going out to the streets with the developers. We've got great commentary and two great guests here talking about insurance and industry verticals, one of the hot announcements during the event: Marcelo Granados, Global Insurance Leader at Databricks, and Jeff Mead, Principal, Industry Solutions at MongoDB. Thanks for coming on theCUBE. Thank you for having us. Thanks for having us. So industry verticals are the hot area. Everything's affected by data, AI is here, and foundational platforms are giving enterprises more and more of the developer experiences you announced today. You're right in the wheelhouse with insurance; it's one of the key verticals. Why is it so hot? Take us through some of the hot areas right now. Sure. Fundamentally, insurance has been a data processing business forever. So for an organization to process data more efficiently and effectively, with fewer hands on keyboards, is tremendously compelling to the industry. Yeah. And within that there's the concept of real-time data, right? Insurance companies have long cared about their reports: financial reporting, regulatory reporting, which I can do on a monthly or quarterly basis. Real-time data changes that. We partner together very well, and we often give the example of telematics. Telematics is not a dream anymore. A lot of insurance companies know that with regulators prohibiting the use of credit scores, behavioral data like telematics is key. And having that real-time view of how people are driving, and how it affects not only pricing but also claims, is absolutely critical.
And the second area where we think it's important is everything going on with climate change. Climate change is happening; we all know that. Think about a catastrophe: all of a sudden you have a hurricane in Florida, and if you do not have that data coming in real-time, how do you know how the catastrophe is spreading, or where you need to send the claims adjusters who handle the claims? The impact on your balance sheet in terms of losses can literally bankrupt a company. You guys have been so successful with data pipelining, as you mentioned, and real-time is key. Yeah. What are some of the other use cases that tie into this, and how do you work together? Yeah, so I mentioned telematics. We actually built a solution accelerator together around usage-based insurance. But the concept of semi-structured telematics data goes beyond personal auto. Commercial auto now makes heavy use of telematics. A lot of the OEMs, like Ford and General Motors, know that safety features are definitely important, and you can keep expanding: for homeowners or commercial property, you can draw a parallel from telematics to sensors and safety devices in the home; they're definitely critical. Even in life insurance we're seeing the use of Fitbits to get a more accurate view of mortality. So in our view, every single line of business in insurance can be affected by semi-structured, and unstructured, data as well. Let's talk about the industry for a second. When I think of insurance, I think big IBM mainframes, old school, a lot of paper, bureaucracy, slow-moving glaciers. What's the change? Because now you've also got cybersecurity threats, ransomware, a lot of targeted attacks on some of the older systems. With all that going on, they want to re-platform and refactor, and you're seeing a lot of that. That's right. What's the strategy?
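[Editor's note: to make the usage-based insurance idea above concrete, here is a minimal, hypothetical sketch of scoring a trip from semi-structured telematics events. The event shape and the risk weights are assumptions for illustration, not any Databricks or MongoDB API.]

```python
# Toy usage-based insurance (UBI) scoring step over semi-structured
# telematics events. Event types and weights are illustrative assumptions.

def risk_score(trip_events):
    """Score a trip from telematics events; higher means riskier driving."""
    weights = {"harsh_brake": 3.0, "rapid_accel": 2.0, "speeding": 1.5}
    miles = sum(e.get("miles", 0) for e in trip_events)
    if miles == 0:
        return 0.0
    # Penalize risky events, normalized by miles driven.
    penalty = sum(weights.get(e["type"], 0.0) for e in trip_events if "type" in e)
    return round(penalty / miles, 3)

trip = [
    {"type": "segment", "miles": 12.0},
    {"type": "harsh_brake"},
    {"type": "speeding"},
]
print(risk_score(trip))  # 4.5 penalty over 12 miles -> 0.375
```

In a real pipeline the events would stream in from vehicles and feed both pricing and claims models; this only shows the shape of the normalization step.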
What are you seeing happen, and where are you winning? Because you're both doing very well in the space. Doing very well. Is there a reason Databricks and MongoDB are succeeding right now? I'd say one of our biggest use cases is helping customers modernize away from the constraints of legacy systems. And the challenge really is: how do I build those digital features without spending ten years and $100 million replacing the legacy systems first? So we federate that data out. We build an operational data layer, a single version of the truth in a modern data platform that developers can work with easily, build APIs against, and build digital features on. And that operational data, now cleansed and de-duplicated into a single view of the truth, just so happens, in our opinion, to be really advantageous to the data scientists too. The machine learning models also want that clean single version of the truth. Yeah. And something else we're very passionate about: think about large language models, right? Everybody's talking about them. So the concept of taking data from raw ingestion all the way into features, so it's consumable for LLMs, means latency matters more than ever. When you're using large language models for underwriting, or for assisting calls in customer service or the call center, waiting a couple of minutes versus getting a distillation of all that data in seconds is so important. So now you're not only talking about why real-time data matters, but what you're going to do with that data and how you're going to empower every function within insurance to do AI and get insights faster. And I think that's an area, by the way, that's going to be another upside for you guys in insurance. Definitely. You've got a lot of language, a lot of data.
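[Editor's note: the "operational data layer" pattern described above can be sketched as a simple merge of per-system records into one single-view document. The field names and the "later record wins" precedence rule are assumptions for illustration.]

```python
# Illustrative "single view of the customer" merge, federating records
# out of multiple legacy systems. Precedence rule: later records win on
# conflicts (assumed to be fresher); None values never overwrite data.

def single_view(records):
    """Merge per-system records into one deduplicated customer document."""
    merged = {"source_systems": []}
    for rec in records:
        merged["source_systems"].append(rec.get("system", "unknown"))
        for key, value in rec.items():
            if key != "system" and value is not None:
                merged[key] = value
    return merged

legacy = [
    {"system": "mainframe_policy", "customer_id": "C1",
     "name": "A. Smith", "phone": None},
    {"system": "claims_db", "customer_id": "C1", "phone": "555-0100"},
]
view = single_view(legacy)
print(view["phone"])           # 555-0100
print(view["source_systems"])  # ['mainframe_policy', 'claims_db']
```

The resulting document is what an API or a feature pipeline would read, rather than each consumer querying the legacy systems directly.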
Everything's kind of teed up, if you will, for the large language and foundation models. That's right. And computer vision; you mentioned telematics. Yes. It's multimodal, not just LLMs. You've got the other foundation models. Computer vision. Exactly. Another one. That's right. Yeah. For computer vision, of course, everybody's talking about Tesla, or more broadly the disruptors in the insurance space, and about customer experience. As a viewer of Netflix, you expect the same amazing, delightful customer experience from your insurance company. So the dream of having an accident, using your phone to upload a picture of the car, and getting the value of the claim right away: that's what our customers are expecting, but you need to make sure that the model... And you've got to make sure it's accurate. Correct. So we're all okay. All right. Now we have truth. Correct. And that's a very good point, because something we've seen in the industry is that, leveraging unstructured data, NLP has been doing this for decades; nothing new there. But now you're not only talking about text; you're talking about video, images, and audio. So how can you not only digitize all of that information, but do the feature engineering so it's ready for consumption, to build those models and take them into production? That's very, very important. That's right. And by the way, the vector database announcement we saw ties right in with that, because now you can get those patterns out earlier. Yes. It takes advantage of some of those old-school NLP techniques: okay, we've seen this pattern before. Correct. This person, this insured, this thing; you can apply that both preventatively and after the fact. Correct, correct. So, multimodal, a lot of data.
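[Editor's note: the vector-database point above, finding "we've seen this pattern before" across past claims, boils down to nearest-neighbor search over embeddings. The sketch below uses made-up 3-d vectors and plain cosine similarity; a production system would use learned embeddings and a vector search engine such as the one announced at the event.]

```python
import math

# Toy vector-search step: retrieve the most similar past claim to a new
# one by cosine similarity. The vectors are fabricated for illustration;
# real embeddings would come from a trained model.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

past_claims = {
    "rear-end collision": [0.9, 0.1, 0.0],
    "hail damage":        [0.1, 0.9, 0.2],
}
new_claim = [0.8, 0.2, 0.1]
best = max(past_claims, key=lambda k: cosine(new_claim, past_claims[k]))
print(best)  # rear-end collision
```

The retrieved neighbor is what lets a claims system say "we've seen this pattern before," whether preventatively or after the fact.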
Okay, so at the show here, the big talk is data apps, app analytics. I'm just going to throw it out there: if you're a developer, data is someone else's job. That's right. So if I'm a developer, maybe I don't care about data; okay, that's one thing, put that aside, we'll come back to it. And then you've got companies saying, I'm going to be the next big thing in AI, but they have no data. You've got to have the data to be successful. It's coming out in the hype around LLMs and foundation models that there is no value unless you have data. LLMs like OpenAI's scrape the web; they're strip-mining the web and charging for it. Okay, little rant there, put that aside. Data value. And as a developer, I don't care about productivity tooling; I just want to program. How do you see that? Because I can imagine a lot of app developers in the insurance industry, old school or new school, are not infrastructure nerds. And I think that's why Marcelo and I, why Databricks and MongoDB together, are so compelling. Because inside these large insurers there are pockets of deep developer experience, and there are pockets of deep data experience. It's really about bringing these two parts of the business together so they work more in unison and stop fighting each other, removing the friction we typically see; that's the value of these two products together. If you can bring the best app developers together with the best data engineers and data scientists, you'll start to see that flywheel move. Yeah, exactly. And it sounds a little cliche, but there's the collaborative nature of a platform like Databricks. When I was interviewing with Databricks, I thought, I'm actually going to turn into a data scientist; I know how to do this. You're a data nerd, too. Okay, I'm going to borrow you. I am, John. I'm a data nerd, too.
So I wanted to see what the Databricks workspace looks like, what the programming languages are, and how my Python skills or my R skills would fit in. And I was pleasantly surprised. From a target operating model perspective, people, process, and technology, insurance companies sometimes ask: what is the one tool that my data scientists, my data engineers, my developers, and my business analysts can all learn quickly? And the answer is, you don't need to impose a particular programming language or tool with platforms like Databricks, of course paired with MongoDB. If you want to program in SQL, you can. If you want to use Python for some of the machine learning algorithms, you can do that. If you're an actuary and you have a package in R, you can use that as well. So in a workspace, in a notebook, people can use the programming language of their choice, and you can do a lot of the sharing across that transactional world and analytical world, not just of the data, but of the models, the data pipelines, and the MLOps best practices. That's a great segment; you're bringing the developer and the data together. Now the next-level question is, okay, people want to learn how to build apps: the new AI apps, or the new analytical apps, or data apps, whatever you want to call them. You've got the lakehouse on your side, you've got the platform. And then I've got to run it, stand it up somewhere. Right now there's more emphasis on building, because that's where people do their discovery, and less on how to run it, although you do care; you've got to have good enough infrastructure to train and run. But all the focus is on how to build it.
So the question is, what have you learned as a best practice in this industry that people can get their hands around, so they can start building and discover the value of this new data? Because there's a new value proposition emerging that wasn't gettable before, because AI can now extract new kinds of insight. That's being discussed: okay, what is it, and how do I integrate it into the application, versus some dashboard? That's a great question, John. I think we fall back on some of the best practices we've learned with domain-driven design and agile delivery. What does an AI want? It wants a corpus of data. What does a domain-driven design team want? It wants a domain's worth of data. We can blend those two things together and say: you're the transactional delivery team, but now you also own the responsibility of calling the AI model, of incorporating it into your domain, so that the data is the same between analytics and transactional. And at some point, this thing merges into one entity. So Jeff, you're saying make a horizontally scalable layer with vertical domain expertise built in. That's right. Easy, right? I think so, John. That's the dream. I think we just nailed the architecture. I mean, this is basically what you two are doing together. We are. And for the longest time, people wondered: I have a data warehouse, it works well, why do I have to change? So, as you mentioned, we created this concept of the lakehouse, taking the best features of the data warehouse, from a performance and governance perspective, and of the data lake, where flexibility and cost matter. That's the lakehouse. But the part that brings it all together is governance. If you look at the data pipeline, at ingestion and transformation, governance applies to everything.
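[Editor's note: the "domain team owns the model call" idea described above can be sketched in a few lines: the transactional service invokes a scoring function inside its own boundary, so the transactional and analytical views share one domain. All names here, the service, the stub model, and its thresholds, are hypothetical.]

```python
# Sketch of domain-driven design with the AI call inside the domain:
# the claims service owns both the transaction and the model invocation.
# The model is a stub; thresholds are made up for illustration.

def fraud_model(claim):
    """Stand-in for a deployed ML model; flags large, late-reported claims."""
    return 0.9 if claim["amount"] > 10_000 and claim["days_to_report"] > 30 else 0.1

class ClaimsService:
    def __init__(self, score_fn):
        self.score_fn = score_fn  # model injected into the domain boundary
        self.claims = []

    def submit(self, claim):
        claim["fraud_score"] = self.score_fn(claim)  # AI call inside the domain
        self.claims.append(claim)
        return claim

svc = ClaimsService(fraud_model)
result = svc.submit({"amount": 25_000, "days_to_report": 45})
print(result["fraud_score"])  # 0.9
```

Because the service records the scored claim itself, analytics later reads the same domain data the transaction wrote, which is the "merges into one entity" point from the conversation.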
People want to know who created the data, who has access to it, who can modify it, and what the upstream and downstream dependencies are. Having that view, especially with lineage graphs that let people trace back, is not only important for the organization; it's very important for regulatory purposes. Think about GDPR, think about CCPA. Regulators are going back and saying: hey, if you started with this data and ended up with 10% less of it, what happened to that data? Why did you exclude it from the analysis? Because it can all go back to biases, right? Everybody's talking about biases: racial bias, discriminatory disparate impact. It's about being proactive. And you hit on that a little, because it's not just about model drift, asking why isn't my model accurate anymore? It starts with the data and how you process it through the pipelines and apps. You're only as good as your input, right? Correct: garbage in, garbage out. Well, this is a good point. We've had this debate on theCUBE many times around compliance: is it a drag on innovation? It's interesting, because if you get it right, it's almost like going through the airport. With TSA PreCheck, I'm free to go right through; that's the good line, the short line. The other line, where you take off your shoes and belt, that's the compliance line. That's right. If you look at what's going on now, if you can free that up and let those shackles fall away, you're talking about freeing up the data. So what I'm seeing here is that you can have compliance, and that reinforces what we heard at re:Inforce, Amazon's security conference: they're talking about authorization at scale, not so much access, because there's no perimeter in the cloud, right?
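[Editor's note: the lineage requirement above, answering a regulator's "what happened to the excluded 10%?", reduces to recording each transformation's inputs and outputs. This is a minimal illustrative structure, not the actual model of any governance product.]

```python
# Minimal lineage sketch: wrap each pipeline step so row counts in, out,
# and dropped are recorded, making exclusions auditable after the fact.

class Lineage:
    def __init__(self):
        self.steps = []

    def apply(self, name, rows, fn):
        """Run one transformation and record what it did to the data."""
        out = fn(rows)
        self.steps.append({"step": name, "in": len(rows), "out": len(out),
                           "dropped": len(rows) - len(out)})
        return out

lin = Lineage()
rows = [{"age": a} for a in (17, 25, 40, 16, 33)]
adults = lin.apply("filter_minors", rows,
                   lambda rs: [r for r in rs if r["age"] >= 18])
print(lin.steps[0]["dropped"])  # 2
```

With every step logged this way, "why did you exclude that data?" has a concrete answer: the step name, and exactly how many records it removed.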
You want to pick the best features of the data lake and compose your own data freedom, let the apps program the data, so you check the compliance box and get Global Entry; that's the metaphor. This is kind of a new philosophy, because the value proposition with AI changes the game: there are insights available now that were never there before, and you don't get them unless the developers code. That's right. The developers are going to unlock the value. That's right, and it's in the real-time data. What happens when these data-hungry AI models eat all of your data warehouse and data lake data? Where do they go next? The real-time application data. All right, well, I think we're pretty stoked about the future. And to come back to what we were talking about: insurance. We went off on a tangent there, but this is a future scenario playing out right now. Yes, it is. And smart people in insurance and these industries are figuring out: how do I take my domain expertise, refactor it with the data, put a container around the apps, keep it quarantined? Yep. I saw a sponsor company here, Unblocked, that can actually build COBOL code with AI. That's right. They've actually replicated COBOL. That's right. So it's like: hey, large language model world, build me a COBOL connector. They can modernize, so you're seeing a lot more legacy integration, and insurance has tons of legacy. It's ripe for transformation, absolutely. So what's next for your partnership? What are you working on? What's the coolest thing you've worked on? I think we're pretty excited to be launching more solution accelerators together.
Jeff and I agree on many things, but the thing we agree on most is that there's a lot of information coming at our clients, and anybody can show up with a PowerPoint deck about trends and use cases. Showing them the art of the possible, walking them through it, giving them something, is different. We're a huge open-source company, as you know, and oftentimes people ask: why would you give something away for free? For us, it's about accelerating time to value on these solutions: giving clients, for underwriting or for claims, for example, all of these artifacts, with the data, the data pipelines, some ideas on how they could think about the data differently, and even some reports or dashboard visualizations. It's not a plug-and-play solution, but it gets them 70% of the way, right? And from there, every company tailors it differently, because the value two companies get from the same solution accelerator comes down to two things: their people and their data. Every company has its own data, its own strategy, its own mix of business, its own view of what it wants to be in the future to stay successful. That's the competitive advantage. What's the most exciting thing you're working on? With the Databricks partnership right now, Dolly and large language models are the tip of the spear. What we're really after is transforming these organizations to bring AI into applications. We know that's going to require high-performing teams, and highly effective teams aren't slowed down by the noise. They're not slowed down by the fact that the data is sprawled all over the place. We really want to solve that problem and bring developers and data scientists closer together, working together, along with the underlying data they share. Modernization.
Modernization, John. Thanks so much, Marcelo. Thank you, Jeff. Thank you. Thanks for coming on. This is theCUBE breaking down industry successes. Horizontally scalable, vertically integrated data and AI: it's all here, part of the renaissance, this Cambrian explosion of AI developers driving the change and the standards. It's developer-led innovation here on theCUBE. Of course, we're open source too, live-streaming content. Go to siliconangle.com for all the coverage. We've got a bunch more interviews left for the rest of the day; the CEO, Dave, is going to be on with us next. We'll be right back.