And welcome back to theCUBE. We're here live from the Lakehouse, doing wall-to-wall coverage of Databricks' Data and AI Summit, and I'm so excited to be joined by Jamel Brown from First Orion. Welcome to theCUBE. I know it's your first time, and I'm really excited, because I first met you yesterday at the press briefing, and you had such great answers and great information that I think people coming to this summit could really benefit from. So why don't we start with: what's First Orion about?

Well, thank you for the kind words, Rob. My name's Jamel Brown, CTO at First Orion. At First Orion, we're all about bringing transparency to the phone call. We're responsible for scam, fraud, and spam protection on phone calls, as well as branded caller identification for businesses, so the person being called knows who's calling. We want people to be able to trust their phone and answer their calls again.

Yeah, and it was funny, because we were all saying how many times we just push, you know, send to voicemail. I love what you're doing, and I think what was super interesting is the use case and the amount of data you have to go through to identify that a caller is legit. It's amazing. Can you take us through what that looks like? Because it was so many calls per second, and it has to happen so fast.

Absolutely. So we analyze about 100 billion phone calls annually. For each phone call, think about an email: when you see an email, you only see the UI, but behind the scenes there's all this information in the packet. It's the same thing with a phone call, but a little different. For every one phone call we receive, there are at minimum three packets that we then have to tie together.
So we're talking about a factor of three, at least, for each phone call: over 300 billion records that we have to tie together, join, and then analyze. We source data from a number of different places in addition to the data we see ourselves, we tie all of that together, and we essentially fingerprint each phone call. From that fingerprint, we can determine statistically whether we think the call is a scam or not, or a different type of call. We can tell the category of the call, whether we think it's a telemarketing call, a marketing call, or a banking call. So we have a whole host of data that we apply to each individual phone call, and ultimately we have to respond to that call within 100 milliseconds.

Wow, yeah, and I think that was the amazing part: that you have to do this at such a rapid pace. And I know the Databricks folks were saying that you really push them on their technology, that you're always leaning into all of it. And we were talking, and you said you'd used Dolly, and now you're looking at some of the MosaicML stuff that's coming down. Talk about how that journey has gone.

Yeah, so Databricks has been a phenomenal partner. Like I mentioned in the press briefing, I've been familiar with Databricks since 2015. Actually, 2015 was my first Spark conference as well. So we've had a long history together. At First Orion, we hit a point where we weren't really able to scale very well, so we switched over to Databricks, and it's been a blessing ever since. We've been able to accomplish a lot with a small set of people, because we are a small company. We don't have infinite resources to go boil the ocean. And so, for example, you mentioned our Dolly use case.
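As a rough illustration of the three-packets-per-call join and fingerprinting Jamel describes above, here is a minimal, hypothetical Python sketch. All field names, packet types, and the "fingerprint" itself are invented for illustration; First Orion's actual pipeline runs on Databricks at a vastly larger scale and within a 100 ms budget.

```python
from collections import defaultdict

def join_call_packets(packets):
    """Group signaling packets by call ID; treat a call as complete
    only when all three expected packet types have arrived.
    (Packet types here are hypothetical, not real telecom signaling.)"""
    calls = defaultdict(dict)
    for p in packets:
        calls[p["call_id"]][p["type"]] = p
    return {cid: parts for cid, parts in calls.items()
            if {"invite", "ring", "answer"} <= parts.keys()}

def fingerprint(call_parts):
    """Toy 'fingerprint': a tuple of features a scoring model might consume."""
    invite = call_parts["invite"]
    return (invite["caller"], invite["route"], len(call_parts))

packets = [
    {"call_id": "c1", "type": "invite", "caller": "+15015550100", "route": "A"},
    {"call_id": "c1", "type": "ring"},
    {"call_id": "c1", "type": "answer"},
    {"call_id": "c2", "type": "invite", "caller": "+15015550101", "route": "B"},
]

complete = join_call_packets(packets)
print(sorted(complete))              # only c1 has all three packets
print(fingerprint(complete["c1"]))
```

The point of the sketch is only the shape of the problem: records arriving separately, keyed by call, that must be joined before any classification can happen.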
With Dolly, we've been experimenting: we built out an internal knowledge base that lets our engineers onboard more efficiently. Instead of going out and searching through Confluence, engineers can now come on board and start asking questions about the things they need, whether it's a product or actually setting up their machines. This has produced a phenomenal degree of efficiency. We did some measurements, and we've seen over seven-and-a-half-times productivity gains. When we tried to extrapolate, it's hard to answer the question of how much time someone actually spends looking up information during their job, but with a very rudimentary, very conservative estimate, we were able to show around $900 in savings per employee. That's been a godsend for us.

And then we also found, you know, people talk a lot about these LLMs, these AIs, hallucinating or presenting wrong information. While that's obviously true, there's a component of that, we've also seen situations where an engineer has a question about how a product works, he goes and looks it up, and even he can be wrong. He can derive incorrect assumptions from the documentation he's reading. So humans are just as likely to be wrong as an LLM is, and there's always going to need to be that level of vetting either way. But otherwise, it's really helped us roll out this internal project.

Yeah, and I think that hits on what Ali was talking about: hey, you have to be able to trust it. And that's been a big theme. When we look at it, it's not only the ethical use of it, it's also: how do you work together? How do machine and person work together so that one plus one equals three?
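The internal knowledge-base pattern Jamel describes, retrieving the relevant documentation before handing a question to an LLM like Dolly, might look loosely like this minimal keyword-retrieval sketch. The document names, contents, and function are all hypothetical; a production system would retrieve over a real Confluence corpus, typically with embeddings rather than word overlap.

```python
def retrieve(question, docs, top_k=1):
    """Rank docs by naive keyword overlap with the question.
    A real system would use vector embeddings; this is only a sketch."""
    q_words = set(question.lower().split())
    scored = [(len(q_words & set(text.lower().split())), name)
              for name, text in docs.items()]
    scored.sort(reverse=True)
    return [name for score, name in scored[:top_k] if score > 0]

# Hypothetical internal docs an onboarding engineer might search.
docs = {
    "onboarding/laptop-setup": "how to set up your laptop and dev machine",
    "products/call-shield": "overview of the call protection product",
}

# The top-ranked doc would then be placed into the LLM prompt as context.
print(retrieve("how do I set up my machine", docs))
```

This retrieve-then-generate step is also why hallucination risk drops: the model answers from the retrieved page rather than from memory alone, though, as Jamel notes, the answer still needs human vetting.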
And I think that's been a big key to what we've been seeing at all the different events we go to. It's not about replacing people; it's about making them more efficient, letting them go faster.

Exactly, augmenting. Yep.

Yeah. And have you started to use any of the other stuff out there, like Copilot from GitHub? How deep in that direction of AI have you guys gotten?

So our team is not using Copilot; we're a Bitbucket shop, so we're not using Copilot at the moment. Most of our AI usage is in our scam prevention and in our vetting applications, since we have to vet businesses. And then there's the project I just referred to, which we call Maestro internally. That's the first internal tool we've developed to help with the employee side of efficiency gains.

Yeah, and I think that's huge. You know, I was at AWS, and I remember getting there and trying to figure out: how do I set up my environments? How do I really get going? How do I know what to do? And that was all during COVID as well. So going through and making people more efficient, like you said, quantifiable to $900 per employee, that's huge. And that was just one use case, right?

Yeah. So we're really excited about what we're seeing early on, and with Databricks, especially with all the announcements this morning, it's going to get even easier for us to deliver on these things, make people's lives easier, and ultimately deliver better products.

Was there anything in today's announcements that stood out to you?

Yeah, everything. But especially LakehouseIQ. I think that's going to be a gold mine, especially being able to ask real questions: hey, what happened to this process?
We see that a lot, where a job may fail and it really takes someone with Spark experience to dig in and understand it at times. Now, being able to ask those questions, or have it automatically let us know and then ask further qualifying questions about the issue, is going to allow us to make our products much more stable.

Yeah, I thought it was really neat how you could figure out where the data was coming from and how the failures happened. I thought the woman who did the demo did a really great job with it. That was fantastic. And in fact, I thought all the demos done on stage today were deep enough that somebody technical could get something out of them, but not so deep that somebody who wasn't overly technical couldn't follow along.

Oh yeah, they've really mastered that art form.

Seriously. And it always leads to a lot of extra questions. I've got my lead architect here, and as soon as the keynote was over, we were sitting in the back of the room strategizing about who we're going to seek out and start pinging about what they're presenting, to try to get some more information out of them.

Start to push on getting into the previews they announced.

Exactly, exactly.

And just based on our short conversations yesterday, I was keeping that in the back of my mind while watching what they were announcing and how they were announcing it. They didn't really get into Lakehouse Apps that much, but what I liked was how they talked about keeping the data private. To me, that would seem like a huge thing for you guys.

So huge. And conversely, the apps announcement has been right on time, because we've been discussing: say we develop an app on the Lakehouse marketplace.
How do you keep someone from just taking it, reading the source code, or, for example with notebooks and the data you share, reading the source and just recreating it themselves? You would essentially be open-sourcing the proprietary information that makes the business. So them announcing further details around that? Perfect. Chef's kiss.

Yeah, chef's kiss, I love that. I look at it and it just makes so much sense, because your IP is your data. That's what you do: how you do it, how you make sure a call is not a scam, how you get people to trust phone calls again. That seems like a very similar message to what Databricks is bringing from a data perspective: how do you trust the AI and have transparency?

And conversely as well, right? It's an arms race, like I discussed yesterday. The bad guys are out there using these same technologies. If the app weren't secure enough and they were able to see, hey, what actual data elements are they using, what can we obfuscate, what can we mask to get by, we'd be giving them the blueprint for how to scam people. We don't want to do that. Again, our goal is to protect people, protect customers, especially the elderly, because they're much more inclined to fall for these phone scams, and ultimately to bring trust and transparency back to the phone call.

Yeah, and I think that's so key: the trust and transparency. And we were even chatting about how they're now using AI voice cloning. Like you were talking about yesterday, they went from getting the first six digits of the number right to having my child's actual phone number, and a voice to go with it.

And a voice somehow, right? It's crazy.
And I think that's one of the things I'm really excited about: what you're doing and how you're doing it. And again, I asked in the press briefing how you got started, and you went through that, and then what percentage runs on Databricks, and I believe you said 100%.

100%, yep.

Which is awesome. They should be very excited about that, that's for sure. So you're a good portion of that two exabytes a day of processing that Ali was talking about.

For sure, yeah. You know, Arsalan mentioned yesterday: hey, who's this First Orion company in my internal reports that's at the top of every single category across all these SKUs? We've always tried to push, because we have to. We have to keep up and keep protecting people.

I love it, I love it. Well, I want to thank you for coming on. I really appreciate it. You're fantastic and such a wealth of knowledge, and I know a lot of app developers and data developers watch this show pretty religiously, and I think getting your view on what's going on with Databricks has been fantastic. So thank you, I really appreciate it.

Absolutely, thank you for having me.

Yes, thank you. And thank you for watching. We're theCUBE, taking the signal out of the noise and presenting it to you, giving you what's going on here live from the Databricks Lakehouse and the Data and AI Summit. We'll be back shortly with our next guest.