Hello, and welcome back to theCUBE, here from our Lakehouse press room over at the InterContinental, part of Databricks Data and AI Summit. I'm so excited to have Raj Bains with me, CEO of Prophecy. You probably haven't heard of them, but I think they're in a space that's going to become extremely hot, especially with everything that's going on with AI and ML, this whole idea of being able to bring data engineering to the masses and do it in an easier way, because there are skills gaps everywhere when you look at this ecosystem. So welcome on board, and why don't you give people an idea of what Prophecy is about.

Sure, thank you so much for having me here, I'm super excited. Prophecy is a low-code data transformation, or data engineering, product. You can think of it as the next generation of ETL, but it's different. What we looked at was that in the early 90s everybody was writing code, so you had SQL and Perl scripts and Bash, and then people decided, hey, that's not how we do ETL, let's all move to tools. And then we moved to the cloud, and people gave that up because those tools no longer work. So what Prophecy is, we provide a low-code environment which makes people productive, makes everything very simple, but underneath it we are generating high-quality code. On Databricks, it's Spark code, Scala code. On the SQL side, it's SQL with dbt code. So we are making people productive, but without giving up code.

And this is also helping them keep the data where it is and transform on top of it, right? That's really the idea of data engineering.

Yes, the data stays where it is. It stays in Databricks, it could stay in Snowflake. It stays there, we transform it, and we make it much easier to clean it and put it all together so everyone can work with it. There are people in data engineering who want to be more productive because they're serving many lines of business. On the other hand, you also have the line of business people who are frustrated, right? They're stuck behind the central data engineering team, and they want to get work done. They've been talking about data mesh and so on because they don't want to be blocked; they want to get their data transformation done themselves, and for that, having a low-code, visual drag-and-drop environment fits right in. So that makes data a lot more accessible to the line of business and removes their dependency.

I think even when we were just catching up before this, we talked about how we saw this collapse in the infrastructure layer, down at Kubernetes and so on. You had SREs and IT ops and a whole bunch of roles that kind of collapsed into platform engineering, and we seem to be seeing the same thing happening with data engineering and data analysts, almost an app engineer kind of role, with data analysts and data engineering coming closer together.

It's coming closer together. We're seeing a slightly different distribution in different companies, a little bit more data engineering, a little bit more data analysts, but it has to come together, right? I was part of the Hadoop movement back when that happened, managing Apache Hive, and I think where that went wrong is it said: people who are experts in data, experts in their business, have to come to us and learn all the nuts and bolts of our technology. We look at that and say it's just wrong.
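To make the idea of "low code on top, real code underneath" concrete, here is a minimal sketch of the kind of PySpark a visual source, cleanse, join, target pipeline might compile down to. The table names, columns, and inline sample data are illustrative assumptions, not Prophecy's actual generated output.

```python
# A minimal sketch of the kind of PySpark a visual pipeline might compile down to:
# a Source, Cleanse, Join, Target flow. Table names, columns, and the inline sample
# data are illustrative assumptions, not Prophecy's actual generated output.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example_pipeline").getOrCreate()

# Sources: in a real pipeline these would be Databricks tables,
# e.g. spark.read.table("raw.transactions")
transactions = spark.createDataFrame(
    [(1, "2024-01-03", 120.0), (2, "2024-01-04", -5.0), (1, "2024-01-05", 80.0)],
    ["customer_id", "txn_date", "amount"],
)
customers = spark.createDataFrame(
    [(1, " Ana ", "US"), (2, "Ben", "UK")],
    ["customer_id", "name", "country"],
)

# Cleanse: drop invalid amounts and trim stray whitespace in names
clean_txns = transactions.filter(F.col("amount") > 0)
clean_customers = customers.withColumn("name", F.trim(F.col("name")))

# Join: enrich each transaction with customer attributes
enriched = clean_txns.join(clean_customers, "customer_id", "left")

# Target: in practice this would be written back to a Lakehouse table,
# e.g. enriched.write.mode("overwrite").saveAsTable("curated.transactions")
enriched.show()
```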
The technology has to go meet the people in the business where they are. They don't care about your technology. They want to get their business jobs done, so we have to enable them to do that. That's what we are focused on as we make it easier and easier. You can be a data engineer, you can be a data analyst, everybody's empowered.

As important as data is becoming, you've got to enable everybody, so yeah, they should be experts in their domain, not in the underlying technologies. It makes total sense. I've dabbled in this world for quite a while now, and when you start to look at transparency, making it easier, low code, even some of the announcements today, some of the LakehouseIQ stuff they're doing, it seems to come together with what you guys do as well and helps to bridge that gap.

Yes, definitely, and generative AI is going to play a big role; actually, that speaks to a new product that we've released. So we have the low-code product that makes data engineering on Databricks and other Spark, open source Spark, very productive and simpler, and we have a lot of really large enterprises moving onto us and running these tens of thousands of data pipelines. So you won't hear of us, because I can't advertise. I can't advertise because you have all these modern data stack companies with these tiny point features running around with influencer startup engineers who talk and talk and talk, and my enterprise customers, I can't even use their logo. We are in the modern data stack with all the other tooling companies. Right now we are running the largest workloads, tens of thousands of pipelines, for Fortune 5, Fortune 10 companies, right? And we can't even use their logo. So that's one thing, so we were doing that, right? But then, yeah.

But for that, even though you can't say who they are, what is the use case you're solving for those Fortune 5, Fortune 10 companies?

The basic ETL use case, the simplest way I explain it is: I'm a product manager, I need data, right? Let's say I run a credit card product and I need data because I want to see who my profitable customers are, then what makes them unique. I need demographic data. Then I'm like, which rewards do they like? Can I find more customers? All of that data is specific to my use case. Then I look over my shoulder, and I've got the mortgage person. They need a completely different set of data. So let's say you have a bank. Every team needs a different set of data for their product, for their analysis, and they end up with these production pipelines, tens of thousands of them, running across their businesses, right? And of course, they're running right now on legacy on-prem technologies. What they need is to get productive, but also move all of those over.

Yeah, and you were saying, I love the fact that, and I'm an open source kind of guy, you're doing it with Spark and you're doing it with dbt Core. For those who don't know the difference between dbt Core and dbt Labs, dbt Core is the open source version of dbt. And so I would assume they could use it with dbt Labs too if they wanted to.

They could. But we are very much focused on what our customer wants, not so much on just saying, yes, we will support an open source format.
So in Spark, you build pipelines, everything is open source. On SQL, we wanted to do the same. dbt Core became popular, so OK, we'll adopt it, right? But then our customers come in and say, you know what? We are coming from Alteryx. Data analysts have been using visual drag and drop, using Alteryx, and they're like, we don't want to write complicated SQL. But some of them do. So now we have this dilemma. We listened to everything the customers asked us to do, and it was clear that dbt Core is a good starting point. But dbt Labs is not much; it's a UI on top of dbt Core, right? So we said, can we give the dbt Core open source community a product that is 10x better than dbt Labs?

So where do customers go to get started with Prophecy? Do they go to the Databricks Marketplace? Is that a good place to start?

For Prophecy, you can go to app.prophecy.io. You can sign up for it, start using Prophecy on Databricks, or start using us on a SQL data warehouse. You could also go to Databricks Partner Connect; if you click that, the products handshake and you are into Prophecy, with everything in your Databricks account already connected, and you can get started. And then the other thing, there's exciting stuff there: we're also doing a bunch of work with generative AI. That is very interesting.

Excellent. I think it's super interesting that you're going down this path, and it's a big market for you guys, I'm sure. I'm sure you're looking at this and saying, hey, the GenAI stuff is super interesting, and that's where the future is.

Yes, GenAI is the future. And coming back to the SQL product, right, for data analysts, what we released last week, which is very exciting, is Data Copilot. With Data Copilot, you write something in English. You say, hey, give me the top five spending customers. And it will say, OK, I'll get the customers and their orders, aggregate orders by customer, find the top five, sort them. It will build the whole visual pipeline for you, and underneath, dbt Core code gets generated. It is really democratizing. So yes, data engineering will become great with generative AI, but Data Copilot will make the business data users much more productive. Wrestling with a particular SQL dialect or format is the last generation; it's all moving to generative AI and visual development. If that can do 70% of your work, why would you wrestle with code?

Absolutely, it makes sense. Having been there and done that, and done all the Perl scripting and stuff to do ETL in my past life, I'm glad there are things like Prophecy coming around the corner, because nobody wants to be sitting there doing that. It was tough.

It used to be tough. And the other very interesting thing is we've got customers saying, hey, I'm using you for my ETL on structured data, and I want to build generative AI apps. That becomes very interesting, because there's been this Databricks and Snowflake dynamic playing out. So we said, hey, can you ETL all your unstructured data into something like Pinecone or a vector database? So we built that. Now you can build a chatbot, right? You just ETL all your data into a vector database, and when somebody asks a question, you look up the relevant documents, send them to OpenAI, and say, here they are; given all these documents, what do you think the answer is? You can build that in a week.
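The chatbot pattern Raj describes, ETL your documents into a vector store, look up the relevant ones, and hand them to OpenAI along with the question, is essentially retrieval-augmented generation. Below is a minimal sketch assuming the openai Python client; a plain in-memory cosine-similarity lookup stands in for a real vector database such as Pinecone, and the model names are assumptions.

```python
# A minimal RAG sketch of the flow described above: embed documents, find the most
# relevant ones for a question, and ask OpenAI to answer using only those documents.
# An in-memory cosine-similarity search stands in for a vector database such as
# Pinecone; model names are assumptions, and an OPENAI_API_KEY must be set.
import numpy as np
from openai import OpenAI

client = OpenAI()

documents = [
    "Our premium card earns 3x points on travel and dining.",
    "Mortgage pre-approval requires two years of income history.",
    "Reward points expire after 24 months of account inactivity.",
]

def embed(texts):
    """Return one embedding vector per input text."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(documents)  # the "ETL into a vector store" step

def answer(question, top_k=2):
    # Retrieve: rank documents by cosine similarity to the question
    q = embed([question])[0]
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    context = "\n".join(documents[i] for i in np.argsort(scores)[::-1][:top_k])

    # Generate: ask the model to answer from the retrieved documents only
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided documents."},
            {"role": "user", "content": f"Documents:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(answer("When do reward points expire?"))
```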
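Circling back to the Data Copilot request earlier in this exchange, "give me the top five spending customers": on the SQL side Prophecy generates dbt Core models, but the same transformation logic can be sketched in PySpark for consistency with the earlier example. The customers and orders schema here is an assumed illustration, not the product's actual output.

```python
# A sketch of the logic behind "give me the top five spending customers".
# The schema and sample data are assumptions; Prophecy's SQL product would emit
# this as a dbt Core model rather than PySpark.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("top_spenders").getOrCreate()

customers = spark.createDataFrame(
    [(1, "Ana"), (2, "Ben"), (3, "Chen")], ["customer_id", "name"]
)
orders = spark.createDataFrame(
    [(1, 120.0), (1, 80.0), (2, 45.0), (3, 300.0)], ["customer_id", "amount"]
)

# Join customers to their orders, aggregate spend per customer,
# then sort descending and keep the top five, the steps the copilot lays out visually.
top_spenders = (
    orders.join(customers, "customer_id")
    .groupBy("customer_id", "name")
    .agg(F.sum("amount").alias("total_spend"))
    .orderBy(F.col("total_spend").desc())
    .limit(5)
)

top_spenders.show()
```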
It's amazing, the new kinds of applications you can build given all of that. Totally exciting. I want to thank you for coming on; it's been great. I think this is a space we're definitely going to be watching, because it's just starting to get super interesting with everything that's coming. So Raj from Prophecy, thank you. You can check them out in Partner Connect. This is theCUBE, here from our Lakehouse press room. Thanks, and we'll be back in a few minutes with our next guest.