From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante.

We believe the future of intelligent data apps will enable virtually all organizations to operate a platform that orchestrates an ecosystem similar to that of Amazon.com. Now by this, we mean dynamically connecting and digitally representing an enterprise's operations, including its customers, partners, suppliers, and even competitors. This vision includes the ability to rationalize top-down plans with bottom-up activities across the many dimensions of a business, for example, demand, product availability, production capacity, geographies, et cetera. Now unlike today's data platforms, which generally are based on historical systems of truth, we envision a prescriptive model of a business's operations that is enabled by an emerging layer that unifies the intelligence trapped within today's application silos. Hello and welcome to this week's theCUBE Research Insights, powered by ETR. In this Breaking Analysis, George Gilbert and I explore in depth the semantic layer that we've been discussing since early last year. And to do so, we welcome Molham Aref, who's the CEO of RelationalAI. Molham, welcome to the program, sir. Great to see you.

Great to see you, Dave. Great to see you, George. It's always nice to talk to you.

Okay, so let's just do a quick review here. We talked about the shift from an application-centered world to one that is data-centric, where logic is embedded in the data and not locked up in application silos. For decades, software's been all about automating processes. And now in the age of AI, we're automating decisions and AI is taking actions. And to enable this, we have to have a unified metadata model, and we're moving from a world of historic systems of truth to a real-time model of an organization. Now, to get to where we are today, we had to separate compute from storage to get to cloud scale. And now we think we need new technology to capture all the intelligence that was locked inside of the application silos so that we can coherently work with that shared data across the enterprise. This is what we call the semantic layer. So question for you, Molham. What exactly is a semantic layer, and how is it broader than the semantic layers that define metrics and dimensions for BI tools, like we know from dbt, AtScale, LookML, et cetera?

Yeah, great question. So, as you said, people have been moving all their data onto these new, super scalable, modern data stacks. And sometimes people refer to them as data clouds. I know that Snowflake likes to refer to itself as a data cloud. And others. And others, yes, Salesforce as well. So that has solved a ton of problems for folks because it gives them the ability to go to one place and find all the data they need. As you said, we haven't had the scalable data management technology up until now that lets us do that, okay? And so we put all the data in one place, and now everybody who wants to consume data can go to this one place and get it. But what people realized is that data in those places is fragmented. It still reflects the database models, the database schemas, from which the data comes, okay? And those databases are usually driving applications. And it's often the case that we see hundreds, thousands, in some cases tens of thousands of applications being driven by a similar number of databases that feed into the data cloud and the modern data stack.
And so the idea of a semantic layer emerged from trying to solve the next problem, which is: how do I avoid having everyone do the work to understand that item in one database really means product in another, really means SKU in another, okay? Can we unify terms, and can we create a unified model of the business so that business people can speak in one vocabulary, one taxonomy, one ontology, okay? And then we also realized that there's a lot of stuff that we left behind in the application logic that was driven by those databases, okay? So you might, for example, compute a gross margin calculation in a bit of Java or a bit of C# or a bit of Python. What people using the modern data stack realized is that every dashboard builder, every spreadsheet user, every BI tool, every data scientist was recreating that definition in an ad hoc way, you know? How do you compute somebody's age from their birthday? There are just many things like that, okay?

Now there's a category of things and metrics that people want to use when they're doing business intelligence. And I think of semantic layers as coming in, you know, three different buckets, okay? There's the BI-centric, SQL-centric technology, using dbt and LookML and AtScale and some of the technologies that you mentioned, where it's really creating semantics around almost like a headless BI. You get to define the metrics once, and those semantics show up in the database, and instead of having to figure out what gross margin is on a use-case-by-use-case basis, it's just in the system and you just look it up, okay? And that is obviously very important, very valuable. It makes those semantics shareable. It makes them accessible to non-programmers because now you have them declared. It makes it so that it typically takes a lot less effort, a lot fewer lines of text and code, to express them. So they're easier to develop, maintain, debug, and scale out. It also makes it possible to reuse them so that you don't have to replicate them all the time, and it makes it possible to optimize across them, okay? And it makes it possible even to take that definition and map it to different underlying technologies, creating a little bit of vendor independence for the enterprise from the set of vendors and technologies that they can use to support all of this, okay? So that's all very good, and we're seeing, you know, a lot of people move in that direction. That first community is doing this mostly in SQL or in domain-specific languages that map to SQL, that translate down to SQL.

There's a second, much smaller community that's trying to capture semantics using the semantic web standards, you know, OWL, SPARQL, SHACL. And those technologies are not nearly as popular, but of course the semantic web standards were created to capture semantics. And what's nice about them is that they're standards, you know, that aren't tied to a DSL. And so we see some people moving in that direction.

And then we see a third type of semantic technology that's based on modeling and modeling languages. One that we're very familiar with is called Legend. It was created at Goldman Sachs, and it helps them unify and build models basically across all the data silos and abstract over the specific technologies. And so a lot of people at Goldman Sachs will interface with their data assets through the semantic layer based on Legend.
And they'll ask questions of that system, and then the system can generate a SQL query or a Python program or a Java program or whatever to get those questions answered. Legend has been open sourced, so there's some attempt at building community around that. Other folks in financial services, like Morgan Stanley, have developed systems, a system called Morphir, for example. There are conceptual modeling tools that you can do this with. And we see actually quite a few enterprise application companies develop their own semantic layer. Blue Yonder has one, SAP has one, others have these semantic technologies. Rumor has it that Salesforce is developing one to create an interface across these data silos, okay? Most of these technologies to date capture backward-looking semantics, metrics, et cetera. And I think the future of semantic layers is gonna be to incorporate intelligent semantics, things that predict, things that do predictive analytics, prescriptive analytics, things that answer harder questions than you can answer just with descriptive analytics and SQL and so on.

Yeah, and we're gonna unpack that today. So George, we have these various paths that Molham just kind of described. Take it from here. You and I have talked about what that future looks like, and we're gonna get to that, but let's talk a little bit about the DBMS as we know it, George.

So yeah, Molham, you just talked about different approaches to capturing or to representing semantics, but explain to us the technology that can unify all those approaches and be forward looking, that can capture the predictive and prescriptive, but also describe all the entities and activities in a business, backward looking and forward looking. Why do we need new technology for that?

Well, we certainly need new technology to capture intelligent semantics, okay? Most semantic layers don't really speak to that. At some level, some of them let you call out into Python code and so on and call procedural systems. And we should also distinguish between semantics captured as code versus semantics captured as a model, or relationally, as equations. Semantics captured as code suffer from a variety of problems, including lack of shareability (business people don't usually understand code), and it ties you to a particular technology, so that portability I was talking about earlier becomes more problematic. So a lot of the value of semantic layers comes from the fact that you're capturing the semantics declaratively, in a technology-independent way, and so on. So anyways, I think the right abstraction for these richer semantics that power intelligence is a knowledge graph, a relational knowledge graph specifically, so that the knowledge graph, if it's relational, is compatible with the underlying modern data stack and data cloud, which is usually based on the relational paradigm using SQL and relational query languages, okay? So when you have that as the abstraction for the semantic layer, you can now capture much richer types of knowledge, including statistical and probabilistic knowledge, deterministic logic knowledge, symbolic knowledge, and you can start to think about answering questions that might require graph analytics, or rule-based reasoning, or prescriptive analytics like integer programming and mathematical programming, the things that drive every supply chain network on the planet, or predictive analytics, things from time series forecasting to simulation to probabilistic programming to graph neural networks.
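To make the contrast between semantics captured as code and semantics captured declaratively a bit more concrete, here is a minimal, hypothetical sketch in plain Python. The base relations, rule comments, and numbers are invented for illustration, and this is ordinary Python rather than RelationalAI's actual modeling language; the point is only that a definition like gross margin or age-from-birthday is written once as a derived relation and reused everywhere, instead of being re-implemented in every dashboard, spreadsheet, and notebook.

```python
# A minimal sketch of "semantics defined once as derived relations" rather than
# re-implemented ad hoc in every tool. The relation names, rules, and numbers are
# illustrative only; this is ordinary Python, not RelationalAI's modeling language.
from datetime import date

# Base relations, as they might land in a data cloud from different source apps.
sales = [  # (sku, revenue, cost)
    ("SKU-1", 120.0, 80.0),
    ("SKU-2", 200.0, 150.0),
]
customers = [  # (customer_id, birth_date)
    ("C-1", date(1984, 6, 2)),
    ("C-2", date(1999, 11, 20)),
]

# Derived relations: each business definition is written exactly once, so every
# dashboard, spreadsheet, and model consumes the same semantics.
def gross_margin():
    """gross_margin(sku, m) :- sales(sku, revenue, cost), m = (revenue - cost) / revenue."""
    return {sku: (rev - cost) / rev for sku, rev, cost in sales}

def customer_age(as_of: date):
    """customer_age(id, age) :- customers(id, birth), age = whole years between birth and as_of."""
    return {
        cid: as_of.year - birth.year - ((as_of.month, as_of.day) < (birth.month, birth.day))
        for cid, birth in customers
    }

print(gross_margin())                  # {'SKU-1': 0.333..., 'SKU-2': 0.25}
print(customer_age(date(2024, 1, 1)))  # {'C-1': 39, 'C-2': 24}
```

In a real semantic layer these definitions would live in the layer itself and compile down to SQL or to the knowledge graph; the sketch only shows the define-once, consume-everywhere idea that the rest of the discussion builds on.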
These things are not possible in the semantic layers of today, the ones that are not based on a relational knowledge graph that sort of brings the intelligence to the table.

Yeah, so we've talked about this with our audience before, that knowledge graphs are very powerful, they're expressive, but they're hard to query, and so that's something that RelationalAI believes it's attacking. But I want to step back for a bit, just to help people again set context. Today we have a brute force way of getting at least part of the way there, and this dramatic graphic that we're showing has Jean-Claude Van Damme basically straddling two 18-wheelers, and the trucks, you know, the truck on the left represents today's data platforms, right? The left side, which are mostly SQL DBMSs, and as the platforms incorporate more and more of the application semantics that we've been talking about, it really gets harder and harder to straddle them. And so of the major modern data platforms, you see Databricks with Unity Catalog very aggressively trying to add semantics to its platform. Snowflake is likely going to follow suit by building its own metadata catalog. AWS's DataZone maybe gives us clues as to the direction that they're headed, you've got Google's Dataplex, which seems to be going down this path, Microsoft Fabric and the Power Platform are maybe a likely path for them, and Oracle's going to try to attack this from its very DBMS-centric point of view. You know, we'll see, but today we have this collection of bespoke tools spanning governance, security, metrics, data quality, observability, and transformation that have to be cobbled together. So this graphic from ETR's Emerging Technology Survey, a survey of more than 1,500 IT decision makers about which emerging tech platforms they're using (these are all private companies), shows some of the tooling that is representative and gets us partway to our vision of the future. The Y-axis is net sentiment, a measure of intent to engage, and the X-axis is mindshare. You know, Grafana stands out a little bit, you see dbt and Fivetran are prominent, as is Collibra, but there are many, many choices that organizations have that require them to stitch together different elements of the stack. But the Salesforce Data Cloud and Palantir platforms are somewhat instructive with respect to the future. At least we think so. George, can you explain that?

Yeah, we actually talked to the EVP of the Salesforce Data Cloud recently, and what really is unique is that they're up-leveling today's data platforms by creating this metadata-driven set of semantics that also borrows the application semantics from the Salesforce operational apps, so that setting up a pipeline that ingests, transforms, and unifies all the data is now a configuration problem, not a code problem. And Palantir takes that somewhat further because they can model entities that represent, you know, the rest of the business. So, Molham, for customers that are today fully invested in, you know, the big data platforms, could RelationalAI become a platform that hosts, simplifies, enhances, and even unifies this cobbling of tools that is an attempt to start adding semantics?

Yeah, absolutely. So one of our fundamental principles here as we go to market is that we wanna meet people where they are. The way we go to market, for example, is as a co-processor, an AI co-processor for data clouds, okay?
Starting with Snowflake, which we think has a very compelling data cloud offering, but, you know, we think highly of other data clouds like, you know, Fabric and Salesforce and others, okay? So we are already meeting people where they are in the sense that our system runs inside Snowflake; it runs inside the security perimeter and leverages all the governance machinery that comes with that system, okay? It's also architected like Snowflake. Remember, we're adding support for workloads that these data clouds don't support, okay? Graph analytics, prescriptive analytics, predictive analytics, and so on. So we have to, you know, eliminate friction as much as possible. Like, you know, copying data out, synchronizing it, re-securing it, re-governing it, all of that goes away, you know, in our model. And so we are architected like Snowflake so that we can provide the same kind of experience that made Snowflake and systems like it so compelling relative to Hadoop and Teradata and these sorts of non-cloud-native solutions. And then at the paradigm level, we're trying to meet people where they are, because the paradigm of data is the relational paradigm, okay, and asking people to do data-centric things in paradigms that were invented for coding and for application development is not helpful.

We've also positioned our knowledge graph as a target for these semantic layers that we're talking about, okay? So when we've worked with Legend, for example, again, their users work at that level and it transpiles down to various systems, including ours as a target. And the point that you're making is a very valid one. If an enterprise ends up with 17 different semantic layer technologies, they're not gonna be helping themselves. And so the opportunity here, in this transformation as we go from application-centric to data-centric and having all the data in one place, is that instead of getting your model, you know, on a per-application basis or per-vendor basis, your model is now of your enterprise, okay? And you have to own that. You have to own those semantics, okay? And so treating semantic layers as just another, you know, set of technologies and having, you know, 10 different approaches to it is not gonna let you capture all the value that you can capture from a semantic layer. The world, I think, wants to have a unified model of the business, okay? Which can't come from a vendor. It's gonna come from the business. It's about your business, your enterprise. And they want to be able to express that model in a way that's not code-based, so that it's portable. It can be expressed in, you know, whatever semantic technology makes sense. And then those portable, declarative, relational semantics get, you know, behind the scenes, compiled down to whatever target system or systems you want to use.

We love this vision. I think we're very much aligned with it. But I gotta push you a little bit, Mo. Please. You're being very diplomatic here. Because you're thinking about RelationalAI as complementary, you know, versus kind of disruptive to existing data platforms. You call it, like, an AI co-processor for data clouds. I can't help but think of NVIDIA as an AI co-processor for x86. But you see it as complementary, not disruptive. But if this independent, sort of semantically enhanced data fabric comes to the fore, doesn't the data platform become kind of just a storage engine?

You know, look, I don't think these things are easy to build.
Like building systems like Snowflake and BigQuery and, you know, Fabric and so on, they're not easy to build. So, yeah, all the big players are gonna have one of these. And, you know, all the enterprises we work with don't want to be held hostage by any one of them, right? We typically see, you know, more than one of them in an enterprise, okay? And so if you tie your semantics to a vendor where you're getting all the compute stuff, okay, there are some pluses and some minuses to that, to say it diplomatically, okay? So I do think semantics done right have to be independent of technology. And I do think this NVIDIA analogy, actually, from our perspective is a very positive one, because it's worked out really well for all the CPU manufacturers. They got to focus on being CPU manufacturers, and it worked out really well for NVIDIA, as we know. And, you know, in your device, you don't use the GPU for many things. You don't use it for word processing or email or spreadsheeting. But when you need it, you really need it, because when you need intelligence, when you need visualization, when you need graphics and so on, it's much, much more cost effective to have something on the motherboard that has access to the memory that you have on your system. It's well integrated, and you just farm out that work to it without having to leave your laptop or your iPad to go to a specialized device for gaming or for whatever it is you're doing. So this is a new way of thinking about working with these platforms, with these data clouds that are trying to be much more than databases. Really, with what Snowflake is doing today, they have the opportunity to be what Oracle was in the 90s, you know, the platform that everybody builds their applications on, right? And so they need to open it up for co-processors and for people like us to augment it.

Right, well, look, we've had decades of challenges in building top-down enterprise models, which you alluded to before. You know, custom-built enterprise data models gave us packaged apps, think SAP, Oracle, NetSuite, Salesforce, and EDWs, enterprise data warehouses, rather than, you know, data warehouses and data marts. Even BI has been sort of challenged to get widely adopted with shared semantics. We talked about dbt; AtScale is another example we often use. But then you've got these bottom-up metrics, bookings, billings, revenue, and you've got top-down dimensions like the organizational hierarchy. The question we have, Molham, is how do LLMs fit in this whole equation, and can they simplify this, you know, what is often seen as a mess, or at least a challenge, for many organizations?

Yeah, I mean, there's a lot to unpack there, but definitely LLMs have been a gift to people in our business, right, the people building semantic layers, a real breakthrough. And the nice thing about LLMs and knowledge graphs is it's very symbiotic. You can use an LLM to eliminate a lot of the labor involved in creating a knowledge graph, like having it mine knowledge from all the places it exists in an enterprise, from applications, from documents, from images. There's knowledge in people's heads even. So it's a very useful tool for making the construction of a semantic layer easier, cheaper, easier to evolve, and so on.
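As a rough illustration of that LLM-assisted bootstrapping, here is a minimal Python sketch. The concept list, the raw column names, and the call_llm helper are hypothetical placeholders; a real pipeline would call an actual model API, validate the output, and route low-confidence mappings to a human reviewer.

```python
# Sketch: use a language model to propose mappings from raw source-schema columns
# to a small set of shared business concepts. `call_llm` is a stand-in for a real
# model call; here it returns a canned answer so the example runs on its own.
import json

ENTERPRISE_CONCEPTS = ["Customer", "Product", "Order", "Store", "Employee"]
raw_columns = ["erp.item_no", "ecom.sku", "sap.article_id", "crm.acct_name"]

def call_llm(prompt: str) -> str:
    # Placeholder for an LLM API call (OpenAI, Anthropic, a local model, etc.).
    return json.dumps({
        "erp.item_no": "Product",
        "ecom.sku": "Product",
        "sap.article_id": "Product",
        "crm.acct_name": "Customer",
    })

prompt = (
    "Map each raw column to exactly one of these business concepts: "
    f"{ENTERPRISE_CONCEPTS}. Columns: {raw_columns}. Reply as a JSON object."
)
mapping = json.loads(call_llm(prompt))
print(mapping)  # {'erp.item_no': 'Product', 'ecom.sku': 'Product', ...}
```

The interesting part is the item/SKU/article unification described next: the model proposes the mapping, and the knowledge graph is where the agreed-upon version of it lives.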
And when you have that semantic layer in place, that knowledge graph, that semantic layer makes the language model much more effective at answering questions that it wouldn't be able to answer if you just pointed it at, you know, a bunch of data. We typically see, you know, it's not uncommon to see 100 million columns of information across all these databases moved into systems like Snowflake and so on, right? So in the same way the semantic layer and the knowledge graph make it easier for you and me to navigate these very complicated data silos, we can do the same for a language model by giving it a cleaner abstraction, representing our enterprise using a few hundred concepts at most, okay? Even though we might have 100 million columns or 200 million columns of data, even the most complicated Fortune 10 enterprises don't have more than a few hundred concepts that you need to understand to model them, you know, the concept of a customer or a product or an employee or a store or a manufacturing facility. There are just not that many unique concepts in an enterprise. Now, how you relate those concepts to each other, you know, is where the complexity lies, and language models can help us discover those relationships, can help us name them in a way that's consistent, can know that in some cases item can be mapped to SKU, which can be mapped to product, which can be mapped to article if you're using SAP, they call them articles, and can help us lower the cost of evolving the semantic layer as businesses evolve and as the world evolves and so on. Now, you need a little bit more than that. You need processes that let you decompose your business so it's not, you know, a monolithic model, where you can think about business-function-specific sub-models that use that same set of standard concepts that we agreed are gonna be the concepts that we use in our enterprise. Okay, but it's a few hundred concepts. You can get your arms around that. What you can't get your arms around is 200 million columns of data.

Right, right, right. So, I guess, maybe it's not the center of gravity shifting, but there's clearly value shifting. You know, where the DBMS was sort of everything, whether it's Oracle or Snowflake or whatever, now there's this metadata and intelligence that's defining the business entities that you're talking about, Molham, and George, you've mentioned many times Salesforce Data Cloud and Palantir as sort of examples. But there's this move to a prescriptive, intelligent model of the business. That's the future, the picture that we're trying to paint. And maybe an example is what Blue Yonder is building on top of RelationalAI and on top of Snowflake, actually. Molham, I wonder if you could talk about the movement across the three stages, i.e., DBMS-centric to metadata-centric to this intelligent model of the business, and how that unfolds. How do you see that?

Yeah, I mean, I think Blue Yonder's doing really great work in this area. They made the leap, I think, very effectively. They're in the process of sort of re-platforming everything on top of Snowflake, okay? And that's already, I think, been very positive for them because it's been very positive for their customers, because their customers typically are large enterprises that have data spread across hundreds, thousands of databases, they have application-centric systems, and they're moving all their data onto Snowflake.
And so, Blue Yonder is saying, okay, we're gonna move the applications onto Snowflake and we're gonna be part of this new world as an application provider, okay? And so it's very creative, very clever, I think, for them to sort of adopt this. It's understanding the industry trends. Now, in that process, they have business logic, they have semantics, and the starting point was that they're all written in programming languages; they have code-based semantics. And in that process, we're working with them to see, can we actually have these semantics be relational? Can they be based on a knowledge graph and have them be relational? Now, as you know about Blue Yonder and SAP and many others, when you're helping to run someone's supply chain network, it's very important that you can do predictive analytics, because you need to know what demand's gonna be next week, next month, next quarter. It's very important that you can do prescriptive analytics. I mentioned earlier that integer programs run the world, because no airplane or truck or train or ship goes from point A to point B without being scheduled using sophisticated prescriptive analytics, okay? And so we've seen already, with some of their applications, we were able to recently replace about 116,000 lines of, in this case, C++ code, some old application logic, with 3,000 lines of relational rules in the knowledge graph, okay? The implications of that are immense, because a lot of the cost in building software is attached roughly to lines of code. And when you can cut it by an order of magnitude or even two orders of magnitude, your cost of development goes down, your cost of testing goes down, the quality, the number of defects, are tied to lines of code, and your ability to evolve your software, your ability to share that business logic with your customers and have them evolve it and tailor it for their businesses, is increased substantially, okay? So real game-changing stuff. And I think the sooner most application companies come to terms with the fact that the world is becoming data-centric and we're moving to these data clouds, the more relevant they're gonna be in the future, because they have these amazing assets around all these semantics and all this knowledge about how to drive a supply chain network, or how to analyze a wireless network or a cellular network for quality of service, or all these things. All this IP, all these semantics that are trapped in code can now be moved into these knowledge-graph-based systems, these intelligent systems, and can be deployed on these data platforms that everybody wants to be on.

Let me jump in, Dave, and just follow up on this. Molham, maybe you can elaborate. You've talked about in the past how we've had separate stacks for diagnostic analytics, predictive, prescriptive; the planning, simulation, and optimization were all separate stacks. What can you do with something like Blue Yonder when all of those are integrated and you can do all of that modeling to create one coherent prescriptive model? Give us some use cases.

Yeah, so, I mean, look, the supply chain world you were talking about, that's a very rich area for these types of sophisticated analytics, right? And you don't just do descriptive analytics in most industries, actually, right? In financial services you do this stuff, in telco you do this stuff, in retail you do this stuff, in CPG you do this stuff, okay? These applications have been running the world using these very sophisticated technologies.
We don't see it as consumers, typically. And so think about the alternative. Think about how people have had to do this, right? So imagine the world is moving all this data into these data clouds, and now you want to do, I'm gonna pick, graph analytics, rule-based reasoning, prescriptive analytics, and simulation, okay? Historically, these things have needed point technologies that required you to pull the data out. Like, you just spent a lot of time and energy, and you're excited about having all your data in a data cloud, and now you have to pull it back out again. It's no longer secured, it's no longer governed. Now you have to worry about synchronization. You have a copy out there somewhere in the world, right? Is it really, you know, in sync and all of that? It's typically moved into a point solution that's not cloud-native, so it doesn't have the elasticity and the scalability and the consumption-based pricing that you have in a data cloud, and it's not relational. Like, you're talking about gobs of friction and gobs of cost and headaches and risks and so on. So in this co-processor model, by moving support for those workloads into the data cloud, it's as if Snowflake supported all this stuff. Now Snowflake magically can do graph analytics and rule-based reasoning and prescriptive analytics and simulation. And so all that glue disappears, okay? And it's not just the cost of developing the glue in the first place. Now, if you change a field in your application or your model, you don't have to go replicate it across all the point technologies, et cetera. You just make a change in one place. And so it's not just a, you know, technology savings; it lets you operate at a totally different level of speed. You can evolve your models faster, you can evolve your business faster, because technology is not holding you back, okay? So it's a real game changer if you can bring intelligence to the semantic layer and bring intelligence to the underlying data cloud, so that you don't have to leave these systems to go get a prediction here, a language model there, simulation somewhere else, graph analytics somewhere else, and rule-based reasoning somewhere else, okay? I hope my enthusiasm is clear.

Would an example be something like where Amazon almost runs with very few hands on the wheel? Their planning process is tied to operations, but their operations learn and update forecasts, which replan and then drive operations some more, in almost like a closed learning loop. Is it something like that?

You know, what I'm talking about enables exactly that, okay? And you should bring Duncan on from Blue Yonder, and I'm sure he'll tell you about what they're working towards, but that's exactly what we're trying to do: make it so that there's a digital twin of your business, you know, the semantics sort of capture enough of your business where now you can get a feedback loop. You know, in business you put in money and you get out money, and you can sort of monitor that feedback loop and make the adjustments you need to make in terms of how you allocate capital to labor or to inventory or to equipment or to real estate or to all these things that give you a return. How do you allocate it optimally, okay, so that you maximize the value coming out of your business, okay?
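To make the prescriptive analytics thread concrete, here is a toy Python sketch of the kind of allocation decision an integer program answers. The products, demand forecasts, margins, and capacity are invented, and PuLP's bundled open-source solver stands in for the industrial-strength solvers real supply chain planning systems use; the point is only that a forecast (predictive) feeds an optimization (prescriptive) that outputs a plan.

```python
# Toy prescriptive-analytics example: choose how many units of each product to
# ship on a capacity-constrained truck to maximize forecast margin.
from pulp import LpMaximize, LpProblem, LpVariable, lpSum, value

forecast_demand = {"widget": 40, "gadget": 25}   # e.g. output of a forecasting model
unit_margin = {"widget": 3.0, "gadget": 5.0}     # dollars per unit
unit_volume = {"widget": 2.0, "gadget": 4.0}     # cubic feet per unit
truck_capacity = 150.0                           # cubic feet

prob = LpProblem("shipment_plan", LpMaximize)
ship = {
    p: LpVariable(f"ship_{p}", lowBound=0, upBound=forecast_demand[p], cat="Integer")
    for p in forecast_demand
}

# Objective: maximize total margin of what gets shipped.
prob += lpSum(unit_margin[p] * ship[p] for p in ship)

# Constraint: everything has to fit on the truck.
prob += lpSum(unit_volume[p] * ship[p] for p in ship) <= truck_capacity

prob.solve()
plan = {p: int(ship[p].varValue) for p in ship}
print(plan, "expected margin:", value(prob.objective))  # {'widget': 40, 'gadget': 17} 205.0
```

The coherent-model argument is that the forecast, the optimization, and the data they run over all live behind the same semantic layer, instead of the forecast sitting in one point tool and the solver in another.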
If you have a hodgepodge of tech that's disconnected and siloed, you know, I haven't seen anyone be able to do anything resembling that in a siloed world, okay? You can do it locally, you know, you have price optimization or replenishment optimization or logistics optimization, but you cannot do it holistically, because these silos, you know, are just disconnected. Yes. So it's really gonna be the application companies, the solution companies let's call them, okay, that work on this platform, that let you do that, that let you create artificial general business intelligence, okay? Self-driving businesses, really, is what you're talking about, George, okay? They need to be on this platform.

Yeah, so Blue Yonder, for those of you who don't know, Duncan is Duncan Angove, CEO of Blue Yonder, and a big chunk of their business is the old Manugistics business, which is obviously legacy, but they're... And i2. Yeah. And yes, right. And they're taking those assets and bringing together this modern sort of supply chain system that Molham's describing. I want to ask you, when you have this coherent, metadata-based model of all your data, we talked about LLMs and the role that they play in simplifying the legacy processes, and a lot of people are really excited about what they consider state-of-the-art retrieval-augmented generation, or RAG, approaches. Could this future leapfrog RAG? It's almost like RAG, I'm hearing, and you didn't say this, but I'm asking, is a stepping stone to this self-driving business, and becomes the new legacy?

Look, I think RAG is an important technique for interfacing with language models, which represent, let's say, a statistical model of the world, okay? I mean, these are amazing things, but, you know, even if you can make language models as smart as you and I, okay, I know you and I are more effective if we have access to a library and a computer and a calculator and all these things, okay? And so RAG is a way of connecting the language model to the sort of deterministic or symbolic assets and the data assets that you have, okay? It's actually a term for an umbrella of techniques that people use to do that. And what I think people are coming to understand is that RAG isn't just about semantics expressed as vectors in a vector database, which is a very important, you know, technology and very important component. It's also about explaining to the language model how things work by showing it a set of concepts. Hey, I think of my business as having, you know, these concepts and a set of relationships. And, you know, you can say it's different from RAG, or you can say it's a different kind of RAG. I don't care. This is still retrieval-augmented. And so what you're retrieving here is not just vectors; you can retrieve, you know, other types of knowledge and other types of semantics that you can feed into a language model and get that feedback loop going, okay? So, again, these models have been around for a year, and I can assure you that all of our customers, who represent some very large enterprises, are very actively investing in trying to figure out how you combine that technology with their existing, you know, classic AI technologies as well as their data technologies to create something much more powerful.

Yeah, awesome. So we've been talking about the future.
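Before exploring that future, here is a small Python sketch of the retrieving-more-than-vectors point: augmenting a prompt with both retrieved text and the relevant slice of a concept graph. The schema, the documents, and the keyword-overlap scorer are illustrative stand-ins; a real system would use a vector index and a governed knowledge graph.

```python
# Sketch: retrieval that combines text snippets with knowledge-graph concepts.
concept_graph = {
    ("Customer", "places", "Order"),
    ("Order", "contains", "Product"),
    ("Product", "stocked_at", "Store"),
}
documents = {
    "returns_policy": "Products may be returned to any store within 30 days.",
    "loyalty_tiers": "Customers earn points on every order they place.",
}

def retrieve_docs(question, k=1):
    # Stand-in for vector similarity: rank documents by shared words.
    q_words = set(question.lower().split())
    ranked = sorted(documents.values(),
                    key=lambda text: len(q_words & set(text.lower().split())),
                    reverse=True)
    return ranked[:k]

def retrieve_concepts(question):
    # Pull only the graph edges whose concepts appear in the question.
    q = question.lower()
    return [e for e in concept_graph if e[0].lower() in q or e[2].lower() in q]

question = "Which store should handle a customer's product return?"
prompt = (
    f"Question: {question}\n"
    f"Relevant concepts: {retrieve_concepts(question)}\n"
    f"Relevant context: {retrieve_docs(question)}\n"
)
print(prompt)  # this augmented prompt would then be sent to the language model
```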
So let's explore a little bit of what's possible when applications can represent this end-to-end prescriptive model of the business, which tells you what should happen or what you should do, versus what did happen. And George, we're showing this graphic here that you did using Amazon.com as sort of the metaphor for the future. We've talked about Uber for all; we're talking about Amazon for all here, Amazon.com, that is. Explain your thinking in this workflow here.

Okay, and this is what Molham was starting to talk to with the Blue Yonder scenarios. In the past, our enterprise applications were operational applications, mainly, and they tracked what did happen. And the big advances over decades were about trying to integrate processes across functions and divisions and, you know, even globally. And what we're talking about, really, and this is recapping what Molham's been saying, is that not only are we integrating the processes, but we're integrating predictive and prescriptive models, along with the sort of planning, simulation, and optimization that might inform those models, so that it works across functions. And I think the technical term is fan-out, so that you can look at it from different angles; even if you didn't originally forecast or plan from that angle, the models fill that out. That's sort of a recap, and Molham, you've talked to this. I would ask only one last question, which is about the legacy packaged applications that did so much work to integrate all these processes. Where do they fit in a world where you start layering a prescriptive model on top of them, where they are just one island in this bigger model?

Yeah, look, I don't think we're gonna, like, rewrite all these applications overnight, okay? And some of these might never be rewritten, where, you know, semantics will stay in code and you can think of them as services that you can call into and, you know, get answers to certain questions. Again, all the application companies and a lot of the enterprises we work with are sort of in the process of figuring this out. So the nice thing about these data clouds is that there is a process to migrate that, you know, into the data cloud first, okay? Using containers and using things like that, operating in there, and then buying yourself time to, you know, capture the semantics that are embedded in there and surface them so that business people, you know, understand what's going on, okay? So, you know, I live in Silicon Valley, and here you hear of VCs investing in next-generation app companies, okay? That are very specifically attacking this problem, because, you know, apps that have their semantics in code are not accessible to business users, and business users' only option is like, okay, well, it doesn't do what I want, so either I buy another app that does what I want, or I copy some data out of it, put it in an Excel spreadsheet, build my own model and, you know, self-serve my analysis, which is very limited, okay? And so if you're building an app company today, okay, you would do it in this new data-centric way, with data-centric architectures, platforms, AI, and so on. But, you know, of course, there's a lot of stuff out there. There's a lot of knowledge, a lot of semantics, captured in code in these applications that, at a minimum, we should be able to leverage, so that we can, over time, refactor and unify the semantics, you know, as models, as relational semantics. Okay.
We've been working on this vision of the sixth data platform and trying to poke at this semantic layer, and it seems pretty obvious to us that RelationalAI is a key potential component there. Can't thank you enough for spending some time with us today. You know, George and I really appreciate it.

Thank you for having me. I really appreciate the time. Thanks.

All right, you bet. And thanks to Alex Myerson and Ken Schiffman, who are on production and do our podcasts. Kristen Martin and Cheryl Knight help get the word out on social media and in our newsletters, and Rob Hof is our editor-in-chief over at siliconangle.com. Remember, all these episodes are available as podcasts wherever you listen; just search "Breaking Analysis podcast." I publish each week on wikibon.com, soon to be thecuberesearch.com, and siliconangle.com. And you can email me at david.vellante@siliconangle.com, DM me @dvellante, or comment on our LinkedIn posts. Check out etr.ai for the best survey data in the enterprise tech business. This is Dave Vellante for George Gilbert and theCUBE Research Insights, powered by ETR. Thanks for watching, and we'll see you next time on Breaking Analysis.