Hello, and welcome to this CUBE Conversation here in our Palo Alto Studios. I'm John Furrier, host of theCUBE. We've got a great conversation here with the entrepreneur, founder, and CEO of Articulate. A CUBE alumni, he was just on theCUBE at our SuperCloud 4 event talking about the cloud and AI with Intel. Now he's the founder and CEO of Articulate, a hot new company that was launched out of Intel, with intellectual property assigned to the startup and funded by Intel and DigitalBridge. Arun Sabrinian is here. Arun, great to have you coming on theCUBE, and congratulations on your new venture.

John, thank you so much for having me here again.

It's interesting to see you as a founder and entrepreneur, because just recently you were at Intel as a big-company guy. Formerly Amazon Web Services. And again, you were on theCUBE with SuperCloud. You kind of knew this was happening and were kind of smiling all the time. I knew something was up with you. Thanks for coming on.

No, thank you. Thanks for having me. And it's been quite an exciting time. We launched on January 3rd officially, and it's been quite an interesting reception, both right after the launch and at CES as well.

You know, it's really an exciting time, Arun. You had one of the most prolific comments that Dave and I have been repeating from SuperCloud, about the thought experiment: if AGI were here, we would know it; therefore it's not here. It's really an exciting time for businesses and entrepreneurs with this AI wave coming, because it really is impacting up and down the stack. SuperCloud, you've got super chips. The middleware is changing. You see AIOps. I just had a conversation recently with a VC, a former executive, around how AIOps is going to change how software gets built and how infrastructure is going to be run. And with the skills gaps, we're going to see a lot more self-driving infrastructure, if you will, with AI. And then the paradigms are shifting. You're starting to see things flip.
You used to need infrastructure to run the apps. Now the apps need the data, and the data needs the infrastructure. So you're seeing a new paradigm, not just in the trends, but in the architecture. And I think the enterprise specifically is most disrupted. This is what you guys are doing. Explain the Articulate premise. Describe the transaction. Intel has this IP that you started with. Take us through the story.

Absolutely. So we started this not necessarily trying to go build a platform, right? We started this, even inside Intel, to go help solve customers' problems, right? And the problem that we're trying to solve was: how do you get GenAI into enterprises safely and at scale? And enable enterprises to move really, really quickly so they can build applications fast, right? So that is really the problem we're trying to solve. Now, when we got into it, it was pretty obvious that you not only have to enable them to, say, build a large model or deploy one model at a time, you actually have to help them go from the bottommost layer of the stack, which is the infrastructure layer, move up to the data layer that takes care of all of the different data elements that are required for your GenAI engine, and then get to the model layer. And the way we think about it, it's not just a model, it's a collection of models. Our whole thesis, which we've now validated multiple times over, is that a model is a means to an end. You need to have multiple models working together. We actually call that model mesh. That's our most proprietary technology that we've launched with. And then it is really around giving customers ready-to-use application-level APIs. What I mean by that is, take a simple case, like, say, you want to be able to search for something. Now, just that search alone, when you go into it, has to hit like 15 different APIs, collect all of that, and then give application developers something for them to use.
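That search example, where one application-level call fans out to many lower-level services and hands the developer one clean result, can be sketched roughly like this. The service names and aggregation logic here are illustrative assumptions, not Articulate's actual API:

```python
# Hypothetical sketch of an "application-level API": one developer-facing
# call that internally fans out to several back-end services and merges
# the results. Every service name below is invented for illustration.

def call_service(name, query):
    # Stand-in for a real model, index, or metadata service call.
    return [f"{name}-result-for-{query}"]

SERVICES = ["keyword-index", "vector-index", "metadata-store", "ranker-model"]

def search(query):
    """Single entry point the application developer sees."""
    hits = []
    for service in SERVICES:
        hits.extend(call_service(service, query))   # fan out to each backend
    # Aggregate and dedupe before returning one clean payload.
    return {"query": query, "results": sorted(set(hits))}

print(search("quarterly revenue"))
```

The point of the sketch is the shape: the developer calls `search` once instead of orchestrating many back-end calls themselves.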
Think of all of the different complexities that have to go through all these four layers. Vertically optimizing that is really what we did.

So you're an enterprise software company. Okay, so I'll get to the business model in a second, but you're targeting the AI market: application builders who have a lot of data sitting around. They want to essentially get up and running, and you're a shortcut, a fast path to getting applications running with data. So data is the source, you build the software, and training, fine-tuning, and inference are all built into the platform. It's a platform. And I deploy it on premises, so my developers can code with the data. Is that right?

That's absolutely right. The only addition to that is that on premises for us means both on-prem, as in an actual on-prem data center, or the cloud, right? But even in the cloud, we run entirely inside the customer's security perimeter, right? What I mean by that is, say, if you're running on Amazon, we deploy into your VPC, right? Which is slightly different from most other players, because we are a SaaS provider, and you talked about the business models, we'll get into it. We are a SaaS-based business model, but then we have kind of an on-prem deployment model. That's one side of the story. The other side is, when we say we deploy, we actually enable our customers to deploy. We are not hands-on-keyboard deploying into your environments, because that also can get intrusive with respect to security, right? And we had to take this hard step upfront because that's where we started, right? Most of the customers we're dealing with are Fortune 1000 companies.

Okay, so tell me the market opportunity. What market are you targeting?

So I'll give you a lay of the landscape, right? I mentioned this briefly the last time I was here on theCUBE as well. Think of the biggest and best models that are out there.
Any model that's out there that you want to name has been trained on the open public data set, which is less than 5% of the world's data. The 95% of the world's data that is dark, that's our opportunity. And not only is the data dark, it's going to continue to stay dark. No enterprise is going to go put that out there for somebody else to build models with. But not only that, think of the last decade and the amount of money that's been spent on data and AI transformation activities. Depending on whichever report you believe, whether it's McKinsey or BCG or Bain or IDC or anybody else's, it's a universal consensus that only between 8 and 11% of those projects have actually gotten to the outcome they promised initially. The flip side of that is 90% haven't gotten to production. Finally, we have a piece of technology that can actually get maybe 50, 60, 70% of those projects to production. That's the opportunity that we are going after. And it's also not something where you go and hire an army of data scientists, an army of, say, consultants, and then wait for three years before you get an outcome.

What's the alternative? Okay, say I'm an enterprise. I think there's a lot more opportunity, but I think the beachhead is enterprises who want to get into the AI business. So their alternative is to build code from scratch?

Build code from scratch, or imagine going after 100 or 500 different options, trying to figure out which pieces actually fit together, trying to understand which models are good. Somebody tells you you need a large model. Somebody tells you you need a small model. Today the conversation is predominantly around models because they are the coolest thing. But the problem is, the models may be 10, 20, 30% of the solution. 70% of the solution is actually going and figuring out how you close the last-mile delivery gap, which nobody tells you about.
And then you get into it and realize that you have two years and millions of dollars spent before you can get your first outcome.

So we're going into the dark data you mentioned, in the enterprise. There's an enterprise out there that has all this data. It's their data, and they speak their own vendor language. That's their company language. Then you have information that's their information, then information that people know about in the market, customers, third parties, and then you've got analysts. How do you guys train on that data? What are you training it on? Are they training it themselves on their own data, or do they have to mesh with other data sets for training?

Very good question. So let me talk in three buckets, right? Think of the first as the open-world data set that's already out on the internet. That is the world-knowledge data. In one way, shape, or form, you need some way to get that information. For most of those models out there, either you get a proprietary model or you get a model of your own.

And the reason why some models are proprietary, you mean like OpenAI?

Like OpenAI or any of the other companies that are out there.

The large language models, the big ones.

The big ones. Or you can take open-source models and then fine-tune them for yourself, right? So that's your own model. The reason I say large companies will need something like that is the regulatory landscape that's evolving. You need to have more and more control over what you put into your own environments and what you can actually deliver to your customers. That's one bucket. The second bucket, which is only now nascently evolving, is what I call domain-specific models. A model that understands that particular domain, right? It might still be built from open or nearly open information.

So we would have theCUBE model. We know we speak CUBE: B2B tech, jargon, Kubernetes.
So that's the domain model for theCUBE.

Yes, right? So, for example, everything related to infrastructure, everything related to cloud, that would be theCUBE-specific model. But that's also theCUBE's industry-specific model. Then you get into the actual CUBE model, which is your own proprietary data: your interviews, your backend conversations, the team that actually puts the show together. That information also needs to get codified into a model for you to then go and deliver.

We could use this. We would be a customer.

Absolutely.

So I think you have a lot of market opportunities. I think there's going to be a surprise. It'll be interesting, as we progress through the journey, what emerges as opportunistic white space that you didn't see. Someone becomes a data aggregator overnight, uses your software to kind of create a new...

In fact, I'm smiling because that's kind of already starting to happen. We were in stealth for a long time, partly because we were also very careful about saying, look, we need to be able to deploy this in production and have customers ready to talk about it before we can actually launch and say this is real. We've done that in multiple verticals, right? You can see that on our website as well. One of the largest financial services firms not only took our platform and deployed it internally, they actually went to production with their applications over the holidays, right?

That's really fast.

Very fast, right? So one of the biggest advantages is, we say we don't do POCs, we do only production pilots. The difference is scale, right? The reason for that is, it's super easy for anybody today to go do a demo with a few hundred documents or a few thousand documents. Anybody can go do that. But jumping from that to, say, tens of thousands of documents, or hundreds of thousands of documents, or millions of documents, is a completely different ballgame.
Our production pilots start from there, right? I'll give you some simple metrics, right? Take the latest demos that anybody shows at their own conferences, the cloud conferences. They're dealing with millions of tokens, right? That's the demos that are being shown. Our production pilots, at a bare minimum, are 10 to 20 billion tokens. That's the production pilot, and that's deployed in six to eight weeks.

Do you have headroom? You've got scale headroom, no problem on the future-proofing thing?

Yes. The reason why you need to go to that kind of scale for enterprises is that any enterprise of reasonable size would have these kinds of volumes before it can start getting meaningful answers, right? So that's the main important thing.

All right, so let's get into the business side of things, just to get it on the record here. So Intel had the IP, you were incubating it, you're doing all this cool stuff. You realize, hey, we can build software to help accelerate people's journey. Pat loves it. They're all looking at this: hey, there's an opportunity. So you guys spun out. Spin-outs are weird work; they've got all kinds of corporate considerations, and Intel's a big conglomerate. What's Intel's relationship? So just to get it right, let me get the facts straight and you tell me if it's right or not. You have intellectual property that was assigned to a new company that you're the founder and CEO of, because you incubated it. Intel assigns the intellectual property, so they let it go, they get equity for that, and they give a cash investment. So take me through the transaction real quick.

So first of all, the premise of the transaction, right? Let me just give you a little bit of the premise, because the field is moving so fast and the need from enterprises is massive, right? Everybody wants to go get this done because they're seeing it as a core differentiation.
However, we also have to meet customers where they are, and this is a software play first and foremost. Now, it will have a massive pull-through effect for all hardware manufacturers, including Intel, but we have to be cognizant that customers will have to be the ones making the choice. Of course we are optimized on Intel, and we'll continue to stay optimized on Intel, but we are also hardware agnostic and cloud agnostic. That's one of the selling points. So that's the premise of the story. If that is true, and we're also going and deploying to customers already upfront, we're already optimizing across the board for customers, right? Now, the IP, the software, was built in Intel, but then, Intel being an investor, they transitioned the IP to the new company. The new company is completely independent; we in some ways have to own our own destiny moving forward.

Exactly, welcome to the party. So Intel doesn't really have... it's not an Intel company, it's completely independent.

It's completely independent.

DigitalBridge is another company that put money in, so you have another investor.

That's right.

You're a director, it's a startup, you've got funding.

Yes, we actually have several investors that we've named publicly, and some investors we've not named publicly. For example, FinVC, FinCapital, or GS Holdings, or Communitas Capital. Lots of investors came together, and it's a completely independent company, just like a venture-backed company.

TheCUBE Capital never got a phone call. Next time, keep us in mind. Great to have you on SuperCloud, and again, this is exciting news. So I want to get back into the business model. The business transaction makes total sense. Congratulations, you're an entrepreneur. Welcome, you're out in the wild. You've got to fend for yourself. But you've got a friend here at theCUBE; we like what you're doing. How do you make money? What's the business model? What's your thinking around how you're going to deploy this?
Consumption? How it's deployed? How customers engage with the software? What do you charge for it? Take us through the economics.

Absolutely, right? So from the business standpoint, it's a SaaS business model. And what I mean by that is it's really a subscription-based business model. We offer what we call two different bundles. The first bundle is what we call the Express Bundle, which really gets people quickly started, and then they go off and scale. And this also aligns with our overall philosophy: look, if somebody tells you that the only way for you to get into GenAI transformation is to build your own model, I think you'll have to rethink who you're getting your advice from, because there are three steps.

You think that's bad advice, that "build your own model"? What does that mean? What does that even mean?

Yeah, so "build your own model," I would say, is the third step in a very long three-step journey. And the reason for that is, first and foremost, you have a lot of data. Can you first figure out what you can do with the data if you had a GenAI engine with you? There are probably about 70% of your use cases that don't even require fine-tuning or full-blown pre-training yet. You can tackle a significant amount of use cases right there. Now, most of your competitors would also be able to do that: they have their own data, and they can go do that with any number of different off-the-shelf components. The next step is, okay, there are some use cases I cannot solve just by using my data with an existing set of models. That's when I go start fine-tuning for specific tasks. I'll get to maybe the next 20% of use cases. The last 10% of truly differentiated use cases are the ones where you need to pre-train. Now, you cannot jump to that step directly. You need to be methodical about how you go there. And the other piece is, you need to be able to get business value as you are moving there.
And by the way, moving from the first step to the third step, every time you probably jump an order of magnitude in terms of how much investment you need to make. So if you're getting 10x in return, you need to know what you're going and applying it for.

Yeah, and so let's get into some of the tech and the secret sauce. I think that's a good transition point. It's not trivial. I mean, it's kind of trivial but not trivial to actually make something work on a model basis. You've got to take a lot of steps. There's a lot of things to do. It's like a sausage factory: it can get ugly, it gets dirty. So I guess the question I have is, what are people doing now? Because I see that same thing. And you mentioned model meshing. One of the things that CUBE Research put out was the power law of how you see specialty models coming out. My thesis, which I've been putting out there for over a year now, is that model integration will be a big part of this. Model mashups, I call it, maybe like the old Web 2.0 mashups. I see models integrating with each other. Is that what you mean by mesh, a neural network mesh? I mean, explain the mesh.

Absolutely, right? So "model mesh" itself, as one word, is our trademark, but really what we mean by model mesh is a collection of models. Some of them are LLMs, some of them are not LLMs. And then there is a decision layer on top of it that decides what to do based on the inputs that are coming in and also based on the reactions from the customers, right? It's a dynamic system that has to decide what to do. And the reason you need that is, the same person asking the same question of the same data set, the same corpus, might want different answers at different points in time because their context has changed. And assuming that one large mega-model would somehow be able to answer all these questions is, in my opinion, a pipe dream, right? So you need to be able to collect different pieces.
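A toy sketch of that model-mesh idea: several models, not all of them LLMs, sit behind a decision layer that routes each incoming request. The routing rules and model stand-ins below are invented for illustration; the real decision layer is proprietary:

```python
# Hypothetical model-mesh sketch: a decision layer routing requests to a
# collection of models. The models here are trivial stand-ins.

def summarizer_llm(text):
    # Stand-in for an LLM call.
    return "summary: " + text[:20]

def classifier_model(text):
    # Stand-in for a small, non-LLM classifier.
    return "finance" if "revenue" in text else "general"

def table_lookup(text):
    # Deterministic, non-ML component in the mesh.
    return {"revenue": 100}.get(text.strip())

def decision_layer(request):
    """Pick which model to invoke based on the incoming request."""
    if request["task"] == "classify":
        return classifier_model(request["text"])
    if request["task"] == "lookup":
        return table_lookup(request["text"])
    return summarizer_llm(request["text"])

print(decision_layer({"task": "classify", "text": "revenue is up"}))  # finance
```

In a real system the decision layer would also fold in customer feedback and context, which is exactly why a single mega-model is hard to substitute for it.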
The other piece is, in any vertical that you go into, if you're solving the hardest problems, not the simple problems that everybody seems to be tackling, but the hardest problems, you also cannot ignore that particular domain. I'll give you a specific example, right? Take financial services, and say you're trying to go do even actuarial models. You need an LLM, or a set of LLMs, to go understand the unstructured information coming in and combine that with structured data, but then you also have to make sure that the existing models that have been validated and verified over decades don't get forgotten. Now, if we try to go build yet another model for that, first of all, it's never going to catch up with the existing models. Second, it's a waste of time. Why can't we merge the two worlds together? Then you actually increase your confidence, increase your pace, and can get the advantages without losing the accuracy.

Does that solve the hallucination problem? Or make it higher quality?

It definitely makes it higher quality, but the hallucination problem is acute when you try to do everything with one model. When you start putting in guardrails with multiple models, with checks and balances, the problem kind of automatically goes away, because you're grounding it with data, and then whatever output comes out, you're again grounding it with the existing models.

Okay, so what's the secret sauce? If I understand this correctly, I have data, I've got CUBE data, I've got all this data. I drop it into the Articulate platform, and then I can start building apps.

Yes.

How does that work? Take us through the secret sauce, the secret sauce of the software. And then how does that deploy? How do I get apps out quick? What do I do? Is it a language, is it a framework?

So the beauty of this is that the language and the framework are as standard as it gets. Whatever application programming language you're using, you continue to use that.
You interact with the platform purely through APIs. Most of those APIs are as standard as it gets, right? So it's a standard REST call or a standard GraphQL call, and you're just basically building your application on top of it. We take care of actually deploying the product into your environment. We also give you tools to make sure that the data pipelines that are needed to come into the platform are all clean. However, I also want to be clear that every enterprise we go into also has its own data platforms. We are not here to replace your data platforms. It's purely a connection.

Like Databricks, a data lake, or Snowflake, or whatever.

Any of those things. The connections would automatically be made, so there is no...

It's just a data pipeline.

It's a data pipeline with minimal replication of data. And I want to be careful there, because if you have a large amount of data sitting in a relational database, there's no reason for you to replicate it. The only thing that'll come into the Articulate platform is the metadata around what data you have, right? And that's something else, you asked about white space that's already evolving; that's where it's also evolving. I'll come to that in a bit. But in terms of what customers have to do, it's about connecting the data in through our APIs so that the right amount of information comes in, while the connections back to your original data sources stay intact, so we don't replicate anything, right? And then everything you need for your application is automatically pre-configured and delivered for you. Now, I mentioned the Express Bundle; those are application-specific, domain-specific. The other end of the spectrum is what we call the Premium Bundle, where, if you're a large enterprise, you want a platform for yourself where your people can go in and experiment, deploy different models, and try to figure things out by yourselves as well.
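The integration pattern just described, standard REST calls, with only metadata about each dataset registered while the data itself stays in the source system, might look schematically like this. The endpoint path and field names are assumptions for illustration, not the real API:

```python
# Hedged sketch: register only *metadata* about a dataset over a standard
# REST-shaped call. The URL, route, and payload fields are invented.
import json

def dataset_metadata(name, location, schema):
    """Describe a dataset without copying any rows out of it."""
    return {
        "name": name,
        "location": location,        # pointer back to the source system
        "columns": list(schema),     # structure only, no values
    }

def register_request(meta, base_url="https://platform.example.com"):
    # Shape of a standard REST call; an HTTP client would POST this body.
    return {
        "method": "POST",
        "url": f"{base_url}/v1/datasets",
        "body": json.dumps(meta),
    }

req = register_request(dataset_metadata("trades", "snowflake://prod/trades",
                                        ["ts", "symbol", "qty"]))
print(req["url"])
```

The design point is the payload: a name, a pointer back to the source, and the column structure, with no rows replicated.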
That's the Premium Bundle; we used the Premium Bundle to build our Express Bundle.

How much are they paying for this? What's the price?

Our promise to our customers, and we've stayed true to it so far, and we think we can actually do so over the long term, is that today we are three to five times cheaper than any platform out there, or than if you had to go do it yourself. And this is the total cost, not just the software cost, but the software cost plus the people cost that you need to put the systems in place. Now, the 3-5x delta is not something we're just saying for the sake of saying it. We've demonstrated this multiple times with our customers.

Who are the competitors out there? An Accenture and some of the consultants, or is it more...?

Not really. This comparison is with OpenAI, with, say, Google Vertex, or with JumpStart, Anthropic, Bedrock, any one of those players, if you have to go do the same thing, to build an application in production.

You're an alternative to Bedrock, then.

Absolutely. Though I would say Bedrock is more of an ingredient, to be honest, because that's the model layer, the piece in the model layer.

So who's cooking the meal here? The developer, right? If that's an ingredient, I'm thinking about it on premises.

You have to actually go do all the work, right? You have to do the work on the infrastructure layer. You have to go do the work on the data layer. You also have to do the work on the API layer. All of that is what we are taking away. To borrow Amazon's own terms, we take away the undifferentiated heavy lifting.

And you compete with Bedrock and make it easier. I mean, look, I believe in, you know, we've said...

It's also a partnership, right? For example, we don't have a problem partnering with Bedrock, where the model deployed in Bedrock actually gets deployed into it. That's not the problem there.
Well, the problem that we're seeing in the market is that production workloads aren't yet in the cloud. They're still figuring out what they've got. Some of the alpha developers are going fast and getting in there; we're seeing that. But to your point about this inflection point, we've been saying it at SuperCloud, I keep saying it again, shouting from the highest mountain I could find, that this is like the web. I mean, this is so early. And what's coming is more innovation. There's white space developing. You've introduced a new paradigm. To me, not to date myself, but it sounds like the old developer kits. Remember back in the old days, you'd get a developer kit and put it on your laptop, well, PC laptops weren't even around then. You'd code, then you'd ship it to a machine, and you've got an application. It's got middleware. It's got hardware. This sounds like a development kit for data and AI. Is that so?

Think of it as close; you're very close. The difference is, your analogy holds: the world out there today is like a developer kit where everybody has to choose every little thing. That's great for somebody making choices for themselves to learn, but imagine an enterprise that has to get to a production workload in eight weeks or ten weeks. It's unimaginable for them to go do that. That's really what we're abstracting, right?

Yeah. And the thing also that's coming out of all of our conversations, certainly it's been great on theCUBE, to kind of horizontally get the conversation space around AI nailed down, is that data is the new moat. Data is the competitive advantage. Data is where the value is. And back to your point about dark data, the implication is that enterprises have all this data. It's not yet indexed. And not necessarily do they want it indexed. You know, I see people saying, I want my own model.
And okay, now I want to maybe put a wall around that, or firewall it, or do something, bad word, to protect it. And then integrate it with other data to see how you can actually build applications faster.

And not only that, the notion of having just one model to make sure that everything you do in your enterprise is replicable is also a little funny, right? You need to be able to expand beyond the notion of just one model.

It's like magic and chemistry. It's like the mad scientist dropping stuff into the mix. And then you've got your magician, the prompt, you know? Waving your hand and magic's happening. So that speaks to that integration: you blend data. This is not common. I mean, data alchemy has been around in some use cases, but with neural networks, and data that wasn't available before. That's not conventional wisdom in the data management world.

Not really. And also, what we call data is data in the traditional sense and also institutional knowledge. Right, in a lot of senses, the institutional knowledge is not really captured properly. Think of this as a level playing field where you can capture unstructured institutional knowledge. Somebody's offhand comments come into the picture, but then you can actually say, okay, this is coming in from a 30-year veteran, pay more attention to that. And with all of the structured information coming together, you can mix it into your own unique company platform.

It's interesting. At theCUBE, I could actually go in and find out who's smarter. It's linguistic, it's content, it's data. I mean, to your point, this is, to me, why I'm so excited about this: it's one of the most exciting computer science business impact waves I've ever seen. And I've seen a lot.
I mean, going back multiple generations of collection points. But I've got to ask you architecturally. We're covering things from KubeCon, which is cloud native, Amazon re:Invent, all the cloud players, now AI and big data since Hadoop, and, you know, in theCUBE's 13 years we've seen everything. This is about, okay, the architecture is impacted now. And everyone I talk to is like, we've got to re-architect our enterprise to support cloud, distributed, edge, because the data equation is changing, and compute and now the infrastructure are changing. And yet there's not enough platform engineers out there. One, that's just on the infrastructure side, so there's a whole opportunity there. And then on the LLM side, there's not enough coders who can sling APIs and do RAG, retrieval-augmented generation, as well as actually figure out what to train. What do I do with data? Data engineering is emerging as a huge issue. What's your take on this whole architecture changeover? How are companies dealing with it? How do you guys see these enterprises evolving, knowing that there's a developer market that's robust and the plumbing is all shifting?

So I would say, in one sense, the architecture is emerging, but not necessarily completely changing. And in the other sense, we finally have a glue, think of it that way, that can actually connect the disparate architectures that have evolved. We've gone from the edge to the cloud, to data lakes, to data warehouses, to lakehouses. And as I call it, if you start putting data into data lakes or lakehouses without thinking about what you're doing, you end up with data swamps. We actually have an opportunity to take advantage of all of that, because finally you have a layer that understands unstructured data and structured data equally well.
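Since RAG came up, here is a bare-bones sketch of the pattern: retrieve the documents most relevant to a question, then hand them to a generator as grounding context. The corpus, the word-overlap scoring, and the "generator" below are toy stand-ins for a real vector index and LLM:

```python
# Toy retrieval-augmented generation (RAG) sketch. Everything here is a
# simplified stand-in: real systems use embeddings, not word overlap.

def tokens(text):
    # Lowercase word set with trivial punctuation stripping.
    return set(text.lower().replace("?", " ").replace(".", " ").split())

DOCS = [
    "Kubernetes schedules containers across a cluster.",
    "Snowflake stores structured warehouse data.",
    "LLMs generate text from a prompt.",
]

def retrieve(question, docs, k=1):
    """Return the k docs sharing the most words with the question."""
    return sorted(docs, key=lambda d: len(tokens(question) & tokens(d)),
                  reverse=True)[:k]

def generate(question, context):
    # Placeholder for an LLM call grounded in the retrieved context.
    return f"Answer to '{question}', grounded on: {context[0]}"

ctx = retrieve("How does Kubernetes schedule containers?", DOCS)
print(generate("How does Kubernetes schedule containers?", ctx))
```

The retrieval step is what keeps the generator's answer tied to actual documents rather than whatever the model happens to remember.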
And on that notion, I mentioned this in the CES talk before, there is some amount of fear saying, oh, this is going to replace developers and all that, but to your point...

It's enabling developers.

It's enabling developers, and imagine the number of new developers we need to get all these applications deployed.

I think your point about the data swamp, that's been a cliche for years, going back to the Hadoop-Spark transition; obviously Databricks uses that lakehouse and data swamp example. But think about the concepts. I've been using ChatGPT to clean up a bunch of old email lists, going out to all the Gmail and Proton Mail addresses, and it just doesn't like that. I think the swamp goes away with AI, because you say, hey, go clean the swamp.

Yeah, and you can clean it when you need it. It's not something that you pre-clean and then somebody else does it; everybody can do it themselves. That's one side of the story. And to your other question about emerging architectures, you're going to see an explosion of applications developed by folks who may not necessarily traditionally be called application developers. That's really the white space we're seeing emerging. One of the applications we've already deployed with a security company: they have gobs and gobs of structured information about everybody's security profiles. But then when somebody asks a very natural-language question like, hey, how many of my instances are open to the internet, finding the exact information in the right table is the connection that the LLM actually enables. And our platform is going and doing that, where you can improve the number of relevant responses not by just 5% or 10%, but by an order of magnitude.

I mean, to your point, everything's getting smarter. Does this software work with this cluster, yes or no? I mean, this stuff's going to be figured out on the fly.
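That security use case, a natural-language question resolved against a structured table, might look schematically like this. The table, the column names, and the keyword mapping rule are all invented here; in the real platform, an LLM would do the question-to-query mapping:

```python
# Hypothetical sketch: map a natural-language question to a filter over a
# structured table. The data and mapping rule are made up for illustration.

INSTANCES = [
    {"id": "i-1", "open_to_internet": True},
    {"id": "i-2", "open_to_internet": False},
    {"id": "i-3", "open_to_internet": True},
]

def question_to_filter(question):
    # Stand-in for the LLM step that picks the right table and column.
    if "open to the internet" in question.lower():
        return lambda row: row["open_to_internet"]
    raise ValueError("unmapped question")

def answer(question, table):
    """Count the rows matching the filter derived from the question."""
    row_filter = question_to_filter(question)
    return sum(1 for row in table if row_filter(row))

print(answer("How many of my instances are open to the internet?", INSTANCES))  # 2
```

The hard part the interview describes is exactly the `question_to_filter` step: connecting free-form language to the right table and column at enterprise scale.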
I mean, smart data, everything's getting smarter and more productive if the data is smart, if you've got clean data or you're using the data well. What's your take on this whole smarter intelligence? What's the change that's going on?

So in fact, we are trying to get that into a lot of the enterprises as well. The notion of, I need to have perfectly clean data before I can go do something, is slowly going away. You of course need high-quality data, but that doesn't mean you need highly structured quality data. If you feed it bad information, don't get me wrong, you're going to get junk out. But if you have information that is highly unstructured, you can use an LLM to actually get your answers much, much faster.

Just like the classic machine learning concepts back in the day. If you have enough core data that seeds the concept, you can infer and train out the bad data.

Yes, so we are seeing two things emerge. There was a study that just got concluded by BCG and Wharton and a few other teams as well. If you give this piece of technology to experts, like real experts, they actually go much, much faster. If you give it to folks who are in the middle of the pack, so they're not experts yet, but they're also not newbies, they also do really well, because they have discerning capability. You have to be really careful how you give it to people who are really new to the field, because what happens is they develop an over-reliance on these tools before they develop a discerning capability for themselves. And that can get very dangerous, because then you don't know whether the thing is giving you good answers or bad answers. You don't know if it's grounded or not. There, the experiment that was conducted showed that even simply asking the question, have you thought about the answers that came back, yes or no, on every answer they submit, changes the mindset completely.

OK, the question that's burning for me is, can I get my hands on the software?
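The "seed data trains out the bad data" idea above can be sketched as a simple confidence filter: fit statistics on a trusted seed set, then flag incoming records that deviate sharply. This is an illustrative stand-in for the technique, not Articulate's method; the 3-sigma threshold and the latency readings are hypothetical:

```python
import statistics

# Sketch: use a trusted seed set to flag likely-bad records, per the
# "seed data trains out the bad data" idea. The 3-sigma cutoff and the
# latency readings are illustrative assumptions, not a real pipeline.

seed = [102.0, 98.5, 101.2, 99.8, 100.4, 97.9]   # trusted readings (ms)
incoming = [100.1, 985.0, 99.3, -5.0, 101.7]     # mixed-quality new data

mu = statistics.mean(seed)
sigma = statistics.stdev(seed)

def is_clean(x: float, k: float = 3.0) -> bool:
    """Keep values within k standard deviations of the seed distribution."""
    return abs(x - mu) <= k * sigma

clean = [x for x in incoming if is_clean(x)]
print(clean)  # the two outliers (985.0 and -5.0) are filtered out
```

The same shape applies when the scorer is a model rather than a z-score: good seed data defines "normal", and everything far from it gets quarantined for review instead of poisoning the answers.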
Absolutely. So the-

How much is it going to cost me? Are you going to break the bank? Is it for the big enterprise or small?

Yeah, yeah, so absolutely. So the affordability-

It's not just for the rich.

This is a democratized platform. For us, really, in the early days, we are more interested in getting into specific verticals and showing a big difference rather than actually making a massive profit in the beginning. And the reason we are doing that is that we truly want to solve the toughest problems in every industry. And the reason for that is that's what is most sustaining. If I go after a customer success use case, which is very useful, there are 100 players doing that and they'll probably do it better. We don't need another Grammarly. We don't need another word processor.

Yeah, they do it well. They got that nailed.

They got that nailed. We can leave that. But the harder problems are what need to get solved right now, and that's really what our focus is.

I think the domain-specific data that drives real unique questions, ones that can have reasoned answers using retrieval and data effectively, that's where the value is going to shift: into the specialty models. That will be part of the training apparatus and the meshing and the interactions.

Interactions, right? I mean, just to give you an example, if you go on our website, we have a very, very large financial services customer use case. We have a small startup, a very successful security startup, as a use case, as a customer. We also have a large government as a customer already using it. So it's as diverse as it gets, and it's really around us going into fields where we think the problems are hard enough to solve. We tend to run toward tough problems.

Arun, great to see you, and it's exciting to see. When I saw the announcement, I saw you on there. I was like, wow, okay.
Now I know why it was so exciting to have you on theCUBE at SuperCloud, and it's a great opportunity. I think you're going to see some things emerge. I think it's going to be a real accelerant for people to get up and running. Again, in all these inflection points, the simple equation is reducing the steps it takes to do something, making it simple, fast, easy to use, and intuitive. You do that, you have a winning formula.

And make it affordable.

Affordable, yeah, and that's independent of product. You do those, that's a utility. We're in a market of scale and speed. Sounds like a great product. Just a final note here. Put a plug in for what you're working on. What are your plans for the next year? Obviously you got the funding, you're an independent company, so you don't have the purse strings of Intel. They're not going to let you go under, but you're not an Intel company. You're independent, you're on your own. What are your goals? Who are you looking to hire? What are your objectives?

Absolutely, right? Our objective really is to go after specific verticals and make a big impact in each of those verticals. And the verticals I'm talking about are all on our website, but really we're going after, of course, government, which is a large space. Financial services is a large space. Aerospace is another large space. We're getting into telecommunications and, of course, semiconductors, because of where we come from. And in each of these places, it's really around showing that it can be done. The toughest problems can be solved in a meaningful way, and in a way where the customers can get business outcomes quickly. That's really what I would consider success. Scale is something we will continue to pursue, and as an early-stage startup, we'll continue to double or triple. That's just in the DNA.

And of course, Intel is a motivated third party. You get more workloads in AI, workloads in cloud, more chips to make it go faster and scale up the performance.
In fact, that's one of the biggest pitches we have. We've already started getting customers to use massive amounts of accelerators that they would never have used before, because we put them in production. It's not just about a research team going and doing an experiment. It's in production, a very different class of use cases.

The app development market in AI is going to surge. It's the big topic up and down the stack, from super chips to super apps, SuperCloud, super on-premise, super exciting. Arun, great to see you, and thanks for coming in again. Congratulations on the entrepreneurial venture. Welcome to out in the wild. No cover here, you've got to execute.

Absolutely, John. Thank you so much for having me. It's always been a pleasure to talk to you.

This has been a fascinating conversation. Thank you. Yeah, great, great, super important. This AI trend is continuing to innovate and create opportunity for entrepreneurs and businesses to build an environment where more intelligence and productivity will help humans and society. And this is the good side of AI; we're going to let it run. Of course, on theCUBE, we're always interested. Thanks for watching, and we'll talk to you soon.