This is Swapnil Bhartiya and welcome to TFiR Newsroom. Today we have with us, once again, Puneet Gupta, CEO and founder of Emberflow. Puneet, it's great to have you on the show. Great to be back, Swapnil. Always good to connect with you again. Today's topic is going to be of great interest to a lot of folks because almost all of us, especially in tech, are talking about generative AI, Gen AI, or mostly we talk about ChatGPT and all those things. I have been recording videos almost on a daily basis where companies are moving a lot of these things into production. Before we talk about this specific announcement that you folks are coming out with, what I want to understand or hear from you is how you are seeing this. Because oftentimes we do see blips on the radar. We see a lot of hype around certain technologies, but after six months, nobody talks about them. NFTs are a good example. But when it comes to Gen AI, what is your feeling? Is it hype, or is it something real that feels almost as big as Docker containers or Kubernetes or the Linux kernel? Obviously there's an active debate going on about precisely that, whether this is interim, maybe some kind of a fad or short-term. Because we've seen some of these in the past, right? Where we thought there was going to be a long-term game changer. I think people got excited about crypto, blockchain, a few others, and they've had an impact. But I think generally now the consensus is that, yes, this is wildly disruptive and wildly impactful. It's not just hearsay or narratives; data is starting to come out where folks are experiencing what this technology can do and how it can ripple through pretty much any use case that we can think of. Having said that, yes, it's still a little bit early innings. I think we've seen some powerful, I would say consumer-type use cases. 
But I think it's just a matter of time before we start to see a huge onslaught on the B2B side. Just like any technology, it takes a little while for the maturation and for it to propagate. And as we're seeing, the stack is still building out, right? You have the foundation layer and then you have additional layers being built on top. A lot of tooling is coming into place. And, you know, I am in the Bay Area, which is considered ground zero for AI/ML. And I can tell you, some of the best minds are talking about the fact that this is transformative and this is here to stay. This is going to be just as big as Kubernetes or cloud computing, in that caliber, maybe even beyond that. As organizations, you know, everybody gets excited about the new shiny object. Everybody moved to cloud, and now we are dealing with cloud complexity, we are dealing with cloud cost. So when it comes, once again, to generative AI, what are the challenges that you see? I'm not talking about the technical challenges, but once again, the challenges of scaling, the challenges of cost. One of the things that I think folks are going to find is how the landscape has changed from how they have traditionally approached development, product development, feature development, in the case of AI/ML, and in the particular form of AI/ML that we're all talking about, which is generative AI on the back of LLMs. This is one of those things where we've been a little bit fortunate to have some experience in the past of working at some large cloud providers. And as you sort of called out, sometimes you can get all wrapped up around some cool new technology, but you have to figure out what your use case is and that value vector, which I think many companies are starting to figure out, right? 
I mean, we've seen, as I said, on the consumer side what the prompt-based interaction is, right? And just about every B2B company is thinking about how they can leverage that in their products and services. Okay, so that's how it's going to transform and change your product offering, right? Now, one of the things that's going to be different in that life cycle, in that implementation, as companies move towards that end goal, is the monetization aspect of generative AI and LLMs. So let me contrast and compare. In the traditional world, prior to generative AI or LLMs or any form of AI/ML being part of your technology stack, most of the monetization models were what I call old-school fixed subscription, right? $50 per user per month, seat-based license, user-based license. AI/ML, generative AI, ChatGPT, LLMs really just turn that on its head. You can no longer go on with a fixed model, because by default, anything AI/ML is usage-based. Whether you are using your own models layered on top of foundational models, everything is going to come to you in the form of a consumption unit. ChatGPT uses something they call tokens, and they charge you based on what kind of query you're sending into the ChatGPT model. So you're going to have to very quickly get ahead of this and start tracking. What I'm getting at, comparing the old-school to the new-school way of implementation, is that you have to think about this right from the outset. You can no longer think about monetization as "let me get the technology built and then I'll hand it over to an accounting team or a finance team or even a sales team to come up with my monetization strategy." That will no longer work. You have to bake it in as part of your development process, right? I think most companies are discovering this. Some, unfortunately, will maybe discover it at the end of it. 
And I think they're going to have to backtrack. We are certainly evangelizing, just from our past experience, that you have to start tackling this right from the outset, as soon as you start to incorporate LLMs into your products and services. So if I'm not wrong, what you folks are working on is not for the end users, but for the folks who offer AI-based solutions, so they can monetize the usage. Is that correct? 100%, yeah. Because, you know, we like to say we're bringing them full circle, right? As you just said, there's a lot of euphoria, a lot of excitement about generative AI and LLMs. I don't think there's any company left on the planet that hasn't gotten into a room with a set of PMs and engineers and had a session about, okay, how do I bring some flavor of LLMs into my products or services? I think every company has had that discussion, right? So everybody's roadmap has been impacted. They're thinking about how to do that. And all of that is great. You're thinking about what that experience is going to be like for your users, for your customers, which is great. So go down that path, but don't overlook the monetization aspect, okay? Which is what we're helping with. You know, I thought of a great analogy a few days back. You'd remember the big old California gold rush, and it was said, don't show up to the gold rush without a shovel, right? So everybody's making a mad dash here for generative AI and LLMs, but don't overlook the monetization aspect. Don't leave it for the far end when you get done. No, you have to bake it in. So on that end, we are providing a monetization platform for generative AI and LLMs. 
So any SaaS company that wants to monetize their AI/ML, we have a turnkey offering where we enable you to meter and usage-track what you are using of your underlying LLM and generative AI model. So first, the ability to track that, because if you think about it, you want to track the number of tokens consumed, latency per query, user feedback, number of prompts; all of those things will need to be metered. And then you probably want to apply some kind of a markup over those transactions and charge your customers appropriately based on usage and consumption. So we're providing a turnkey platform where they can do all of that and then generate on-demand, real-time, metered invoicing and billing for a generative AI, ChatGPT-based product and service. As you folks are working on this extension of your platform, I'm curious what the driver behind it was. Was it the way you work with your customers, or did you see, hey, these are the trends that are happening in the space? I'm just trying to understand the origin of it, how you came up with, hey, we have to do this. I'm so glad you asked, because, you know, you and I have chatted on a couple of occasions and you know our story, right? We have been very bullish; we hold a steadfast view, and we've held it for a long time, that the world is going to shift to a usage-based pricing and billing model, okay? Now, having said that, Swapnil, when you and I first chatted, already a year back, maybe longer, not everybody was on that bandwagon, right? And I remember a time three years back when we started the company, it was still kind of half and half. Not many folks had come around to it. 
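To make the token metering and markup described above concrete, here is a minimal sketch. The event fields, rates, and function names are my own illustration under assumed values, not the Emberflow API: each LLM call emits a meter event, and billing sums the usage and applies a markup over the provider's cost.

```python
from dataclasses import dataclass

# Hypothetical meter event for one LLM call (illustrative fields only):
# tracks tokens consumed, latency per query, and number of prompts.
@dataclass
class MeterEvent:
    customer_id: str
    tokens_consumed: int
    latency_ms: float
    prompts: int

# Assumed example rates, not real provider pricing.
PROVIDER_COST_PER_1K_TOKENS = 0.002   # what the LLM provider charges
MARKUP_MULTIPLIER = 1.5               # 50% markup passed to the end customer

def billable_amount(events: list[MeterEvent]) -> float:
    """Sum token usage across metered events and apply the markup."""
    total_tokens = sum(e.tokens_consumed for e in events)
    provider_cost = (total_tokens / 1000) * PROVIDER_COST_PER_1K_TOKENS
    return round(provider_cost * MARKUP_MULTIPLIER, 6)

events = [
    MeterEvent("acme", 1200, 340.0, 3),
    MeterEvent("acme", 800, 210.0, 2),
]
print(billable_amount(events))  # 2000 tokens -> 0.004 provider cost -> 0.006 billed
```

The same event stream could feed other meters (latency percentiles, prompt counts) without changing how billing consumes it.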
In fact, one of the things where we did have alignment: folks said, yeah, okay, I could see infrastructure and platform-type companies or solutions going down the path of usage-based pricing and billing, but many folks held out that traditional SaaS applications, the top tier of the technology stack, were going to be a holdout. You know, many folks said, I don't see Salesforce shifting to a usage-based pricing and billing model. Well, tell you what, we held the view that it will, right? Now, two years ago, I could not tell you what would be the catalyst for that. We just knew that it was inevitable, and we've talked about why it was inevitable: because cloud is elastic, and ultimately that elasticity will transcend through your technology stack into how you sell your products and services. Well, guess what, Swapnil, the catalyst ended up being generative AI and LLMs, right? This thing came out of nowhere four or five months back when people started to experience ChatGPT, and like we just discussed, now everybody's product roadmap, doesn't matter what company, what industry, what vertical, everybody is thinking about, okay, how do I bring this into my products and services? The moment you bring in any flavor of AI/ML, you are down the path of usage-based pricing and billing. So we had this vision from day one. Our platform has what we call primitives, building blocks, to really support this. So for us, it's just a very natural extension. When this catalyst emerged, it was simply a matter for us to build some predefined meters on the back of ChatGPT and these custom monetization parameters. And that's really all. So it's been really seamless, and we provide tremendous value to a customer just out of the box. 
They can really quickly monetize any flavor of ChatGPT, LLMs, custom LLMs that they're building, and however they want to project that to their customers. If you can just give an overview: when we look at Emberflow today, what are the solutions that are part of the platform? Because, as we have talked about earlier, Emberflow meets customers wherever they are in their own journey. Actually, sometimes you're staying ahead of the curve, as you said earlier, when people would not even think of it. So let's talk about that also. What we're enabling is basically an end-to-end modern monetization platform that will keep you ahead of these changes that are coming rapidly, right? So we are a modern platform built around the foundation of metering. And, you know, metering is sort of a euphemism for usage instrumentation. But usage instrumentation done right, where there's a class of primitives, and that's why it's called a metering service. It's not observability. It's not monitoring. It's not logging. It's metering. So, by design, it is built from the ground up to be accurate, with things like idempotency and data deduplication. You just kind of set it and forget it. This is that one system that you want to put in place, and it will scale with you no matter where you are today with what you're metering, and six months from now, or a year from now, or five years from now, whatever that looks like, this system will provide you a standardized interface across your enterprise, across your organization, and it will scale with time. So it is a highly flexible, highly customizable, accurate, real-time usage-instrumentation metering system. And then layered on top, we provide what we call our billing cloud, which allows you to construct any kind of modern, flexible, usage-based pricing plan that you want to expose to your customers, right? 
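To make the idempotency and deduplication point above concrete, here is a minimal sketch of my own (not Emberflow's implementation): each meter event carries a unique ID, so a retried or duplicate send never double-counts usage, which is what makes the metered totals billing-grade rather than observability-grade.

```python
class MeterStore:
    """Minimal idempotent ingestion: every event carries a unique ID,
    and a retried (duplicate) send is silently ignored."""

    def __init__(self):
        self._seen_ids = set()   # IDs already counted
        self.total_tokens = 0

    def ingest(self, event_id: str, tokens: int) -> bool:
        """Record the event once; return False if it was a duplicate."""
        if event_id in self._seen_ids:
            return False          # duplicate: already counted, skip it
        self._seen_ids.add(event_id)
        self.total_tokens += tokens
        return True

store = MeterStore()
store.ingest("evt-1", 500)
store.ingest("evt-1", 500)   # network retry of the same event
store.ingest("evt-2", 300)
print(store.total_tokens)    # 800, not 1300
```

A production system would persist the seen-ID set and bound its retention window, but the invariant is the same: at-least-once delivery plus deduplication yields exactly-once accounting.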
So you want to charge on any flavor of usage-based, tier-based, prepaid, true-up, commit-based. You want to draw down against a commit that a customer pays you, say $10,000 upfront, and you're still metering on a monthly basis with the usage and drawing down; all kinds of permutations and combinations. And on the back of that, we enable you to generate on-demand, real-time, metered invoicing and billing that you can then showcase to your customers. So it's a full-on, turnkey, end-to-end monetization platform. Of course, you folks are announcing it today, but can you also talk about what kind of potential use cases you see, the companies that will leverage it, or partners you're already working with who are already leveraging this technology? We're seeing use cases across the spectrum, right? And I think this is just, again, a testament to what we already talked about. Every single company, I kid you not, you pick anybody, everybody is having a conversation internally with their product teams about how they incorporate LLMs into their technology stack. SupportLogic, one of our existing customers, guess what, they already had a layer of AI/ML built into their platform. They provide support-ticket analytics to their customers, and now they are leveling up that experience, as you would imagine, as we've just discussed, with a more prompt-oriented and interactive generative AI and LLM-based experience, right? So they're already deployed, they're already a customer, and now they're taking this extension of our platform and are going to be delivering that experience. Several of our other customers are already down that path. Like I said, those roadmap discussions have already happened. They have our platform, they were design partners for us, and now the extension of this platform, as we have enabled monetization for LLMs and generative AI. 
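The commit drawdown described above can be sketched in a few lines. This is an illustration of the mechanic only, under assumed numbers, not Emberflow's billing engine: a prepaid commit is reduced by each month's metered charge, and anything beyond the commit becomes overage.

```python
def draw_down(commit, monthly_usage_charges):
    """Apply each month's metered usage charge against a prepaid commit.
    Returns one (remaining_commit, overage_billed) pair per month.
    Illustrative only: real plans add true-ups, tiers, expiry, etc."""
    remaining = commit
    statements = []
    for charge in monthly_usage_charges:
        covered = min(charge, remaining)   # commit absorbs charge first
        remaining -= covered
        statements.append((round(remaining, 2), round(charge - covered, 2)))
    return statements

# $10,000 upfront commit, drawn down by three months of metered usage.
print(draw_down(10_000, [3_000, 4_500, 3_500]))
# [(7000, 0), (2500, 0), (0, 1000)] -- month 3 exhausts the commit,
# leaving $1,000 billed as overage.
```

The same loop generalizes to any permutation the speaker mentions: a tiered plan just swaps in a different per-month charge calculation before the drawdown step.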
Puneet, thank you so much for taking the time out today to talk about this announcement. It's really good to see that you folks, as I said, are staying ahead of the curve, offering these kinds of solutions before the industry is ready for them. It's really good to see how bullish you are on the usage-based model and how you're bringing generative AI into that fold. It was a great discussion and I look forward to talking to you again soon. Thank you. Thanks, yeah, likewise. I'm sure we'll be back again; very exciting times. So looking forward to it.