Welcome back to Moscone Center, everybody. This is day four of theCUBE's continuous coverage of RSA 2023. I'm here with John Furrier, and Dave Duggal is in the house again. He's the founder and managing director of EnterpriseWeb. Dave, we just saw you at MWC in Barcelona. That was an awesome show. I mean, bigger than this, but this is big.

Yeah, this is big. That was about 90,000, 95,000. So it's good to see these conferences coming back. And MWC feels like yesterday, right? I mean, seriously. I know you guys are on the move constantly, but we had a great conversation at that point. No, John, you weren't there, but it was Telco Supercloud with Dave and Lisa, right? That was fun. I think we put on a real good show there. We were talking about how the Telcos want to work across their domains, work across layers, right? So they can flexibly optimize services for low latency and energy efficiency. The only problem is that they're static, vertically integrated, tightly coupled stacks getting in the way of their digital business transformation, right? They need to be much more dynamic, loosely coupled, horizontally architected. And that's what we do at EnterpriseWeb, right? We have an intelligent interoperability and automation platform that allows things to be highly dynamic to meet these kinds of modern needs.

Yeah, and the theme of our conversations all week, Dave and I have been talking about this, is the platformization of security, which is kind of the industry's first awakening to, hey, let's build a platform. But the difference is that what you're doing is essentially a real platform, right? Their focus is security, but the conversation is, if you don't bring network and security together to enable cloud native, then it's not the right fit. So what we see security doing is saying, okay, if security and network run things, it then has to enable what's on top.
This is kind of what your Telco cloud vision, your Supercloud vision, is. Talk about where your platform intersects with some of the narrative coming out of this show, and what does that mean?

Yeah, so, and you're right, platformization is a thing now, and I think that's actually almost a maturation of the move to the cloud, right? Because now people are thinking of distributed systems as systems; this is systems thinking. That's what a platform is, right? It's a holistic view of your problem space, your problem domain. People used to build these siloed, specialized products, and the problem is that the problem is no longer siloed. The problem is that you're a whole attack surface, right? So you've got to go wide; you have to have an understanding of that domain. And one of the other hot topics here, of course, is generative AI, right? We came back from Barcelona, and generative AI sort of stole the show. We've been joking that it was invented over the holidays.

Yeah, absolutely.

It is sort of funny, because it's been kicking around for a couple of years, but boom, it just exploded, right? It sucked all the air out of the room. And the funny thing is, it's great for us, because EnterpriseWeb, given our nature, is the ultimate serverless backend for generative AI, right? And we're already working with the big players here. Because if you think about it, generative AI is a revolution in data analytics; that's what it really is, about being real-time, conversational, interactive. Well, then you want your backend to be able to support that. If your backend is static, you're not going to optimize against this information flow that's constantly coming in, right? You want your processes to be able to react to what's going on in the real world. You want your transactions to be optimized for that as well, and your operations to be synchronized.
So, believe it or not, we already have an industrial-grade no-code automation demo using generative AI. We just started; I've been here all week in the Valley, running up and down the usual streets, right? And it's blown people away.

So what are the requirements, Dave, from the backend to support this new wave of generative AI workloads? What are the fundamental characteristics of that backend? What does it have to look like?

All right, so this goes pretty deep. I'll start at the highest level and work down, because I don't want to lose people too quickly. I've been accused of that before. At the high level, you can say these are two sides of the same coin, right? The latest advance in analytics means the latest advance in automation. That's really what it is. And those two have always gone together symbiotically, right? Data drives process, process creates data, and it's just a loop, right? And in a pure sense, that's what you want. You want to be able to observe the environment and use those observations to drive next actions. You want those next actions to be fully optimized for your objectives, but you also want them to be safe and secure. And these are some of the issues with generative AI right now, right? Setting aside the whole IP issue about what generative AI is trained on, that's one issue, totally orthogonal. The other issue is that generative AI is not always right. Sometimes it's adamantly wrong. Sometimes it hallucinates. It's working over those large language models; it's essentially drunk on data, right? So you need to put constraints on top. And that's the difference. That's the historical difference, OLAP versus OLTP, if you remember those days, right? Analytics versus transactions, right?
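The "constraints on top" idea Dave describes, a deterministic rules layer gating probabilistic LLM output before anything executes, can be sketched in a few lines of Python. This is an illustrative toy, not EnterpriseWeb's implementation; the action vocabulary and targets are hypothetical.

```python
# Toy guardrail: a deterministic policy layer that gates probabilistic
# (LLM-proposed) actions before anything is executed. Illustrative only;
# the action names and target names are hypothetical.

ALLOWED_ACTIONS = {"compose_service", "deploy_service", "scale_service"}
ALLOWED_TARGETS = {"edge-cluster-1", "core-cluster-1"}

def validate(proposal: dict) -> tuple[bool, str]:
    """Accept or reject an LLM-proposed action, with a reason."""
    action = proposal.get("action")
    target = proposal.get("target")
    if action not in ALLOWED_ACTIONS:
        return False, f"unknown action: {action!r}"   # hallucinated verb
    if target not in ALLOWED_TARGETS:
        return False, f"unknown target: {target!r}"   # hallucinated resource
    return True, "ok"

def execute(proposal: dict) -> str:
    """Run the action only if it passes the deterministic policy check."""
    ok, reason = validate(proposal)
    if not ok:
        return f"REJECTED: {reason}"
    return f"EXECUTED: {proposal['action']} on {proposal['target']}"
```

A well-formed proposal goes through; a hallucinated one is rejected before it ever touches the network, which is the transactional "OLTP side" of the coin.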
Probabilistic and statistical on one side, deterministic and rules-based on the other. You can think of those as almost being two sides of the brain, right? Left brain, right brain. You've got this creative side that's constantly reacting, and then you have the logical side of the brain that actually has models and is applying rules to things. EnterpriseWeb is bringing that industrial-grade side to it. If you look at a lot of the automation demos for generative AI, they're clearly experiments. They're really rough beta prototypes; they even say it when they're doing it. They're not thinking about security, and they're not thinking about IT governance. Well, guess what? Nobody in the enterprise is ever going to touch that, right? The enterprise needs rigor; you can't have creativity in your transactions, right?

It's got to be binary.

Yeah, you'd like your banking ledger to be deterministic: you put in a hundred, it shows a hundred, right? You want that kind of transactional control. And so EnterpriseWeb is adding that, let's say, rigor, right? So in this demo that we have today, we can demonstrate it working; it's great. It's essentially a Microsoft stack that we're working through today, for the obvious reason: it's really the one that's breaking away. We're working through Jarvis, which is the NLP piece, right? So we can actually talk to EnterpriseWeb, Jarvis through OpenAI to EnterpriseWeb. There's another product in between that I'll get into when I go through the technical details you asked about. But essentially I can talk to EnterpriseWeb and say, I want a network service, something really advanced, something that's even hard for an engineer. I want this; please compose this for me.
Then when it composes it, I want it optimized this way, configured this way, and then deployed on the network. And in a couple of minutes, I can dictate this to the machine, the machine does it, and then it shows me the state, right? So I can compose, deploy, and manage on an ongoing basis, day 0, day 1, and day 2, working with generative AI.

So you can say, okay, optimize for low latency and give me an estimate of what it's going to cost monthly. Yes. Okay, that's a little too expensive, so dial up the latency a little bit. You can speak to it in a natural-language way and then optimize your infrastructure for an outcome.

And this is exactly what, in telecom, they're calling intent, right? Yes. Managing the declarative intent, right? So a developer can say energy efficiency; maybe it's not latency, maybe it's energy efficiency, right? I don't want to care how you manage energy efficiency. The details of that are way too low-level; it's down to the network, it's down to the RAN, the core, all of that. We're doing a lot of things at that level. But does the business really care? No, it just wants its SLAs and SLOs met; it wants its intent met. Honestly, if you think about it, we've spent the last 10, 20, 30 years of computing thinking so much about our tools, completely independent of the business behavior and objectives. And now we're almost at that inflection point where we're really going to enter the 21st century for the first time, right? I'd argue that we haven't been in it; we're still doing 20th-century automation. Now the focus will be: you talk to the machine, you express your intent, and the system binds it. And it's fully traceable, right? It's fully governed.

Talk about the self-healing aspect of it. I noticed that was in your LinkedIn post. Yeah, yeah. Because that's that third wave of GenAI.
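The intent-based flow described above, state an objective, get back a configuration and a cost estimate, then relax the constraint if it's too expensive, can be sketched roughly as follows. The profiles, settings, and prices are entirely made up for illustration.

```python
from dataclasses import dataclass

# Hypothetical profiles mapping a declared intent to low-level settings
# and a monthly cost estimate; all numbers are made up for illustration.
PROFILES = {
    "low_latency":       {"placement": "edge",     "cpu": 8, "monthly_cost": 900},
    "balanced":          {"placement": "regional", "cpu": 4, "monthly_cost": 400},
    "energy_efficiency": {"placement": "core",     "cpu": 2, "monthly_cost": 150},
}

@dataclass
class Intent:
    objective: str           # what the business wants, e.g. "low_latency"
    max_monthly_cost: float  # budget constraint

def resolve(intent: Intent) -> dict:
    """Return the first profile, starting from the declared objective,
    whose cost estimate fits the budget (dialing down as needed)."""
    order = [intent.objective] + [p for p in PROFILES if p != intent.objective]
    for name in order:
        profile = PROFILES[name]
        if profile["monthly_cost"] <= intent.max_monthly_cost:
            return {"profile": name, **profile}
    raise ValueError("no profile fits the budget")
```

The point of the sketch is the separation: the caller expresses only an objective and a constraint; how that maps to placement and sizing is the resolver's problem.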
You have prompt engineering, prompt ops, and prompt tuning: designing the query, operationalizing it, and then self-tuning, which means it's on its own. Where do you guys fit in on that piece? Because I can imagine that's where, when you give an order to the system, provision this, the hallucination side is a concern. So I'm assuming you're doing that self-tuning, self-healing. Take me through that piece of it.

So we're using the Microsoft GenAI in front, right? Let's call that Jarvis and OpenAI in front. We put another technology between us. Essentially what you want is a vector-based time-series database. This goes to both of your questions, because we want an intermediary. We want to be at arm's length. We want a nice separation of concerns between us and the generative AI, for exactly the reason you just described: I don't want that hallucination to flow into my commands, right? So the hallucination is working through this time-series database, which is putting that data in order. It's taking this massive stream of event information, applying rules to it, and then just throwing events to EnterpriseWeb, right? But those events are already filtered in a way that EnterpriseWeb can respond to. And for that, we work with a company called KX, which is a really interesting company. You guys should have them on some time, actually.

What's the company?

KX. KX.com. They're our real-time ML partner, right? 20 of the top 20 financial institutions use them for real-time trading. They wanted to enter Telco, so they actually partnered with us, because we have tech-led Telco domain expertise, right? In this case, though, it's totally different use cases and more, because EnterpriseWeb is generalized, generative AI is generalized, and KX is generalized for analytics and everything like that.
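The intermediary pattern described above, an ordered event stream with rules applied before anything reaches the automation platform, can be caricatured in a few lines. This is only a conceptual stand-in for the time-series layer; the event shapes and rules are made up.

```python
# Toy intermediary: order a raw event stream by timestamp, apply simple
# rules, and forward only well-formed, known event types downstream.
# Conceptual stand-in for a time-series layer; event shapes are made up.

KNOWN_EVENTS = {"link_down", "latency_breach", "cpu_saturation"}

def intermediate(raw_events: list[dict]) -> list[dict]:
    """Return only the events the downstream platform should see."""
    ordered = sorted(raw_events, key=lambda e: e.get("ts", 0))
    forwarded = []
    for event in ordered:
        if event.get("type") not in KNOWN_EVENTS:
            continue              # drop noise or hallucinated event types
        if "source" not in event:
            continue              # drop events we cannot attribute
        forwarded.append(event)
    return forwarded
```

Downstream code never sees the raw, possibly hallucinated stream, only ordered, typed events it already knows how to respond to; that is the "arm's length" separation of concerns.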
So we use them as an intermediary, so we're at arm's length from generative AI. When it's talking to KX, KX is translating into the symbolic language, the semantics that we speak. It's almost like a universal translator, translating back and forth between vectors.

That brings coherence.

Coherence, right? Essentially, what we're allowing it to do, through the intermediary, is this: generative AI is walking our graphs, walking our domain models, and introspecting our catalogs. So when I talk to my system and say I want something, it's literally walking the EnterpriseWeb graph, saying, oh, I found it. Here's the object I want to use. It's a Cisco router, a Juniper router, whatever it is. I'll compose this with this, this, and this, and maybe it's a Fortinet security firewall, whatever it is. It'll put that together, apply my constraints, and then implement it. The controls are on the EnterpriseWeb side, completely discrete. So generative AI is enabling the conversation. That's super powerful. Almost think of it as the most advanced user interface you can imagine. It's Star Trek now, right?

We said on theCUBE one time that everything in Star Trek will be invented except the transporter room and the automatic food maker. But you just gave an example of, whatever, a Cisco router or a Palo Alto firewall. What is it that allows you to not care, and is that different in telco?

Oh, so it's interesting. Part of it is what we've already had: the core EnterpriseWeb platform. In EnterpriseWeb, essentially, we have what's called an upper ontology. An upper ontology just means the generic set of concepts and types that runs across all enterprise businesses, verticals, whatever. Every business has people, has organizational units, has facilities, right? Those kinds of things.
And then of course, from a cloud and systems perspective, there are common types: there are formats, there are protocols. At that upper level, EnterpriseWeb maintains the universal concepts that apply to everybody. Then EnterpriseWeb allows you to model a domain, whether it's telco, whether it's life sciences, whether it's IoT; it doesn't really matter to EnterpriseWeb. We enable you to rapidly model a complex domain. And then you're onboarding objects into that domain: that router you were talking about, that firewall you were talking about. Instead of doing that old kind of point-to-point integration you used to do in the stack, what you're doing is just mapping up to those graphs. You're mapping in metadata. So we're saying, hey, we understand that Juniper router or Cisco router or Palo Alto security firewall or Fortinet device. We understand them completely in metadata. EnterpriseWeb is 100% configured by metadata. All EnterpriseWeb is, is a graph model, which has metadata, policies, and types, and then stateless functions. And EnterpriseWeb uses that metadata to efficiently hydrate context for stateless functions. That alone is a super big idea, because that's true serverless.

I mean, you have a very big idea. The Supercloud aspect of the telco thing is super impressive. What you're getting at now is the modern platform vision. But the enterprise is hard. We had Jeremy Burton on, who was a very distinguished executive over the years; now he's doing an observability startup. He's like, it's hard to do a startup in the enterprise, actually; it takes years. Where are you guys at? Take us through your progression, because you have a solution now. How does someone get in? How do they get involved? What's the deployment? How do they adopt the technology?
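The metadata-driven model described above, a catalog of typed objects described purely in metadata, whose metadata is used to hydrate context for stateless functions, might be sketched like this. The object names, attributes, and functions are hypothetical, purely to show the shape of the idea.

```python
# Toy version of a metadata-configured graph: a catalog of typed objects
# described purely by metadata, plus a stateless function whose context
# is "hydrated" from that metadata at call time. Names are hypothetical.

CATALOG = {
    "router-a": {"type": "router",   "vendor": "Cisco",    "ports": 48},
    "router-b": {"type": "router",   "vendor": "Juniper",  "ports": 24},
    "fw-a":     {"type": "firewall", "vendor": "Fortinet", "throughput_gbps": 10},
}

def find(catalog: dict, **criteria) -> list[str]:
    """Walk the catalog and return object names whose metadata matches."""
    return [name for name, meta in catalog.items()
            if all(meta.get(k) == v for k, v in criteria.items())]

def configure(context: dict) -> str:
    """Stateless function: everything it needs arrives in `context`."""
    return f"configuring {context['vendor']} {context['type']}"

def run(catalog: dict, name: str) -> str:
    """Hydrate context from metadata, then invoke the stateless function."""
    return configure(dict(catalog[name]))
```

Onboarding a new vendor's device here is just adding a metadata entry, not writing a new point-to-point integration; the stateless function never changes.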
Because it is definitely in line with what people want, but some people don't have the skills. Do you do managed services? How does someone engage with you? And how long have you been around?

Yeah, fantastic. So personally, I've been around for a long time, though not EnterpriseWeb. I had started, turned around, and grown several companies in a row. And around 2008, 2009, before cloud took off (cloud doesn't really take off until 2012), before Kubernetes, before microservices, before containers, in my previous engagements and roles I was seeing that the 20th-century automation tools were getting in the way of the kinds of things I wanted to do. Just from a business-intuition perspective, it was like, why can't I do it? Why does IT tell me what I can't do? I'm the business guy; I want this done. When I ask for some behavior to be implemented, I don't want it done in concrete. I'd like it to still be adaptable if all of a sudden my needs change, or the competitive landscape changes. So in 2007, 2008, I read 300, 400 academic papers and industry papers. I seriously, actually left my prior job. I said, I'm going to take off, I'm going to do a sabbatical, I'm going to figure this out. So I went out and did all this research to figure out why people can't be more dynamic. And basically, the old software was based on the old constraints of hardware and infrastructure. And while the hardware and infrastructure kept advancing, the software didn't. Partially because those models of selling packaged software, selling lots of different components, are pretty profitable. How are you motivated to keep it the way it is? They didn't want to cannibalize it, right? So I said, actually, this should be horizontal. This is really a capabilities architecture, right?
I should be able to say I've got a catalog with my capabilities, and I have business logic over the top. My middleware should be the thin, horizontally architected layer between my business logic and my things. And that's all that matters. I don't want to care how it deploys, like we talked about earlier. So I started doing that, and essentially I just worked behind the scenes. I self-funded it. I've taken no money. I've bootstrapped this whole company all the way to profitability, right?

Congratulations. That's awesome. Not an easy road.

I don't recommend it to everybody.

It is an elite title to have, that self-funded start. But now you have options that you wouldn't have had if you had taken money early on.

So we're in production with some of the world's largest companies, and we go to market through channels. So...

What kind of companies? How big are they?

Well, Red Hat. On our last show we had Red Hat on; Azhar Sayeed from Red Hat was on with us, and that was a great show. Red Hat looks at us as maybe being the application layer over OpenShift, right? We have partnerships with Intel. We just did that award-winning 5G test bed, where we're closing the loop with our partner KX on the AI/ML side. Closing the loop from reported event to remediation in 300 to 400 microseconds. Not milliseconds. The state-of-the-art orchestration is 11 milliseconds. I'm at 0.3, right? I'm orders of magnitude more performant than they are. And the reason is I have no cruft, right? There's nothing but a graph and the stateless functions. Once something happens, we flatten that graph and execute the functions. I love serverless. And we implemented it ourselves; we have 19 awarded patents, multiple patents pending, right? Because we existed before, well, there's other technologies, because you have to remember...

At least people are throwing money at you right now.

We're getting a lot of interesting conversations.
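The closed loop Dave describes, an event comes in, the graph is flattened, and a stateless function executes, can be caricatured in a few lines. The handlers are hypothetical and the timing here is just a measurement hook, not a claim about the 300-400 microsecond figure.

```python
import time

# Toy closed loop: map event types straight to remediation functions and
# time one pass. Handlers are illustrative; this is not a benchmark.

def restore_link(event: dict) -> str:
    return f"rerouted traffic around {event['source']}"

HANDLERS = {"link_down": restore_link}

def closed_loop(event: dict) -> tuple[str, float]:
    """Dispatch an event to its remediation and report elapsed seconds."""
    start = time.perf_counter()
    handler = HANDLERS.get(event["type"])
    result = handler(event) if handler else "no-op: unknown event"
    return result, time.perf_counter() - start
```

The design point is that there is nothing between the reported event and its remediation but a lookup and a function call, which is what keeps a loop like this fast.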
And the generative AI thing: without me saying anything, the biggest players, and you know there's only two or three, the biggest players in generative AI are super interested, because right now every organization in the world is reconsidering its operations in light of generative AI. Totally. Right? In the last two months. If you're not, you'd better be. And their concern is about the hallucinations, which you're trying to address. That's the upside: I want pure, reliable, scalable.

Deterministic. And it's got to be portable.

Low latency, high performance, portable, right? We're 50 megabytes. Again, we got all these great characteristics because we rethought it. People didn't think it was possible, so they didn't try.

And I think one benefit: the architecture you laid out means business apps essentially could be put right there in the cloud. The architecture was perfectly aligned with how the cloud spawned, and with our Supercloud, and our serendipity in meeting you.

Yeah, yeah. That's awesome. Actually, even before I went on theCUBE with you, you were talking about Supercloud pods. So I saw that and thought, I've got to follow up with these guys. We're in the same tribe, let's go. These are my peeps. Well, I'd been on theCUBE back in 2015 at the Structure conference with George Gilbert.

I remember that. Back in the day.

I do, and I'm still working with George. Actually, I've been having some conversations with him. He's a great guy. I remember DigitalOcean back then; I interviewed those guys there. It was a good time.

Yeah, so we go to market through channels. We have SIs; Tech Mahindra is one of our SIs, and Red Hat, Intel.

How does someone engage with you? Who wants to engage with EnterpriseWeb?
What do you do? A channel partner?

Yeah, so anybody can go directly to any of those channel partners, or they can obviously reach me at david@enterpriseweb.com. Can I do that? Am I allowed to do that?

Of course, yes.

david@enterpriseweb.com. So anybody can reach out to me, and I'll direct them to the right partners that can do service delivery and support. I'm focused on building the world's greatest software company, because I think the world needs it. Every epoch, every period, requires that new IT company to come along and do that. And maybe it's us, right? I'm a guy who sits in Glens Falls, New York, and mostly works in my t-shirt and jeans. But every once in a while, the ocean does need to be boiled, because it's rancid, right? And people need to rethink things.

Well, Gates was in Seattle. No one thought anything about Seattle back then.

Yeah, we're going to put Glens Falls, New York on the map. I love it. And so, yeah, it's been a journey. It's been a lot of fun. I've learned a ton. I've become a student of organizations, working with all these big players. I've been squashed so many times. I've heard every naysayer. But I knew our ideas were right. I stayed the course. And luckily we had enough friends, partners, and customers to support us. We knew that we were doing things right. We were validating it in the background. And that's what I knew I had to do: when you make big claims, you need big validations.

Well, you've got customers. They're in production.

Yeah, yeah. Big customers.

And your next step now is what? Scaling the go-to-market?

Scale out. Scale out. This is ready to rip, right? This has been proven.

And you need funding to do that?

Yes. So we're at that inflection point.
So growth capital, and you're ready to go.

Yeah, and we have other conversations too, but yes.

Okay, and you have options, because you own the whole thing. That's awesome. Congratulations. And the GenAI piece is beautiful, so congratulations on that.

The GenAI, you know, partially I think this is really our moment. I think we were on the right track anyway, but middleware has just never been sexy, right? It's plumbing. Selling middleware is a thankless job, and if we do our job well, it's like a utility; you shouldn't even know it's there. GenAI makes us super sexy. Being the backend to GenAI, what are we talking about? A hundred-billion, a trillion-dollar market opportunity? We talk all the time about how Amazon turned the data center into an API, and generative AI is going to turn the way in which we interact with technology into language.

And it's automation compatible. People know what automation is in DevOps and operations. They see value in automating undifferentiated tasks. So I think it'll play well. And then having this more self-service provisioning, self-healing. Yeah.

All right, guys, we've got to go. Dave, always a pleasure to see you.

Fantastic, thanks so much for coming back.

Great seeing you, man. Great seeing you guys.

All right, keep it right there. We'll be back with our next guest from RSA 2023. You're watching theCUBE from Moscone West in San Francisco. Right back.