Hi, everybody. I'm John Holwick, Vice President at Loom Systems, and I'm here with Julio Calderon, Product Manager for New Innovation at KIO Networks. Thanks for joining me today. I want to talk to you about how to monitor your OpenStack, and the great part of this presentation is that I'm not going to be doing most of the talking. Julio, my customer, who has a very large deployment of OpenStack at KIO Networks, is implementing Loom, and he's going to tell you his experience and what it solves for him.

Very quickly, what Loom Systems does is AI log analysis. The difficulty with OpenStack deployments, to put it in the words of one of the folks I was speaking with two days ago: in OpenStack, the number of combinations and permutations of problems is endless. So traditional monitoring does not work. You can't tell a monitoring tool to look for something in particular and alert you when it happens, because there are too many things that could go wrong. We as human beings can't foresee them all, so we need AI to properly manage and monitor OpenStack.

Loom Systems does four things. First, we collect the logs from all of the components in your entire environment, and we automatically structure and parse them so that analysis can happen. Second, we apply artificial intelligence and machine learning to understand what normal behavior is for every component, and for every combination of components that need to work together for the system to operate properly. Third, when something deviates, it proactively notifies you that there's a problem, and it correlates all of the things going wrong at the same time. As a final step, we offer a resolution and a recommendation on what's going wrong and how to fix it; we pull that from the community and match it to the problem you're having.

To talk about that more in context, I'd love to introduce Julio. Julio, take it away.

Great, thank you. Well, first, KIO Networks: who are we? Founded in 2002, a while back. It's a Mexican-owned company, and we offer many different types of services. We have 30 different data centers, 20-plus in Mexico, and the rest across Latin America and also in Spain. The word KIO is Swahili, and it stands for mirror, so everything in KIO is architected in that same way: everything is mirrored, dual, et cetera. Want to hit the next slide?

KIO Networks is a group of different companies. Sm4rt is a security company; we have our own SOC. Wingu is a company tailored for self-service: swiping your credit card and buying a cloud-based solution or a SaaS. MásNegocio means "more business"; it's a mid-market company that monitors and does a lot on the application side. Dattlas is our company that does big data; we have data scientists there. KIO, the core, does it all; it's a single one-stop shop. And redIT is a company we have that does infrastructure, IP, and fiber.

So Julio, maybe you can tell us a little bit about what's in your environment and how you're doing your OpenStack. Sure. This event obviously is OpenStack, so this is tailored to that context, and I want to emphasize that this is just one flavor. Because we have so many different data centers, and they're all Tier IV, we attract some of the biggest companies in Latin America, and they want their own cloud.
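An aside on the four steps John describes (collect and parse, learn normal, detect deviations and correlate, recommend a fix): the core "learn what's normal, flag what deviates" idea can be sketched in a few lines. This is only an illustration under simple assumptions, not Loom's implementation; the helper names (template, learn_baseline, anomalies), the masking regexes, the sample log lines, and the three-sigma threshold are all hypothetical choices.

```python
import re
from collections import Counter
from statistics import mean, stdev

def template(line: str) -> str:
    """Mask the variable parts of a log line (hex IDs, numbers) so
    that similar events collapse into one stable template."""
    line = re.sub(r"0x[0-9a-fA-F]+|\b[0-9a-f]{8,}\b", "<ID>", line)
    return re.sub(r"\d+", "<N>", line)

def learn_baseline(windows):
    """windows: a list of time windows, each a list of raw log lines.
    Learn the typical per-window count of every template."""
    counts = [Counter(template(l) for l in w) for w in windows]
    templates = {t for c in counts for t in c}
    return {t: (mean(c[t] for c in counts), stdev(c[t] for c in counts))
            for t in templates}

def anomalies(window, baseline, sigma=3.0):
    """Flag templates whose count in this window deviates sharply from
    the learned baseline, including templates never seen before and
    expected templates that suddenly go quiet."""
    observed = Counter(template(l) for l in window)
    flagged = []
    for t in set(observed) | set(baseline):
        mu, sd = baseline.get(t, (0.0, 0.0))
        if abs(observed[t] - mu) > sigma * max(sd, 1.0):
            flagged.append((t, observed[t], mu))
    return flagged

if __name__ == "__main__":
    # Five quiet windows of heartbeats, then a window with an auth burst.
    normal = [[f"nova-compute: heartbeat ok id={i}"] * 10 for i in range(5)]
    burst = ["nova-compute: auth failed for instance 42"] * 7
    print(anomalies(burst, learn_baseline(normal)))
```

Note that the demo flags both the unfamiliar authentication burst and the heartbeats that went missing; correlating flagged templates across components is the step that turns scattered symptoms into a single incident.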
These components, Horizon, Nova, DHCP, Neutron, KVM, Cinder, Ubuntu as the OS, and Ceph or ScaleIO or Swift, are some of the pieces that compose the OpenStack distributions and installations we manage. But there's more. There are the firewalls, the switching and the routing, the load balancers (hardware or software), the antivirus platforms we use, and obviously the backup software that we include in the different environments. So it's really tough to manage all of that with people, and we have a lot of people.

Yeah, and I can understand how that's challenging. Tell me a little bit more about the problems you were facing. What were you trying to solve?

I think as we progress with technology, it continues to get more diverse. One of the key points here is that OpenStack, and you guys know it, you guys do it, is not turnkey. It doesn't come in a single box. You don't just turn it on and everything magically works. There's a lot of expertise you have to put into it to make it work the way you need it to, a lot of hand work by your experts; it's not something you can just turn on and go. It's many different distributed components that you need to tie together, and detecting a single point of failure within that environment is very tough.

Meaning, you run into an issue. We do this at KIO. We have our experts; by the way, they're here, so if you want to ask them a question, they're right there. They get together, and they handle their own customers, their own stacks. But what do you do when you run into an issue on a Friday night, you have an SLA to the end user, and it's running production? It's not OpenStack for play; it's OpenStack for pay. Customers are paying for that SLA, and these guys need to handle it right away. No matter where they are or who they're hanging out with, they need to jump on a call and resolve it.

So what I've seen them do is solve it live. They call me and, just to be nosy, I get in there to see what's going on, thinking, wow, this is over my head. But they solve it. It just takes a lot of manual work, a lot of hard effort. The default metrics that we deploy with are one set of KPIs, but then there are the ongoing lessons learned. After every lesson we have a new KPI, and we apply that KPI to our regular monitoring tools. It's all hand work.

Right, and what we see is that Loom takes us above that. It took us from identifying KPIs through a lot of hard work and hard weekends to Loom discovering the new KPIs for us, basically.

So I think what you're saying is that it's reducing all of the manual labor involved in figuring out what the problem is, rather than just what all the symptoms are. That's correct.

All right, that's really interesting. So, I went over at the very beginning that Loom does four things. Out of this list, parsing and preparing the data, correlating different issues, detecting a single point of failure, and helping you resolve it, what's been most helpful to you?

It's the correlation. Imagine one person looking at one environment: it's impossible, right? Now imagine that same person trying to look at ten environments. There's no way a single person, or two, or even ten, can look at all the logs that are being generated.
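The "known KPIs" workflow Julio describes can be made concrete with a sketch. The check below is entirely hypothetical (invented metric names and thresholds), but it captures the shape of traditional monitoring: every rule encodes one lesson already learned the hard way, and the tool can only catch that exact failure again.

```python
def check_known_kpis(metrics: dict) -> list[str]:
    """A traditional, hand-written monitoring check (hypothetical):
    it only detects failure modes someone has already lived through."""
    alerts = []
    # Rule added after a past Cinder incident: volume latency spike.
    if metrics.get("cinder_volume_latency_ms", 0) > 500:
        alerts.append("cinder volume latency over 500 ms")
    # Rule added after a past Neutron incident: DHCP agent down.
    if not metrics.get("neutron_dhcp_agent_alive", True):
        alerts.append("neutron DHCP agent not responding")
    # Every new lesson learned means another hand-written rule here.
    return alerts

print(check_known_kpis({"cinder_volume_latency_ms": 750}))
```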
So then we default to our known KPIs and known monitoring tools, until we need to identify new KPIs. What Loom is doing is helping our guys, one, sleep better and recover their weekends, and two, discover new KPIs on the fly: what are the new KPIs we need to address right away? It takes us from reactive, investigative work to a proactive and, I don't know, more fun type of work.

I love that. So we're giving you guys back your weekends and helping you have more fun. What other products do that? That's pretty amazing. All right, cool.

So I have some samples here that Julio's been nice enough to share from his own environment. I want to mention, though, that we're at about the halfway mark, and we're going to do a raffle at the end of this. So stay here, watch the samples. Our friend Sabo has been collecting business cards, and we're going to raffle off an Amazon Echo. It's AI in a box. It's not as amazing as Loom, but we're going to give one away for free at the end of this. And with a little work, you could probably hook it up to your Loom, right? I think you could. I want to do that. You could hook your Loom up to the AI in a box and have it tell you, in whatever voice you want, "Sir, you have a problem." That sounds pretty amazing. That'll be our next presentation.

So let's go through a couple of examples here. Here's one with a QEMU authentication issue. What do you see, based on what Loom is showing you here, and what are your conclusions on this one?

Well, this is pretty interesting. I don't know if you can expand the pointers; there you go. At the entry point, out of the many thousands of log lines going by in the background, it highlights you right away to an issue that lasted for two minutes, and it shows you three different events, particular lines in the log, that correlate to an actual issue. And it's an authentication issue. On the right side it says, hey, here's an insight for you: that's the actual possible fix. The cool thing about it is that when our experts look at it, they can agree, disagree, or even enrich that answer. They can say, well, sure, but in our own stack we do this. And that becomes a new fix, so that as we continue to grow, a junior tech can look at it and address it right away, having identified the same correlation.

Right. So how much manual configuration did you have to do, and how much did you have to tell Loom, to get this incident pushed to you?

All we did is point our logs to the environment. Nothing else. Me, I was very skeptical. I'm always very skeptical about any new product, so we always try to break it. And I figured, okay, of all the clouds we run, because this is not the only cloud we do, we do Hyper-V, VMware, et cetera, let's give them the hardest one. Let's give them one of our OpenStack logs. And we didn't give them everything; we gave just a subset of logs that we could share, based on lab equipment. So we shot those logs over, and a day after ingesting them, this is what it gave us. That's one of the events; it gave us many, many events.

So I think this is why Julio and KIO are such a good partner for us. I didn't know that he was giving us his hardest problem to solve.
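"Point our logs to the environment" amounts to plain log forwarding. Here is a minimal sketch of that idea, assuming a hypothetical HTTP ingestion endpoint: INGEST_URL, API_TOKEN, and the JSON payload shape are invented for illustration, while the file paths are the stock Nova and Neutron log locations.

```python
import time
import requests  # third-party: pip install requests

INGEST_URL = "https://ingest.example.com/logs"  # hypothetical endpoint
API_TOKEN = "REDACTED"                          # hypothetical token
LOG_FILES = [
    "/var/log/nova/nova-compute.log",
    "/var/log/neutron/neutron-server.log",
]

def follow(path):
    """Yield lines appended to a file, like `tail -f`."""
    with open(path) as f:
        f.seek(0, 2)  # start at the end of the file
        while True:
            line = f.readline()
            if line:
                yield line.rstrip("\n")
            else:
                time.sleep(0.5)

def ship(path):
    """Forward each new log line to the analyzer as it appears."""
    for line in follow(path):
        requests.post(
            INGEST_URL,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            json={"source": path, "line": line},
            timeout=5,
        )

if __name__ == "__main__":
    ship(LOG_FILES[0])  # in practice, one worker per file
```

In a real deployment this job would more likely be rsyslog, Fluentd, or a vendor agent; the point is simply that the integration surface is "send us your log lines," not per-check configuration.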
But giving us that hard problem also introduced us to OpenStack, because OpenStack, and the problems that everybody faces in OpenStack, are what we built this product to solve. So thank you for giving us hard problems. No, my pleasure.

So let's talk about another one here. This one's with Nova, and there's some inconsistency between databases.

Yeah, this was interesting. I didn't expect to see this, actually, but there was a mismatch between the number of VMs that were actually deployed and the number of VMs in the database, and it had been going on for 34 minutes. In this case it told us right away what the recommendation was: first, it identified that there was a mismatch, and second, what you should do about it. So it told us something we didn't know right away. Eventually we would have found out, because obviously the guys do this on Friday and Saturday nights, but in this case it was automatic. And it potentially saved a junior person from having to go dig it up, or from calling the experts to ask, hey, what should I do about this? It told them right away. That, to me, was the coolest thing: the ability to enable a junior person to take action, and then to consult with an expert.

Yeah, I really like that. There's a business impact, too. If I don't know what's in the database, I don't know what to bill for, right? And to me, that's one of the most important parts. Right. So then you're reducing escalations too. Absolutely. And you're reducing the time that's spent by the really expensive people, and the extra time that's lost getting it to those people to begin with. Well, the escalation and the time to fix, right? Satisfaction is up. All right, that's great.

So, Ceilometer. I know I've heard from a lot of people that they see incidents similar to this one. What's going on with this? Well, actually, I didn't check this one out, so, I see, they checked it, it wasn't me. Do we want to bring up someone from the team? Bottom line, it's a similar organization: it tells you what it found, the evidence that it correlates to reach that conclusion, and it tells you the fix as well. In this case it ran for a minute, and it tells you the user; in this case, Ceilometer.

And even I can look at this and figure it out. I'm a non-engineer; I'm just a data guy. I can look at this and see that three new behaviors have happened, the first one, the third, and the fourth, and those all correlate over the duration of one minute, with a severity of error, coming out of Ceilometer. The fact that I can look at this, knowing nothing about Julio's environment (when he tells me about it, it goes over my head, because I'm not an engineer like you guys), and still see the problem, and that if I had to bring this to somebody and explain what was going on and how to solve it, I could do that: that's pretty amazing for an average-intelligence person with no engineering ability like myself.

You know, there's something we didn't show in this, and you guys should talk to these guys at their booth: you can expand on every single log line. You can click on it and say, I want to know more; I want to know what other events correlate around the same period of time. You can expand on all those logs right away. It's pretty cool.
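The Nova example above, VMs actually running versus what the database believes, can be spot-checked by hand on a single KVM compute node. Here is a rough sketch using openstacksdk and libvirt-python, assuming admin credentials are available via clouds.yaml or OS_* environment variables; the comparison is deliberately simplified, since a real check would first scope the database list to this host.

```python
import libvirt    # pip install libvirt-python
import openstack  # pip install openstacksdk

# What Nova's database believes exists (admin view, all projects).
cloud = openstack.connect()
db_uuids = {s.id for s in cloud.compute.servers(all_projects=True)}

# What is actually running on this KVM host, according to libvirt.
# Nova-managed domains carry the instance UUID as the domain UUID.
virt = libvirt.openReadOnly("qemu:///system")
host_uuids = {dom.UUIDString() for dom in virt.listAllDomains()}

print("running here but missing from the DB:", host_uuids - db_uuids)
print("in the DB but not running on this host:", db_uuids - host_uuids)
```

The value in the anecdote is that nobody had to think to run a check like this; the mismatch surfaced on its own, along with a recommended fix, after 34 minutes.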
And I'm happy to show any of you exactly what he's talking about. We're at booth B29; just find the worst location for a booth in the entire place, and that's us. It's at the very end of the hallway, which means you only pass by it on your way to lunch, but you can make a special trip, see me, and I'll show you this live, and we can talk about how it would apply to your environment.

Why don't we give away an AI in a box here? Sabo, come on up. And I need a volunteer to grab the winner. Who wants to be a volunteer? Sir, in the brown jacket, can you please help us find a winner for the AI in a box? Close your eyes, please. Thank you. Wait, is that... actually, that is funny. Did you pick your own? That is KIO, right? I don't know, we might have a conflict of interest here. I'm going to give that one to you; you can decide what to do with it. Wait, it's me, guys! All right, well, good thing we have two. Sir, come back up, and try to pick one that's not you or your friends. And this red card is from Richard Waterhouse. Come on up, Richard, get your AI in a box. All right, give him a hand. Let's give him a hand. That's you. That's so funny.

Thank you, thanks for attending, and come by our booth. I want to give a special thanks to Julio for helping describe all of this. I look forward to talking with all of you about how AI can help you find the problems you don't yet know to look for within your OpenStack environment. So thank you, everybody, and talk to you soon.