All right, is this thing on? Yes? All right, that sounds normal to me. So welcome to end-to-end monitoring for your OpenStack cloud. We have four people today. First, I want to thank you all for coming and spending your last hour before lunch with us, especially since there are probably about 50 other sessions going on right now that you could have chosen from. Here today we have Ichi Liu, Phil Carinas, Josh Wilms, and I'm Chet Luther. We are all developers at Zenoss, and we make a product called Zenoss Core, which is open source, and a version of it called Zenoss Service Dynamics that is for pay.

I'm going to start out today by giving you a quick overview, hopefully no more than 10 minutes, of what Zenoss is, to give you the context you need, and then we're going to dive into the OpenStack specifics: what we do for OpenStack monitoring. The idea is that you come away from this with a desire to go look at Zenoss for your OpenStack monitoring needs, or maybe other monitoring needs, or just thinking about monitoring OpenStack in general. It's kind of an interesting animal when it comes to monitoring, because there are so many options and so many integrations. So let's see if I can make this thing work. Oh, it's very finicky, he was right.

So what is Zenoss? It is monitoring software, like I said. I could probably talk forever about monitoring software, but I'm going to try and keep myself short. There we go. So what makes Zenoss different from most other monitoring software out there that might make you want to take a look at it? It's unified, and I mean this in a couple of different ways. The first way is that it's unified across the whole stack of your infrastructure, from the application down to the storage. It's not an OpenStack-specific monitoring solution, it's not an application-specific monitoring solution, and it's not just for monitoring hardware. The idea is to get all these things together into one place, because they're better together, because you use the whole stack at once, not just a piece of it. This is especially important, I think, with OpenStack, because OpenStack is made up of so many pieces. If you only look at what OpenStack knows about itself, you're missing a big part of the story.

When I say it's unified, I also mean that you have the major monitoring concerns all together in one place. We have what we call the model, which I'll get into, events, metrics, and impact analysis. Quick point there: the asterisk next to impact analysis indicates that it's a differentiator in the commercial, non-free Zenoss, the Service Dynamics offering. We've tried to call out anything that's not in the free Zenoss Core version with an asterisk throughout. If something doesn't have an asterisk on it, it's available in Zenoss Core, and you can download it and start playing with it today.

So let's start with the model. The model is essentially the sum of all the configuration and everything Zenoss has discovered about your environment using that configuration. It starts out looking like this. This is a screenshot of the app, and over here you have a list of the devices that are in the system. These are what you add to the system to be monitored. They can be actual devices like a server, but they can also be entire OpenStack endpoints. And this is mostly where your configuration of the system stops. If you dive into any of these devices, you can see everything on it; this one happens to be a Linux server.
So you see things like network interfaces and file systems and processes. All these things are discovered automatically, and they're kept up to date automatically, so if the configuration of anything changes, the monitoring keeps up with it.

So, events. Events in Zenoss are essentially anything that might be actionable information, as opposed to log management. This isn't just a fire hose of all the logs coming out of your system. These are events; they're supposed to be potentially actionable items. The great thing about Zenoss is that no matter where you're collecting the information from, be it OpenStack, be it Linux, be it a hardware platform, all the events come into the same place, into the same event management system, and you get features like correlation with our model, deduplication, clearing, things like that. There's also a powerful system behind the scenes where you can write arbitrary Python code to process these events, augment them, generate new events, and do all the kinds of things you might want to do. And this is just another view. That last view was a view of all the events in the system, the global event console of everything that's happening. This one is drilled into one particular spine node on a Cisco APIC deployment, showing just the events that are relevant to that thing.
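As an aside on that event-processing hook: those transforms really are just Python run against each matching event. The sketch below is a rough illustration of what one might look like; the `evt` object and its `component`/`severity`/`summary` fields follow common Zenoss transform conventions, but treat the specifics as assumptions rather than a verbatim example from the talk.

```python
# Hypothetical Zenoss event transform sketch: these run as arbitrary Python
# against each matching event. `evt` is the incoming event provided by the
# transform context; the exact fields used here are assumptions.

# Escalate anything coming from the nova-compute component to critical.
if 'nova-compute' in (evt.component or ''):
    evt.severity = 5  # 5 == Critical in Zenoss severity numbering

# Enrich the summary so notifications and downstream tooling get more context.
evt.summary = '[openstack] ' + evt.summary

# Tag the event with an extra detail so operators can filter the console on it.
evt.openstack_checked = True
```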
The next big section here is metrics. You probably know what those are. They go by many names; we actually call them data points. These are time-series numerical data, the kind of thing you'd want to put on a chart like this, or the kind of thing you might want to set a threshold against, so you get an event if an observation falls outside of bounds, that sort of thing.

Finally, we have impact analysis. Impact analysis is service monitoring. Everything before was resource or infrastructure monitoring; impact analysis introduces a concept called a service, one of the most overloaded terms possible. This is the configuration for a service. It's really boring looking, but that's kind of the idea: configuration should be boring and simple. It shouldn't be hard to do. So this is an example of a service that contains an OpenStack tenant. That's the only configuration that's been done: this OpenStack tenant is in this service. By doing that, what you get is that to determine whether there is a service outage, Zenoss will go through and find all the dependencies, all the things that are related to that tenant through your whole infrastructure, not just OpenStack, and at the top it creates these kind of meta service events. These aren't things that were measured from your environment; they were created because something your service depends on has an event. When you look at one of these service events, down below you see all the symptomatic events and the potential root-cause events, and they are ranked according to our confidence that they might be the root cause.

So how does this work? It's basically just a graph behind the scenes. The model understands what the dependencies are from the OpenStack tenant down to, who knows, whatever these things are, the APIC networking and things like that, through your compute nodes. So when there's a failure, it bubbles up this graph. It can be filtered out by any of these nodes, and if it reaches the top, you have a service problem.

So this helps you answer questions like: I have a service outage, what could the root cause be? That gets you back up faster, or maybe just back to full redundancy faster. You can also look at it the other way around and ask: what services depend on this resource? What services depend on this control node right now?

The next major thing: Zenoss is agentless. The two stars don't mean it's twice as expensive. They mean that Zenoss does use agents, but there's no Zenoss agent. We use the agents that already exist, running on everything you're already running. So you don't have to install a Zenoss agent. You do things like authorize Zenoss's SSH key, or set up SNMP, or provide Zenoss the credentials to your Nova API, and things like that. Out of the box, Zenoss has support for a lot of protocols and APIs and uses them to automatically discover and monitor things. This is just how many I could fit on the slide and still have a reasonable font, but there are many more than this.

If your favorite important thing that really matters in your infrastructure isn't supported, Zenoss is very extensible. You can add support. Really, every part of the system can be extended, but most commonly what gets extended is the way to monitor new targets. You extend Zenoss through something called a ZenPack, and there's a catalog of them out there at wiki.zenoss.org, with all the ZenPacks by our user community and by us. A lot of these are just configuration: you configure your system and save that configuration for how you want to monitor. Some of them are wrappers around Nagios plugins; you can use Nagios plugins out of the box. Others are more advanced and use our Python framework for more efficient polling and collection.

In addition to being extensible, it's programmable, meaning you can use its API. For example, our entire web interface is built on our API, so anything you can do in the web interface, which is everything, you can do through the API. If I'm interested in automating something, I usually just open the browser's developer tools, see what call the web interface is making to do that, and make the same call.

Finally, it's very scalable. We have users out there monitoring tens of thousands of servers, switches, routers, all that kind of stuff, from a single Zenoss instance, meaning you go to the same web interface to configure and look at all the data. It's scalable because it runs inside of this Control Center application. Control Center is a Docker control plane for running distributed systems on pools of hosts. It's also by Zenoss, and it's also open source. In Control Center you deploy applications, and an application is just a collection of services with interdependencies. You deploy those services to pools, which contain many hosts, and the services are spread across the hosts using a scheduler, like they might be in OpenStack. To process more or less data, you can just scale the number of instances of these services up or down. So that's the basics of Zenoss, and hopefully you have enough context to understand what Josh is going to talk about. There you go.
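Before the OpenStack specifics, here is a rough sketch of that "watch what the web UI does and replay it" idea from a moment ago, calling the Zenoss JSON router API from Python. The endpoint path, router and method names, and payload shape follow commonly documented Zenoss JSON API conventions, but check them against your own instance; the hostname and credentials are placeholders.

```python
import json
import requests

ZENOSS_URL = 'https://zenoss.example.com:8080'   # placeholder
AUTH = ('admin', 'password')                      # placeholder credentials

def zenoss_router_call(router, method, data):
    """POST a single request to a Zenoss JSON API router (sketch)."""
    payload = {
        'action': router,     # e.g. 'DeviceRouter'
        'method': method,     # e.g. 'getDevices'
        'data': [data],
        'type': 'rpc',
        'tid': 1,
    }
    resp = requests.post(
        f'{ZENOSS_URL}/zport/dmd/device_router',
        auth=AUTH,
        headers={'Content-Type': 'application/json'},
        data=json.dumps(payload),
        verify=False,  # lab setting only
    )
    resp.raise_for_status()
    return resp.json()['result']

# Example: list the first 50 devices, roughly the call the device grid in the UI makes.
result = zenoss_router_call('DeviceRouter', 'getDevices',
                            {'uid': '/zport/dmd/Devices', 'limit': 50})
for dev in result.get('devices', []):
    print(dev.get('name'))
```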
All right, thanks. So now we're going to talk a little bit about how we actually did OpenStack with this. We built this ZenPack that uses the standard OpenStack APIs to gather the inventory of all these different components: everything from tenants and other Keystone concepts down to Nova concepts like instances, of course, flavors, images; Neutron networks, ports, routers; and then all the Cinder concepts, volumes and all of those. So we start pulling all these things down using the standard APIs and we build a list. A great big list of things. Not super useful, but it's a beginning.

Where it really gets interesting, though, I think, is where we start to think about how these things relate to each other. We built an object model in our system where we start off with really basic, high-level concepts. What is an OpenStack component? What kinds of components are there? Some of them are logical components, like a region. Some of them are software components, like a Nova service or a Neutron agent. Using that taxonomy, we continue to extend the model out, adding concepts specific to Nova. This adds all those Nova things in, along with the relationships between them. You can see some of these things support nesting and some don't; some are one-to-many, some are one-to-one. And it gets even more complicated, because now we start adding Neutron into this, and then you add Cinder into this, and pretty soon you've got a pretty big object model in Zenoss representing really a subset of all the things going on inside of OpenStack. But it's the subset that we think is important to monitor and operationally relevant, and that we can use to build interesting views.

If you look at just a little part of that, you can start to see some of the potential in it. Here we have an instance, and you can see the instance is related to a hypervisor, which of course also relates to all the instances running on it; what image it uses, what flavor it is, what tenant owns that instance. There are relationships into the network stuff: the relationship between the instance and the ports, and how the ports relate to networks and subnets and routers. This information really starts to let us populate a model that is very descriptive. So believe it or not, that's one instance, basically, and even then it's a bit of a subset. Still too big, hard to really see, but good, very useful information.

Where I think Zenoss can really add value is in taking these big monster graphs and finding interesting subgraphs within them. This is a subgraph representing some of the more interesting relationships of an instance. This particular instance is running on a specific hypervisor. The hypervisor is on a specific physical host. It's got a particular image and flavor. You can see the volume that's connected to it and what tenant owns it. There's a lot of information here that we can draw conclusions from, which really comes into play when we're answering service impact analysis questions, as well as doing day-to-day operational monitoring and event management.

So in this example, how would we know what hosts this instance depends upon? Well, it turns out it depends on two, which is a little counterintuitive. In this particular subgraph, that happens to include both the hypervisor and the Neutron agent that's providing DHCP for the subnet that this instance is connected to a network on. That's useful information that might not have popped into your head if you didn't have this complete view of things.
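For a sense of what "pulling the inventory down with the standard APIs" looks like, here is a minimal sketch using the openstacksdk Python client. The library choice, the cloud name, and the attributes collected are illustrative assumptions; the actual ZenPack does considerably more enrichment and relationship building than this.

```python
import openstack

# Credentials come from clouds.yaml or OS_* environment variables;
# the cloud name 'mycloud' is a placeholder.
conn = openstack.connect(cloud='mycloud')

inventory = {'servers': [], 'networks': [], 'volumes': []}

# Nova: instances, plus the references (host, project) we relate them to.
for server in conn.compute.servers(all_projects=True):
    inventory['servers'].append({
        'id': server.id,
        'name': server.name,
        'status': server.status,
        'host': getattr(server, 'hypervisor_hostname', None),
        'project': server.project_id,
    })

# Neutron: networks; ports, subnets, and routers would be gathered the same way.
for net in conn.network.networks():
    inventory['networks'].append({'id': net.id, 'name': net.name})

# Cinder: volumes and which servers they are attached to.
for vol in conn.block_storage.volumes(details=True):
    inventory['volumes'].append({
        'id': vol.id,
        'attached_to': [a.get('server_id') for a in (vol.attachments or [])],
    })

print('%d instances, %d networks, %d volumes modeled' % (
    len(inventory['servers']), len(inventory['networks']), len(inventory['volumes'])))
```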
And we use that to build user interfaces. This is an example where we're looking at a specific tenant, and now we can see all the instances that this tenant has. That little display dropdown lets you choose the related objects of the object you're looking at, so this is one way to explore the model. And then, if you're using the commercial version of the product, you get this dependency view, which is where we take the relationship information we've already got and go further and ask: which of those relationships actually imply a dependency, or an impact if you think about it in the other direction, such that if this thing goes down or is broken, it affects this other thing? That's another layer of intelligence on top of a basic relationship. A basic relationship might be a "running on" relationship; well, is that a dependency relationship or not? We went through and thought about all of that, and we built a dependency relationship model that can be used to build views like this. It takes into account both direct and indirect dependencies, so you can see this actually shows those two hosts.

We also have a visual way of showing this, something called the dynamic view. The way this view works is by breaking the model up into layers going from side to side and showing all the components in each layer, with the lines showing where the impact relationships are. These views work not just for simple things like an instance, but for any layer in this model. This one is showing a tenant, so we can say: here are all the hosts this tenant has something running on, which might be impacted if that host went down; here are all their instances. But you can also apply this to anything. You can say: all right, if this particular Neutron agent went down, what tenants would be affected? That's all available through this. Really interesting, useful information. You can also present that visually, but it starts to get pretty crowded pretty quickly, because you're talking about a lot of instances and a lot of processes. So I think that dependency view is a particularly useful one, as is the service view we showed earlier.

So there's another challenge to this. We've built this model, great, and made lots of API calls, processed that information, and stuffed it into another database. That's an ETL, and it's out of date as soon as you put it in there. Keeping the model accurate is really, really important, and really an area we spend a lot of time on in the case of OpenStack. We have to keep the model in our database in sync, in as close to real time as possible, with what's going on in your OpenStack environment. We do this by periodically redoing a full model, which is certainly a starting point. We make those OpenStack calls, and we also run OS-level commands over SSH and other things that help enrich the model. That happens every few hours, but that's not enough. So what we also do is incremental modeling, where we consume notification events from Ceilometer that tell us when changes are going on. That includes state changes, and it includes creating new objects or moving objects. So if you create an instance, we'll get an event that says: hey, new instance, here's its ID, here's its name, here's what state it's in. It also includes things like live migrations, so if you're moving an instance from one place to another, we'll get an event that tells us it's going from here to here, and we'll update our model essentially instantaneously.
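To make the incremental-modeling idea concrete, here is a toy sketch of reacting to those notification events and patching a local model. The `compute.instance.*` event type names follow the usual Nova notification conventions, but the handler structure and payload fields are illustrative assumptions, not the ZenPack's actual code.

```python
# Toy incremental modeler: apply OpenStack notification events to a local model.
# The model here is just a dict of instances keyed by ID; the real system
# updates a full object graph with relationships.

model = {}  # instance_id -> {'name': ..., 'state': ..., 'host': ...}

def handle_notification(event):
    """Apply one notification (event_type + payload) to the model."""
    event_type = event['event_type']
    payload = event['payload']
    instance_id = payload.get('instance_id')

    if event_type == 'compute.instance.create.end':
        # A new instance finished building: add it to the model.
        model[instance_id] = {
            'name': payload.get('display_name'),
            'state': payload.get('state'),
            'host': payload.get('host'),
        }
    elif event_type == 'compute.instance.live_migration._post.end':
        # Live migration completed: move the instance to its new host.
        model[instance_id]['host'] = payload.get('host')
    elif event_type == 'compute.instance.delete.end':
        # Instance deleted: drop it from the model.
        model.pop(instance_id, None)
    elif instance_id in model and 'state' in payload:
        # Any other instance event: at least keep the state current.
        model[instance_id]['state'] = payload['state']

# Example of one incoming event:
handle_notification({
    'event_type': 'compute.instance.create.end',
    'payload': {'instance_id': 'abc-123', 'display_name': 'web01',
                'state': 'active', 'host': 'compute-01'},
})
```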
So we do this constantly; these notifications are coming in on an ongoing basis, and that's how we try to stay in sync as much as possible. And of course, if we're wrong, if we drift away, the next full model will fix anything that might be out of sync. But normally this will keep you accurate.

So now we've got a model, it's hopefully accurate and staying up to date. Let's do some actual monitoring; that's supposed to be what Zenoss does, monitoring. Start off with some simple things. Are the APIs available? Can we poll them? Since we're constantly trying to talk to them anyway, this is a very easy thing for us to keep track of, so we raise an event if we have a problem talking to something. We also use the service availability calls those APIs already provide. You can ask Nova whether its services are running or not; we use that. As well, it's not that we don't trust that, but it's good to supplement it by actually looking and seeing what processes are running. Zenoss already has process monitoring capabilities, so it goes in there, runs a ps, basically, makes sure the processes are running, and keeps track of how much CPU and memory they're using.

Then there are a bunch of different metrics that we collect and build graphs out of: overall counts of various resources. That's a starting point, but we also look at the growth of those over time, so you can see them on a graph, see what's going on with your resource utilization, and do a little bit of planning. At the per-instance level, libvirt provides information about CPU usage, disk I/O, and network activity. On the host, since the hosts are just Linux boxes and Zenoss knows how to monitor Linux boxes pretty well, we get all these other metrics. I should mention those could also be applied to an instance; you would just have to point Zenoss at that instance and let it log into it, and we would run all the usual commands to keep track of load average, CPU, memory, and I/O usage over time, all sorts of disk things and the usual.

I mentioned that we get a lot of data from Ceilometer, especially on the event side, and I thought I'd go into a little bit about how we actually do that. We decided early on, when we first implemented this, that we didn't want to depend on the Ceilometer API or database, because at that moment in time in particular there were some performance concerns: if we were to sit there and poll it continuously, we were going to cause problems for some of our larger customers. So we opted instead to intercept the data inside Ceilometer. When it goes to write the data to its database through a dispatcher, we add our own additional dispatcher that also sends that data over to us. That applies both to meters and to events. And we don't interfere with it in any way; you can still have a database, we just don't require you to have one. So if you want to, you can run Ceilometer as just a collector with no database and no API, and it still works from Zenoss's perspective. You can also run a completely full, normal deployment and that works great for us too. Basically the way this works is: the Ceilometer collector process pulls data off of AMQP, generates all of its events and meter values, and sends them to us; we send them across to our AMQP, and then inside your Zenoss cluster we collect the data off of that queue and store it in our normal places.
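As a rough illustration of that "extra dispatcher" approach, here is a sketch of a custom Ceilometer dispatcher that forwards samples and events to an external AMQP endpoint. The base class and method names follow the older `ceilometer.dispatcher` plugin API from roughly the era the talk describes, and the kombu usage and connection details are placeholders; treat all of it as an assumption and check it against the Ceilometer release you actually run.

```python
# Sketch of a custom Ceilometer dispatcher that forwards samples and events to
# an external AMQP endpoint in addition to (or instead of) the normal database.
# Base class and method names reflect the older ceilometer.dispatcher plugin
# API; verify against your Ceilometer version.

import json
import kombu                      # AMQP client; the real plugin could use anything
from ceilometer import dispatcher


class ZenossDispatcher(dispatcher.Base):
    """Forward meter samples and notification events to an external queue."""

    def __init__(self, conf):
        super(ZenossDispatcher, self).__init__(conf)
        # Connection details would come from ceilometer.conf in practice.
        self._conn = kombu.Connection('amqp://guest:guest@zenoss-amqp//')
        self._queue = self._conn.SimpleQueue('zenoss.openstack.ceilometer')

    def record_metering_data(self, data):
        # `data` is a sample dict (or list of them); forward it as-is.
        self._queue.put(json.dumps({'type': 'meter', 'data': data}))

    def record_events(self, events):
        # Notification events (instance create, live migration, ...).
        self._queue.put(json.dumps({'type': 'event', 'data': events}))
```

Registration would typically be done through a `ceilometer.dispatcher` setuptools entry point plus the dispatcher option in ceilometer.conf, though that, too, varies by release.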
So extensibility is a big thing. As Chet showed, there's quite a list of ZenPacks that we already have, and some of those are very relevant to OpenStack. In particular, we've invested some time into integrating OpenStack more deeply by making certain ZenPacks aware, to some degree, of OpenStack. What this does is let us connect the lines between OpenStack concepts and the underlying concepts in these other ZenPacks. In particular, we have somewhat limited support for VMware, a VIO-type deployment, on the Nova side, but we focused more on the Neutron and Cinder integrations, where we have support for Open vSwitch, APIC, VMware NSX, LVM, and Ceph. Of course, some of these are commercial and some will work with the free version.

The way we approach integration is first of all to recognize that when you're building a ZenPack, or when we have built ZenPacks, they're built to represent the objects that are most operationally relevant to the particular technology you're looking at, in the way that makes the most sense from that technology's standpoint. The way that OpenStack uses, for example, Ceph and the way that Ceph actually is are not one-to-one; there's a subset being chosen and used from the OpenStack perspective. So the integration just has to say: this thing in OpenStack corresponds to this thing, or these things, in Ceph. It's kind of a loose integration, but we found that's fairly powerful. By thinking of it as just a correspondence between this object and that object, we can start to build that up into more. So if we start with just correspondence, we can say: okay, this volume now has a little link in the UI that says the Cinder implementation of this volume is this particular logical volume over here, or in this case, since it's LVM, the logical volume in LVM.

If we think about these integrations, and this is a Neutron example, each technology uses a different subset of OpenStack concepts that map to a subset of its own concepts. As long as there's at least one touch point in there somewhere, there's a connection between the two, so impact can propagate when something goes down. With Open vSwitch, for example, we integrate only on ports: both Neutron and Open vSwitch have a shared concept of a port. So if a port goes down in Open vSwitch and it corresponds to a port in Neutron, that's great, that's all we need, because now we can propagate that service impact through. It doesn't have to be every port, just the ones that correspond. In the case of APIC, there also happens to be an idea of a tenant, so that's cool, those map up, and we can use that when we're building those service diagrams. Likewise, NSX integrates at a couple of points. This loose coupling is really handy for us. One of the things we've found is that we don't have to change the OpenStack ZenPack's impact model to add support for one of these other technologies; it's generic. We put the burden on the person writing the ZenPack for the other technology to integrate to OpenStack, rather than vice versa, so we're not constantly having to upgrade the OpenStack side of things, and the domain expertise is kept in one place. We don't need a complete map, just some subset, and as long as there's a touch point there, that's really all we need to do our impact analysis.
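A toy sketch of what that "loose correspondence" might look like in code: match objects from two independently built models on a shared identifier and record the link so impact can propagate across it. The field names and data are invented for illustration; the real ZenPacks express this through their own relationship machinery.

```python
# Toy illustration of loose coupling between two models: Neutron ports (from the
# OpenStack ZenPack's point of view) and Open vSwitch ports (from the OVS
# ZenPack's point of view). Only the objects that correspond need to be linked;
# everything else in either model can stay unknown to the other side.

neutron_ports = [
    {'id': 'port-aaa', 'network': 'net-1', 'device': 'instance-1'},
    {'id': 'port-bbb', 'network': 'net-1', 'device': 'instance-2'},
]

ovs_ports = [
    # OVS interfaces commonly carry the Neutron port UUID in external_ids.
    {'name': 'tap-aaa', 'external_ids': {'iface-id': 'port-aaa'}},
    {'name': 'tap-ccc', 'external_ids': {'iface-id': 'port-zzz'}},  # no match, fine
]

def build_correspondence(neutron_ports, ovs_ports):
    """Return (neutron_port_id, ovs_port_name) pairs where both sides agree."""
    by_id = {p['id']: p for p in neutron_ports}
    links = []
    for ovs in ovs_ports:
        neutron_id = ovs['external_ids'].get('iface-id')
        if neutron_id in by_id:
            links.append((neutron_id, ovs['name']))
    return links

# Impact can now propagate across each link: an OVS port failure maps to a
# Neutron port, which maps on up to instances, tenants, and services.
print(build_correspondence(neutron_ports, ovs_ports))
```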
So again, this is commercial only, but it shows a little bit about how this actually works in practice. This is an LVM example. At the bottom, and it might be a little hard to see, we've got an actual low-level Linux block device, and it rolls up through all the LVM concepts until it's a volume, and then it jumps over into OpenStack land and says: oh, there's a volume in OpenStack that corresponds to that one, what's attached to it, and it continues on up to the top. Same thing for Ceph. Of course, in Ceph it's a little more complicated, there are more moving parts, but the touch point is basically just at that volume level, and all that other stuff at the bottom, the Ceph cluster part, has already been done; that's already part of our Ceph ZenPack. So now, just by adding that small bit of glue between the two, we can draw a much more interesting picture. And of course when you get to APIC, as you saw earlier, it's even deeper and even crazier. So hopefully you can see there's some value there. This is really what we've been trying to accomplish: to take the knowledge we already have of all these different technologies, pull it together, and relate it to the OpenStack technology so you can build this holistic view. So thanks, any questions? How are we doing for time? No, we've got plenty of time. Can you go to the mic if you have a question?

Is there any plan to start taking advantage of the new Ceilometer components, the new telemetry components like Gnocchi and Aodh? Yeah, we've looked at them a little bit. We definitely would like to; I haven't gotten there yet. Mostly we'd like to because the way we currently do it is a pain to install, since you have to put this extra component on your box. So if we can get to the point where we don't have to do that anymore, we'd like to. That's definitely something we're looking into, and one of the things I wanted to learn a little more about at the summit.

Yes, I had a question. Can you talk about two things? One is, what kinds of problems have you faced with using Zenoss for monitoring? And also, how did you come to the conclusion to use Zenoss? Well, I work for them, so that's the simple part of that one. But the challenge we originally had when we first built this was really how to apply those concepts to a virtualized environment like this. The way Zenoss thinks about devices, there's this idea of a high-level thing called a device that has components. When you apply something like VMware or OpenStack, it looks like a single device with lots of little components. But when you consider that in the OpenStack world one of those components is something called a host, and those hosts are themselves devices, we had a unique challenge, and we actually had to build a new way to think about that, where we can have existing Linux monitoring going on on a Linux device related to a component on an OpenStack device, and bidirectionally. So that was a challenge; it was pretty interesting to solve. I can probably address the other part of that question. I was using Zenoss before I came to work for Zenoss, so I was unbiased at the time. I came from the world of using Nagios plus whatever the graphing solution of the day was for Nagios, and pretty much nothing for events.
So I just kind of got tired of writing my own glue between all those things and having no good event management solution. I kind of had to cobble together a solution that was similar to Zenoss, but I decided my efforts were probably better spent elsewhere, and I just used Zenoss, which had already cobbled this together for me. I actually had a similar experience. I was working on a similar product at another company and saw a Zenoss demo, and it was like, oh, this is really similar to what we were doing. So that was one of the reasons it attracted me.

My question is about server discovery. As you add another compute host or another Ceph node or something like that, how much of that is automatically discovered versus going in and configuring a host through your product? And I guess the other side of that would be instances. If we did want to monitor instances, could you say, hey, any instances in this particular tenant, or all tenants, automatically get added to monitoring, something like that? I'll take the second part first: that isn't supported right now, but we have done it before with other virtualization technologies, so we're thinking that's probably a general capability we're going to add for this sort of use case. Discovering physical hosts tends to be trickier, because discovery is one of those things that works really well when you're talking about logical concepts; when you get down to a certain level, you don't always have a good source of truth for what is supposed to be where. So we do what we can. We'll definitely pick up new hypervisors and new hosts in OpenStack, and on the Ceph side it's not bad, but the auto-discovery can be problematic. We do a pretty good job in both technologies there, but there are others where it might be a manual step; it depends. One more piece on that: if you discover the Linux server, and let's say you already have OpenStack discovered, the correlation is formed immediately once both are in the system, regardless of which order you discover or add them in. You don't have to configure that part.

So from a production operations standpoint, you think about services and a service that you're providing. In monitoring, you can monitor all the different layers and all the data, and you can even correlate it, right? You're talking about discovery and the analytics piece, but what about synthetic transactions? The idea of actually synthesizing, from your monitoring system, the whole set of activities that a typical subscriber, say, would go through, something like that. So a lot of our monitoring of other systems could really be thought of as synthetic transactions, because we're agentless; a lot of the time the management API is the same as, say, the tenant API, so in many cases we are doing synthetic transactions, because we're accessing the system from the outside. We do have ZenPacks for synthetic web transactions where you can say: go here, fill out this form, click this button, make sure this appears on the next page, and so on. Kind of the same thing for SQL: we can run synthetic SQL statements and that kind of thing. But generally, synthetic transactions come down to the protocol; as long as it's web, as long as it's HTTP, you're covered.
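For a flavor of what a synthetic web transaction check might do, here is a small self-contained sketch in Python using requests: walk through the steps, check for the expected content, and time the whole thing. The URL, form field names, and expected text are placeholders; Zenoss's web transaction ZenPack has its own configuration format rather than raw Python like this.

```python
import time
import requests

def synthetic_login_check(base_url, username, password, expected_text):
    """Walk through a login flow like a user would; report success and latency."""
    start = time.time()
    session = requests.Session()

    # Step 1: load the login page (a real check might also verify a form is present).
    resp = session.get(f'{base_url}/login', timeout=10)
    resp.raise_for_status()

    # Step 2: submit the form; the field names here are placeholders.
    resp = session.post(f'{base_url}/login',
                        data={'username': username, 'password': password},
                        timeout=10)
    resp.raise_for_status()

    # Step 3: confirm the expected text appears on the landing page.
    ok = expected_text in resp.text
    elapsed = time.time() - start
    return ok, elapsed

ok, seconds = synthetic_login_check('https://dashboard.example.com',
                                    'demo', 'secret', 'Overview')
print('transaction ok:', ok, 'took %.2fs' % seconds)
```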
I have two questions. I think the ZenPacks are pre-packaged, or you can download them yourself, and I wonder if you allow us to reprogram or customize a package. The second thing is, I wonder whether Zenoss has the ability to scale up by itself, or do we need to define our own solution to scale the monitoring servers? I'll take the second part first. On scaling, I just showed you a glimpse and said you turn the instances up and down. Just like the Zenoss application, Control Center is fully programmable and has an API. We don't have any built-in auto-scaling, because we're not assuming there's an elastic infrastructure underneath us, but you can certainly drive that API to add new hosts to the pool and turn the number of instances of all of the services up and down. In terms of adding to or extending the ZenPacks that are already in the system: we do that a lot ourselves, and we use the same ZenPack tools and APIs that you would. One ZenPack can depend on another; certainly the OpenStack ZenPack depends on our Linux monitoring ZenPack. So if you came along, you don't have to modify our ZenPacks and then worry about incorporating upstream changes into yours. You just create a new ZenPack that depends upon the one you want to extend.

Hi. When you initially started, you said this can monitor across layers, like the app layer, PaaS layers, or infrastructure layers, but mostly the presentation focused on the infrastructure layer. So can we monitor the applications, as well as the PaaS and the software installed on the VMs? That's question number one. Two is, how is it different from StackStorm, the other monitoring solution that's in place, which also monitors across layers? I'm not really familiar with StackStorm, so I won't be able to provide a good comparison with that. But in terms of the layers, in the little time we had we focused on what Zenoss focuses on, and Zenoss has definitely focused on the infrastructure. The main reason I put "app" up there is that a lot of apps are infrastructure; RabbitMQ, for example, is an app, but it's infrastructure. You can certainly do app monitoring. We have the web transactions, and a lot of our customers do custom application monitoring, but it's not something we do out of the box, because it's a custom application. Thank you.

I'd like to know a little bit about the architecture of how this works. Is it more of a central database, and then you deploy collectors at each site and it does SNMP monitoring? Is that how it works? So the architecture is: there is a concept of a central system. That central system can be deployed on many hosts; it gets deployed onto a pool of hosts. We have a variety of databases for the different kinds of data we have, because we essentially have model data, metrics data, and events data. Those are the three big kinds of data. Our metrics, as you might have seen when I brought up our application deployed in Control Center, go through OpenTSDB and get stored in an HBase cluster. Our events all get stored in MySQL, indexed by Lucene, and that's all handled by the central pool. And then you deploy collectors, which are dumb agents that you can destroy and recreate at will, potentially at remote sites, to actually do the collection work and send the data back to the central databases. Gotcha. Now, let's say we're in a virtualized environment and we have a bunch of VMs with guest OSes.
How does Zenoss monitoring go as far as services or packages, RPMs, the stuff that shows up on a guest OS? Because if I configure SNMP or whatever on the guest OS, it still kind of sees it as a physical entity when I monitor it, right? But if I'm using a ZenPack for, I don't know, VMware or OpenStack or something, I'm curious to know to what extent it can see application-level data. Yeah, it's a good question. We don't make any assumptions about whether a server is physical or virtual; it could certainly be either one. It just comes down to what's available through the thing you're talking to. So with a VM, we can get a certain amount of data from vSphere about the VM, but it's not all the data we could have gotten from monitoring the guest directly. The same thing goes for XenServer or any of the other virtualization platforms we support. So normally what we recommend, if you are in control of the instance guests, meaning they're not tenants', is: yes, monitor them too, and we'll draw the connections for you, so if a VM guest operating system has some kind of failure, it's tied to the VM as we know it from OpenStack. Is this mostly read-only monitoring, or do you also do read-write? Yeah, it's read-only monitoring. Okay, thank you.

I have a couple of questions. Just one second, please. Just an extension of the previous question we had: when we talk about root cause analysis, the kind of root cause you're doing is based on dependency relationships; it's a modeled system. So what about systems which can't be modeled, which are external? Can we write policies which can tell me about the root cause, so that if certain conditions or events are met, we have a root cause? How do you actually achieve that? So our impact analysis is definitely very focused on discovery. Some of us have had experience with older BSM-style tools where it's really rule-heavy: you do a lot of configuration, and then all of your rules fall apart as the system evolves. So we try to stay away from creating a lot of rules for dependencies, but you can create things called logical nodes and put them into your impact graph to cover those cases where we can't get a model of the system. You create your logical nodes, which are essentially just event filters, saying: if events occur matching this criteria from these systems, I'm going to treat that as the state of this node, and this node fits into these services like so. So we still use dependency management; you just create logical nodes. So the impact analysis is more like a graph traversal that you do? That's exactly what it is. Okay, I see. Thanks.

Yes. Are there any remediation capabilities inside that you can configure? Oh, actually, that was one thing I meant to go through: in the event management we have this concept of triggers, which are just event matchers, and then actions. So if events occur that match certain criteria, actions can be taken. Out of the box those actions are things like send an email or send a page; there's a ZenPack to go to PagerDuty with it, and you can execute commands. There's no specific remediation, but there's definitely a call-out where you could invoke remediation. Remediation on the target clients? Yeah. Is that possible? It would be a function of what kind of remediation you wanted to do. Some people have used runbook automation, calling out to runbook automation systems to do the remediation there.
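For intuition about that "it's a graph traversal" answer, here is a small sketch of propagating a failure up a dependency graph to see which services are affected, which run in reverse is the root-cause question. The graph shape and node names are invented for illustration; Zenoss Impact layers its own policies (and the logical nodes mentioned above) on top of this basic idea.

```python
from collections import defaultdict, deque

# Edges point from a dependency to the things that depend on it
# (host -> hypervisor -> instance -> tenant -> service); all names are invented.
impacts = defaultdict(list)
for dep, dependent in [
    ('host-compute-01', 'hypervisor-01'),
    ('hypervisor-01', 'instance-web01'),
    ('host-network-01', 'dhcp-agent-net1'),
    ('dhcp-agent-net1', 'instance-web01'),
    ('instance-web01', 'tenant-acme'),
    ('tenant-acme', 'service-webshop'),
]:
    impacts[dep].append(dependent)

def affected_by(failed_node):
    """Walk the impact graph upward from a failed node; return everything affected."""
    seen, queue = set(), deque([failed_node])
    while queue:
        node = queue.popleft()
        for dependent in impacts[node]:
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

# A failed physical host bubbles up through hypervisor, instance, tenant, service.
print(affected_by('host-compute-01'))
# Running the same traversal over reversed edges answers the root-cause question:
# which leaf failures could explain a service-level event?
```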
And the dashboards, do they provide any policy-based capabilities for operators, like blocking certain events from showing up, those kinds of condition-based things? Is that configurable? Yeah, that's definitely configurable. And how about the moving parts? You talked about OpenTSDB; is that something we have to manage separately as part of this? No, it's all self-contained. You just deploy the Zenoss application and it sets up your HBase cluster and all that. You don't even know it's there until you go under the covers and see the service. So from a troubleshooting standpoint, if things get out of control, are there troubleshooting steps and things like that? Yeah, it's all still there under the covers, and within our application we have centralized log management, so you can go to one central place to look at logs and that kind of stuff.

We still have a few more Zenoss folks up here, so come find us before you head out. Any more questions? Raise your hand. I had a question: since OpenStack has different versions and may have different objects or relationships, can the current Zenoss support all the different OpenStack releases? So far, so far we've done well, but if we get into a situation where there's something contradictory, we will definitely do our best to resolve it. Generally the way we try to do this is to test with a new version as soon as we can and make sure there are no surprises, but that's our intent. We don't want to have a bunch of different versions of this ZenPack; we want to try to keep it one. So, so far so good.

All right, and for those of you who have stuck around this long, you get a prize for hanging out: we have a happy hour Wednesday. What's the start, six o'clock at Handlebar? Come by, get a free drink. Thanks.