So yeah, while selecting a topic for my talk, I basically settled on monitoring using all these tools, but my mentor pointed out it should look glamorous, it should look like it does something. So the question is: is "automated and adaptive monitoring" just a pair of glamorous words, or did we actually implement it? Monitoring, as my topic suggests, can be of two types here: automated and adaptive. Why do we say automated? In a very general context, automated means some code or control system controlling the way something is supposed to work, so as to reduce the manual effort involved, and automating a monitoring solution means the same in every sense of the word. This is something we try to achieve through our Chef scripts; I'll be covering Chef in slight detail, because there are a lot more tools to cover after that. The way we automate our monitoring solution is that we drop scripts into the machines that need to be monitored, the nodes, and we register them with the Nagios server. And why do I look at it as adaptive? Everybody would think automation already does it: someone does the task for you, putting the scripts on your machines, and they come up on Nagios. Adaptive is more of a cloud context, where the number of containers or VMs or machines you're using increases and the infrastructure has to scale. How do we achieve the adaptive part? Also via Chef, but in a somewhat hacky way: we drop in scripts and bootstrap the agents so that the right scripts land on the right machines. So a web server running Passenger would get the Passenger monitoring scripts, and not, say, a CI agent monitoring script.
So how that goes is we divide our Chef cookbooks by the specific role they're supposed to fulfil, and those particular scripts are part of those cookbooks. The components I'm going to use to build an automated and adaptive monitoring solution are Chef, of course, or some sort of configuration management, I prefer Chef; next, Nagios, a simple but very powerful monitoring tool with a lot of features that we'll discuss later; and third, Graphite. Graphite is primarily a plotting tool: all the performance data that Nagios gathers can simply be dumped into Graphite to plot graphs, and a lot of analytics can be done in Graphite as well. It gives you a lot of freedom in what kind of analytics you perform, the scaling you apply, and how you superimpose graphs on each other. Coming down to Chef: there has already been a talk on Chef, I think the very first talk I attended was a Chef talk, and there seem to be a lot of misconceptions around it. One question somebody had asked was whether Chef can be used to automate development boxes. Well, the answer is yes. If it's a system, it can be automated; it can be configured using Chef or Puppet or whichever configuration management tool you're using. The second thing somebody asked was: if I make a change in a cookbook and upload it to my Chef server, will that directly trigger my agents to run the Chef client, to compile and execute the cookbook? The answer is no, because it's not an SCM. It's a simple client-server infrastructure for your configuration management. Chef primarily works on a client-server setup where all agents, called nodes in Chef terminology, are registered to the Chef server. Chef has its own DSL, which is written in Ruby, so it becomes really easy for Ruby developers to put in their templates or their files. And Chef is very adaptive.
We have resources in Chef which adapt themselves to the platform you're using. So if you just declare a package, say an httpd install, and you run it on Ubuntu, in the background it will run an apt-get install. If you run it on Fedora or CentOS, it will do a yum install. The only thing is you have to be very specific, very clear about the package name you use, because Chef will pick that particular bit up literally from what you've written in your code. Chef also enables you to perform system-specific searches to gather information about your infrastructure. What happens in Chef is that your node information is stored as JSON on your Chef server. So while you think you're just making changes through a knife plugin, the Chef server actually converts the data into JSON, and all the key-values of that JSON are also attributes, which are searchable, which can be changed, overridden, or set to false. Chef also allows us to scale our infrastructure. Each new node has a Chef configuration file, the client.rb file, which is stored in /etc/chef, a directory that gets created when you install Chef. This points to your Chef server URL: there is a specific key-value called chef_server_url. The moment you configure it to your Chef server (the Chef server runs on port 4000, and the web UI on 4040), your client gets registered to your Chef server, and all the polling it does, all the cookbooks it runs, will go through that Chef server. So yeah, these are the basic points I just discussed. Now we'll move a little bit towards the Chef server. It's basically a central distribution point for all your cookbooks. If you really want a developer workstation, a Git repository is advised for your cookbooks. But just committing your cookbooks into Git does not mean you're committing them to your Chef server.
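As a minimal sketch of that platform-adaptive behaviour (the package names here are assumptions; httpd is the Red Hat-family name for Apache, apache2 the Debian-family one):

```ruby
# Hypothetical recipe snippet: the same resource maps to yum on
# CentOS/Fedora and to apt-get on Ubuntu/Debian behind the scenes.
package "httpd" do
  action :install
end

# The package *name* is taken literally, so naming differences between
# platforms still have to be handled by you, e.g. via the platform check:
apache_pkg = platform?("ubuntu", "debian") ? "apache2" : "httpd"
package apache_pkg
```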
You have to specifically upload the cookbooks to Chef. This is a slight confusion people have when starting out with Chef, which even I had when I started. Management and authentication of nodes is another feature of the Chef server. The components of a Chef server are: the API service, chef-server and chef-server-api, which runs on port 4000; the management web UI, a basic web UI that shows you all your nodes and all your cookbooks, which runs on 4040; chef-solr, which is in all essence an indexer, running on port 8983; and a queuing server, which is RabbitMQ. And there's a data store, CouchDB. So what really happens is: you upload a cookbook, it gets stored in your CouchDB database, and whenever there is a request, solr indexes it; these index requests are queued in RabbitMQ, your basic queue, and the responses are returned to your nodes or your web UI. CouchDB runs on 5984 and stores the data of all the nodes and all the cookbooks; everything you do on your Chef server ends up in CouchDB. Well, that was a little bit about the Chef server. A Chef client is a node that is registered on the Chef server. It's pretty much the container where everything your cookbook is supposed to do actually happens. Your cookbooks are not compiled on your Chef server; they're just stored there. They're compiled and executed on your Chef client. So this is what a client.rb looks like. This is where your Chef server URL goes. If you haven't configured it, it points to localhost the first time; if your localhost is also the Chef server, then it happens to work for you. But if it's not, you'll get an error where your Chef client run says it cannot contact the Chef server.
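A minimal client.rb along those lines might look like this (the URL and node name are placeholders, and port 4000 matches the old open-source Chef server being described here):

```ruby
# /etc/chef/client.rb -- sketch; all values are assumptions
log_level              :info
chef_server_url        "http://chef.example.com:4000"
node_name              "web1.example.com"
validation_client_name "chef-validator"
validation_key         "/etc/chef/validation.pem"
```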
The chef-validator: when a Chef server is created, there are two users created by default. One is chef-validator, and one is chef-webui, which is the first admin that gets created. The chef-validator comes with a PEM file, a private key/public key sort of arrangement. For any workstation where you want to run knife, your management CLI, but which is not itself a node, you need an admin key; once you have a common admin key, you share that with the knife client. So, this is what a normal node JSON looks like. All the keys are in the form of attributes: name, which will be the hostname, the FQDN of your node; the Chef environment, and the Chef server by default has the _default environment. You can also create your own environments, where you can do version control of your cookbooks: you can freeze the version of a cookbook and upload it to the environment. And this is a node we created, where we specified a Java version, a Java home, and Go, because I'm from ThoughtWorks and Go is our CI. And of course Nagios, because we're going to be doing a bit of monitoring later, so we need to configure Nagios. This is just a sample JSON I'm showing you, where all these attributes, NRPE, Nagios, Go, can be defined, and they can be defaulted to or overridden. So now we come to the next part: Chef cookbooks. These are the basic units of distribution in Chef, and all the magic you want on your agents is configured in them. A Chef cookbook executes resources, and it can contain all these files: attributes, like I mentioned, where you'll have a default.rb file in which you write the attributes you need to override or the attributes you need to set as defaults.
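For illustration, a node JSON along the lines being described might look like this (the attribute names and values are assumptions, not the exact contents of the slide):

```json
{
  "name": "web1.example.com",
  "chef_environment": "_default",
  "normal": {
    "java": { "version": "1.6", "java_home": "/usr/lib/jvm/java-6" },
    "nagios": { "nrpe": { "port": 5666 } }
  },
  "run_list": [ "role[webserver]" ]
}
```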
There are also libraries that you can configure in your cookbooks, say if you specifically want to write an LWRP, a lightweight resource and provider. There was another question in the previous talk where somebody asked: what if I have my own custom installer, nothing like yum, what if I write my own installer? Chef has some inbuilt cookbooks in the Opscode repository, and one of them is apt. Things like LWRPs you can configure yourself, where you'd be using more of libraries, providers, and resources. The only catch is that if you're using that particular cookbook alongside the cookbook that should install via your installer, it has to be loaded on your Chef server and has to be part of the role that you're applying to the node. You can't say my cookbook needs to use that particular cookbook but not have that cookbook listed; it will not run on your node. Then you have files: files are static content that you need to plant on your node, essentially static files. Templates are primarily ERB files, for when you want to load content dynamically onto your host. Now this is exactly what makes our system adaptive, because, as I'll show in the coming slides, we pull the nodes from our Chef server and put them into the Nagios configuration file. What really happens with templates is that during the run, Chef polls the Chef server for any new node that has been added, and if you pass a variable into the template, you can pull the nodes through as an array or as a hash and insert those values, so your configuration files stay updated. Recipes are the basic content where we write our resources: what exactly needs to be done, and how files or attributes are actually set and where they need to be set.
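The files-versus-templates distinction can be sketched like this (file names, paths, and the variable passed in are assumptions for illustration):

```ruby
# Static file: dropped verbatim from the cookbook's files/ directory.
cookbook_file "/etc/motd" do
  source "motd"
  mode   "0644"
end

# Dynamic file: rendered at run time from templates/default/nrpe.cfg.erb,
# with node data injected through the variables hash.
template "/etc/nagios/nrpe.cfg" do
  source    "nrpe.cfg.erb"
  mode      "0644"
  variables(:allowed_hosts => node["nagios"]["server_ip"])
end
```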
You may have nothing else in your cookbook, but you need to have one recipe, a default recipe, even if it doesn't do anything. There is a metadata file that basically defines who's written it, the maintainer and email IDs, your licensing, and your version; so your version control happens here in metadata. Also, if you've included any cookbooks in your recipes, you have to put in a depends clause here; the standard dictates that you do it. Then we have definitions. Definitions are also, you could say, reusable collections that you may want to use across your cookbooks. This is how a cookbook really looks when you shoot a command with knife, say knife cookbook create: by default it gives you all these files and directories. Whether you use them or not is up to you. When you upload a cookbook to a Chef server, any directory that's empty will not get uploaded; only whatever content is there will go up. You need to say that again? Yeah, so you have resources in your recipes that do that. We have something like an execute resource, but that's not an idempotent resource, and basically the reason you need a configuration management system is that you need things to be idempotent. With execute specifically, you shoot a command, and that command will be run each time the Chef client runs. So if it sets up a cron job, it will keep running each time; it won't check whether the command has already taken effect. You need to add a small test clause, which can be done with conditional guards like not_if or only_if. In fact, if you want to run scripts, you can plant the script somewhere and run it through Chef by saying execute the script, and you can change the mode of that script to executable or however you want. So it's pretty flexible; you can do a lot with scripts.
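A guarded execute resource along those lines might look like this (the script path, cron entry, and file names are assumptions):

```ruby
# Plant a script from the cookbook and make it executable.
cookbook_file "/usr/local/bin/check.sh" do
  source "check.sh"
  mode   "0755"
end

# Hypothetical: without the guard, this would re-run on every Chef
# client run; not_if makes the execute effectively idempotent.
execute "install-monitoring-cron" do
  command "crontab -l | { cat; echo '*/30 * * * * /usr/local/bin/check.sh'; } | crontab -"
  not_if  "crontab -l | grep -q check.sh"
end
```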
Well, his question was: if I have to run a script through a recipe, how would I achieve that? Now, after cookbooks, we come to roles. A role basically helps in making the functionality of the system you're configuring more verbose. A role is essentially a collection of cookbooks, and in the run list of the role you list down all those cookbooks. The run list includes the cookbooks that are required to create the role. Say you want to set up a web server: it's not just Passenger that you need on it. You might need httpd, or Ruby to run some scripts, or Python, or a certain specific version of Java for your scripts. You can have specific cookbooks for specific actions: a Java cookbook, a Passenger cookbook, and you list all of these in one role. So when the run list of the role is written, you add these cookbooks in the sequence in which they should run. Don't mess up the sequence, because Chef runs everything sequentially: it pulls the role's cookbooks sequentially, runs each cookbook sequentially, and within a cookbook the resources you've written are run sequentially. So if you write a resource that, say, changes the mode of a file, and the file is only copied at a later stage, you will get an error, because at that point the file is not on the system. You need to be very careful about your sequence. Now, a typical role would look like this: it's a Ruby file; you can create roles via the knife plugin, or have a Ruby file in your Git and do a knife role from file. I won't discuss all these commands too much, because there's a whole list of knife commands you can read on the Opscode web page. The description is basically what it needs to do. This goes into the Chef server.
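A role file in that style might look like this (the role name, cookbook names, and attribute values are assumptions for illustration):

```ruby
# roles/webserver.rb -- hypothetical role; run list order matters,
# because Chef runs everything sequentially.
name "webserver"
description "Web server running Passenger, monitored via NRPE"
run_list(
  "recipe[java]",          # dependencies first
  "recipe[ruby]",
  "recipe[passenger]",
  "recipe[nagios::client]" # drops the NRPE plugins onto the node
)
default_attributes("java" => { "version" => "1.6" })
override_attributes("passenger" => { "max_pool_size" => 50 })
```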
The run list is all the cookbooks you're adding and the order they're run in. Then there's the environment run list: we discussed earlier that Chef helps you create environments, and based on the environment a node is in, it decides which cookbooks it is going to run on that particular node. Default attributes are the attributes that will be defaulted to. Override attributes win over those: if max_children defaults to 100, this will override it to 50, and max_children stays overridden unless it is changed here or in your cookbook. Next, we come to monitoring. Nagios is a very effective monitoring tool which, among a lot of other features, provides the ability to monitor your infrastructure. It also provides a pretty decent dashboard where you can see all the services you're monitoring and their status, and it keeps updating itself as and when polling is triggered. So, if it's the local system you're monitoring, the Nagios server itself, you may not need any extra plugin, but if you're monitoring a remote system, you would need a plugin called NRPE. Nagios has a lot of plugins that it uses to poll. Basically, it's a monitoring server which polls all the nodes configured towards it through NRPE, the Nagios Remote Plugin Executor. It also does outage detection, and it provides custom warning and custom critical levels that you can set up. So the Nagios built-in plugins have -w and -c flags where you set up the range, the particular thresholds for warning and critical. Accordingly, when a Nagios plugin returns a value, the status will be red for critical, orange for unknown, yellow for warning, and green for everything OK.
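For example, with two of the standard Nagios plugins (the thresholds and mount point here are illustrative):

```shell
# Warn when free disk space on / drops below 20%, critical below 10%:
/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /

# Warn when the 1/5/15-minute load averages exceed 5,4,3; critical at 10,8,6:
/usr/lib/nagios/plugins/check_load -w 5,4,3 -c 10,8,6
```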
You can also configure Nagios to send out alert messages through emails or SMS pagers, and there's something like problem acknowledgement for known issues: if you know that a particular machine is going to go down at a certain time, and when it comes down an email is sent to everybody, you can push in an acknowledgement, a status message, a comment saying this is a known issue and this is what I'm doing about it, it will be fixed. So everybody is alerted that this acknowledgement has been received, and people don't have to go out of their way to log into the network again and figure out what's going on. It also provides alert escalations, which need to be configured in your Nagios setup; that's not something I'm going to discuss in too much detail. So, when we talk about Nagios configuration, it basically derives all its configuration from text files stored in the /etc/nagios directory. The primary config file is nagios.cfg, and specific configuration is loaded from all these other files. How they differ from each other: commands.cfg stores only the commands you're going to run through your plugins. So if you write a custom plugin, just writing it does not ensure Nagios will run that particular command to test your remote machine; you would need to configure it here. Then contacts are configured for alert messaging: whoever should be contacted for whatever alerts goes into contacts.cfg. You can also configure contact groups, where you add members you've already defined in your contacts.cfg. I'll show you a sample contacts file. Hosts: every host that should be tested, that should be monitored by Nagios, is set up in the hosts file.
Now, what I've tried to do is use the hosts file as a template: each time, it pulls from the Chef server all the nodes that are currently registered to Chef and have the Nagios cookbook, and adds them to the hosts file. So in effect, every such host is monitored by your Nagios server. The fourth file is the localhost file. localhost.cfg holds the Nagios server's data about its own configuration, the IP, the name; basically it's a hosts file, but with the data for itself. services.cfg is the services you're going to monitor per host: every host that you add has to be configured against every service you add next. All of this can be configured through templates, four lines of code which get expanded, ironically, into something like a hundred lines, based on how many nodes you have. There are also time periods, where you set up polling windows: how much time should Nagios spend contacting the nodes and getting back responses, and how long should it work, 24x7, only weekdays, or only weekends? All that can be set up in your time periods. And there are templates, basic templates that you need to set up. Nagios gives you a host template, a generic host, and a linux-server template, which defines four or five lines with whatever you need in it. If there is some particular behaviour you want to associate with every node, you may want to define it here rather than defining it in the hosts file. Then you have a generic service: whatever a service needs, the time periods, the contact groups and everything, is configured in the generic service.
And there's a generic command and a generic contact, so you can also configure templates of your own: whatever flows through the definition of that particular object can be configured in your templates file. Then, going back to your hosts or commands, you can just use a "use" key and apply that template. I'll show you an example of that. Now, coming here to templates. So, if this is a contact that I've defined, there is a template called generic-contact, and that particular template is being used. For the host, you can see linux-server is the template name, and for the service it's the generic service. All these directives are specified once and hold true for each and every host: you can see it defines the notification period, the check period, the interval. The process_perf_data part is what pulls in all the performance data from your nodes. Flap detection, flapping, is basically how much time a particular service takes to come up, or whether it keeps coming up and down. There are a lot of other things you can configure in this; say you want to add contact groups, or add another person: you can say contact and add the name of that person, and this will flow through your host definition. Coming to NRPE: NRPE is the Nagios Remote Plugin Executor. What it does is retrieve the data of your remote services. It consists of, I think there's a typo on the slide, it's not chef, it's the check_nrpe plugin that is used. When you download the NRPE plugins, check_nrpe is actually the one that polls all the remote machines and pulls in the data. If you just shoot check_nrpe -H with the hostname of a node your Nagios server is monitoring, it will give you the version of NRPE. That basically shows your NRPE is working fine.
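The two invocations being described look like this on the command line (the hostname and command name are placeholders):

```shell
# Without -c: just asks the remote NRPE daemon for its version,
# which confirms the daemon is reachable and responding.
/usr/lib/nagios/plugins/check_nrpe -H web1.example.com

# With -c: runs the command named check_ram as defined in the
# remote machine's nrpe.cfg, and returns its output and exit code.
/usr/lib/nagios/plugins/check_nrpe -H web1.example.com -c check_ram
```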
Along with that, after the host, if you shoot a command using -c and give the command name you specified in commands.cfg, that command will be run against your node, the remote host your Nagios server is monitoring, and it will retrieve the data. If that particular plugin is not planted in the remote server, it will say it cannot find that command. So you need to be very specific when you're configuring Nagios and NRPE. All the plugins you want Nagios to query your nodes with have to be in the /usr/lib/nagios/plugins directory, or you may specify a directory on your own, and if you do, you should give the absolute path. The NRPE configuration file is stored under /etc/nagios. So this is what basically happens: your Nagios server has the check_nrpe plugin, which polls the NRPE service and all these plugins that are on your remote machines, retrieves the data, and gives it to you. Why do we talk about retrieving data? Because this is how we will store the data into Graphite to plot, or you may use a plugin for Nagios called PNP4Nagios, which plots all the graphs for you. This is a sample plugin that just queries the RAM of the system and puts in levels of alerts. Basically there are four return values that Nagios will identify. An exit value of zero means everything is fine and your Nagios output will be green. An exit value of one implies warning. So while you're writing your Nagios plugins, make sure your exit values coincide with the status you want the Nagios server to show; all these if clauses decide which level it is at.
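A minimal sketch of such a RAM-checking plugin in Ruby (the thresholds, the /proc/meminfo parsing, and the file name are assumptions; the exit-code mapping is the standard Nagios one):

```ruby
#!/usr/bin/env ruby
# check_ram.rb -- hypothetical Nagios plugin sketch.
# Standard Nagios exit codes: 0 OK, 1 WARNING, 2 CRITICAL, 3 UNKNOWN.

# Map a used-RAM percentage onto a Nagios state name and exit code.
def classify(used_pct, warn, crit)
  return ["CRITICAL", 2] if used_pct >= crit
  return ["WARNING", 1]  if used_pct >= warn
  ["OK", 0]
end

# Run only when invoked with explicit warn/crit thresholds.
if __FILE__ == $0 && ARGV.length == 2
  warn_at, crit_at = ARGV[0].to_i, ARGV[1].to_i
  begin
    info  = File.read("/proc/meminfo")
    total = info[/^MemTotal:\s+(\d+)/, 1].to_i
    avail = info[/^MemAvailable:\s+(\d+)/, 1].to_i
    used_pct = (total - avail) * 100 / total
  rescue StandardError
    puts "RAM UNKNOWN - could not read /proc/meminfo"
    exit 3
  end
  state, code = classify(used_pct, warn_at, crit_at)
  # Everything after the pipe is performance data Nagios can hand on to Graphite.
  puts "RAM #{state} - #{used_pct}% used | ram=#{used_pct}%;#{warn_at};#{crit_at}"
  exit code
end
```

It would be dropped into the node's plugins directory and wired up in nrpe.cfg like any other check.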
An exit value of two means it's critical, and an exit value of three implies unknown, that the plugin is unable to do what it wants to do. Then, to set up a sample plugin that you've created: first you drop that plugin file onto the node you want to monitor, into /usr/lib64/nagios/plugins if it's a 64-bit server, or /usr/lib/nagios/plugins. And in nrpe.cfg, you either include the directory where your plugins are, or you write the commands in here directly, or, if you want a custom .cfg, that is where the command would be written. But this command also has to come into your Nagios commands.cfg file. So there has to be a sync between your NRPE plugins and your Nagios server: all the commands need to be specified in the commands.cfg file, all the services you're monitoring need to be specified in the services.cfg file, and if there's a custom command, it also needs to be in your NRPE configuration, either in nrpe.d or as a line at the end of nrpe.cfg. So now, if we do a sample configuration of our Nagios server: in commands.cfg, I define my command and what exactly it is supposed to do. It will go to the plugins directory, pick up this file, that's my plugin file, and run it against the host address. The host address it will take from here, from the cookbook we're using, because we're dynamically planting all the hosts. The second thing we need to do, now that we've specified the command, is specify the service. The service explains exactly what needs to be done: it will run check_nrpe, and the exclamation mark basically means that after check_nrpe, this is the command that needs to be run.
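Putting that sync together, the three places a custom check has to appear might look like this (the command name, host, and paths are illustrative; $USER1$ is the standard Nagios macro for the plugins directory):

```
# On the node, in /etc/nagios/nrpe.cfg (or a file under nrpe.d/):
command[check_ram]=/usr/lib/nagios/plugins/check_ram.rb 80 90

# On the Nagios server, in commands.cfg:
define command {
    command_name  check_nrpe
    command_line  $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
}

# On the Nagios server, in services.cfg:
define service {
    use                  generic-service
    host_name            web1.example.com
    service_description  check_ram
    check_command        check_nrpe!check_ram
}
```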
So the name of this command has to tally with the name of the command here, and with the name of the command given in the nrpe.cfg file, right? And all this, the notification period, the notification intervals, can be set up. So yeah, now when we talk about a host: in my cookbook, my host will be configured this way. This will be my node's FQDN, and the alias can be anything, it depends on what you want to write there. The address would be the node's IP address. Why haven't I mentioned any values here? Because in my Chef cookbook, as I'll show you going ahead, I'm going to pull this data dynamically. This is the time periods file; this is how we'll define the time period, depending on what you define as normal work days and hours. And this is a sample contact that I would like to create; it pulls in the generic-contact template, and this is a contact group. This also goes in my contacts.cfg. Sorry? Yeah, not in time periods: when you go to the host definition, when we checked the generic host, there was a check interval, right? You can override it there if you don't want the default; if for one particular service you want it polled every hour, you can put it here. So yeah, now, how do we do Chef and Nagios together? The first piece is my default.rb, my default recipe. This is just the part where it loads. The first thing we need to do is a package resource to install Nagios, right? That can be referenced from anywhere. The most important part here is the nodes-from-solr search. Now, this is the searchability that I was talking about earlier. You can push a search directly to the server for all the nodes. So basically, "*:*" would be all my nodes: the first star is the attribute I'm searching for, and the second is the value I'm searching for. This gives me every node that is registered to my Chef server. Go agents, now, this is specific: we were trying to set up host groups.
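The two searches being described might look like this in the default recipe (the query strings are assumptions written in the style of Chef's solr search syntax):

```ruby
# Everything registered to the Chef server:
all_nodes = search(:node, "*:*")

# Only the nodes whose expanded run list includes the go-agent recipe,
# e.g. for building a host group of CI agents:
go_agents = search(:node, "recipes:go-agent")
```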
So that one specifically searches for nodes where the recipe for go-agent is applied, where the recipe with the name go-agent, that particular string, appears. So now, here's what happens: for these templates in my cookbook's templates directory, Chef picks up each object and applies all these attributes, the owner, the group, the mode. Now, when I declare variables and pass them into my template, then when this particular block of code is executed, it picks up the nodes from solr as an array, and for each element it creates an entry. So my hosts template basically looks like this, and here is where I'm going to put in my data. Now, since every node is JSON, I can easily pick up the node values, and whatever attribute I need gets assigned here. So when this is executed, in the actual file you'll be able to see the node name, the node FQDN, and the IP address that you've set for that node. So this dynamically populates your hosts file, and however many servers or nodes have been attached to or configured with your Chef server will appear here. Say I add two more nodes, and my cron job, or whatever scheduler I've set up, runs Chef, runs this particular cookbook, on my Nagios server every 30 minutes. Then every 30 minutes, if I've added a new node, it appears in my Nagios configuration and my Nagios web UI, and all its services start being monitored. And also, as I mentioned earlier, your services have to be per node, one service entry per node. So this too pulls from the nodes: it takes the service name I'm specifying here, that's the check-ram service, and for every node that it picks up, it adds this service check. So every time, it will check this particular service.
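To see what that expansion does, here is the same idea in plain Ruby's ERB, outside Chef, with a hypothetical two-node array standing in for the search results:

```ruby
require "erb"

# Stand-in for the array a Chef node search would return.
nodes = [
  { "host_name" => "web1.example.com", "alias" => "web1", "address" => "10.0.0.11" },
  { "host_name" => "web2.example.com", "alias" => "web2", "address" => "10.0.0.12" },
]

# A hosts.cfg.erb-style template: one host definition per node found.
template = <<~ERB
  <% nodes.each do |n| -%>
  define host {
      use        linux-server
      host_name  <%= n["host_name"] %>
      alias      <%= n["alias"] %>
      address    <%= n["address"] %>
  }
  <% end -%>
ERB

# Four lines of template turn into one block per registered node.
hosts_cfg = ERB.new(template, trim_mode: "-").result(binding)
puts hosts_cfg
```

Adding a third node to the array (or, in Chef, bootstrapping a third machine) simply yields a third define host block on the next run.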
If I do not add a host against the service name, that host will not be checked against the service. So it is very particular: whichever hosts you want checked against whichever services must appear here in your services.cfg. So, on to the host config. Now, if I had two hosts on my server, running that cookbook would populate my hosts.cfg like this: it populates the two hosts and the two IPs of those hosts, all put in dynamically. This is not something I put in my cookbook statically; it got populated. If I were to add another host and ran the Chef client again on my Nagios server, I'd have another entry, because it's pulling from the Chef server. All my nodes get registered to my Chef server while bootstrapping. So that's something that has to be kept in mind: the node has to be registered to the Chef server for you to achieve this. And my services file would look like this: per node, my services are configured, the same service doing exactly the same thing, but the host name changes. Now we'll move a little bit towards Graphite. The only functionality of Graphite that we've really exploited is the graphing part of it. It does real-time graphing. The front end is a web app; you can see the output through the web app. And the back end is a storage application called carbon. It is basically a Python application, and it has three processes that run in a pipeline: carbon-agent, carbon-cache, and carbon-persister. What essentially happens is that all the agents contact carbon and send it their data to be evaluated for real-time graphing. The primary process is carbon-agent.py; it starts off the two other processes in the pipeline, carbon-cache and carbon-persister. When agents connect to the carbon server, carbon-agent accepts the connection request along with all the data formatted according to what Graphite expects. How that formatting is done comes in the next part.
The agent basically forwards the entire stream to the carbon cache, where the data points are grouped by their associated metrics. So there's a lot happening in the Carbon part: the cache feeds all these data points to the persister, which then reads them and writes them to disk using a round-robin-style database. So essentially Graphite just pulls in all your data and puts it up on a graph; it beautifies it.

I don't think this is visible, actually. So this is my Nagios server cookbook, right? The first thing we need to do is add the perfdata formatting into our Nagios configuration file, and that appears in my templates. Is it visible? No, it's not. Can you guys see this? Now can you see? It's a little messed up, but that's the way it is; this has been written by Shawn Sterling. Basically, it formats the performance data and pushes it towards the Carbon source: the performance data file defines the format, and all the performance data that needs to go out goes in that format. Then we add these two commands, which again are not visible. They go under my graphios.cfg. Either I can put them there or in my commands.cfg; it really doesn't matter. If I'm putting them in a separate file, graphios.cfg, then, going back to nagios.cfg, there is a cfg_file directive here, in this particular tab: you need to include all the files that you're going to be using in your Nagios server, so my graphios.cfg file would be included here if I'm keeping it as a separate file altogether. So these are the two commands that need to be put in: the perfdata host command and the perfdata service command. These commands will then be attached to each host and service that I define. We also need to have graphios running.
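graphios does the actual Nagios-to-Graphite translation, but the core idea, turning a Nagios perfdata string into Carbon's plaintext protocol ("metric.path value timestamp", sent to Carbon's line receiver, port 2003 by default), can be sketched in a few lines of Ruby. The metric naming below is illustrative, not graphios's exact scheme:

```ruby
# Sketch: convert one Nagios perfdata string (e.g. "load1=0.52;5;10")
# into Graphite plaintext protocol lines for Carbon's line receiver.
def perfdata_to_graphite(prefix, host, service, perfdata, timestamp)
  perfdata.split.map do |pair|
    label, value = pair.split('=')
    # Drop warn/crit thresholds after ';' and any trailing unit (s, %, MB...)
    value = value.split(';').first.sub(/[a-zA-Z%]+\z/, '')
    "#{prefix}.#{host}.#{service}.#{label} #{value} #{timestamp}"
  end
end

lines = perfdata_to_graphite('nagios', 'web-server-01', 'check_load',
                             'load1=0.52;5;10 load5=0.40;5;10', 1_400_000_000)
puts lines
# nagios.web-server-01.check_load.load1 0.52 1400000000
# nagios.web-server-01.check_load.load5 0.40 1400000000
```

In the real setup this translation happens inside graphios, which reads the perfdata files that the two Nagios commands write out.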
You can have a graphite prefix if you want; it just organizes the data in a nicer way, say nagios.monitoring as the prefix, while the graphite postfix is decided by your services. The postfix defines the service, so a metric path reads: my Nagios is monitoring this particular node, checking this particular service. For every service that last bit changes. So there's only one line of configuration added per service in my services.cfg, one line added per host in my host.cfg, and the couple of lines we just saw for the commands. This is basically Shawn Sterling's page; he has an elaborate README with installation advice, so you can go through it and configure graphios. It's pretty simple: all the commands I mentioned and all the configurations are in his README.

So, since I couldn't show you the screens: this is how my Nagios would actually look. It's all green, which says everything is okay in our infrastructure, and these are all the services being monitored. Is it visible? Still not visible, is it? I don't know what more to do about this. Anyway, all these services are being monitored. The last service here is check-go-server, because I can see it: this particular VM is my Go server, so it's monitoring my CI server. But the next VM is my web server, so it's not monitoring Go or the CI there anymore; it's monitoring Passenger. And the next is my Go agent, so it will monitor my Go agent. You can easily add your own plugins and have them configured in your Nagios server cookbook. The process you'll follow is: put the plugin in your custom cookbook, go to your Nagios server cookbook, upload the command and the service, and get it running; that will populate it here. And this is how my Graphite would look; this is basically a superimposition of a lot of data.
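That "one line per host and per service" is a Nagios custom object variable that graphios picks up. Roughly (the variable names follow graphios's README; the values here are illustrative):

```
# services.cfg -- the single added line(s) per service
define service {
    use                  generic-service
    host_name            web-server-01
    service_description  check_passenger
    check_command        check_nrpe!check_passenger
    _graphiteprefix      nagios.monitoring   ; shared metric-path prefix
    _graphitepostfix     passenger           ; per-service suffix
}
```

graphios only ships metrics for hosts and services that carry one of these custom variables, which is why a single extra line is all the per-object configuration needed.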
So my graph can either be one graph, or two or three graphs together, and you can add colour to it and do a lot of analytics with it. So yeah, that's Graphite. [Audience question] What? Graphite is open source, graphios is open source, everything is open source. Yeah, that's all that I had to say about this, so any questions are welcome.

[Q&A] I'm sorry, I can't hear you. The Graphite DSL? No, we are not essentially going into Graphite and using its functions directly; we use graphios, which is a sort of wrapper, you can say. While we are passing the data from Nagios to Graphite, we don't alter it; we generally alter the graph in real time while looking at it. We're still to explore that part of it; we haven't really, thanks so much, we'll try to explore it. Anything else? It works on TCP, it's port based. I guess in the services configuration you can point to it and maybe poll the data through it; there may be specific documentation on this, but we haven't really come across such a problem yet. Thanks so much.
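The superimposed, coloured graphs mentioned above are what Graphite's render API produces from URL parameters; a hedged example (host and metric names are made up, the `alias`/`color` functions and `target`/`from` parameters are Graphite's):

```
# Two load metrics superimposed on one graph, coloured and labelled,
# over the last 24 hours (line breaks added for readability)
http://graphite.example.com/render?
    target=alias(color(nagios.web-server-01.check_load.load1,'red'),'web load')
    &target=alias(color(nagios.go-server-01.check_load.load1,'blue'),'CI load')
    &from=-24h
```

Each additional `target` parameter overlays one more series on the same graph, which is how several hosts' data ends up superimposed in a single plot.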