So, hi, I'm Kalyan, I work as a system admin at DirectEye. At DirectEye we deal with a lot of graphs, a lot of metric collection tools, a lot of monitoring. My friends talked about digging diamonds from a coal mine; ours is a kind of coal mine too, but we have a huge number of diamond mines of our own, because each application uses Kibana, Graphite, Ganglia, or some other graphing tool. So there is a lot of data, a lot of metrics, and for a Redis or Varnish application alone you can have 150 graphs; it is difficult for a human to look at those 150 graphs and predict issues from them. So the operations team needs to bring in techniques like training models to predict anomalies and to correlate those anomalies. A graph should be used proactively to predict issues, rather than reactively looking at the graphs after an incident and then reasoning about the cause. Of course, that is a good way to learn things, but a proactive approach is better. We are not the first to talk about this: Etsy has done it with Skyline and Oculus. Skyline is a tool that detects anomalies. It works fine with Graphite, but since we have a lot of other metric collection tools, we are trying to implement a generic Skyline with pluggable modules which can get data from Kibana, from Graphite, from Ganglia, and then create training models based on statistics such as means, standard deviations, linear regression, and least mean squares. We could bring in further statistics or algorithms to the training model; to reduce false positives and false negatives you can run more algorithms and take a consensus among them, because anything that is visible to a human eye can also be made understandable to a machine, so that it too can flag an anomaly.
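The consensus idea above can be sketched roughly as follows. This is not Skyline's actual code; the function names and the three-out-of-N voting threshold are illustrative assumptions, and each detector is a deliberately simple stand-in for the statistics mentioned (mean/standard deviation, moving average, least-squares trend).

```python
import statistics

def stddev_from_mean(series, value, threshold=3.0):
    """Anomalous if value is more than `threshold` std devs from the series mean."""
    mean = statistics.mean(series)
    stdev = statistics.pstdev(series)
    return stdev > 0 and abs(value - mean) > threshold * stdev

def stddev_from_moving_average(series, value, window=10, threshold=3.0):
    """Same test, but against only the most recent `window` points."""
    recent = series[-window:]
    mean = statistics.mean(recent)
    stdev = statistics.pstdev(recent)
    return stdev > 0 and abs(value - mean) > threshold * stdev

def least_squares_residual(series, value, threshold=3.0):
    """Fit a straight line to the series, then test the new value's residual."""
    n = len(series)
    mean_x = (n - 1) / 2
    mean_y = statistics.mean(series)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(series)) / \
            sum((x - mean_x) ** 2 for x in range(n))
    intercept = mean_y - slope * mean_x
    residuals = [y - (slope * x + intercept) for x, y in enumerate(series)]
    stdev = statistics.pstdev(residuals)
    predicted = slope * n + intercept
    return stdev > 0 and abs(value - predicted) > threshold * stdev

ALGORITHMS = [stddev_from_mean, stddev_from_moving_average, least_squares_residual]

def is_anomalous(series, value, consensus=2):
    """Flag the value only if at least `consensus` algorithms agree it is anomalous."""
    votes = sum(1 for algo in ALGORITHMS if algo(series, value))
    return votes >= consensus
```

The consensus threshold is the knob that trades false positives against false negatives: requiring more algorithms to agree suppresses noise from any single detector.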
So it helps us point to a problem much faster, and it helps us predict issues. The second problem we face is anomaly correlation, because there are a lot of anomalies: an HTTP load anomaly could be due to a backend, say our Redis cache, and a Redis cache issue could be due to the MySQL or MS SQL database we have. All these issues do exist, so you need correlation. Etsy has a tool for this, Oculus, which is open source, and we are working on mapping to Oculus. What Oculus does is find anomalies by the time at which they happen: if a bunch of anomalies happen at the same time, they are correlated together. And since you have Kibana and Logstash, and Logstash already has a good mapping between the log details, based on that you could construct a graph and an adjacency matrix, and then you can search, cluster, and correlate things. This is something we have a prototype for, and we are trying it for our DDoS mitigation, because we can pinpoint the cause of a DDoS, whether it is at the UDP level, the DNS level, or the HTTP level, and it works quite well for us. We will probably scale it up and release it as an open source tool, and once it is ready we will probably have a talk about it at the next Rootconf. Hope it goes well. Thanks. Thank you.
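The same-time correlation step described above can be illustrated with a toy sketch. This is an assumption about the approach, not Oculus itself: anomalies are bucketed by timestamp, and metrics that misbehave in the same bucket are reported as one correlated cluster. The metric names and the 60-second bucket width are made up for the example.

```python
from collections import defaultdict

def correlate_by_time(anomalies, bucket_seconds=60):
    """Group (metric, unix_timestamp) anomalies that fall in the same time bucket.

    Anomalies landing in the same bucket are treated as one correlated cluster;
    buckets with only a single anomalous metric are dropped.
    """
    buckets = defaultdict(set)
    for metric, ts in anomalies:
        buckets[ts // bucket_seconds].add(metric)
    return [sorted(metrics)
            for _, metrics in sorted(buckets.items())
            if len(metrics) > 1]

# Hypothetical anomalies: three metrics spike within seconds of each other,
# one unrelated metric spikes much later.
anomalies = [
    ("http.load", 1000),
    ("redis.latency", 1010),
    ("mysql.slow_queries", 1015),
    ("disk.usage", 5000),
]
print(correlate_by_time(anomalies))
# The first three land in one cluster; disk.usage stands alone and is dropped.
```

A real implementation would replace the flat buckets with the adjacency matrix mentioned above, so that clusters can also be searched and walked along known dependencies (HTTP → Redis → MySQL) rather than relying on timing alone.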