 So it's going to be definitely interesting. So enjoy the talk and please welcome Philip Thank you Okay, I know it's about lunchtime. I will try to be a bit entertaining so you won't fall asleep So let's dive into monitor your containers. Um, just to get an overview who was in my first talk about monitoring microservices okay, so Don't be disappointed this time. We're jumping more into the elastic second kind of like the history The previous one was more about programming. So let's dive into that I'm still with elastic the company behind elastic search log says Kibana beats all the open-source products I'm part of our infrastructure team. So we do stuff like containers AWS testing with Jenkins Even though we'd like to call it junk ins And we'd love to rewrite it, but that's a totally different story And then that is kind of a unix pipe by pipe that into developer advocacy. So I'm doing lots of talks Yes, and this talk is about logging all the things in particular to containers But you will see that so Quick show of hands who has more than 10 servers containers to monitor That's quite a few so how do you log if You say SSH plus tail that is the wrong answer It just doesn't really scale at some point anymore so Quick overview who's using the elastic stack Elk stack already? Okay, that's quite a few Anything else? Yeah, those are the rich ones. Yeah We always have a place for splank users at elastic as well But let's not go down that alley today So taking a step back. What are logs for us? Logs always you have a timestamp and You have something else and that's something else can be metadata Like what machine what IP address? lots of other stuff and Probably a log message, but the main thing is you have a timestamp at some point in time something happened and That is what we want to store and kind of centrally store and make searchable thereafter So how do we go about that? This is kind of the the three-minute story of how we got here Since the elastic stack or Elk stack is kind of well known Back in 2010 shy our CTO started elastic search With the tagline, you know for search And it was just for full text search and it worked. Well, it's very widely used So if you search on any of these sites behind the search box There is elastic search doing the hard work for you, even though you probably don't recognize it and Then the next step was that there were two open source projects one kibana one lock stash We thought well, this is a great thing. We want to extend it The first one kibana is the visualization part You have lots of data in your cluster and you want to see what is actually going on and you just want to visualize that It's kind of the window into your data and then lock stash lock stash is to Get data parse it enrich it and then store it somewhere else and that somewhere else is commonly elastic search But it can be different targets as well We have like 200 plugins to read from different sources and write to different destinations out So there are lots of choices and so those two were open source products And we we like to say we brought them into the elastic family Because I guess their founders wouldn't want to hear that they were bought so so we stick to the narrative that they joined the family and That is the well-known Elk stack and it's Yeah, it even got that the plushie like we even had them a mascot like the plushie elk elastic search locks the skibana and that name was very sticky and everybody liked it and It's also widely used if you're at CERN and you log from thousands of computers. They're using that Salesforce in the cloud. They're using it to lock stuff Government sex is a different story. They are using it mainly for auditing and compliance So every now and then Government sex does something bad and they need to pay a big fine and at some point they decided it's probably Better to invest into an auditing platform and avoid paying the hundred million dollar fine by investing five million or whatever into a compliance platform And that is kind of their use case And that worked all very well until this little fellow came along So the major downside with the elk stack Especially with the log stages that lock says was started off as Ruby out of performance reasons it was then jruby and you would always need to install the JVM on all your servers to collect your logs and Many ops people if you're not a java shop are not too keen on just adding the JVM just for logging and And That was kind of recognized. So beats are written in go you get native binaries without any dependencies And they are just like cheaper or agents or forwarders or whatever you want to call them So they are just something that's running on your system to collect Information as efficiently as possible and forward that information and get it off your server That is disappointing What what what what's happened here? This is very weird No, this is somehow screwed up Let let me open that again That looks better Whatever happened. So we added beats and beat is great. The only problem is there is no bee in elk and We did we didn't kill the elk. That's maybe too drastic Elk is in in I don't know elk retirement home or whatever. He's not that and he's still living a happy life And many people are still using that the term elk even though we we tried to get rid of that name and internally We have the so-called elk alert whenever somebody doesn't meet up and they call it We'll talk about the elk stack somebody from marketing. We'll reach out and say well, this is cool but please rename it because we we're trying to get rid of elk and We tried to come up with something else and the first thing we had was this one That is that is the bulk or the elk bee and And you can see it's it's it's a bee and it has elk horns and it's kind of the elk reborn and nicer The only thing is it doesn't really scale what happens if we need to add or want to add another product We need to rebrand everything again and need to make up another animal and It doesn't really work that well even though many people really loved the Belk And one of our engineers was so enthusiastic He even created stickers on his own and some people even have those stickers on their laptops Even though they are very rare because he just did a batch for kind of himself that that was the first try the Belk or the elk bee Then marketing went for the kind of boring choice. They said well, let's just rebrand it to elastic sex since that is the company and Whatever open-source products we have we will just stuff them into the elastic sex So if we come up with something new and exciting in a few months, it will still be the elastic stack afterwards And that's what the elastic stack looks like today So we have elastic search in the middle to store your data and that is where you queried You have Kibana on top to actually visualize and interact with your data in an easy fashion And we have beats and lock sesh at the bottom so beats is like the light with age in the shipper that is just sending data away and Lock sesh is more of a centralized component that can do parsing of your data and enrichment So parsing would be you have a log file for example a patchy or engine x or something like that And you want to extract meaningful fields these meaningful fields would be HTTP response codes the URL so you want to extract that to be able to search that in a meaningful fashion and Enrichment would be you have an IP address and you want to get the location of that IP address And you do that once you try to insert it or get the data Store the location right away and then you can very easily or efficiently query for all the users coming from the Czech Republic for example and We also recently had a few months ago a big version jump So for example elastic search jumped from version 2 to version 5 Because it's not so much more awesome. So we skipped a few numbers and No, that was not the main reason and the main reason was and since we had all these products that joined the family at different points in time they all had their own version numbers and Kibana was a version 4. something 4.6 lately And it was even for us internally very confusing like what is the current version of which component? And if you ask me like what is the right version for elastic search 2.4? I know that is Kibana 4.6 and beats 1.3 and Logs dash 2.4, but if you want to know what is the right version for elastic search 2.1 I have no idea and I would need to look it up in we have a big matrix to actually see what version work with what version And we kind of got rid of that. So now everything is 5 We're right now at 5.1 and we'll very soon release 5.2 So all the products get released on the same day with exactly the same version number and even if there is no change in a patch level release Every product will get the same patch level release on the same day to avoid any version confusion and data question commonly is Since a lot of that stuff is open source So everything in that dashed line the four products We've already seen those are open source and freely available. You can just use them. They're a patchy to license How do we pay my salary? We have two things firstly we offer a elastic search as a service hosted On AWS, but it is not what AWS is offering That is actually a competing service. So that is elastic cloud and then we have commercial extensions Which go hand-in-hand with support? So that is kind of the business model behind it so Let's get back to containers and jump down to beats and see what they are doing So the first beat we want to cover is file beat file beat. I always explain it like it's like tail F over the network on steroids That is kind of the idea of file beat. It's really just like tail F It's just taking something and sending it over the wire as quickly as possible and as efficiently as possible You can then parse that The beat does not parse anything you can either do that in lock sesh or we have in version 5 We have a new node type called ingest node that can also parse messages But we're not going into the parsing stuff today three nice attributes of the file beat is You have at least once delivery. So you will not lose messages. We support back pressure if you're if the receiver is Gets too many messages and cannot keep up. It can tell the sender actually slow down Wait a bit send me more stuff later on and we have graceful downtime You will not lose messages during downtime. So how does it even work internally? So you have lots of service and you have your beats running on all your servers They will then connect to either elastic search directly or lock sesh for parsing and enrichment and you can see we have a few Lock sesh instances for redundancy and the beats will just round robin between them and pick one instance That suits them that is available if one they try to reach is not available They will try to find another one and we'll just forward the data to those lock sesh instances How do we make sure we are not losing any stuff? So we have one file and we have for every file we we monitor We have a so-called registry file and that registry file kind of keeps the pointer up until here We have sent everything and that's the last bit We have sent has actually been acknowledged already so the flow is beat sends a batch of messages to lock sesh and only once lock sesh or Lean just note on the other side or this elastic search have acknowledged that message That part that is yellow the red part only once it has been acknowledged The acknowledged the gray area will be moved forward and the registry file the pointer will be moved forward as well Only once it has been acknowledged on the other side if you do not get an acknowledgement it will be sent One more time. So that's why we have the at least once attribute and the ninth thing is with files Since everything is a file if the other side is not reachable you can simply stop Keep the pointer to wherever you have been previously and once the other side is available again You can just keep sending stuff again and since it is not in memory. There are no buffers that can run over This even works across File rotations. So if you have a rotation policy, it will know okay I've only sent everything up until here in that on that file you rotated it and you will send the rest afterwards More features and it can do multi lines So you can define a pattern every line needs to start I don't know with a square bracket and then there is a date But you only say square bracket and if it's not a square bracket then it's a multi line statement like a stack trace Which are generally a pain in the ass to parse But you can at least package everything in a multi line statement together We can do client side filtering so you can log on debug level But for example, you say I only want to have centralized in my logs everything that is an error or warning You can both blacklist and whitelist in file beat so you can say It starts with I don't know with debug Then just throw it away don't send it over to the wire to save some of this space and We have json decode if you are already logging out in a structured format You can insert that directly into elastic search So you don't need any parsing anymore That is actually what we kind of like most because it doesn't make much sense to write the log message as a regular line And then needing to reparse it again It makes much more sense if you can actually write something out in a structured format already So How does it work together with with Docker? There are 101 options So we will need to speak quite quickly now. Let's see If you're looking at Docker, they have some logging Appenders they support that is that complete list. So yeah, there are some options Which ones are relevant for the elastic stack? The first one is json file, which is the default what docker logs out, which is nice because docker logs still works It's well integrated. It contains the metadata major downsets is logging json is potentially a bit slow and The default configuration even though you can change that does not rotate files So that file that log file will grow unlimited Next up syslog Yes, you can run your own syslog server locally and just ship everything to syslog and can Then get the data from syslog Which is very nice because it's totally configurable and you can rotate and it does everything the major downsets are you need to run your own syslog server and serializing and deserializing of multi-line statements is pretty error-prone or you need to parse stuff again So syslog can be a bit of a pain Next up journal D Because there is nowhere around journal D anyway and anymore The nice thing is it's widely available. It has some metadata. It works with docker logs Major downside for us is we do not support journal D yet There is a community beat. So somebody saw the need and wrote a journal beat file beat will Be able to do that at some point in the future no guarantees when but we definitely have it on the road map So in the future Journal D will probably be your best choice And then there is gulf There's a direct integration into docker logging Major downside is its UDP. So you can lose messages. There are no back pressure and there is no graceful downtime For some use cases that's fine It's just like okay send as much as you can if I have it on the other side. It's good if I don't have it It's no big problem For others losing log messages is still not great Finally Luckily 101 was only binary Was is you just mount the volume if your application locks to something you write to that mounted volume and have file beat Collect the whole thing major downside is you're losing metadata So today the best option is generally Jason syslog or logging to a volume in the future once we support journal D. It's probably Journal D. That's kind of looking into our glass ball and seeing what What will the future hold? Next up metric beat metric beat is a lot like top. It's just collecting metrics, but not Just what top is doing because previously it was called top beat But is can do more metrics by now So we have system modules like that is what top used to do But we also have application or service integrations. So it can just extract metrics from these applications That is the current list it will grow over time So if you want to monitor some database like my sequel mongo Postgres Metric beat will be able to do that. So it will know how many queries are you doing and how long I decrease taping and stuff like that How does it work with Docker You can just read C group data from proc which is already part of the system module Which is very nice because it is it doesn't need access to docker Because the downside is once you can access Docker you can access Docker and then you could start stop or do whatever with your containers And probably your logging system should not be able to do that. So just reading proc is from a security point of view probably the nicer approach Proc works on any kind of container technology and not just docker Yeah, we can automatically enrich your processes with C group information Major downside if you're using docker, there are no docker names or labels because they are docker specific So you will lose that information if you just use proc And then everybody screams well, but we want to use docker Everybody just wants to do docker. We know or at least somebody from the community know and they created a Community beat called docker beat, which would just use the docker API to collect all the information from docker and They had a 1.0 release and they wrote the blog post on our blog and I think within two days or so Docker the company reached out and said no Docker as the first part of any name It's the wrong URL. Anyway, you'll get the right one in a moment Because docker said everything that starts with docker is trademarked to them. You cannot even if it's an Open-source community non-commercial whatever project project and it cannot start with docker so that was not so nice and we reconsidered and now you can take a picture and And we renamed it to docker beat and at first we thought even about just Removing the E and just calling it docker beat But I don't know somehow we decided to just call it docker beat or the community can consider Considered doing that and since 5.1 It is actually part of metric beats. So you will use the docker API now we The person who wrote it in a community was nice enough to actually contribute it and it's part of metric beat now So to get some permissions You will need to ground these permissions Actually read from proc Those are the volumes you will need for system permissions if you want to monitor an application You will need to link the container correctly. So for example, if you have some random mysql container To actually have metric beat collect information from if it's living in one container It wants to monitor another container. That is how you will need to link it up so that it can actually access that information for you and general approach should be to run it as a sidecar and not Put it into the containers themselves or run it on the host. That's that's not the The ideal way run it as a sidecar so you can easily upgrade it or replace it or do whatever you want with it And then there is the common question. This is one of my favorite comics like okay We have metrics now and we're trying to store metrics in the full text search engine is it very fast and there are lots of Benchmarks some some vendors love to do benchmarks I'm the similar conditions for example they benchmark the house cat against a squid in a tank of water and obviously the house cat is dead So their system which is obviously this quit will be much better and this always reminds me I think two years ago and MongoDB couch base and Cassandra each made benchmarks against the other two competitors And each one of them managed to be at least twice as fast as their competitors Just by finding the right scenario. So so we try to avoid doing benchmark with competitors But we have improved performance, especially for metrics quite a bit in 5.0 So previously we only had float which is nice, but a bit big and then we have introduced half floats So half the precision but only half the space required and we also provide scaled floats now Scaled floats will just store an integer But there is a scaling factor by which you divide the number to get back to the original value for example you could do Scaling factor of hundred then it's just divided by hundred but it cannot be like four hundred four thousand whatever and that is your maximum precision you can actually store them But it's of course the trade-off less storage and easier to compute by doing that Next up packet beat Nearly everybody I guess is using wire shark wire shark is super nice as long as you have everything in one instance But as soon as you need to monitor or collect network information on lots of hosts It can be a bit of a pain and packet beat is kind of the idea to circumvent it it's using the same base library LibP cap and collecting network information and just extracts the meaningful information from the headers So it will know something was a request and this was the response for it The whole thing took 20 milliseconds the protocol was HTTP The response code was a 200 and the URL you hit was foobar and that is just passed from the network headers You don't need to instrument your application or anything. This is just extracted from the wire So we support quite a lot of protocols So you don't for example need to instrument your database anymore It's just passed from the header information. So it knows, okay This was a MongoDB request and that was the response it knows what was the query how long it took and it can extract all of that just from the network information and If we do not support the protocol or the protocol is encrypted We cannot work magic We have something called flows So you will at least know who communicated with who like on the TCP level, you know This IP address with this IP address how much traffic how many packets how many reasons? So you will see what is going on in general in your network And again packet beat should be run as a sidecar Don't push it any worse Yeah, dr. Containers do we have them Yes, we do Luckily we started naming our stuff correctly from the start so is we called it elastic search slash Docker and not the other way Around that was even before the incident happened So in hindsight that was the right decision. So we have elastic search Kibana and lock search in Docker containers The beats for the sidecars you will need to build yourself at the moment We will provide them soon. This is also part of what my team is doing Though everybody's pretty busy with other stuff So we will soon start building these containers Major thing to consider. So this is the minimal example to get elastic search in Kibana up and running I'm just linking them together. We're mounting ES data one as a local drive. We're exposing the ports Major thing to keep in mind here is We are using our own registry If you search for elastic search Kibana or lock stash on the Docker registry, you will get the Docker official packages, but not ours For yeah, to keep like it tight Better overview of what is going on and who is downloading what version and stuff like that. We run our own registry So only if you reuse our registry, you will get our Tested packages. Otherwise, it's something done by the document community or by Docker themselves Just as a warning, we often get complaints that stuff doesn't work as expected or we overlooked something and often it turns out Those are not our packages. So you will need to use our registry to actually get our packages And the other thing is we don't have latest anymore. We did provide it at the very beginning But then we quickly reconsidered because the assumption is you will start Your cluster with three nodes they with latest and in half a year or so you'll want to add two more nodes And if you're still using latest you will probably totally screw up your cluster Because you will just get any random version. Yes, I know it's very nice for testing to just use latest But since people might do that for production We are now leaning more towards being safe than sorry. So there is no latest anymore You will need to define your version number explicitly Okay, I think we got like 10 minutes. Yeah, pretty much exactly So I have the That is like the entire elastic stack running This is pretty much self-driving. So I've just put everything into a virtual machine This is running elastic search Kibana packet beat metric beat file beat and I have Docker container with red is running because metric beat is inserting its metrics into the red is in the container and Then store it in to elastic search So if you have a very big system or you have big spikes or something like that It is common to have your messages inserted into a queue We support Redis and Kafka Redis is not normally for the smaller setups and Kafka is for the huge setups Or if you were based in Silicon Valley, you need to use Kafka because I don't know Just mandatory there. Everybody's using Kafka and Silicon Valley So we are inserting our metrics in here Just to give you an idea what we're collecting so metric beat is just very consistently pinging all the processes I have running and Collecting that on the system level and the process level. So if I look at any random message I have a metric set here and this is actually monitoring the docker-socker and Getting the CPU usage for that But we have a field type which is probably at the very end. Oh Yeah, everything is metric set that is not helpful here So we could for example find the data set just for the Java processes And it's searching and we found 180 entries for Java And that is one single process that is running on my System and you can see okay, that's the loxage service since it's JRuby. It's running in the JVM It's one of the Java processes and that is all the information that is actually stored. So you can see all the All the secret of information and the general memory and CPU usage information that is all stored per process This is a lot of information if you do not intend to use that information do not store it Because if you have lots of service and this will be lots of storage you will need Just try to keep that in mind And otherwise don't complain that is taking too much memory, but you can see okay. What are the options to? You have started what is the memory usage? What is the CPU usage you can find out all of that information from here and we provide some pre-built dashboards? so for example if I open the dashboards for Processes You can see okay. We have 90 processes running and that is probably too small for you to see But little surprise and that big thing here. There's a memory usage. I think at the bottom is Java Taking up some memory Any guesses which processes on my system are running Java here? elastic search and Logs dash since it's Ruby next thing that is taking up some memory is node and it guess is what node is Kibana Yeah, I know Jess. Yes, but the process that is running Kibana the visualization that is a node app And we have the beats you can see packet beat is using some memory I have MongoDB running docker. D is taking up some memory. They're ready servers very slim Yeah, file bit is also very slim and I'm collecting Since I can't show you there as well. We are collecting. Oh, no We didn't don't have a log message with Java in it. So let's see Those are all the log message. I've collected. So I think I have only three lock So I'm not taking the docker locks and I'm taking the Kibana logs into my system It is probably very small for you to see I can try to make that a bit bigger But that's breaking them. Okay, and here for example, I could say I'm only interested in the docker logs now So you can just click that plus sign and you filter down just to the Docker logs you can see okay Every now and then we have a few docker logs and we could for example look down into one of them and you can see okay Yeah Ready's locked some message out here. I have simply In my example, I'm using the default logging. So to Jason lock files And I'm simply collecting the Jason lock file that my container is emitting And that's the way I'm just getting whatever my container is doing here Okay, so those were the processes we can take a look at the containers as well So on my system, I'm a bit lazy. I just have one container running That's my ready's using very few resources, but if you had more containers you could see what each one of them Is using in resource wise Yeah, fronting pause stopped you can see how the Volumes are nested and you can see CPU and memory usage of that one container and network usage and you if you had multiple containers You could see the aggregated statistics and you could even filter down to each single container to see how that one is doing So this is one of the visualizations. We have built that is provided by default the final beat I should show you probably is we have Packet beat dashboard so that that is all the network traffic I have Client location will be empty since everything is running on localhost on my machine and there is no good GOIP look up for localhost So that one is empty, but you can see we have a few web transactions the web transactions here That is actually the kibana calls So I'm monitoring that all the my interaction with kibana that is driving what the web transactions Database transactions I have MongoDB running here and cache transactions That is what Redis is doing in the background So we are monitoring all of those and you can see response times most of them are very quick up to 10 milliseconds And a few ones are it's locked lower We also do percentiles like 99 percentiles go up to 500 milliseconds, but everything else looks pretty decent very few errors Error so time latency histogram and we have specific introspection for example in web You can see okay, those those are all my web transactions 324 in the last 15 minutes Those are my top URLs the error codes I had one 404 error and those are my most common URLs and all of that information is just extracted from the network headers So you don't need to instrument your application in any form It's just like HTTP headers and you just parse that apart and get all of that information for free Or we could look at MongoDB I'm not running any meaningful MongoDB queries, but you see okay no errors since there are no proper queries going on It's just being pinged so you can see okay. It's doing some traffic If you had errors you could see them and you can see your top Queries and again, this is just from the network interface. So we're just parsing the header information from those calls And yeah, we're not running any any two interesting results Redis should be a bit more interesting. Of course, you can search here as well if you can type So we have a very stable connection of up to three clients I'm not actually sure why it's three because on the one side. I'm writing in on the other side I'm taking it out. I'm not sure which side is taking two and which one But you can see we have one ready server With that version it's a standalone server and the only command we're running is e-poll But again, just network information And you can actually see what one or all your ready service in your network are doing or whatever database service you have So to sum up I Always comparing the elastic stack to Lego you have kind of all the buildings box But you have needs some assembly so you cannot just say okay Start this thing and do everything automatically But you will need to kind of put the right bits and pieces together the main advantage is you can use It for pretty much anything so you can use it for log management for metrics But you can also use it for your business data So you whatever business transactions you have like visitors on your website signups for your newsletter all of that stuff you can collect and Visualize in Kibana as well But it will require some upfront work Everything in a dash line is free and open source. You can just use those Right-hand side is the commercial plugins and at the bottom if you want to have it hosted We are happy to do that for you currently AWS only might change in the future Yeah, and we always say that the story if you if you want to get deeper into it Most people start on the open source side then we offer trainings We do development support Consulting and support even though we don't do implementation consulting. We just do like architectural reviews But of course we do 24-7 support of your clusters and especially for the bigger customers. That's That's one of the nicer things So I think we we hit the time pretty much exactly Or we I can I can do more demos like for three minutes or something like that So one nice thing that is also pretty new is timeline timeline is for time series data Let's make that one a bit bigger Since we only have one. So this is time series is just You can either just count documents or you can go into a specific metric here For example, I have just all the documents I have in my cluster So I'm inserting about three hundreds events per second Now I would not like to know like What is mainly driving that is it metric beat packet beat or file beat and you can actually split it up So you can say the index I'm interested in is file beat The next index I'm interested in is metric beat And the last one is the next elastic search index is packet beat And you can run that and you see everything is Q. So that is not very helpful So we can actually label that one of the nice attributes of this one is it will actually you just type a Dot and it will suggest all the things it supports So you can just say label and I'll call that packet and I'll call that metric and you can see okay packet beat is doing most of the interactions metric beat is very stable and doing also a lot and the Yeah, file beat is just randomly ingesting some files. So that is a very nice overview to see who is responsible for what And you can actually then dive into the data and see I don't know how is my latency going? How is my response time doing? Everything that is number based basically you can graph out very nicely over a time series Okay Since we're out of time any questions Yes So sorry come how do we log stack traces? So? Yeah So the question was how to how to log core dumps and stuff like that So file beat will just collect anything where you point it to So wherever that goes though, it's probably doesn't make much sense to to collect an entire core dump and make that Full-text searchable I Don't think that is the the best use case for for log management And you will have the the general event in your log probably which says I don't know crash Here's the core dump whatever and then you can dump into that so you can define a search And you can store the search in the interface and just show me over the last week or whatever all the core dumps I had and find those but the core dump itself you probably would not put into elastic search and Since it's all full-text search You can just define whatever pattern you have to to search over all the log files and then find whatever you have But you need to know what you're looking for There's there's no magic like it knows what what that thing is But of course you can then visualize it and graph for example everything that is Log debug or whatever so you can graph it out over time and you see okay. Oh, there we have more errors And there we had fewer errors, and then you can just dive down into where bad stuff is happening Yes Yes, there is an alerting feature and it is part of expect Sorry, the question was is there any alerting feature? Yes, it is even called alerting But this is part of expect the commercial extensions and that is where my salary comes from And alerting is a very nice feature You can just define a threshold. It's basically cron based so it will run every minute every five minutes whatever over your data set and Check is a specific condition that like do I have is my cluster in a bad state or do I have specific messages? Or they have a breach the specific threshold like do I have more than five percent? 500s in my logs and Then you can define a rule what to do and that could be email slack page at UD webhook I think we can Now even create JIRA tickets automatically I don't know why but some customers must have really wanted that feature to create tickets automatically I don't think it makes much sense, but we can do that. Yes but security monitoring of your cluster alerting and graph to explore your data and Generating PDF reports. Those are commercial features and they always come with support A Pricing model is generally per node So we are not like some of our competitors volume base, but it's really only per elastic search node that is What counts any other questions? Yes? Yes, I Have talked a few months ago with the colleagues on the beats team Sorry, the question was a journal the support We don't know like Even if I I probably couldn't tell you if I knew but I honestly don't know I don't know how it's prioritized on the roadmap there They are pretty active like for example the latest release brings inputs for Prometheus So you can use Prometheus endpoints and collect that data via beats into elastic search And there is lots of stuff going on There is a new beat to actually ping your services now But I don't know really when journal D beat will be available But I understand that it's it is it makes a total sense that you don't need to write out to log files anymore And then read the log files anymore, but you can query journal D immediately We get it. I have no idea when it's implemented. Unfortunately I'm sure there is an issue and you can watch the issue and Get notified when something is going on in the meantime since there is the community beat Try that I honestly haven't tried it. I don't know how good it is But I assume it's workable since it's common requirement Yes And the question was are the Docker containers supported for production use case That is an interesting question. I know we had lots of internal discussions around that I think So the point is and just using Docker containers does not buy you much normally Because the orchestration is kind of the major issue So if you just use a docker containers, we will help you with your containers But we will not help you with your orchestration since that is yours and we don't do that I Don't know if that is a yes or a no for you now but So our point is pretty much and docker does not solve most of your problems because you have persistent data and upgrades are still an Issue but internally our cloud uses docker containers as well So we have worked around that and we totally support that But we do not want to get involved in your orchestration It's kind of like the orchestration wars. We do not want to meddle in there and get kind of lost That's not our core business Yes The question was will there be a sorry what a PC? So I wouldn't be aware that we are planning anything But of course Quite a lot of the beats start in the community and if it makes sense and is widely used We might be willing either to rewrite it or take your existing code if that is okay for you So I think there are three or four dozens of community beats by now So people are writing their own beats for whatever makes sense So somebody has written a Twitter beat So you can collect tweets which is probably more of an exercise to show how it's working in general or hecking you So whatever So you can always stay in Kibana and pretend to to be productive while you're just reading hecking news So it's totally possible But we don't support it ourselves yet, which doesn't mean it won't happen any more questions I think we have time for one more No everybody good perfect grab stickers. Thanks a lot