So, welcome Richard and Lukáš. They will be presenting something uncommon about logging.

All right. Hello. My name is Rich Megginson. This is Lukáš Vlček. We're working on a logging stack. In case you were wondering what is a log and what isn't a log, we prepared a short video that should help explain that. You may recognize that. So that's not what we're talking about.

So why uncommon logging? Why is it uncommon? Logging is commonly an afterthought; it's stuck on at the end. We're trying to make it an equal partner in the application platform. We're trying to enable distributed logging of cloud-based services. Most developers just figure they'll use the logging APIs provided by the operating system or the platform, assuming they'll figure out how to store and use those logs at some later time, usually by cobbling together a logging solution from bits and pieces of some sort of logging stack. This often doesn't end well, and it misses an opportunity to make really good use of the data. So in this presentation we will present what we've been working on, including some video demonstrations.

So this is the world of cloud and microservices today. You have this complex ecosystem of all these horizontal and vertical layers, all integrated with each other. The amount of digital exhaust produced by all of these services and platforms is daunting to manage and understand. It's difficult to answer questions from users such as: what's wrong with my application? Somebody logged into the system, did some administrative action, and now something is broken; how do I trace my problem back to what that person did when they logged in? And how can I proactively monitor my systems?

So these are some of the objectives we want to achieve with our project. We want to be able to collect logs from distributed services running in the cloud in a secure and reliable way. We need a data model that we can use to describe data from many different sources. We want a clear definition of fields used commonly among all of these systems, but we also want to give services the ability to define their own fields for their own service-specific metadata. We want to support multi-tenancy, so that users can only view the logs they're allowed to see. We also want to support federated authentication. So, for example, if you've configured OpenStack to send logs to our logging service, we want you to be able to log in to the visualization part with your OpenStack credentials. We don't want you to have to configure multiple credential stores.
We want to integrate well with Red Hat products and their upstream projects such as RDO, oVirt, OpenShift Origin, and ManageIQ. We want to work with subject matter experts in those projects, which would be you guys, to figure out how you want to represent your data, what the actual data coming out of your logs means, how it's meaningful to you, and how it might be meaningful to correlate with other services as well. We have to be scalable and elastic, so we have to be able to scale up and down as needed for burst situations, things like that. But at the same time we also want to provide different tiers, so if you come to us with your requirements we can have something that's already ready to go for your particular deployment scenario. We want to enable other uses of this data; we don't want it to just go into a warehouse and sit there and do nothing. We want to enable the use of big data applications and any other applications that the user might have in mind for their logging data. And we also want it to be all open source. We want to identify open source components that will fulfill our requirements, work with those upstreams, and work out in the open, in an open way.

So how are we doing logging? This is a picture of the current architecture that we have today, with the components abstracted. On the left here we have all these hosts. They have all these application services, all these things that are creating logs, going into log files, the journal, or syslog. So we have a collector that collects logs from all these sources and sends them into a central logging system. The logging system consists of, on the front end, a load balancer, and these boxes represent what we call the log aggregators, multiplexers, and normalizers that massage the data as needed and send it to a data warehouse. Then we have a visualization component that's able to query the data warehouse and display graphs and charts. And then we want to have monitoring of all these components so we'll know when administration is needed, when scaling is needed, things like that.

So this is the current architecture with the components that we selected. For the logging system, we're using the OpenShift platform. The integration services team for OpenShift already developed a logging stack based on Fluentd, Elasticsearch, and Kibana that we're making use of. This gives us the ability to deploy, manage, and scale the containerized logging components, as well as take advantage of some of the things built into OpenShift and Kubernetes, such as flexible storage options, load balancing, and high availability. We're using Fluentd both as the collector here and as the normalizer here; it works well in both situations. The HAProxy component is the routing service provided by the OpenShift platform. We're using Elasticsearch as the data store, and we're using Kibana for visualization. We've developed a tool called Watches, or WatchES, that we're using for monitoring of Elasticsearch, and we're still working on the components to monitor Fluentd and Kibana.

Okay, so Fluentd is both the collector and the normalizer/aggregator. Fluentd is a Ruby-based log agent. The configuration for Fluentd is very similar to an Apache config file, so it's familiar to admins and developers, and you can use Ruby code in the configuration. Here's an example of some Ruby code embedded in the configuration.
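A minimal sketch of that kind of filter, assuming the journal's _SOURCE_REALTIME_TIMESTAMP field and Fluentd's built-in record_transformer filter; the field names here are illustrative, not the exact slide contents:

    <filter journal.**>
      @type record_transformer
      enable_ruby true
      <record>
        # hypothetical: convert the journal's microseconds-since-epoch value
        # into the ISO 8601 timestamp our data model expects
        time ${Time.at(record["_SOURCE_REALTIME_TIMESTAMP"].to_f / 1000000.0).utc.strftime("%Y-%m-%dT%H:%M:%S.%6NZ")}
      </record>
    </filter>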
The code between the ${ and } is Ruby code that converts the timestamp coming in from the journal into the timestamp that we want to use in logging.

The protocol that Fluentd uses is the secure forward protocol. It uses TLS, it has built-in load balancing and failover, and it uses the MessagePack format to compact data on the wire and reduce network traffic. Fluentd has hundreds of plugins available, literally hundreds, for virtually any kind of input, output, and processing you would ever want to do. And if it doesn't, it's very easy to write plugins of your own in Ruby, even if you have very minimal Ruby experience.

One of the reasons we chose Fluentd was because there was already a fabric8.io plugin that was able to get Kubernetes metadata for container log events and annotate those log events. Another reason was because the OpenStack ops tools team had already chosen Fluentd for their reference architecture, so they already had some literature and guidance to provide for that. Fluentd also had all of the inputs, outputs, and filtering that we needed out of the box, such as HTTP, the journal, and Elasticsearch. We would also like to support rsyslog too, so we are still investigating the use of rsyslog.

Elasticsearch is what we're using as our data warehouse. It's a widely used Java-based search engine. It's based on Apache Lucene, which gives it excellent full-text search capability. It's also good as a time series database; it's not as good as some databases that are dedicated to time series data, but it gives us a nice balance between performance and the other features we need. We use an open source Elasticsearch plugin called Search Guard, and this gives us TLS, authorization, and access control. The fabric8.io team also developed an OpenShift Elasticsearch plugin that works with OpenShift for authentication, authorization, and access control, and that's what gives us the ability to do multi-tenancy. The OpenStack ops tools team had also chosen Elasticsearch for their reference logging architecture, and other teams such as oVirt had already decided that they wanted to use Elasticsearch for logging and their metrics store. The last component is Curator. Right now it's provided by Elastic, and it is what does our log trimming and deletion.

For the visualization component, we're using Kibana. Kibana is a Node.js-based application. It's very tightly coupled with Elasticsearch, so it just works right out of the box with Elasticsearch. The OpenShift Elasticsearch plugin also works in conjunction with Kibana: it pre-populates some of the Kibana configuration you need when you log in, and it creates index aliases such as the all alias that's used by administrators. We're currently investigating the use of Grafana and ManageIQ as user interface front-ends as well.

So here's a detail of the logging architecture, showing what's going on in the logging system. The logging system itself is based on the OpenShift platform, so that gives us a lot of the fundamental capabilities that we need. On the left side, all the log messages from the hosts come in via Fluentd secure forward, and they go into the multiplexer service route, the HAProxy provided by the OpenShift platform. This distributes the load among several Fluentd aggregators. For messages that are container logs, Fluentd will go out using the plugin, get the Kubernetes metadata from the OpenShift API service, and annotate those logs.
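A rough sketch of what that annotation step can look like in the Fluentd configuration, assuming the fluent-plugin-kubernetes_metadata_filter plugin; the URL and paths are placeholders:

    <filter kubernetes.**>
      @type kubernetes_metadata
      # hypothetical API endpoint; in OpenShift this would be the master API URL
      kubernetes_url https://openshift.example.com:8443
      bearer_token_file /var/run/secrets/kubernetes.io/serviceaccount/token
      ca_file /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    </filter>

This is the step that adds the kubernetes.* fields, such as pod name and namespace, to each container log record.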
The connections then come into the Search Guard plugin before they go into the Elasticsearch data store. On the browser side, the browser first goes to the Kibana pod, which is running an auth proxy container. That goes out to OpenShift to do the authentication, and then it annotates the request with the token and user ID headers. Those are passed to Kibana, and Kibana passes that information along with the query to Elasticsearch. The first thing the request hits there is the OpenShift Elasticsearch plugin. That will authenticate the token, make sure the token is still valid, and then go out to the OpenShift API. We get the user ID, and we also need to know what the user's roles and projects are so we can set up access control for that user.

So one of the big challenges is correlation. For example, if you're running OpenShift on OpenStack, how do you correlate the errors you see in your containerized applications with things that may be happening at the OpenShift layer, or below that the OpenStack layer, or below that the operating system layer? How do you correlate login sessions with problems that might have been caused by that admin logging into the system and changing some configuration? What do fields mean? Do they have different meanings depending on what application is writing them? When I get a hostname field from OpenStack, does it mean the same thing as a hostname field from OpenShift? And time: time is an important datum for correlation, so we need to ensure that we have a good global, consistent, and high-resolution timestamp.

So the solution is a common data model. The common data model provides a rigorous definition of commonly used fields such as hostname and timestamp, plus it provides the ability to have application- and service-specific metadata fields in such a way as to avoid conflicting definitions. What we're trying to do is eliminate conflicts and inconsistencies in log data. We're trying to precisely define fields so you always know what a particular field means. It provides hierarchical namespaces for service-specific metadata. And we want to provide concrete recipes for formatting data in collectors, so that you can automatically know what your Fluentd configuration or rsyslog configuration should look like. We're able to generate Elasticsearch index templates and documents, and we have a YAML format that we use to produce all of these things. So for example, on the left side is an example of an Elasticsearch index template, and on the right side is an example of ASCII documentation that is produced from the same definition format.

And here's an example of different namespaces. On the left I've got a namespace called kubernetes, and on the right I've got a namespace called ovirt. Both define a field called level, but it means different things in the different namespaces. So when I query the data, for example in a Kibana query, I can refer to one as kubernetes.level and the other as ovirt.level, and there's no conflict.
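To make that concrete, here is a sketch of how a single record might look once it's in the common data model; the field names are illustrative, not an exact excerpt from the model:

    {
      "@timestamp": "2017-01-27T10:15:30.123456Z",
      "hostname": "node1.example.com",
      "message": "image pull failed",
      "level": "err",
      "kubernetes": {
        "namespace_name": "logging",
        "pod_name": "kibana-1-abc12",
        "level": "3"
      }
    }

The top-level fields are the rigorously defined common ones; anything under kubernetes, or under ovirt for an oVirt record, is service-specific metadata, so kubernetes.level and ovirt.level can coexist without stepping on each other.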
Another challenge is packaging. There are many, many packages to maintain. Fluentd and all the plugins that we want to use come to about 35 runtime dependencies and about 90 build-time dependencies. Another challenge with Fluentd, because it's Ruby, is keeping up with the Fluentd versions and the Ruby versions. For example, the next version of Fluentd only supports Ruby 2.1 and later, but RHEL 7 and CentOS 7 only have Ruby 2.0 as standard. So this is one of the reasons why we like containers: we can containerize all of these things so they don't conflict with anything else, or we can use software collections.

Elasticsearch 2.x introduced this concept called jar hell, which is quite strict about what versions of plugins and dependencies can be used. Elasticsearch plugins have to keep their versions in sync with every Elasticsearch release, so when Elasticsearch releases a z-stream, all of the plugins also have to bump the z-stream version of that dependency in their Maven POM file. Kibana uses Node.js, npm modules, libuv, V8, and so on. And this is one of the great things about using containers, because we can keep all of these services self-contained and avoid problems with interdependencies between components. So it takes a lot of manpower and automation to keep up with all of these versions and dependencies.

So, multi-tenancy. Our multi-tenancy implementation relies on an Elasticsearch index naming scheme, Search Guard for index-level security, and the OpenShift Elasticsearch plugin to provide the information needed to map the user's roles and namespace membership to the Elasticsearch index name. So for example, you can have an index called .operations that's only visible to logging administrators, and then indices matching project.<name> are visible to members of the project namespace with that name. When a project wants to send logs to the central logging service, the logging administrator will create a namespace for that project, and then we'll assign users from that project roles within that namespace so they can access the logs. We're currently using OpenID Connect for federation, so that if you log into your system and acquire an OpenID Connect token, you can use that same token to authenticate to Kibana. There's also physical security as well. Let's say you just want to keep your operations data completely physically separate from everything else. Well, you can create a separate operations cluster just for your operations data, and have the data that you want users to see in a completely separate cluster that only they can access. We're currently investigating if and how we can use federated groups and roles too.

Some of the other challenges we have: languages. Fluentd is written in Ruby. Elasticsearch is Java. Kibana is JavaScript. Curator and some of the other bits are in Python. OpenShift is in Go. So it's quite a few things that you need to be familiar with in order to work on this. Then there's the complexity: there are just many moving parts, a lot of stuff to monitor, maintain, and administer. So we're looking at integration with monitoring systems so we can know when we're running low on disk and need to add more storage for Elasticsearch, or when our Fluentds are way behind and we're getting back pressure and need to scale those up, things like that. Security is another big challenge. All of these public-facing things have certificates and keys, and Fluentd secure forward has a shared secret. We need to be able to securely generate, distribute, and update all of these certificates, keys, and shared secrets, so we're looking at integration with things like IPA, certmonger, Custodia, and Let's Encrypt.

So, rsyslog. We would really like to be able to use rsyslog with this. Rsyslog has a number of advantages. It has this protocol called RELP, the Reliable Event Logging Protocol, which is a highly reliable, secure logging protocol. And rsyslog is already part of the operating system.
We're going to rebase rsyslog to version 8 in RHEL 7.4, and that brings us very close to feature parity with what we are using in Fluentd. Rsyslog is also much more scalable and performs much better than Fluentd. The problem with rsyslog is that we need to support more integrations; there are not nearly as many plugins as there are for Fluentd. Also, rsyslog has a lot of negative perceptions. People in the past have been turned off by its arcane config file format, which is much improved in later versions, and people have perhaps had bad experiences with the rsyslog that was in earlier versions of RHEL 5 and RHEL 6.

We also want to correlate the log data with data from metrics and monitoring and even configuration events, because if you want a holistic view of what's going on, you need to correlate, say, CPU spikes with the configuration change that may have caused that CPU spike and with the log messages that you're getting from various systems.

So, what we want to have is some sort of pre-canned tiers of deployment for different customer use cases, and these are small, medium, and large. A small deployment is where you have a direct connection from the collector running on each host to Elasticsearch. That's really only suitable for very small deployments, and it also means you have to have all the normalization logic in the Fluentd running on each host. But it is simple; there are much fewer moving parts. The medium size is the slide I showed before, where you have a scalable farm of these Fluentd aggregator/normalizers, and it's applicable to a wide variety of deployments. And then, for the large sizes, we would like to introduce a message queue. Some of our very largest customers are already using message queues for similar applications, so we want to be able to use that as well.

And this is what we think it might look like with the message queue. The collectors write data into, let's say, a raw data topic. In this scenario, we don't have an HAProxy; we assume the message queue is providing that capability for us. The normalizers read data off the raw topics and normalize it, and when they write it out, they write the normalized data both to the data warehouse and back to the message queue in a normalized topic. The normalized topic could then be used for big data analysis, archival, tailing, monitoring, and lots of other uses as well. One of the challenges with the queue is that it introduces more complexity. Now you've got another thing that you need certificates for, and you need to make sure those are securely distributed too. And access control: what if we want different topics to have different levels of access control? How do we do that?
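Just to sketch the collector side of that, assuming, purely for the sake of example, Kafka as the message queue and the fluent-plugin-kafka output plugin (neither of which is settled in the talk), it might look something like this:

    <match **>
      @type kafka_buffered
      # hypothetical broker list and topic name
      brokers kafka1.example.com:9092,kafka2.example.com:9092
      default_topic logs-raw
      output_data_type json
    </match>

A normalizer would then consume from the logs-raw topic, apply the common data model, and write both to Elasticsearch and to a logs-normalized topic.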
So with that, I'll turn it over to Lukáš.

Yeah, thanks. So, I will very quickly talk about performance; that's one of the challenges, and we are focusing on performance as well. What are the problems with performance? First of all, the first challenge is really getting hold of the hardware to be able to do performance testing at all. What I mean is, for example, some users can have very high-quality networking or hardware, and it makes a difference in performance. So that's the kind of challenge that is probably solved by budget.

But beyond that, we are also focusing on collecting the data and really trying to make sense of it. If you think about what you have seen, about how our logging architecture looks, most of the pieces are quite simple things, tools that do one thing very well, so it's not that hard to tune them. But Elasticsearch is an exception. It's a very complex thing, and we need to treat it very carefully. The good thing about Elasticsearch is that it can export a lot of internal statistics, and as Rich mentioned, we built a tool called Watches that can pull all those statistics from Elasticsearch. And then we are collaborating with our performance department, which has its own set of tools and ways to measure performance, and we are trying to integrate the data from Elasticsearch with their tooling. Specifically, they use a tool called Pbench; if there is someone from the performance department here, maybe they can tell you more details about it. But why is this important? It's important to see how Elasticsearch is doing in the context of the other components in the system. For example, if you see that Elasticsearch uses a lot of CPU, but at the same time another component is also requiring a lot of CPU, it's important to know that. It's not enough to look only at Elasticsearch's performance characteristics; you need to see it in context. If we have enough time at the end, I can show some examples of that.

What we want to achieve with performance is to give users good recommendations about hardware sizing and configuration, and possibly to provide good starting defaults. We are also looking at further using the Curator tool to optimize the existing indices, or the size of the indices. By the way, when I mentioned Pbench, we already used it when we were comparing the performance of rsyslog and Fluentd, and this tool was very handy for us in learning that rsyslog is much better in performance. That's why we want to use it. So, I think I can hand it back to Rich.

Thanks, Lukáš. So, a related project to logging is session recording, which is also called tlog. This is what gives you the ability to track login sessions, and we're working with the tlog team to develop a schema for the common data model so that we can log that data with all of our other data. Another project from the tlog team is aushape, which takes audit log data and formats it into something that's more readily consumable by other systems, so we're also working with them to develop a schema for the common data model.

Okay, so now we've got plenty of time, so we've got some demonstrations. The first demonstration is a demonstration of Kibana. Oh, I'm sorry, of installation. We use openshift-ansible to install. If you set the hosted logging deploy variable to true, it will deploy all of the logging components. You can also set things such as the public hostname that you want to use to access Kibana externally, and there are many, many different parameters you can set; there are a lot of Elasticsearch parameters you can set here, as well as others. For this demo, I've just done a single-node deployment, which is an all-in-one. This is using openshift-ansible from upstream source, using the release-1.4 git branch for Origin 1.4. So, basically, you run ansible-playbook with the inventory and the playbook, and it goes through.
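As a rough sketch, the relevant bits of the inventory might look something like this; the variable names follow the openshift-ansible conventions of that era and may differ between releases, and the hostname is a placeholder:

    [OSEv3:vars]
    openshift_hosted_logging_deploy=true
    openshift_hosted_logging_hostname=kibana.apps.example.com
    openshift_hosted_logging_elasticsearch_cluster_size=1

    # then run the playbook against that inventory, for example:
    # ansible-playbook -i hosts playbooks/byo/config.yml -v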
And there's your scrolling ansible -v output. Fast forward to the end, where the logging happens. There is a task called deploy logging, and this is what does all the logging tasks. It runs the deployer, which starts all the services, such as Elasticsearch, Kibana, and Curator, and it waits for them to come up, so it's got some liveness and readiness checking going on. It starts Fluentd; that's already integrated in the openshift-ansible installer. It creates a separate project namespace called logging for deploying the logging components. And that's flexible too; you don't have to use that, but that's the default.

Okay. The next demonstration is a demonstration of multi-tenancy. What we're going to do is create a log admin user called log-admin, and we're going to assign him the cluster-admin role. You can also use cluster-reader if you want a log admin that is not able to change things. All right, so we log into Kibana. When we log into Kibana, you can see it redirects to OpenShift Origin; it's using the OpenShift OAuth provider. Then we get redirected back to Kibana, and this is the usual Discover tab in Kibana. If we go to Settings, we can look at our index patterns up here. We can see that we have access to the operations index as well as all the other indices that are available. So we logged in as the admin, and we can see everything; we have access to everything. The log admin can see all.

Okay, so now we're going to create a normal user with restricted log access. We create a user called log-user. We created the log user, but we also need to assign that user roles within the namespace that we're using for this, which is just logging. So we log back into OpenShift Origin; this time we're logging into the OpenShift console as the log admin. We go to the logging project, resources, membership. There are no users right now, so we want to add the user log-user, and we want to assign it the roles basic-user and view. Those roles will give that user the ability to view logs from the logging project. All right, so now we go back to Kibana. This time we're going to log in as log-user, and we can see that we have the index for project logging, which is the logs from our namespace. If we go to Settings, we can see that we can't see any of the other indices. So that's how multi-tenancy works.

The third example is just to give a little bit of the flavor of what Kibana can do. Okay, so we're going to log into Kibana as our log user. It comes up by default in the Discover tab, and by default it's showing the last 15 minutes of logs. We're going to change that to view logs from the last four hours, and you can see at the bottom that the graph changed to have a much wider scope and show many more logs. The list of fields available to query is listed on the left-hand side. If you click on a field, it'll give you a quick count; it's a nice shortcut view of the top values for that field. The level field uses syslog severity: three is error, six is info. So what we're going to do is look at our container logs, at our pods, and we want to find all the pods that are emitting logs to standard error.
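The search isn't shown verbatim in the recording, but a Kibana query string along these lines, using the common data model field names, would be one way to express it; the exact field names here are an assumption:

    kubernetes.pod_name:* AND level:3

That is, records that carry Kubernetes pod metadata and have a syslog severity of error.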
So, here's the list of container pod logs that are emitting errors; we've got quite a few. If you click on a line below, you can look at the record details. You can see that we've got our Docker container ID, a bunch of fields for Kubernetes, the log message, which looks like it might be a problem, the level, which is three, the Kubernetes pod name, and all the other metadata associated with that pod's log. Another thing you can do here is look at the JSON representation of the log, and this is where you can see the structure of the common data model. You can see that there's a docker namespace, there's a kubernetes namespace, there's a namespace for pipeline metadata. It's a good way to see what the logs look like when they are ingested by Elasticsearch.

All right, so now we're going to create a visualization. We're just going to create a simple line chart, and we're going to start from a new search, using the search that we just did. For the x-axis, we're going to show an aggregation: a date histogram using the timestamp. By default, Kibana knows that it needs the field called timestamp and that it should use that as the time axis. That's the search that we used before; we want to do the same search. And there it is, there's our graph. The y-axis is the count of pods that have error log messages, accumulated in five-minute buckets. And you can see the graph is interactive, so it's really nice for being able to scroll around and see what's going on. So we're going to save that, and we'll just call it pod errors.

Okay, so now that we have a saved graph, we can use it on our dashboard. Right now our dashboard is empty, so let's add something. We can use our saved visualization, pod errors, and there it is, it shows up on our dashboard, with the same interactive properties it had previously. We can resize it, move it around, do whatever we want, make our dashboard look as pretty as we want.

All right, so next... we're running out of time, and I'm not sure we have time for this, so if there are no questions we can show it to you afterwards. Sure, we can give time for questions.

All right. So, who's making this possible? There's the CentOS OpsTools SIG, a Special Interest Group. It's a common shared repository of packages, configurations, installation, and sort of best practices for how you deploy a logging service. It was started by the Cloud Special Interest Group; they had sort of formed an ops tools group, and then we decided that it's applicable to a wide variety of projects, so we're working with other Special Interest Groups in CentOS, such as Cloud, Platform as a Service, Virt, and others. And this also covers metrics and monitoring, so things like Sensu, collectd, things like that.

Some of the teams involved: this logging platform, as I said, was started by the OpenShift Integration Services team.
And fabric8, which provides a lot of the other pieces, such as the plugins. There's the ViaQ team, which is Lukáš and I and some other folks, who are working on how to tie this all together, on deployment, and on the common data model. There's also the performance team that we mentioned; they've been very helpful in being able to test logging at scale. And there are the OpenStack and tlog teams, who are helping us test, integrate, and develop the product-specific data models. So, where to find the code? Here are the usual GitHub links and IRC channels.

All right, so we have a lot of time left for Q&A. Oh, and for people asking really good questions: I have these special super CD and DVD cases, direct from the United States National Association of Governors, very rare, for people that still use CDs and DVDs. So if you ask a good question, you're going to get one of these. And please take them. Yes?

Right, so the question was, do we have a way to package Fluentd for platforms other than EL7? Right now we don't, because we're very Enterprise Linux and Fedora centric. So we would welcome input from people that might be able to give us some guidance and help us package for some of these other platforms.

Right. So the question is, what about putting Fluentd in a container? It's already in a container; we're already running it in a container that's in a Kubernetes/OpenShift pod. So it may be possible to take that and make that container more generally usable. Question? Yes?

Oh, so are you asking about monitoring the individual logging components, such as Elasticsearch and Fluentd, or are you asking how we send Nagios messages into the logging system? The other way around, how do we monitor Kibana and Elasticsearch? Right, so the question was what integrations for monitoring do we have for Elasticsearch and Kibana and things like that. Right now we don't have anything yet; we're currently investigating that. For example, we saw that there's a collectd plugin for Elasticsearch that we might want to investigate. I know that we've got a lot of people that want to use Hawkular, so we're looking at ways of using Hawkular. And we would appreciate any help with Nagios. Yes?

So the question is, would we be interested in providing graphs and dashboards specific to certain applications? And the answer is yes. That's another thing that we want to do, provide some pre-canned dashboards. We definitely want to provide dashboards for the common data model fields, and we eventually want to get to the point where we can have application-specific dashboards as well, so we would need help from subject matter experts there. Yes?

Right, so the question is, what about other uses of Elasticsearch, such as for generating alerts? We're looking at a tool called ElastAlert. ElastAlert is a tool that is able to query Elasticsearch and take action based on log events that are coming into Elasticsearch. We're also trying to figure out if we can integrate with things such as Fluentd for alerting and Hawkular for alerting as well. Questions? Yes?

So, for an OpenShift deployment, it currently is containerized. We have some groups, however, that would rather not have to deal with the complexity of deploying a containerized Fluentd service.
So we're trying to figure out how best to support those groups, probably with something like software collections, so you'd run the Ruby 2.3 or 2.4 software collection in order to deploy the latest Fluentd, something like that. We would prefer if everybody ran it in a container; it makes our life a lot easier, but some groups have expressed interest in not running it that way. In fact, that's another reason why rsyslog would be nice: rsyslog is already packaged as an RPM for the operating system, and there are no conflicts because it's baked into RHEL, so that also simplifies our life. Yes?

So I'm sorry, were you asking about oVirt? Okay, so the question is, can you give an update about integration with oVirt? We've actually been working quite closely with the oVirt team the past couple of months, because they have expressed interest in using what we're providing as both the logging store and a metrics store. We're working with them to show them how the secure forward protocol is going to work, how they will use that to send logs, and how to deploy it on top of oVirt virtual machines, because they have to decide whether they want to run the logging in cluster or out of cluster. We're also working with oVirt to use the common data model and develop a data model for their statistics.

Last question. Yes. You're asking about rsyslog? Yes, rsyslog has the capability of sending CEE data. Right, so CEE is actually sort of a subset of the common data model. The folks that designed the common data model have incorporated CEE into that data model as well. It's essentially a JSON data model where, as far as I understand, you can have your own fields. Is that correct? Yeah. Right, so the common data model, I think, has a CEE namespace; if it's not in there yet, it's planned to be. So I think the plan is that if you want to add CEE data to the logs, it will get converted into a CEE namespace, because some of the field names in CEE might conflict with some of the names in the other namespaces. We don't want that to happen, so we want to make sure it's in its own separate namespace.

All right, so to say thank you, we've prepared a short video. Goodbye, folks. Thanks.