My name is Zach Hamilton, and today I'll be walking you through Logz.io's cloud observability platform at a high level. To set the stage, let me describe what we do as a company in my own words. Logz.io provides best-in-class open-source observability technologies, including Grafana, Kibana, and Jaeger, fully managed in the cloud for our users. Beyond offering the vanilla open-source tools as a managed service, we add advanced features on top of them, and we've integrated these tools and technologies together in one platform. Finally, all of this is compliant with major attestations such as PCI Level 1, SOC 2 Type 2, and ISO 27001, and we can demonstrate GDPR and HIPAA readiness.

Now, getting right into the technology, let's look at what's front and center here: Kibana. If you're familiar with Kibana, you can see we've preserved most of the look and feel of vanilla Kibana out of the box. Look closely, though, and a few things stand out as different: obviously the nav bar we use to move between the various tools and technologies, but also things inside Kibana itself, like this extra tab on the document table. I'll use that as an example as we dive into some of the advanced analytics features we've added here. If you're familiar with Kibana, you know how to use the document table to investigate individual log events. With Patterns, we've added a tab to the document table that shows statistically relevant patterns in the log data within your current search context. You can use it to reduce mean time to resolution when searching through large volumes of log data by filtering by pattern, and to find groups of log messages that might represent noise and can be trimmed out.
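Logz.io's actual pattern engine isn't shown here, but as a rough, hypothetical sketch of the idea, grouping log messages into patterns can be as simple as masking the variable tokens in each message and counting what remains. The regexes and sample messages below are my own illustration, not Logz.io's implementation:

```python
import re
from collections import Counter

def to_pattern(message: str) -> str:
    """Mask variable tokens so similar messages collapse into one pattern."""
    masked = re.sub(r"\b\d+\b", "<num>", message)          # numbers
    masked = re.sub(r"\b[0-9a-f]{8,}\b", "<hex>", masked)  # long hex ids
    return masked

logs = [
    "Connection timeout after 30 ms",
    "Connection timeout after 45 ms",
    "Connection timeout after 120 ms",
    "User 1842 logged in",
    "User 977 logged in",
]

# Count how many raw messages fall under each pattern
patterns = Counter(to_pattern(m) for m in logs)
for pattern, count in patterns.most_common():
    print(count, pattern)
```

Filtering by a frequent pattern like `Connection timeout after <num> ms` then narrows a large search down to one class of event, which is the mean-time-to-resolution win described above.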
The next thing I'll touch on is Insights, another advanced analytics feature Logz.io has added to the platform. Selecting the Insights tab brings you to this view, where you can see, at a high level, all of the insights Logz.io has found in your data. Insights are essentially events in your log data that also appear in open-source forums where the developer community discusses issues seen in logs. We've curated information from these forums and reverse-engineered it to tag your data, surfacing events that match what those forums describe. You can then open an insight with a click to understand a little more about what's going on, look at the additional links showing where we found it, and even assign it to teams or individuals and change its status to track how it's been handled.

Coming back to Kibana, let's look at some of our data management features, starting with a core piece of the platform's functionality. I'll go to the cogwheel, then Settings, and open Manage Accounts. What we see here is what Logz.io calls sub accounts: a way to divide the amount of data you can ship to Logz.io each day into individual groups. Many users do this to manage quota, allocating different numbers of gigabytes to different sub accounts and their individual search indices. People also use sub accounts to organize teams or projects, provide role-based access, and organize content in a variety of other ways that make the platform and the data you send here easier to use.

Now, moving into Tools to cover some other data management features, let's look at drop filters at a high level. Drop filters let you manage your data by filtering out events before they hit your index.
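To make the mechanics concrete, here's a minimal, hypothetical sketch of what a drop filter does conceptually: match incoming events on a field/value pair before indexing, with a switch to turn the filter back off. The `DropFilter` class, field names, and sample events are my own illustration, not Logz.io's API:

```python
from dataclasses import dataclass

@dataclass
class DropFilter:
    field: str
    value: str
    active: bool = True  # the on/off switch

    def matches(self, event: dict) -> bool:
        return self.active and event.get(self.field) == self.value

def ingest(events, filters):
    """Keep only events that no active drop filter matches."""
    return [e for e in events if not any(f.matches(e) for f in filters)]

filters = [DropFilter(field="log_level", value="DEBUG")]

events = [
    {"log_level": "DEBUG", "message": "cache hit"},
    {"log_level": "ERROR", "message": "disk full"},
]

print(len(ingest(events, filters)))  # 1 -- the DEBUG event is dropped

# Flip the switch to start ingesting those messages again on the fly
filters[0].active = False
print(len(ingest(events, filters)))  # 2 -- nothing is dropped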
Drop filters give you the ability to manage the amount of data you send to the platform against your daily quota by filtering out messages that typically provide little value. Each filter has a switch you can activate and deactivate, so you can start ingesting those messages again on the fly. Finally, let's look at Archive & Restore, for the event that you need to keep a long-term archive and want the ability to restore data back into the platform. At any time, you can open Archive & Restore, set up an object storage location, and start archiving your log data there; that data can then be restored on the fly.

Coming back to Kibana, let's now navigate to Metrics and talk about Grafana in our platform. I'll click on the Metrics tab, which brings us to the Grafana homepage. Essentially, with our managed Grafana you can send us metric data that we automatically store for 18 months, with all of the default functionality of open-source Grafana. We've also done some interesting things on top of this that I'll demonstrate now. I'm going to open an example dashboard for Kubernetes data to point out a few things quickly, at a high level. One thing Logz.io has added is the ability to annotate these graphs using the log data sent to Kibana itself. If you're sending us log data as well as metric data, you can annotate your metric graphs with log events on the fly, giving you additional context while looking at Grafana.
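The annotation mechanism itself is built into the platform, but conceptually it amounts to mapping log events onto annotation objects that Grafana can render on a graph. A hypothetical sketch, where the output shape follows Grafana's annotation fields (time, text, tags) and the input event format is my own illustration:

```python
def logs_to_annotations(events):
    """Map log events onto Grafana-style annotations (time in ms, text, tags)."""
    return [
        {
            "time": e["timestamp_ms"],
            "text": e["message"],
            "tags": ["log-event", e.get("log_level", "INFO").lower()],
        }
        for e in events
    ]

# Illustrative event, echoing the pod-termination failure in the demo
events = [
    {
        "timestamp_ms": 1622548800000,
        "message": "failed to auto-terminate pod",
        "log_level": "ERROR",
    },
]

annotations = logs_to_annotations(events)
print(annotations[0]["tags"])  # ['log-event', 'error']
```

Rendered on a CPU or memory panel, an annotation like this is what marks the failure point you'd then drill into.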
Say you find an issue you want to investigate in more detail, such as this slow climb in CPU and memory near an annotated failure to auto-terminate a pod, and you drill into the context of a particular pod or resource. You can then take the drill-down context you've applied in Grafana and look at the log data directly: we've added data links on top of these panels that recreate your Grafana search context as a query over the log data with a single click on Explore in Kibana. It automatically populates a Lucene query over your log data with the exact time frame from the Grafana dashboard, showing you the logs from the same period. In this hypothetical example, we can see IO exceptions with too many files open, which could be the reason our Nginx pod has both high CPU and high memory consumption.

Now finally, moving into Jaeger, let's walk through a hypothetical example of using traces to find an issue. Doing a basic search for our service, customer, we can find related traces and see whether anything touching this service has errors. We can quickly see that GET requests to our dispatch service have a few errors. Opening one up, we can use the waterfall graph to see where those errors are, where they fall in the request breakdown, and get greater context from their tags, process information, and associated logs. Once you find something worth drilling into in the log data itself, having contextualized your search and your understanding of where in the request something is broken, you can come up to the top and explore this trace in the log data by clicking View in Kibana.
The trace ID will be applied as a Lucene query that brings back the entire trace, and you can investigate the event and go forward from here.
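Under the hood, that jump from Jaeger to Kibana amounts to constructing a Lucene query from the trace ID plus the relevant time window. A hypothetical sketch, where the URL, the `traceID` field name, and the trace ID value are all illustrative rather than Logz.io's actual link format:

```python
from datetime import datetime, timedelta, timezone

def trace_query_link(trace_id: str, end: datetime, window_minutes: int = 15) -> str:
    """Build a hypothetical Kibana discover URL carrying a Lucene query for one trace."""
    start = end - timedelta(minutes=window_minutes)
    lucene = f'traceID:"{trace_id}"'  # Lucene exact match on the trace ID field
    return (
        "https://app.example.com/kibana/discover"
        f"?query={lucene}"
        f"&from={start.isoformat()}&to={end.isoformat()}"
    )

end = datetime(2021, 6, 1, 12, 0, tzinfo=timezone.utc)
link = trace_query_link("7be2a0d4c1f3", end)
print(link)
```

Every span in the trace carries the same trace ID, which is why a single exact-match query brings back the entire trace in the log data.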