Hi, welcome to today's webinar. I'm Steven, a Solutions Architect at Cecivio. Today we will discuss some of the challenges of Kubernetes observability and how Cecivio is tackling these problems. This webinar is pre-recorded and will be posted on Cecivio's YouTube channel for review after today's presentation. Be sure to check back for new content, as we will continue to cover various Kubernetes topics on our channel. We will start with a brief introduction to Cecivio. We will then discuss the biggest challenges of Kubernetes observability and how Cecivio is solving these challenges with its data swirling methodology. Finally, we will demonstrate an example of data swirling in action. Although this is pre-recorded, our team is available in the chat right now. Please submit your questions in the chat window and we will address them throughout the webinar. First, let me briefly introduce Cecivio. Cecivio is a predictive troubleshooting tool for Kubernetes applications and environments. We have domain experts in both AI and Kubernetes, with several decades of experience in their respective fields. We are a globally remote company, with headquarters in San Francisco and our R&D team in Tel Aviv. Cecivio is not just another monitoring tool. Cecivio provides you with answers and insights, not just raw data. Every other Kubernetes observability tool today presents you with raw data, which still has to be correlated and analyzed to determine the root cause of an issue or derive an insight. This is both time-consuming and requires expertise to do properly. Cecivio is fundamentally different in that we present answers and insights, saving you time and money by troubleshooting and optimizing your applications and environments for you. 
Cecivio does this by heavily leveraging lean artificial intelligence and machine learning, not only to automate the troubleshooting process but also to predict failures before they appear and to provide recommended fixes for issues. We are the first and only observability tool to be predictive, which is accomplished through a novel methodology called data swirling. We coined the term data swirling to describe the ability to analyze data on the fly from multiple layers of the stack, and it is the backbone of our predictive capabilities. At a high level, Cecivio uses data swirling to first collect data from the entire stack. It then compresses and translates everything into a unified Cecivio language. It then correlates the data to form a clear picture of what is happening inside your cluster, and it automatically detects issues. It does all of this in real time. Data swirling also allows us to provide fully automated application resource profiling, so you can properly allocate resources for your cloud-native applications. This optimizes both performance and cost savings for your entire Kubernetes environment. Troubleshooting and optimizing Kubernetes applications with today's observability tools usually looks something like this. First, collect and store large amounts of metrics, logs, and traces. Second, find an expert. Third, have the expert sift through all the information to perform root cause analysis of an issue or determine optimizations for your applications. This process is now obsolete in the Kubernetes environment for a few reasons. Kubernetes and its underlying layers produce a massive amount of raw data and logs. Sifting through that sheer volume of information, even with the help of today's observability tools, is still like finding a needle in a haystack. And raw data is not enough, because we want to know things like what is actually causing those CPU spikes, and whether they are even normal. 
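The stages just described can be sketched in miniature: collect from every layer, translate into one unified record shape, correlate into a picture of the cluster, and detect issues. This is an illustrative toy under invented names and thresholds, not Cecivio's actual implementation.

```python
# Toy sketch of the collect -> translate -> correlate -> detect flow.
# All layer names, event kinds, and thresholds are invented for illustration.

def collect(sources):
    # Stage 1: gather raw events from every layer of the stack.
    return [event for layer in sources for event in layer]

def translate(raw):
    # Stage 2: compress each event into one unified record shape.
    return [{"layer": l, "kind": k, "value": v} for (l, k, v) in raw]

def correlate(events):
    # Stage 3: group events by kind to form a picture of the cluster.
    picture = {}
    for e in events:
        picture.setdefault(e["kind"], []).append(e)
    return picture

def detect(picture, mem_limit_mb=512):
    # Stage 4: flag an issue whenever a memory sample exceeds the limit.
    return [
        f"memory spike on {e['layer']}: {e['value']} MB"
        for e in picture.get("memory_mb", [])
        if e["value"] > mem_limit_mb
    ]

sources = [
    [("node-1", "memory_mb", 410), ("node-1", "memory_mb", 700)],  # node layer
    [("cart-pod", "restarts", 1)],                                 # pod layer
]
issues = detect(correlate(translate(collect(sources))))
print(issues)  # ['memory spike on node-1: 700 MB']
```

In a real system each stage would run continuously over streaming data; here the whole pipeline runs once over a fixed batch just to show the shape of the flow.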
We can see that a pod failed, but we want to know the root cause of why it failed and how we should actually fix it. Simply put, raw data still requires analysis. Kubernetes failures are usually chains of events. One failure leads to another, and to another, and to another, all in the layers beneath the Kubernetes control plane. So on top of sifting through massive amounts of data, you also have to correlate the right data just to find a potential root cause of an issue. Trying to piece together that puzzle while sifting through all that data compounds the complexity of troubleshooting Kubernetes issues. Kubernetes pods are ephemeral, so storing massive amounts of data and trying to analyze it all after the issue has occurred is not only difficult to do, but also has much less value after the fact. Kubernetes may restart your application, but you still need to find the root cause of the issue to prevent it from happening in the future. It would be even better to catch the issue in real time and prevent it from causing a catastrophe. It all boils down to time. We don't have the time to look through mountains of data in order to solve one issue. We can't get time back, and time is money. Beyond the fact that storing massive amounts of data is antiquated, another reason we are limiting ourselves today is that most tools rely on flawed data for analysis, because they were not designed for Kubernetes. For example, two of the most popular data collectors today are Prometheus's Node Exporter and Fluentd. Both were designed to capture either metrics or logs and do a decent job of it, but unfortunately they have limitations and trade-offs that make real-time analysis of large amounts of data very difficult. When using Prometheus out of the box, you're getting computed averages, which leads to inaccurate and unusable data. Prometheus was designed for reliability and, as a trade-off, loses some accuracy in the data. 
For example, if you have a momentary spike in memory consumption that crashes an application, you may never even see the spike if an average is displayed. That momentary spike is a vital piece of information when troubleshooting why a failure occurred. Fluentd, on the other hand, is a single-threaded application and caps out at about 18,000 events per second. Both of these factors limit the quantity of messages you can actually collect. This is bad because critical messages may be lost, and you would be troubleshooting with vital information missing. In summary, the traditional method of collecting and storing massive amounts of data to analyze after an issue has occurred is limiting the full potential scale of cloud-native applications. Cecivio believes the traditional monitoring systems used in Kubernetes provide limited value because they rely on inaccurate data and require an expert to determine what is actually happening in your environment. The challenge now is how to break through this legacy approach and evolve along with modern technology, ensuring that we can successfully get insights into our environments. One way to break this legacy approach is with a novel methodology: data swirling. Data swirling is our approach of using custom data collectors along with lean artificial intelligence and machine learning to collect and analyze data on the fly. By analyzing data in real time, we can provide actionable insights and be predictive about events in your Kubernetes cluster. Data swirling starts with collecting good quality data. Cecivio recognized the current challenges with today's data collectors and opted to build our own custom data collectors, optimized to collect very granular metrics and information from the entire infrastructure stack. This ensures Kubernetes troubleshooting is done properly, by starting with good quality data. There are several advantages to using data swirling. 
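The averaging problem is easy to demonstrate. In the sketch below, a one-minute memory series contains a single sample that spikes past a hypothetical 512 MB container limit; the average of the window looks healthy while the peak reveals the spike that would cause an OOM kill. All numbers are invented for illustration.

```python
# One-minute memory series (MB), sampled every 5 s; numbers are invented.
# A single sample spikes past a hypothetical 512 MB container limit.
samples = [180, 175, 190, 185, 178, 182, 760, 181, 179, 184, 176, 183]
limit_mb = 512

average = sum(samples) / len(samples)
peak = max(samples)

print(f"average: {average:.0f} MB")            # ~229 MB: looks healthy
print(f"peak:    {peak} MB")                   # 760 MB: the fatal spike
print("over limit (avg)?", average > limit_mb)  # False -> spike invisible
print("over limit (peak)?", peak > limit_mb)    # True  -> spike caught
```

If the dashboard only ever shows you the average, the one number that explains the crash never appears on screen.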
First, data is collected from every layer of the stack and immediately compressed. All of the irrelevant data is discarded, which removes any burdensome storage requirements. Second, data is processed and analyzed in memory, removing the added delays of sending data to and from disk. Third, because the data is lightweight, we can analyze a massive amount of it without straining resources. All of these advantages allow us to see events as they unfold in real time, and to be predictive about events before they materialize into a failure. Data swirling is what fuels Cecivio's machine learning prediction engine and alerts users to impending issues before they materialize into catastrophes. Data swirling also fuels Cecivio's application profiling capabilities, ensuring that applications are profiled with very accurate, real-time data. Data swirling is the enabler of our predictive troubleshooting capabilities. By being able to see and understand what is happening inside your cluster in real time, Cecivio can detect signals and chains of events as they are happening. Issues inside Kubernetes environments are sequences of events, much like a DNA sequence. As one event in a sequence occurs in your Kubernetes cluster, a strand of the DNA is filled out. As more events occur in the same sequence, more of that DNA sequence is filled out. Looking at a DNA sequence being filled out, one can start to predict what the full sequence will be. This is how a Kubernetes failure happens. Each strand, or singular event, of a Kubernetes sequence is filled out, and Cecivio's prediction engine detects what is going to happen next in the failure sequence. By knowing what failure is going to occur, we know what will happen, the root cause of the issue, and how to fix it. Again, this all starts with data collection. Each piece of data we collect is classified with a severity score and analyzed in real time. 
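The DNA analogy can be pictured as prefix matching: as events arrive, the partial sequence is compared against known failure sequences to predict which one is being filled out. The sketch below is only an illustration of that idea using a hand-written lookup table with invented event names; Cecivio's actual engine is described above as machine learning, not a table.

```python
# Minimal sketch of the "DNA sequence" idea: predict which known failure
# sequence the observed events are filling out. Names are invented.
SIGNATURES = {
    "oom-kill": ["memory_spike", "page_thrash", "oom_killed"],
    "disk-full": ["disk_pressure", "evicted", "pod_pending"],
}

def predict(observed):
    # Return every signature whose sequence starts with the observed events
    # but is not yet complete, i.e. a failure still in progress.
    return [
        name for name, seq in SIGNATURES.items()
        if seq[:len(observed)] == observed and len(observed) < len(seq)
    ]

print(predict(["memory_spike"]))                 # ['oom-kill']
print(predict(["memory_spike", "page_thrash"]))  # ['oom-kill'] -> warn now,
                                                 # before the kill happens
```

The earlier in the sequence a match is found, the more time there is to act before the final event materializes.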
Data swirling essentially enables a live feed into your Kubernetes cluster, much like an X-ray. Now, let's take a look at data swirling in action. Cecivio's CEO, Nuri Golan, is going to walk us through a real-world scenario. If you have an online product, you know how critical customer experience is to building and growing your business. Let's look at an example of how unexpected and difficult-to-find errors in your deployments can directly impact your business, and how Cecivio can make those problems go away. On the left side of the screen is Cecivio, an e-commerce designer sock website. On the right side, we have the Cecivio dashboard, which is constantly observing your Kubernetes environment and the applications running on it. So I am a customer, and I go to Cecivio's website and start to shop and add items to my cart. Okay, these socks are cool. Let me see what other fun socks we have here. These ones look good too. After I add my third pair of socks, I decide I want to check out, and I notice that the cart has disappeared. At this point, I'm already slightly frustrated. I go to the homepage to try to find the cart, and I'm even more disappointed to see that the cart has emptied. If you look at the right side of the screen, on the Cecivio dashboard, just as the cart crashed, Cecivio detected a failure. It looks like an application was abnormally terminated. When we expand to learn more, we can see that the cart pod was OOM-killed, leading to the crash of the cart and the loss of all the data in that pod. Without Cecivio, this would be incredibly difficult to find. Most observability tools wouldn't pick up on the momentary spike in memory that led to the crash. With Cecivio, you are immediately notified of these issues, even before a customer complains. With one click, Cecivio resolves the issue and ensures that other customers don't have the same bad experience. Going back to the Cecivio site, we can see that the cart is now working properly. 
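The OOM kill seen in the demo is something Kubernetes itself records in the pod's container status, under `lastState.terminated.reason`, which is what `kubectl get pod <name> -o json` returns. The sketch below inspects a status document of that shape; the pod dict is a hand-written stand-in, not real API output.

```python
# Sketch: find OOM-killed containers in a pod status document of the shape
# returned by `kubectl get pod <name> -o json`. The pod dict is a stand-in.
def oom_killed_containers(pod):
    hits = []
    for cs in pod.get("status", {}).get("containerStatuses", []):
        terminated = cs.get("lastState", {}).get("terminated", {})
        if terminated.get("reason") == "OOMKilled":
            hits.append(cs["name"])
    return hits

pod = {
    "metadata": {"name": "cart-7d4b9"},
    "status": {
        "containerStatuses": [
            {"name": "cart",
             # Exit code 137 = SIGKILL, which the kernel OOM killer sends.
             "lastState": {"terminated": {"reason": "OOMKilled",
                                          "exitCode": 137}}},
            {"name": "sidecar", "lastState": {}},
        ]
    },
}
print(oom_killed_containers(pod))  # ['cart']
```

This tells you a container was OOM-killed after the fact; the point of the demo is that catching the memory spike before the kill requires data granular enough to show the spike at all.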
This is just one of the ways Cecivio offers huge value to our customers, including a direct impact on your sales, customer experience, and cloud spend. If you want to learn more about data swirling, see more examples of Cecivio in action, or find out how data swirling can be applied in your environment, please reach out to us via the link and contact information found in the video description. Thank you for your time, and we look forward to connecting with you.