Hi, this is Swapnil Bhartiya, and welcome to the topic of the month, and the topic of the month is observability. Today we have with us Renuka Nadkarni, chief product officer at Aryaka. Renuka, it's great to have you on the show. Thank you, Swapnil. I'm super excited to be here. Of course, we are going to talk about observability. But before that, I would love to learn a bit about the company itself. What do you folks do? So Aryaka is a SASE company. We offer SASE, which is secure access service edge. It's the integration of networking and security as a service for our customers. We are one of the leading SASE vendors, delivered cloud first, and we also offer managed services for customers who want to do SASE. Since you folks have been around for a while, you have seen a lot of technologies being born, you know, Kubernetes and all those things around 2014 and so on. How have you seen the whole networking space evolve over time? Because especially when we talk security, we talk about zero trust, zero trust networking and all those things. So I want to understand the origin of the space as you have seen it. Yeah, absolutely. And that's a fantastic question, Swapnil. So when we started Aryaka, we were actually built to be cloud delivered. And back in the day, containers weren't a big deal. Kubernetes didn't exist. But because of the needs of the business, the way the software had to be written and the way it had to be provisioned, we actually did process-level isolation, which is the main purpose of a container. So, you know, without using the term container and all the fanciness associated with it, by definition and by design, our software was built that way. And the other thing is, we are as a service, our customers are global, they have instances which are running globally, and we have multiple instances which are needed to service our customers. So what we did was we built a monitoring system to begin with.
And this monitoring system was called Eagle Eye. This was, you know, just for us to know whether we were able to deliver service to our customers or not. So it was like a mandatory thing we needed: we were cloud delivered, we were as a service, and it was essential for our business back in the day. And because of the nature of the distributed system and everything, you know, dependencies and networking and all that, we gradually added a lot of the capabilities that we today call observability. So back in the day, when people were doing these things, we called it monitoring. And then, you know, it kind of evolved into what we do today, which is root cause analysis. And the basic premise of why we have gone through this journey is that earlier everything was monolithic. The applications were monolithic, they were in one location, users were in the office, applications were in the data center, and everything was relatively deterministic. Because if you think about it, right, anything you do, when it is deterministic, you can troubleshoot more easily, you can do a lot of these things much more easily, right? But we have moved to a highly non-deterministic environment, because users are now working from home, your applications are in the cloud as a service, and the applications themselves are microservices based. So when we were doing, you know, process isolation and running multiple instances for scaling globally, we also had to deal with those kinds of problems. And getting the signals from this disparate and, you know, very diverse ecosystem, and trying to make sense of them, is what observability helps us do, right? So it's been, you know, a firsthand experience for us as a company that built this and evolved along with the industry, without having these fancy terms back in the day.
How have you seen the scope of observability evolve over the years? You know, not only from a security perspective, but performance and a lot of other things we cover when we talk about observability. If you break it down into different parts, you know, your question has multiple angles to it, right? So first of all, there's this whole idea of monoliths versus microservices. When you think about it from a compute perspective, where there is no latency sensitivity and performance is not really critical, it's much easier to say, I'm going to do everything microservices based. Our experience has been that we are actually going away from distributed toward a unified single-pass architecture on the networking side of things. What we concluded was, from a data plane side of things, when you handle network packets, you don't want multiple hops, you don't want to do service chaining, you don't want multiple different services. We are actually converging everything into what we call a start-to-finish process, right? You take a packet and you do everything you want to do to that packet, compression, you know, access control, DLP checks, whatever you want to do, in a single-pass architecture. That's basically where we are going. And this is me responding to your comment that we may go back to monoliths. And I'm like, yeah, you know, there is actually a very good, technically valid, practical reason to implement something like SASE where you actually need that. Now, that's for the data plane. For the control plane and management plane, absolutely, we are going full microservices, distributed and all of that. And we take signals from those. Now, back in the day, when we were trying to gather signals, you would run out of bandwidth, as an example. Now there are new frameworks, we are using new protocols like gRPC, and significantly more data is being moved around.
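The single-pass idea described above can be sketched in code. This is a minimal, hypothetical illustration, not Aryaka's actual implementation: each inspection stage (access control, DLP, compression) runs in one in-process traversal of the packet, instead of the packet being forwarded between separate services. The stage names and drop conditions are invented for the example.

```python
import zlib
from dataclasses import dataclass, field

@dataclass
class Packet:
    payload: bytes
    verdict: str = "allow"
    tags: list = field(default_factory=list)

def access_control(pkt):
    # Hypothetical policy: drop payloads carrying a blocked marker.
    if pkt.payload.startswith(b"BLOCK"):
        pkt.verdict = "drop"
    pkt.tags.append("acl")
    return pkt

def dlp_check(pkt):
    # Hypothetical DLP rule: drop anything that looks like an SSN leak.
    if b"SSN:" in pkt.payload:
        pkt.verdict = "drop"
    pkt.tags.append("dlp")
    return pkt

def compress(pkt):
    pkt.payload = zlib.compress(pkt.payload)
    pkt.tags.append("compressed")
    return pkt

# Single pass: every stage runs in one walk over the packet,
# rather than service-chaining the packet across multiple hops.
PIPELINE = [access_control, dlp_check, compress]

def process(pkt):
    for stage in PIPELINE:
        pkt = stage(pkt)
        if pkt.verdict == "drop":
            break  # short-circuit: skip remaining work for dropped packets
    return pkt
```

The design point is that adding a check means appending a function to `PIPELINE`, not deploying and chaining another network hop, which is the latency argument made above.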
And we are able to do analysis on top of it, right? So things have been evolving significantly. When it comes to security, again, the protocols today are much more secure, and it's so much easier for us to, you know, manage things like SSL and encryption when you transfer data in smaller chunks. So there are multiple elements to it. And again, as you said, for performance, of course, there is the question of whether you really need a distributed setup and whether you can afford so many hops. For security, of course, you need to make sure you transmit it the right way, with encryption and so on. And then there are use cases for security, which I'm sure we'll touch upon at some point. But that is where observability takes on a completely new meaning. It actually brings a completely new dimension together. Do organizations understand the importance of observability? Of course, when we talk about adoption, there's a wide spectrum. There are a lot of organizations who are in the early phase of adoption, and there are early adopters who have been doing it for almost 10 years now. As you also mentioned, back then that was not the label or term we were using. Where do you see observability practices today? Observability is a necessity to operate the business. So people don't wake up and say, I'm going to do observability, right? I mean, some people do, but even if you don't say I want to do observability, just to run the business, the basic troubleshooting, the basic conversations of how am I going to keep my business going, people do things which are actually part of what we call observability, right? So earlier things used to be reactive: things break, you go figure out what's going on. Then we said we're going to be proactive, which is, I'm going to monitor you, I'm going to make sure of this.
And now where people's heads are is, I'm actually going to do more predictive analytics. I'm going to do predictive monitoring. And I'm going to try to find signals which actually carry meaning, signals that indicate bigger problems before they actually become problems, right? So a lot of these conversations, people are having out of necessity, not because it's a new fancy thing, like I want to adopt new technology and spend a whole bunch more money. It's really, you know, I've got to survive, and I've got to figure out what's happening to my workloads in Amazon, you know, how many S3 buckets I have and who's talking to whom. I mean, there are very simple examples. At one point in time, we gathered some data and were doing analytics on it, and we found that somebody from inside an application was actually reaching out for credit card validation. There was this weird connection going out which nobody knew about, right? And because of the observability, because somebody was looking at it, we were like, wait a minute, there is no reason for this application to do credit card validation, what is going on? And then, of course, you figure out a whole bunch of other stuff. When it comes to observability, of course, there are tools and then there are practices. How much of a role does culture play? Because tools by themselves are not enough. And as you also said, sometimes it's just a checkbox, or it's done out of necessity. Like security, you know, nobody really cares about it until something goes wrong, and then they're like, oh, I wish there was a parachute in the plane or an airbag in the car. So this is what it is. So talk about the importance of cultural change.
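The unexpected-connection anecdote above is a textbook baseline-anomaly pattern. As a rough sketch, under my own assumptions about the data (the service names, destinations, and flow-log shape here are all hypothetical): learn which destinations each service normally talks to, then flag any connection that falls outside that baseline.

```python
from collections import defaultdict

class ConnectionBaseline:
    """Learn which destinations each service normally talks to, then
    flag connections outside that baseline -- the kind of signal that
    surfaced the surprise credit-card-validation call in the story."""

    def __init__(self):
        # service name -> set of destinations seen during training
        self.known = defaultdict(set)

    def learn(self, service, destination):
        self.known[service].add(destination)

    def check(self, service, destination):
        # Returns None for normal traffic, or an alert string.
        if destination in self.known[service]:
            return None
        return f"anomaly: {service} -> {destination} (never seen before)"

baseline = ConnectionBaseline()
# Training phase: replay historical flow logs (destinations invented here).
for dest in ["db.internal", "cache.internal", "auth.internal"]:
    baseline.learn("order-service", dest)

# Detection phase: a destination the service has never contacted.
alert = baseline.check("order-service", "cardcheck.example.net")
```

Real systems would add time windows, decay, and allow-listing on top of this, but the predictive-monitoring idea is the same: the signal exists before the incident does.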
And if you can, also talk about the observability tools and practices. Does that question make sense? Yeah, you're absolutely right. See, the thing is, especially with observability, one of the fundamental requirements is that there has to be instrumentation in place. How do you gather signals? For you to gather signals, there has to be something actually sitting there sending you signals and giving you the right information: the right logs, events, whatever those might be. And I think one cultural shift we are seeing now is that as DevOps teams develop things, there is a tendency toward more instrumentation by design. You need to build instrumentation into whatever software pieces or applications you write, because at the end of the day, you can only work on signals which are made available to you. If that instrumentation doesn't exist, you have nothing. So there are processes, monitoring of processes, monitoring of different changes that happen in the environment, and the instrumentation to capture these signals is something that has to be baked in. So there is a cultural shift in terms of, whenever I'm writing a piece of software or an application, how do I make sure it gives me enough insight, that it emits enough events, logs, all of that data while it's functioning. Again, a lot of these things people do from experience; even in basic networking you build instrumentation for any delay, all kinds of stuff, in addition to whatever you do in the processing. But yes, that does require a bit of a different mindset, which is: if I don't build it in right from the get-go, adding it later gets harder. It's actually much more work to instrument something which was not built with that design.
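Instrumentation by design can be as simple as wrapping every handler so it emits a structured event. A minimal sketch using only the Python standard library (the function and field names are assumptions for illustration, not any specific product's schema):

```python
import functools
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("svc")

def instrumented(fn):
    """Emit a structured event (name, duration, outcome) for every call,
    so the signal exists long before anyone needs to troubleshoot."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        outcome = "ok"
        try:
            return fn(*args, **kwargs)
        except Exception:
            outcome = "error"
            raise
        finally:
            # JSON lines are trivial for downstream collectors to parse.
            log.info(json.dumps({
                "event": fn.__name__,
                "duration_ms": round((time.perf_counter() - start) * 1000, 2),
                "outcome": outcome,
            }))
    return wrapper

@instrumented
def handle_request(payload):
    return {"status": "processed", "size": len(payload)}
```

The point of the pattern is the one made above: the emitting side is a one-line decorator when it's designed in from the start, and a painful retrofit when it isn't. In practice, teams often reach for a standard such as OpenTelemetry instead of hand-rolling this.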
Are there any major challenges that have hindered the adoption of observability? As you said, you know, maybe you started too late, or, I mean, there are companies who are still at a very early stage of their cloud adoption journey, and then there are a lot of companies who were born in the cloud native era. What are the challenges that you see in terms of adoption of observability, where people want it but cannot do it because of some of these complexities? See, the first and most important one is silos, right? And silos have multiple layers to them. One is the organizational structure: whose responsibility is it, whose problem is it? Because everybody wants the results out of it, but when it gets to who should do the work, who should actually spend resources, time, headcount, whenever it comes to budget, it's: I want the results, but I'm not ready to invest, I don't have the money from an OpEx side of things to actually make it work. And all of these things are considered nice to have, right? When you're building an application and thinking about resources, all of these things are considered nice to have. So I think at the basic level it's the number of silos that exist, which is what observability is trying to solve for. And then on the technology side of things, people have disparate products and disparate frameworks too, right? There are so many different frameworks within observability, and not everybody subscribes to them. Everyone has some flavor of it, but getting all of this together is a massive problem. One of the problems Aryaka ran into is that we have petabytes of data, and just the basic compute infrastructure became a challenge for us. We were like, okay, do we even have something that can handle this amount of information?
And it took us about three minutes just to run a single query, the equivalent of one question, for the data to be churned and an answer produced, right? And of course, now things have changed. Now we have many more resources at our disposal than we did before. But it's a very practical problem. Our problem was: we have petabytes of data, what am I going to do with this data? Where do I put it? Who is going to churn this information? Who is going to give me the analysis of it, right? So it's multiple layers of problems, starting from the org structure. Everything boils down to: who is going to pay for it? If I adopt, say, a GPU or whatever system, somebody has to actually invest in it. Where is the investment coming from? When it comes to observability, we now look at these new workloads which leverage generative AI technologies, and they are creating new challenges. At the same time, generative AI can also help with observability. So there are two aspects of generative AI, as workloads and as, you know, something that helps. How do you see it? You basically hit it on the head, in terms of there being so many interesting applications that you can build on top of generative AI, right? So there's definitely a lot of interest, lots of opportunity. It again boils down to: how am I going to make it work from a practical point of view? Because everything comes at an expense. Just like I said, even if you have a large amount of data... we are using generative AI for a knowledge base, as an example. So what we did was, we had all this data of all the questions customers have asked us and the answers we gave, from a technical knowledge base perspective, right?
And then we used that data to figure out how to get a better version of the answer. So you fed all that data into, you know, generative AI, and you got really succinct answers, whereas when a human tries to write the answer, it comes out more complicated, right? So there are definitely benefits in how you can leverage these tools and, you know, shorten the work you're doing, and there are cost savings directly associated with that. But to get to that point, there has to be an initial investment: how do you get this system set up? And then of course you reap the benefits. So it's a little bit too early, actually, for us to say where things are, but it's very promising for sure. And when you know exactly what you're looking for, it is very easy. Because one of the things with generative AI is what data is being fed in. If you have good data, you get good results. If you have bad data, you get questionable results. So as long as you use it in the right way for the right use cases, it is much easier to control and make use of. If I ask you, and of course we don't have time to share the whole playbook today, what advice do you have for companies, irrespective of where they are in their observability journey, so that they can pick and choose the right tools, but more importantly, build the right practices? As you said, beyond tools and practices, culture plays a much bigger role there. So, of course, I think it is a very step-by-step approach, and observability can do many things and solve many problems. It's very important to understand what is higher priority, what the cost and resource constraints are, and what kind of data is meaningful. Because you can very easily run into data overload problems where, you know, it's like a needle in a haystack, and you have so much more than you can consume. There are clearly security and privacy concerns as well.
So it's very important to build the architecture and the systems with these things in mind from the get-go. Do you have GDPR constraints? Do you have things that need to be monitored and produced for audit purposes? I mean, the biggest thing is auditability: can I make sure that I produce the right information here? And then try to build expertise slowly and gradually, because again, in the job market there is a lack of resources. But it's not to a point where, you know, you need to have everything at once. So just building it gradually, growing the teams, growing the knowledge. Using standardization is another very important piece, so you can interoperate with multiple open source systems. And then managing the resistance within the organization, going back full circle to the cultural piece. I think these are some of the things which are very helpful in the journey toward observability. It's clearly a journey; I like to use that word a lot. It's not something which is on and off. It's not like one day you're not there and the next day you have the most observable system. It actually takes time and systematic effort across the board, and it is a journey which is definitely worth embarking on, right? So people shouldn't fear it. It's just a journey. Renuka, thank you so much for taking the time today, and of course for talking about observability, and also about Aryaka. Thanks for all those insights, and I would love to chat with you again. Thank you. Thank you so much. I really enjoyed it. Thanks for all your insights and a great conversation.