OK, a very warm welcome to all of you to this webinar. It's all about reusability of pipelines, specifically in the modern context of a cloud-native world. But before we head into it, a little bit of context on what is happening in the current market right now, specifically in large organizations in banking, telecom, finance, petroleum, and similar industries, just from our experience. In the past, Kubernetes and container technology were adopted mostly by new-age organizations, typically startups. Think Facebook, think Instagram, or think of an e-commerce website within your region. But with the passage of time, Kubernetes has become the de facto standard. And what typically happens in such organizations is that they source their applications from multiple places. Some of it might be developed internally, and some might be COTS applications supplied by external vendors. For example, take Finacle. Finacle is developed by a company called Infosys and is part of the core banking systems of many financial institutions. With that said and done, there is a need to distribute your applications across pretty much any cloud, one reason being business expansion, others being technical limitations with certain clouds or lack of availability of infrastructure in a particular region. For example, in the Middle East, it's only now that AWS is entering the scene, providing all of its infrastructure, tying up with local providers for hosting the data centers, and eventually giving you a Kubernetes cluster. So although business demands that you expand to as many regions as possible, in mature organizations it's quite a challenge to get your applications there. Now, think about it.
If you've got a Kubernetes cluster, the eventual end goal of an organization is to run applications on it. So before we get into that, we would like to introduce Mothish, who is a technical lead as well. Mothish, what are the typical problems you see when a junior developer is working on applications and needs to get his deliverables to any Kubernetes cluster?

OK, thanks, Prashant, for the question. I'd like to talk in general about the problems that individual DevOps engineers and developers, whether new or experienced, face every day. The main problem I see is the time spent on development versus configuration and deployments onto multiple ecosystems and multiple infrastructures. Whenever a developer gets into development mode, he has to set up the things required for his day-to-day work. To test his daily changes and deliver a solid product, he has to set up all the pieces; today we have multiple microservices involved in setting things up for testing. So most of the time is spent configuring all the different infrastructure components, and only then can he test out his features. We are spending too much time here; that is one of the problems I see in day-to-day DevOps workflows. And this happens not just on a single day. It happens every single day, with every developer and every DevOps engineer, and that adds up to a lot of time. It also requires a lot of training: if you are onboarding a new developer onto your team, you'll have to train them on your infrastructure and how the different components fit together, just for them to get up and running with the setup of your everyday development workflow.
So you'll have to train those new engineers, and even experienced ones who came to your organization from a different one. It happens every day, and we can't do it right all the time. Even experienced developers run into configuration issues and have to debug them, which consumes a lot of time in taking the product from the development stage to the deployment stage. This is not repeatable if we are doing it manually. And what if some of the infrastructure goes down? You'll have to raise tickets, communicate with different teams, and get it resolved. As of now, all of this is manual, which takes a lot of time. In typical companies, we'll have multi-cloud infrastructure with different setups: one type for development and another for production. And how do you control access between development and production workflows? That is another area where we have to improve a lot.

Got it, interesting thoughts, but we want to delve a little more into the teams. The current scenario in our product teams is that you have independent teams managing microservices, considering that everything is heading towards a microservice-level architecture. What are the challenges that you foresee? Do development teams have their own preferred frameworks for, let's say, quality analysis or security scans? What is the scenario right now?

Yeah, the typical process from development to production goes like this: when a developer submits their code, the reviewer has to go through it and approve the pull request.
And it can't be guaranteed that a reviewer will identify all the security issues that might arise in the future. It then goes beyond the developer's control into some staging or alpha setup, where your quality assurance engineers will test it out and figure out if there are any issues. But quality assurance can only guarantee it to a certain level. On top of that, some companies will have security teams who actually try to test for security issues in the product. This whole cycle takes a lot of time: from development to deployment, validating each and every step to deliver a solid, secure product is slow. That is something we need to improve.

Got it. On the topic of reusability, what are the typical challenges when it comes to a developer consuming, let's say, security standards or quality standards? And again, let's rewind to someone who's setting standards at a framework level. One team might prefer Node.js and another team might be programming in Go. So is it a major point for an organization to have a standard, repeatable process? And if so, what are the typical gaps, from your perspective as a technical lead?

Whenever I observe people, there is a multitude of tools available today, and each individual developer has their own preference for which tools to use. Even if those tools are outdated, they might not want to use the latest technologies or learn new ones. Some people are passionate enough to learn new technologies and experiment with them.
But most people, I find, are hesitant to move from what they know to the new tools and features that are available, even when those are more secure and more powerful, because they have spent so much time on their current tools and stick with them out of inertia. Meanwhile, there are well-known vulnerabilities in the old versions of those tools, and some tools are no longer valid for the latest use cases. So if I have to move developers from one tool to another, we have to train them, and that takes time. It also depends on the practices each developer follows in delivering a product or feature. Each organization can have its own standards, but there is no universal standard as of now that can guarantee a product's security.

Exactly, that's very well put, Mothish, thank you. The point we are trying to bring out here is that you've got so many teams following so many frameworks. Some security scanning tools do not support, let's say, older frameworks like Spring Boot, but they're really good at meeting the needs of a specific team. Now, in a microservice-level architecture, you have self-sufficient teams driving the whole software development life cycle, splitting into product and platform engineering teams, where product engineers are wholly and essentially responsible for quality and security, while platform and security teams come in and set the standards. That is where the whole cultural shift is happening, and this is typically described as the shift-left movement: it's no longer that you develop an application and leave it to somebody else to verify the quality or security compliance aspects.
You tackle it much, much earlier in the software development life cycle, preferably even before the development environment stage. That, again, is subject to the organization you're in, the scale of your application delivery, and the scale of the possible risks involved: business loss, downtime, or compliance and regulatory issues. If you make a mistake, it's going to be really costly, especially in the sectors we typically deal with. So again, there's one very important point we need to cover before we head into the full technicalities of what reusability means in the modern cloud-native scenario, and that is skill set. Mothish, I have two questions here. One, how difficult is it for developers to pick up, let's say, the whole CNCF ecosystem: Kubernetes plus the security, quality, and compliance aspects, et cetera? And two, do they really need to know all of this?

OK. Today Kubernetes is the de facto standard for everything that's happening in this space. The Kubernetes ecosystem is rapidly changing and growing beyond our control every day. Every month we see so many projects getting added to the CNCF, and all these tools have their own purpose and their own ways of adding functionality on top of what Kubernetes offers as a base. We get used to some of the technologies and workflows that we use every day with Kubernetes.
So if we have developed and deployed our infrastructure in a certain way, and a new tool comes up tomorrow, we may not be ready to accept it into our infrastructure, because there are a lot of moving parts and introducing that new tool might break some of them. It's also not easy to keep learning at the rapid rate at which new technologies come up in the CNCF; it's not realistic to learn all the new things that are happening. So we need a unified approach where each developer doesn't need to know all the tools. If there is a system that provides everything that's required, where some set of engineers takes the time and effort to put together all the CNCF projects and expose them as abstract functionality to an infrastructure team, that would help us in a big way.

Very well put. Thank you, Mothish. With that, I will hand it over to Mothish to take you through the technicalities of what reusability is and how it is changing in a cloud-native world where Kubernetes is the de facto standard. Over to you, Mothish.

Thank you, Prashant. So, extending the problems that I defined earlier: what does it take to achieve consistent and efficient deployment mechanisms on the multiple ecosystems of a multi-cloud infrastructure? As of now, we don't have any standards defined across teams, companies, or organizations. Each organization follows its own standards and has its own ways of deploying, and this is not repeatable. Even within the same organization you have different sets of environments: development teams work on some environments, and then there are staging and production environments, especially when you have multi-cloud infrastructure.
So you should have a way of deploying your infrastructure and your microservices onto different platforms in a repeatable fashion. You can't just do it manually every time; it takes too much time. Without automatic validation, it takes too long to certify a build and then deploy it to staging and then to production. In today's fast-moving world, we can't afford that much time lag between releases. A lot of developer, DevOps engineer, and SRE time is getting wasted in this area. If your infrastructure deployment is not repeatable, it's not an efficient way of working. Deployments should be simple, easy, and error-free. What happens with manual deployments is human error. When we deploy something and it works, the common notion is "it works on my laptop, it works in my environment"; most developers have heard this phrase. That's why Docker came along, so we could ship our own Docker images. But even then, everything has since exploded into microservices, and a whole new set of problems arises from that: complex communication mechanisms between different microservices. Service meshes are a concept that comes into the picture here, and configuring those service meshes is a different beast; it's very difficult to get that done without errors. We also need a notification mechanism that can detect errors in the infrastructure and notify us about the issues; doing that manually, once again, is not efficient. So, on how we can have standardization and an effective interface that bridges the gap between development and deployment, I'd like to present some ideas.
The first idea: what if we could visualize our infrastructure on the computer screen, and drag and drop applications onto it just like in a computer game? What if the UI showed deployment dependencies across all your configurations? If you do something wrong in the configuration, what if the system could tell you, before you even deploy to your infrastructure, "this is what you have done wrong; if you deploy this particular application, you'll get errors"? What if the system were intelligent enough to suggest those kinds of errors? What if we had a whole set of pre-built libraries for deploying onto multiple architectures and multiple modes of deployment, like blue-green and canary, and pre-built GitOps workflows already set up? And what if we could achieve deployments without writing any code: complete, 100% deployment automation? And what if we could manage all the cloud accounts? Nowadays organizations have multiple cloud accounts, different environments, and different clouds; what if we could manage all those accounts within one platform? This is an area where a unified platform can help. In that regard, there is an open-source tool called Tekton, which is very useful, lightweight, generic, and very flexible as well. With Tekton, you can automate most of the infrastructure and deployment aspects I discussed earlier, and because it is a cloud-native project, you can readily use it with Kubernetes and there will be no bottlenecks. I'd also like to share some of the problems I see with Tekton. Basically, it takes a bit of time to learn; not everybody knows how to create a Tekton Task and how to run one.
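To make that learning curve concrete, here is a minimal sketch of what a Tekton Task looks like. The task name, parameter name, and step contents are hypothetical, chosen only for illustration; only the `tekton.dev/v1` Task structure itself comes from Tekton.

```yaml
# A minimal Tekton Task: a reusable unit of work made of ordered steps,
# each step running in its own container. Names here are illustrative.
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: echo-build-info        # hypothetical task name
spec:
  params:
    - name: image-url          # hypothetical parameter
      type: string
      description: Image reference this task would act on
  steps:
    - name: report
      image: alpine:3.19
      script: |
        #!/bin/sh
        echo "Would build and push $(params.image-url)"
```

You would run this by creating a TaskRun that supplies the `image-url` parameter, or with the `tkn` CLI (`tkn task start echo-build-info`). Even this small example illustrates the point: before automating anything, every team member has to understand Tasks, TaskRuns, parameters, and the container model underneath.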
In this regard, some of the tools in the market today help out a lot. We call them value stream delivery platforms. They're built on top of Tekton, extend its functionality, and provide a lot more than Tekton alone. Some of the solutions we have today are Jenkins X, Ozone, Skaffold, GainE2, and OpenShift Pipelines. One of the things I care about is multi-cloud ecosystems, and OpenShift in particular doesn't provide functionality to manage your own cloud; you'll have to go with OpenShift. Ozone, on the contrary, provides a lot of multi-cloud ecosystem management utilities: you can connect your whole ecosystem of multiple cloud accounts and multiple repositories and automate most of your deployment process. These tools have good user interfaces, and Ozone in particular has drag-and-drop functionality where you can build your own pipelines with just a few clicks and automate most of what I discussed earlier. These tools come with predefined templates, and Ozone has a large set of predefined Tekton deployment templates that you can reuse. Extending those templates, there are predefined templates for GitOps-style deployments like blue-green and canary, and you can manage all your cloud accounts from one screen. It has practically no learning curve, so it's easy to use. With that said, up to a certain level we have achieved a pretty good value stream delivery mechanism on top of Tekton, and we are still improving it, trying to make deployment procedures easy and repeatable. That is what I wanted to say about repeatable deployments and reuse of pipelines. Thank you, over to you, Prashant.

Thank you, Mothish, very well put.
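As a sketch of the kind of reuse these templates build on, a Tekton Pipeline can wire parameterized Tasks together so one definition serves every environment. The `git-clone` Task referenced below is a real reusable Task from the Tekton catalog (it takes a `url` parameter and an `output` workspace); the pipeline name, `deploy-app` Task, and its parameter are hypothetical, for illustration only.

```yaml
# A reusable Tekton Pipeline: the same definition can target dev, staging,
# or production by changing only the parameter values at run time.
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: build-and-deploy       # hypothetical pipeline name
spec:
  params:
    - name: repo-url
      type: string
    - name: environment        # e.g. dev / staging / prod
      type: string
  workspaces:
    - name: shared             # source checkout shared between tasks
  tasks:
    - name: fetch-source
      taskRef:
        name: git-clone        # reusable Task from the Tekton catalog
      workspaces:
        - name: output
          workspace: shared
      params:
        - name: url
          value: $(params.repo-url)
    - name: deploy
      runAfter: ["fetch-source"]
      taskRef:
        name: deploy-app       # hypothetical Task, e.g. applies manifests
      params:
        - name: target-env
          value: $(params.environment)
```

A PipelineRun per environment then supplies only the values that differ, which is exactly the repeatability described above; platforms like the ones just mentioned generate and manage such definitions from a UI instead of hand-written YAML.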
So again, the callback is to a multi-cloud scenario, driven largely by compliance requirements from the business side and by lack of availability of certain cloud technologies in certain regions, which again pretty much boils down to the region where you host your data. The second point is the shift away from the traditional way of splitting your teams: developers, quality analysts, security analysts who define policies, and site reliability engineers who help with operations. All of these are being merged into pretty much two domains: platform engineers and product engineers. With that said and done, not everyone needs to know the underlying details of how things function in Kubernetes or how things are handled across the entire workflow. It's actually detrimental to what they can contribute, because at the end of the day, from a product point of view, you're looking at the value you deliver to a customer. That is the core aspect of a value stream delivery platform: it helps you benefit from a business angle. You might have a DevOps process that is very well defined, or so some of you may think. But the question we should definitely ask ourselves as a whole organization is: is it really delivering value? And by that I mean, is the automation actually helping you? Is it repeatable? Is it reusable? Is it scalable? Is it secure? And after all of these points, are you able to deliver without compromising on quality and security? That is imperative in a world where attacks on systems are very common and new vulnerabilities are being discovered day in, day out.
So it becomes the core system you rely on for the agile development process, or agile philosophy I must say, which is followed by more or less everyone in the software development life cycle. Again, in how you measure the efficiency of your automation processes, there are certain gaps in visibility. For example, if an engineering manager would like to identify whether it's an approval that's taking time or a build process that needs to be optimized, how do you identify that? Add the complexity of multiple tools being used by individual product teams. For example, one team might prefer SonarQube because it's already built in, everything has been baked in, and it's all grandfathered in as a standard. But a new team comes in tomorrow and says, hey, I want to use something like Clair, or something like Snyk, because it fits my use case much better and solves my problem. You do want to give product teams, agile teams, that kind of flexibility, and that is where the complexity comes in. Although Kubernetes solves one layer of orchestration problems, as is always the case, if you consider the entropy of the world or basic physical concepts, reducing complexity in one area always generates complexity somewhere else. Now, this complexity is being transferred to developers, and developers are not all in a position to handle it. There are some really high-calibre developers who can deal with it, but what I'm talking about is a generic organization, or somewhere widely distributed where not everyone has visibility over all of it. So visibility is a challenge. For example, how do you debug issues? You might have logs at the framework level, but what if something is going wrong at the infra level? Do you really want to give those developers kube-admin access? Of course not.
So you need to be able to visualize all of it in one single platform. That is where VSDPs come in and help you optimize your DevOps toolchain for continuous delivery. They help platform engineers define standards so they can put their time and effort into developing something that is automatable and, more importantly, repeatable and scalable in the organizational context. By that I mean scaling the organization: let's say you have 40 developers now; tomorrow you might have 500, who knows? You need to have all of these moving parts in place before you scale; otherwise, once you scale, you're not going to really see the benefits of your automation processes. And it should not hinge on a single point of contact for all of it; it should be standardized within a framework. That is where value stream delivery platforms come in and really help you scale your DevOps transformation. With that, I would like to take a pause and hand it over to Abilash. Abilash, please go ahead with your inputs and conclusions, and whatever else may be on your mind.

Yeah, sure. Thanks, Prashant and Mothish, for this webinar. We've seen the term value stream delivery platform, or VSDP, come up multiple times, whether as a separate category created by Gartner or in blogs popping up on the net. Through this webinar we thought we could help consolidate all that information, and also put forth how Tekton as a framework enables you to reuse your pipelines and standardize your deployments across clouds, and how a VSDP extends those capabilities by giving you a user interface. Mothish has explained all those capabilities; a VSDP makes them much more reachable and accessible to the audience.
So you can leverage the benefits through a VSDP, and as the name itself says, the platform helps you map your value stream across your DevOps phases. The end-to-end value delivery from code to customer is pretty streamlined through a VSDP; platforms like Ozone or Jenkins X help you deliver this value to your end customers. How they do it is what we have seen through this webinar, and you can find more information on the respective tools' websites. We hope that the VSDP category comes up soon on Gartner, with the vendors listed on the Magic Quadrant; I think that would give everyone clarity as to what exactly a VSDP does. So yeah, thanks. Thank you, everyone, for joining us on this webinar, and please keep your questions coming; please post them in our channel.