Madhagi Arun and I will be moderating this discussion. The reason we chose this topic, platform engineering on Kubernetes, is that Kubernetes has become the cornerstone of modern application deployment and management. It has revolutionized the way we build, scale, and deploy applications in the cloud world. Over the course of this discussion we plan to cover automation, reliability, scalability, monitoring and, more importantly, how to enhance the developer experience. We have with us an eminent set of panelists who are going to share the experience, insights, and lessons they have learned along their journeys, and we are going to tap into that expertise to understand real-life problems and how to overcome them. Before we move on to the discussion, I would request the panelists to briefly introduce themselves. I'll start with Bala here.

Good morning, all. I'm Bala. I work as a senior SRE. I'm happy to share this stage with our Kubernetes gurus. That's it.

Thank you, Bala.

Hi everyone. I'm Matunika. I'm working with Ericsson as a senior application analyst. My experience with Kubernetes is relatively new, so I'm looking forward to this discussion. Thank you.

Hi everyone. I'm Krishna. I'm part of the modernization group at Infosys, and as part of CNCF I work with the SPIFFE and SPIRE projects and TAG Security. I look forward to the discussion with all of you here.

First of all, thank you for having me here. I am Arul, working as an engineering lead at Tata Communications, where I primarily focus on building cloud-native applications for customer experience.

Hi everyone. I'm Mohan. I'm a lead software engineer at Avala. I've been building a distributed crawling platform for the last five years. I look forward to discussing platform engineering with everyone.

Thank you for having all of us here. It's a pleasure to be part of this panel. My name is Ram. I am the chief evangelist at the Cloud Foundry Foundation.

Thank you, panelists, for introducing yourselves. I'll start with the first question, and I'll start with the very basics: what do you mean by the term cloud native? Arul?

Of course. The term cloud native has become a buzzword these days, right? But it was introduced a decade ago, at an AWS re:Invent event, by Netflix. Since its introduction it hasn't had a clear definition, meaning different things to different people. After the CNCF was established, though, we got a much clearer definition and a trail map for an enterprise's cloud-native journey. Basically, cloud-native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Specifically for your question, what does it mean for this platform engineering topic, and how do you recognize a cloud-native application? Consider a scenario: I have a legacy monolithic application and I simply lift it into the cloud. That cannot be considered a cloud-native application. To illustrate, in this conference hall there may be a few folks who were born and bred in Chennai. They are called natives of Chennai because they know every nook and corner of the city. In the same way, a cloud-native application should leverage all the best aspects of the cloud.

Okay. Thanks for covering the basics by taking the audience as an example. Moving on to the next question.
I think Kubernetes has become the de facto platform for modern application development and management. My next question in this regard is: what are some of the considerations and challenges we need to take into account while architecting a Kubernetes platform?

Yeah. When you're architecting a platform on Kubernetes, my first question would be: why do you want to build a platform on top of Kubernetes at all? Ever since Kubernetes came out in 2015, there has been what I would call a PaaS fever. We've had literally hundreds of PaaSes, and over time some of them turned out to have a legitimate purpose and still exist. Most of them went away, and some of the things that started out as a PaaS, like OpenShift, actually ended up becoming Kubernetes distributions. After the PaaS wave you had the serverless moment, which once again was largely just a PaaS, and that has also fizzled out. Now you have, let's say, platform engineering. The term platform engineering is fuzzy at both ends: we don't know where it starts and we don't know where it ends. But over the last few months, what it has come to mean is that everybody wants to build their own Kubernetes distribution inside the company, which is very unfortunate, because in 99% of cases you shouldn't be building a Kubernetes distribution inside your company. Unless you have very specific use cases, you would be better off buying something off the shelf, or for that matter using a managed Kubernetes service like EKS or GKE. I would even go further and say: give your teams, your different organizations, separate Kubernetes clusters.

Okay. But doesn't having an in-house Kubernetes platform give us the customization we need?

That is true, but there are only so many ways you can customize it. Given the number of distributions out there, somebody will already have made a product out of that, and somebody will already have something that fits your needs most of the time. So in terms of customization I don't think you gain a lot. There is also the other angle: standardization. Mostly people do platform engineering for standardization. But in a large company you generally have different teams with different preferences. One team will want to use, say, Linkerd for some reason, and another team will want to use Istio. Even within one large organization you have teams with very, very different requirements, and by asking them to standardize, most of the time you are just restricting them. I'm pretty sure all of you hate writing YAML. Imagine writing YAML for something you wouldn't even have chosen. It's just going to be more daunting, I would say.

Maybe my follow-up question in this regard would be... you wanted to add something?

Yes, I just wanted to add something to that. I'm totally echoing what Mohan is saying: always start with the why. Why are we doing this? Let me give you a small example. Suppose you have a client and they are asking for 99.999, the mythical five nines of availability. You are left with about five minutes of downtime a year that you have to plan for. You've got to plan a platform that has just five minutes of downtime a year. At 99.995 you get twenty-five-odd minutes; at 99.95 you get maybe four and a half hours.
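For reference, a year has 525,600 minutes, so those availability targets translate into roughly these downtime budgets:

$$
\begin{aligned}
525{,}600 \times (1 - 0.99999) &\approx 5.3 \ \text{minutes of downtime per year}\\
525{,}600 \times (1 - 0.99995) &\approx 26 \ \text{minutes per year}\\
525{,}600 \times (1 - 0.9995) &\approx 263 \ \text{minutes} \approx 4.4 \ \text{hours per year}
\end{aligned}
$$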
In order to achieve just five minutes of downtime, you need to be multi-region and multi-cluster. You've got to have global load balancing, you've got to hot-back-up all your data to a secondary region, you've got to do all of this, and it's going to cost you a lot of money. So suppose you had that conversation in the first or second month of your project: you go to the customer and say, hey, I can give you 99.999, but it's going to cost you 200K a year more. What is your customer going to say? This is where I really encourage people to do these calculations ahead of time and think through the engineering ahead of time. You have a lot of control at the requirements-engineering stage that you may not have later on in your platform engineering life cycle. So here I'm pretty much echoing what you're saying: start with a why. Do you want to build a distribution? Why? Do you want to do HA across regions? Why?

I think you two are on the same team, I guess; maybe you should be separated. Okay, so my next follow-up question in this regard: an organization is going to have a combination of multi-cloud and hybrid cloud platforms. In that case, how does the organization manage this? They need to manage clusters and deployments spread across multiple cloud environments. What are some of the challenges or considerations we need to take into account while doing this? Maybe Ram, your thoughts?

There's a funny story about the term multi-cloud being invented by a marketing person who used to work at the Cloud Foundry Foundation about ten or twelve years ago. That being said, I think a multi-cloud strategy, whether or not you're using Kubernetes, is kind of inevitable. There's no one-size-fits-all cloud. Most companies I speak to and most CTOs I interact with use a combination, say DigitalOcean for testing and AWS or Azure in production, not to demean the other providers out there. There are also a ton of people who are very particular about using Europe-based vendors for European markets. So a multi-cloud strategy is kind of inevitable, and that all points towards wanting a more common, platform-based strategy, so your teams are not thinking about which cloud they're deploying to. Your teams are thinking about deploying to Kubernetes, and Kubernetes has done wonderfully at standardizing that experience across different cloud vendors, different VM and compute types, and so on. So the best way to tackle multi-cloud in this era is, obviously, to use Kubernetes. But that also comes with a lot of caveats and problems, which we have to take the bull by its horns and try to solve.

Staying with multi-cloud, my next question is: how do we bridge the gap for those who prefer to use an enterprise product that is already in the market? One solution may be available in one cloud, and the same solution may be masked under another name in another cloud. How do we go about this?

Yeah, so one of the unfortunate things about the open-source world, and it's kind of a curse that follows us around, is fragmentation. We've seen it with Android, we've seen it with Linux back in the day, we've seen it with so many other products, and now we're seeing it with Kubernetes. It's nothing new.
The only sane path there is to keep experimenting with what's available in the market: check out more projects and products and try to find other open-source solutions that can help. There are definitely a ton of open-source solutions out there that solve the problem, and, to some of the things Mohan was highlighting before, they do solve the specific problem of being composable, where you want to swap out one component for another and so on. So I would definitely encourage people to keep investigating the open-source world and make those decisions when they have to. Curses, foiled again.

But anyway, Kubernetes being the foundation of your platform is almost inevitable at this stage. You might look at options, and there are other orchestrators available, but if you are doing container-based development, it is almost inevitable that you'll be deploying to Kubernetes. In terms of Kubernetes, this is something my colleagues, or rather my friends, at Red Hat say a lot: think of Kubernetes as an engine; you need to build the car around it. So you might first want to look at other projects within the cloud-native landscape, the CNCF landscape, to pick and choose what to build your platform with, and only after that go outside. We are doing an annual review of all the Sandbox projects within CNCF, and there are amazing, amazing innovations happening within Sandbox, within incubating, and of course among the graduated projects. So the suggestion is: pick and choose, build your platform, and if you choose from within the CNCF landscape you can be fairly sure the pieces play well together.

I want to add to that. When we say platform engineering on Kubernetes, I want to go back to basics again. Cloud-native applications are designed and architected to take advantage of the dynamic nature of cloud infrastructure, and platform engineering on Kubernetes is basically the set of practices, principles, and technologies that enable applications to be designed and deployed specifically for that environment. Cloud native in the context of a Kubernetes platform means leveraging all the capabilities provided by Kubernetes to build scalable, resilient, and highly available applications. Let's take just two characteristics of cloud-native applications and the corresponding Kubernetes capabilities. One key characteristic is the microservice architecture: applications are broken down into smaller, individual services so that each can be developed, deployed, and scaled independently. Another important aspect is containerization: when we containerize an application with a tool like Docker, it becomes portable and consistent across different environments. To manage these containers effectively, Kubernetes has become the popular platform choice for cloud-native applications, and each individual microservice can be scaled based on demand using Kubernetes' horizontal scaling capabilities, as in the sketch below. We could talk about this at length, but in summary: when we adopt a cloud-native approach with Kubernetes as the underlying platform, an organization can build resilience and achieve greater agility, scalability, and efficiency in its overall software development and deployment practices.
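As a minimal sketch of that horizontal-scaling capability (the service name, namespace, and thresholds below are purely illustrative, not something the panel prescribed), a HorizontalPodAutoscaler for a hypothetical microservice could look like this:

```yaml
# Illustrative only: autoscale a hypothetical "orders" microservice
# between 2 and 10 replicas, targeting ~70% average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: orders
  namespace: shop            # assumed namespace
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: orders             # the Deployment backing this microservice
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Because each microservice is packaged and deployed on its own, each one can carry its own autoscaling policy like this, which is exactly the per-service independence the microservice argument relies on.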
Thanks, everyone, for your valuable inputs. I think the audience will now have a good sense of what needs to be done when they go for Kubernetes deployments across multi-cloud platforms. I'll move on to the next question, which is about application deployment and management. In this context, for any application being deployed on Kubernetes, we need to automate the deployment, and we also need an efficient way to upgrade and roll back the application. So my next question is: what are some of the best practices or tools we can adopt to manage an application's lifecycle within Kubernetes?

Yeah, from an application deployment point of view, we have reached a point where cloud providers give you features such that you can automate your deployment just by changing a manifest. But that alone is not enough, because as applications grow you end up with a lot of microservices. At that point in platform engineering, where the system is no longer a monolith, you have to look at open-source tools like Helm or Kustomize; that is the way forward. Helm is a graduated CNCF project for a reason: it's the most widely used package manager. Even Helm has some drawbacks, and it can become cumbersome for developers to manage as the charts grow, but there are declarative layers like helmfile to manage that. So for most of the drawbacks there are solutions, and going ahead with these tools would be an ideal choice at this point. In that context, some of the best practices would be: separate your configurations, meaning environment configuration, application-specific configuration, and your global configuration, which helps a lot with manageability; identify and declare your dependencies before you start, so you are not in for a surprise later; and make use of the features these tools provide, like dry runs for debugging, and their plugins, for example secret-management plugins that integrate with credential managers like Vault. Almost all of these use cases are covered by these tools, so going for such open-source tools would be my suggestion.

Looks like Helm is a practical solution available today, and its advantages outweigh its disadvantages at the moment, I guess.
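To illustrate the configuration-separation advice above, one common layout (the file names, registry, and values here are hypothetical, not something the panel prescribed) keeps shared defaults in values.yaml and layers an environment-specific file on top at deploy time:

```yaml
# values.yaml -- application-wide defaults shared by every environment
replicaCount: 2
image:
  repository: registry.example.com/orders   # hypothetical image
  tag: "1.4.2"
resources:
  requests:
    cpu: 100m
    memory: 128Mi
---
# values-prod.yaml -- production-only overrides kept in a separate file
replicaCount: 6
ingress:
  enabled: true
  host: orders.example.com

# Deploy by layering the files in order, and use --dry-run first to inspect
# the rendered manifests before anything reaches the cluster, e.g.:
#   helm upgrade --install orders ./orders-chart \
#     -f values.yaml -f values-prod.yaml --dry-run
```

Keeping the environment overrides small and explicit is what makes it easy to see, at a glance, how production differs from every other environment.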
The next thing I want to discuss is a buzzword in the market these days: the internal developer platform. Can someone share their thoughts on internal developer platforms?

I believe internal developer platforms, IDPs, bring huge value, especially in cloud-native software development, by providing a self-service portal over the infrastructure for developers within the organization. IDPs can streamline the entire development process and improve productivity and collaboration across the organization. So I think platform engineering and IDPs are a huge opportunity right now. According to Gartner, with this platform engineering wave, 80% of IT companies will have adopted IDPs by 2026, with the promise of optimizing developer experience, improving overall product delivery, and ultimately increasing value to the end user.

Thank you, thank you. You want to add on?

Yes, I have a different opinion there. Firstly, Gartner says a lot of things like that; don't put too much trust in Gartner. I'll probably tell you more about that offstage. Secondly, when you say IDP, it's just like platform engineering: it means different things to different people. What I commonly see on the internet is people putting together monitoring tools, logging and log-search tools, and a Kubernetes distribution, and calling that an internal developer platform. Then you also have this trend, or rather, a very sensible idea, called platform as a product: you treat the platform as though it were a product in its own right, and all your internal users are your customers. At face value that sounds really good. But why do you want a good chunk of your company's resources tied up in maintaining a platform that is not your core competency? Eventually, in a few years' time, depending on whether it's a good cycle or a bad cycle, execs or VCs are going to say: why don't we either shut this team down and use a vendor, or spin this out as a separate company? We saw this happen with the data platforms. Hortonworks, Cloudera, all of these were, one way or another, internal teams that became separate companies, and so were Confluent and Databricks. Whatever internal platform engineering teams we have today, looking at history, I think that's what would happen to them too.

I just want to add: whether we keep it as part of our company or go external is a different question altogether, but as someone from a product engineering team I can share my pain point and how an IDP really helps. When we try to release the MVP version of our product, and everything is owned by the engineering team, there are numerous nuances to take care of, particularly with cloud-native infrastructure; it's very complicated. Sometimes the business wants to release a feature quickly to get feedback from customers, but they often lack understanding of the non-functional requirements like security, performance, and scaling, and that becomes a bottleneck for the engineering team. Having to carry all of these considerations eventually adds a lot of cognitive load on the team. In my opinion, an IDP removes that cognitive load from the product team by abstracting the whole infrastructure layer: whether you are on GCP, AWS, or hybrid on-premises, everything is taken care of by the platform. It allows developers to create clusters, namespaces, or whatever resources they want through a self-service portal. What I see is that this significantly improves the product team's overall productivity, so they can focus more on the features their customers really want.

One small point on this, because you touched on abstraction. In this industry, for some reason or another, abstraction is always touted as a good thing, but in reality it is a double-edged sword. When you create an abstraction, when you hide complexity behind it, eventually that abstraction is going to leak and you will need to look behind the curtain. And what happens then? People will not know what they are dealing with. Imagine if they had been dealing with the AWS console or the AWS CLI from the beginning; they would know what they should be doing.
But if you have abstracted that away behind an internal CLI tool or an internal panel, then when you want to do something the panel doesn't support, you have to go to the AWS console and you won't know what to do. So when you are creating abstractions, you need to look at the net value the abstraction provides. In many cases it is a net positive, but if you are not careful it can be a net negative: it will add more complexity and more problems down the line than it removes. So while I do believe an internal developer platform is something every company will look at creating, I think we also need to be careful about what that actually is.

Krishna, your thoughts?

Yes. The way I look at a trend and try to figure out whether it is being held up by hawa, hot air, or by a real need is to see what it is addressing. What's the pain point? One thing we do not always realize, because we are all technology enthusiasts who love Kubernetes, love CNCF, love Docker, and love getting into technology and working with it, is that Kubernetes has been great for operations. Operations teams love Kubernetes: we no longer need to tend to a screaming server at 2 a.m., and we no longer need to follow a Word document with 35 steps to deploy an application. But developers face challenges, because they are coders; they are application specialists, programming language and tooling specialists, and now suddenly they have to learn Docker, they have to learn Kubernetes, they have to understand pods, deployments, stateful sets, and all of that. Life has become somewhat more complicated for developers. So what internal developer platforms are really doing is addressing this need for developers to have a smoother experience with the platform. Another thing you might notice is that Backstage is all the rage now, and it has really broken through because there is a real underlying need for internal developer platforms. Every company has some version of an internal developer platform; they're just doing it in an incomplete, ad hoc way, and now someone is doing it in a more standardized way and everyone is attracted by that. So to summarize: I do see the need, and I do see the pain points we are trying to solve here. The need for standardization is there; we should not be reinventing the wheel every time. And I just want to give a shout-out to Cloud Foundry. Cloud Foundry got developer experience right more than a decade ago; the whole concept was, you worry about the application, we'll worry about the platform. I'll let Ram jump in here.

Yeah, now that we are getting to the Republic TV part of the panel discussion... Anyhow, thank you for the shout-out to Cloud Foundry; I wasn't sure whether it would be appropriate for me to make it myself. A couple of things I wanted to add to the whole discussion: there is a TAG App Delivery that is part of the CNCF. CNCF has TAGs, technical advisory groups, about six of them, each focused on certain aspects and areas, and within TAG App Delivery there are WG Platforms and WG Operators, different working groups that are tackling exactly the problems we are discussing. Is a platform right? Is a platform wrong?
Is a platform right for a small team? Is it right for a big team? Do we want ops as its own discipline? There are a bunch of questions we need to ask ourselves as a community of technologists before we jump to saying everybody needs a platform. I don't think anybody is actually saying that: some teams need platforms for reasons X and Y, and some teams definitely don't need platforms, also for reasons X and Y, and I think it is genuinely fuzzy to the community at large whether or not they need certain things. This also harks back to the opening line of the discussion: do teams even need Kubernetes? I don't think many of us can answer that, and obviously I'm not going to continue down this line at a Kubernetes community event and risk not being invited to the next one. But truth be told, that is the reality for a lot of teams. So if folks are interested in this discussion and this train of thought, I would highly encourage them to participate in TAG App Delivery, in Working Group Platforms or Working Group Operators.

To summarize, I think we had mixed opinions from the panelists, so I'll leave it to the audience to decide and pick whatever fits them best. I'll move on to the next question. Any application has to move to production at some stage, so let's look at some of the challenges we might encounter when we move an application to production. Here comes the key part: monitoring and observability. My next question in this regard is: with respect to monitoring, what are some of the best tools and practices available to monitor not only the Kubernetes cluster but also the applications deployed on it?

Arun, you said monitoring and observability; monitoring is part of observability. Observability is a definite requirement in any microservices architecture, whether it's on Kubernetes, on cloud services, or anywhere else. It is a de facto thing; it has to be there, because if it isn't, it is really hard to troubleshoot anything when things go wrong. We can divide observability into three key areas, or three pillars: monitoring the metrics, a central logging system, and distributed tracing for our applications and also for some Kubernetes components. The scope and definition of observability varies from organization to organization based on their product and their use cases. For example, in our company we have an internal policy of not running the observability stack inside the application stack itself, so we don't run Prometheus as an operator inside Kubernetes; rather, we have a dedicated Prometheus and Grafana stack outside of Kubernetes. The good thing is that even if there is a total outage in Kubernetes, we still get alerts from Alertmanager, which lives outside the cluster. So instead of being blind during a total outage, we get some insight, and the pre-configured metrics are still there, so we retain some visibility. Use cases like this vary from organization to organization.
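As a small sketch of that externally hosted alerting setup (the job name and thresholds are assumptions for illustration, assuming the external Prometheus scrapes the cluster's API server endpoint), a rule along these lines would still fire even when the cluster itself is down:

```yaml
# Illustrative rule file for a Prometheus/Alertmanager pair running
# OUTSIDE the cluster it watches.
groups:
  - name: kubernetes-availability
    rules:
      - alert: KubeApiServerUnreachable
        # 'up' is the built-in Prometheus metric that drops to 0 when a
        # scrape fails; the job label is an assumed external scrape job name.
        expr: up{job="kube-apiserver-external"} == 0
        for: 2m
        labels:
          severity: critical
        annotations:
          summary: "Kubernetes API server unreachable from the external monitor"
          description: "Scrapes of the API server have failed for 2 minutes."
```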
Speaking of centralized logging systems: as Krishna said, unless we need something like a five-nines SLA, we don't need a robust centralized logging system with pipelines, Kafka, and low-latency streaming of logs. We can have a simple rsyslog forwarder running alongside each service, just sending the logs to a centralized location; a much simpler setup is fine. If we go for ELK or EFK stacks, they are good, but they are hard to maintain, complex, and resource intensive, and most Kubernetes clusters don't need that unless we go for multi-cloud or hybrid cloud environments. The same applies to tracing. To gain deep insight into an application we definitely need distributed tracing, but we don't need tracing for the Kubernetes components themselves; we don't need Jaeger or Tempo deployed just to gather traces of the API server. That isn't needed for every use case. Rather, we can leverage existing tools like a service mesh: most service meshes offer traces based on eBPF and existing Kubernetes primitives, so we don't need additional resources for that. If we need deep application insight, we can deploy distributed tracing like Jaeger and collect the data. The main issue there is instrumentation, because each observability stack expects different instrumentation, and even Tempo has its quirks here. Of course, everyone is migrating to OpenTelemetry and it is becoming the open standard, but there are still gaps. There is also an interesting product called Pixie, which is Kubernetes-only: instead of requiring instrumentation, it gives you traces based on eBPF and the Kubernetes APIs, that is, based on how the application interacts with the Kubernetes APIs. These are some interesting tools. Also, the primary pillar of observability, I would say, is alerting. If you have a robust monitoring system and traces but no good alerting on the critical KPIs, it is of no use. And on top of alerts we can build automation, which gives you a self-remediation angle: if we know a particular alert is firing, we can remediate it easily based on runbooks and playbooks. There are also good tools like Botkube and Robusta that take this to the next level: if we are travelling or on vacation and there is a severe issue with the Kubernetes cluster, we can troubleshoot and even resolve the issue from a Slack message. There are security concerns with that, but apart from those, it makes the cluster easy to manage on the go.

I think Bala, being an SRE himself, is sharing his day-to-day tips and tricks with us. I'll move on to the next question, which is about security. Security is a crucial aspect of any platform, and Kubernetes is no exception. My question here is: what are the ways we can secure the Kubernetes cluster and also the applications deployed on it?

Great. Here is another situation where cloud native actually introduces a bit of complexity to the platform; stay with me here. Security is all about controlling ingress and egress, controlling the number of layers, and minimizing the attack surface. And what are we doing with cloud-native architecture? We have bare metal; on top of it we have VMs; on top of that an orchestrator; then a container runtime; and on top of all that our applications are running.
So you are increasing the surface area, and every extra layer is another way for an attack to come into your system. To address that, you need cloud-native security tooling as well, and there's a lot of work being done in this space. OPA is an amazing tool, a CNCF project, and OPA Gatekeeper helps you enforce policies on your clusters and audit what is running. But what I want to bring out is that with cloud-native platforms you are entering a whole new era of managing and maintaining security. It's not just about CVEs, and it's not just about firewalls. Instead, you need to secure each and every layer, with effective remediation, which again requires deployment tooling like Helm and pipelines, because when you get a severe CVE in your application you don't want to wait ten days to get it fixed; you want to do it right then and there. These are all the layers you need to think about in order to secure these platforms. I would definitely recommend the work being done by TAG Security, and I would recommend OPA, Falco, and Tetragon, the Cilium project that uses eBPF for runtime monitoring. So I would definitely recommend that anyone using these platforms familiarize themselves with some of this tooling and these security concepts. Across our clients, I'm seeing that this is becoming the number one concern they come to us with. We have effective tools to address it, but we really need a lot of awareness among the teams and the people running the platforms. We should make use of these tools and be proactive on this front.
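To give a flavour of what an OPA Gatekeeper policy looks like in practice (the required label and resource names are made up for illustration), a ConstraintTemplate plus a Constraint might be written roughly like this:

```yaml
# Illustrative Gatekeeper policy: every Namespace must carry a "team" label.
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels
        violation[{"msg": msg}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("missing required labels: %v", [missing])
        }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: namespaces-must-have-team
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["team"]
```

The admission webhook then rejects, or in audit mode simply reports, any namespace created without the label.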
Next question: any platform is going to have some limitations, and Kubernetes is no exception. What are some of the limitations of Kubernetes, or improvements you would like to see, and how can we make it a better platform? Do you have any wish list for Kubernetes?

There are about 1,700 open issues on the GitHub repo, so obviously getting through those at the earliest would be good, although in the process of resolving them there will probably be 1,800 new ones opened. That said, by and large the Kubernetes community is doing extremely well in terms of being open and responsive in how they improve the platform day in and day out. There are some things I would definitely like to see, but rather than come at it from the technical direction: there's a lot of work that can be done to improve the onboarding experience, not just for Kubernetes itself but for the entire CNCF ecosystem as a whole. It's wonderful to see a massive landscape with, I don't know, millions of dollars in market-cap value, but that comes with a difficulty: it's not easy to comprehend which path a developer should take as they start to navigate that landscape. So if there are enthusiasts in this audience, or among those watching, who want to step up and say, what can we do to educate people better about these platforms and about how these individual tools work together to compose a saner, more useful experience, that would be something very nice to have.

That being said, on the technical side there are a few projects I have followed, for example the whole notion of multi-tenancy on Kubernetes, which is also being worked on through SIGs and working groups. On multi-tenant Kubernetes, there was a project a year or so ago that I was looking at called hierarchical namespaces; there hasn't been an awful lot of progress on it. People definitely recognize the need for something like it, because the minute you look at it you love it and you want to use it a lot, but there isn't sufficient steam behind it. So I think aspects like those, and improvements around them, would be great. Do you want to add anything?

I have a slightly unrelated question, since you brought up multi-tenancy: what are your thoughts on multi-tenancy on Kubernetes as part of platform engineering? Some subset of people have platforms that spin up Kubernetes clusters; what are your thoughts on that, and would you prefer that over multi-tenancy? Even the vSphere folks were briefly doing that before they changed to DC/OS; they were running Kubernetes clusters on vSphere.

I am going to prepend my answer with an asterisk: I think the answer is, it depends. It really depends. I don't like giving this answer in a public forum at all, for a lot of reasons, but I'm forced to give it today. Depending on what the needs of the team are, having Kubernetes clusters spun up as part of a platform makes sense in some ways, but I don't think that is the exact problem people are trying to solve with platforms. Having the platform spin up Kubernetes clusters is, to me, like a vitamin: there are other, more important things a platform needs to accomplish before it gets to that point. We can probably discuss this more offline, but I will say that "it depends" is a perfectly good answer.

Okay, but having said that, I am very opinionated about this. I see people getting very excited about being able to spin up Kubernetes clusters, so they end up with one Kubernetes cluster per application, one Kubernetes cluster per team, one Kubernetes cluster per whatever. Kubernetes is designed to operate at scale, and it gives you enough mechanisms to manage it and make it fault tolerant: namespacing is there, RBAC is there. It gives you enough tools to manage your teams without having to spin up one cluster per team. I have seen environments where there are dozens of Kubernetes clusters up and running. My suggestion: use Kubernetes for what it was intended to do, which is run large platforms. It can do that for you; a hundred nodes is nothing, and it can do five hundred nodes before things even start to get complicated. So use the tooling provided by Kubernetes, try to minimize the number of clusters, remember that a cluster is intended to run thousands of applications, create large clusters, don't be afraid of them, manage them, and empower your teams. That's my impression, at least.
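As a minimal sketch of that namespaces-plus-RBAC approach to sharing one large cluster (the team and group names are hypothetical), each team gets its own namespace and a RoleBinding scoped to it:

```yaml
# One namespace per team on a shared cluster, with RBAC limiting what the
# team can touch. All names are illustrative.
apiVersion: v1
kind: Namespace
metadata:
  name: team-payments
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: payments-developers-edit
  namespace: team-payments
subjects:
  - kind: Group
    name: payments-developers     # assumed group from your identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                      # built-in role: read/write most namespaced resources
  apiGroup: rbac.authorization.k8s.io
```

Combined with a ResourceQuota and NetworkPolicies per namespace, this covers much of the isolation that teams often reach for separate clusters to get.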
Thank you all. I still have a lot of questions, but due to time constraints we have to cut this short. I thank all the panelists for sharing their experience and insights on this topic, and I hope the audience finds this discussion helpful and uses it to navigate the challenges in this ever-evolving platform called Kubernetes. Thank you. May I request the panelists, Arul Selval, Krishna Venkat, and Mohan Muthukumar, along with the moderator of the show, Arun, to stay back on the stage for a moment. Thank you, everyone; that was a great session. We will now move on to the next session, which is the keynote for the day. May I invite Vishnu Sharma to the stage.