Welcome to today's CNCF webinar, Simplifying App Migration to Kubernetes with an App-Centric Abstraction. I'm Chris Short, Principal Technical Marketing Manager at Red Hat, and also a Cloud Native Ambassador. I'll be moderating today's webinar. A few housekeeping items before we get started. During the webinar, you're not able to speak as an attendee, sorry. But you have a Q&A box. We really want you to use this Q&A box as the webinar is progressing. Feel free to drop in questions there. We will get to as many as we can at the end, and I'll be curating them. And we have another moderator on the line as well. So your questions will more than likely get answered today. It's really cool that you're here today for this webinar. This is an official CNCF webinar. As such, it is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of that Code of Conduct. Basically, be respectful of all of your fellow participants and presenters. I'd like to welcome our presenter today, Anup, Director of Engineering at HyScale. Anup, go ahead and take it away. Thank you. Thank you, Chris. And thank you, everyone, for joining the webinar today. We're from HyScale. We've been working on solutions in the container space for nearly five years now, from the very early days of Docker, and the last couple of years in the Kubernetes space. So my name is Anup. I take care of engineering at HyScale. And I have with me my co-panelist, Mayur Shah, who manages the product. And I also have Divya Venkateshan, who helps us with our participation and interaction with the community. So we're here today to share some of our experiences in migrating a particular microservices-based application to Kubernetes, some of the challenges that we faced, some technical insights that we thought might be interesting or useful, and also how we went about simplifying the whole thing.
All right, so the application itself was an enterprise platform. This platform was a low-code development platform. So users would sign up in order to build out applications in an easier way. That's what the platform let them do. And as you can see, this is a picture of the kind of services that were there. And it had service discovery, a load balancer, stateful services, and so on. So they wanted to move to Kubernetes because a lot of their customers were on different clouds. And they wanted this platform to be set up for them on the cloud of their choice. And so Kubernetes became a natural choice for this, because you can set up your application and deploy it in exactly the same way everywhere. And there were a couple of other reasons. So they would get user demand with some level of volatility. So sometimes you would have more users using the platform than at other times. And again, Kubernetes turned out to be a natural choice. And then they also wanted to make things a bit more economical. And then finally, the declarative approach and the immutability of containers were particularly attractive, to make everything much more reliable than it was before. So the pre-Kubernetes scenario looked something like this. So yeah, we had these VMs. And then DevOps would set up the application's stack requirements, Java or Tomcat and other stuff like that, and configure all of that, and then also handle things like security patching for those stacks, et cetera. And then obviously, the deployment was very simple. So you would mainly deploy the WAR file. This is a Java-based application, so it would mostly be WARs or JAR files. And that would be the deployment scenario. With Kubernetes, this was all about to change. And it is obviously easier to talk about in hindsight. But essentially, all of these things, the stuff that the DevOps folks on the team did and the pure developers on the team did, all of that had to get combined and packaged into an image.
And then all of this would go through as a single deployment into the cluster. So this was sort of a shift in the way DevOps happened, in the way deployments happened. I'm just going to pause here. Looks like the screen is cutting off for some folks. Chris, can you confirm if it's OK? It seems to be OK for me. All right. OK. Might be a temporary glitch, I guess. Then I'll move on. All right. So with this sort of shift that happened in the way things were deployed and delivered, there are some considerations that we had to keep in mind. With the immutability of containers coming in, any small change that would be needed, whether it's a change in the application or a change in its dependencies, whatever it was, would mean that you rebuild and redeploy. You never update anything in place. And this also meant that the deployment frequency would go up. A lot of containers would live for a very short time. There would be a change in how troubleshooting would happen, how debugging would happen. And again, fixes would require a rebuild and redeploy. And there's sort of a role shift, which is indicated in the previous slide as well. So this entire shift brought about three new challenges. So the whole workflow had to change. There's a lot of new concepts and terminologies that everybody involved in the delivery would have to become familiar with. I'm sure a lot of you are familiar with the fact that there's new stuff to be learned here. And again, troubleshooting and so on is very different, as we're going to see. So what we want to do here is to get into a bit of detail on the kind of things that we had to deal with. And again, we're not aiming to present best practices or anything like that in a comprehensive manner. What we wanted to talk about is the journey that we went through and some of the learnings we got through that journey.
And hopefully that's useful at least to some of you who would want to migrate your applications to Kubernetes. So for the next 10 minutes, I'm going to get into each of these aspects and what we did there. And then after that, I'm going to zoom out, come back to a bird's-eye view, and talk about how we eased things. So I'll start with service discovery. So a lot of applications, microservices-based applications in particular, or even other applications, might be dependent on service discovery mechanisms from the past. And some of those mechanisms may not work very well. Or even if they do work, we'll see why it might make sense to move to Kubernetes-native discovery, and how we went about trying to do that without making changes to the code first, and then later made those changes. So we're going to talk about the points you see here. So a quick recap; a lot of you would be familiar with this. So in this particular application, they were using Consul. And so we had services that wanted to talk to other services, so just a recap of basic service discovery. So a platform service would come up, for example. This is pre-Kubernetes. And basically, there would be a sort of post-start hook which would register its IP into Consul. And then some other service, like a studio service, for example, which is the name of another service there, would simply query Consul for the platform IP and then talk to the platform. So there were several reasons to want to move towards Kubernetes-native discovery. One obvious point would be that we wouldn't need to maintain or run Consul anymore. Kubernetes itself would do the job. And then registration is automatic. So here, we would need to write those post hooks and do stuff like that. But in Kubernetes, you don't need to do the registration.
And then this works well with Kubernetes DNS. So if you query for a service name in Kubernetes once you deploy your service, then you would be able to get back the service IP by simply making a DNS call, just like that, without any extra effort or extra code or scripts on your part. And finally, if you have a lot of replicas for your service — I assume most of you would be familiar with pods and replicas — then if your service has many pods as replicas, Kubernetes service discovery will return you a service IP, which is a single IP. And then behind the scenes, when you hit the service IP, it would automatically round-robin the traffic to the different pod IPs behind it, which is, again, very useful. So how did we go about that? So that's plain Kubernetes stuff. So initially, coming from Consul, we didn't want to change everything in one shot. So in this application, there were 15 or so microservices. And all of those had the Consul library as part of the code, so the code would talk to Consul through that library and all of that. So we would have to go in and rip out that code from all the 15 services. So the idea was they didn't want to do that until they were sure that Kubernetes would work. So initially, we simply modified the entries that went into Consul so that the code would continue to query Consul, and Consul would return just the Kubernetes service name instead of an IP. And then from that point, the service would make HTTP calls using the Kubernetes service name, which would simply resolve through the Kubernetes DNS. Obviously, this is not a great idea to take to production. So this is still the journey. So you'd have two hops to get to the service. But it allowed the application to get onto Kubernetes very quickly. And people could thoroughly deal with other challenges while this one was still being fixed. So from there, we came to the second attempt, which was to avoid the two hops.
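As a rough sketch of the Kubernetes-native setup described above — the service name, labels, and ports here are illustrative, not the actual ones from this application — a single Service object is all that's needed for DNS-based discovery:

```yaml
# Hypothetical Service for the "platform" microservice.
# Any pod in the same namespace can now reach it at
# http://platform (or platform.<namespace>.svc.cluster.local),
# with no registration hooks needed.
apiVersion: v1
kind: Service
metadata:
  name: platform
spec:
  selector:
    app: platform        # matches the labels on the platform pods
  ports:
    - port: 8080         # port the Service exposes
      targetPort: 8080   # container port on the pods behind it
```

Kubernetes then keeps the DNS record and the backing pod endpoints up to date automatically as pods come and go.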
And then, since by that point we had shown that it would work reasonably well, we got rid of Consul completely and then simply depended on Kubernetes DNS to get the service IP. There are some interesting caveats to know when you do this. So one interesting thing is that when you try to resolve the service name in Kubernetes, you will get a service IP even if there's not a single pod that's healthy. So you'll still get the service IP. So you wouldn't know until you make the request to that service. And then there were issues we faced with connection pooling. I'm not sure if that is resolved or can be, because you get a service IP if you have multiple replicas behind it. And so you don't get to the actual pod IPs, right? The service IP is like a proxy to the pods behind it. And so you can't really set up a connection pool, which is very useful if you want to do that for a database use case. And again, you're limited to the round-robinning of the traffic to the pods behind it. So then we did a different thing so that we could get to only the healthy pods. So what we did then was to change things around. So instead of depending on the Kubernetes DNS to get the service IP, we would query the Kubernetes API to get the pod IPs. So if you had three pod replicas for your service, it would return the pod IPs. And we would then have to cache those IPs and periodically invalidate and refresh the cache, et cetera. But now Kubernetes would return only those pods which were healthy. And we could do things like pooling, et cetera. There are pros and cons to doing it both the second way here and the third way here. For a lot of simple services, you might simply want to rely on the Kubernetes DNS itself. But if you have an advanced use case, then you might want to go and get the pod IPs. So after this whole migration was done, in recent months, there have been some other options that have come up that we've come across.
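One related option, not used in this migration but worth knowing about, is a headless Service: setting clusterIP to None makes the DNS lookup itself return the individual ready pod IPs as A records, rather than a single virtual service IP — which is friendlier to client-side connection pooling and load balancing. A minimal sketch, with illustrative names:

```yaml
# Hypothetical headless Service: resolving "platform-headless"
# in DNS returns the IPs of the ready pods directly, instead of
# one virtual service IP with kube-proxy round-robin in front.
apiVersion: v1
kind: Service
metadata:
  name: platform-headless
spec:
  clusterIP: None        # "headless" - no virtual IP is allocated
  selector:
    app: platform
  ports:
    - port: 8080
```

The trade-off is the same as querying the API for pod IPs: the client has to handle the set of addresses changing as pods are rescheduled.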
You might want to use something like the Ribbon library if you're using Spring, or there could be similar libraries in other languages, which will help you query the Kubernetes API for pod IPs in your code. And then Consul, and even others like maybe Eureka, et cetera — service discovery mechanisms — now have libraries or ways to set up a sync between, say, Consul and Kubernetes service discovery. We haven't really used this in production or anything like that. And we're not sure what kind of out-of-sync issues you might get with broken networks, et cetera. That's something you might want to check. But these options exist. So I'm going to move to the second thing. And of course, the six things that we're going to talk about here are not comprehensive. There's a lot more that you would need to do to migrate your application. But these were the important or insightful things for us, and so we thought to cover them. So in this one, I'm going to talk a little bit about what we did and what we tried to do for volumes and handling data, et cetera. So again, a quick recap. I'm sure many of you are familiar with these concepts. So there's persistent volumes, PVs in short. In very rough terms, I can say a PV represents a physical volume. So for example, it would represent an EBS volume if you're using EBS as your storage. And then there's PVCs, or persistent volume claims, which are what you reference to get this particular PV attached to a pod. And then whenever you need persistent volumes, you will deploy your services as a stateful set. So typically, for the services you deploy into Kubernetes, there are two primary ways to deploy them: as what is called a deployment, or as what is called a stateful set.
But in this case, with PVs, you want a stateful set, because the stateful set will ensure that whenever the pod is scheduled or rescheduled, the volume attachments are properly maintained. So that's a quick recap. All right, so some of the things we did in the course of the migration. So wherever we had EBS volumes, either with existing data or where we brought back a volume from a snapshot or something like that — at that point, we didn't have snapshot support. The snapshot support in Kubernetes was still alpha. I think it's still beta right now. So we would create a volume from the snapshot, then create a PV to represent that volume, and then do all of these things that I talked about. So this was how we brought back data that existed elsewhere in a volume or in a snapshot. As of today, I believe you can also specify your snapshot IDs directly here in your PVs or PVCs and let Kubernetes handle it. That is now supported by several providers. But again, it's a beta feature; you might want to use it with caution. Then there's dynamic provisioning, which basically applies when, instead of creating a persistent volume — let's say you don't have existing data — you're doing a fresh deployment. So we had a few cases where we were deploying services afresh, like, for example, services that might need to store some data, persisting it temporarily until that data was shipped away to some other place. In those cases, it would start blank. And for those services, we would simply create a persistent volume claim and let the claim automatically manage the life cycle of the required volume. So it would automatically bring up physical volumes when this was deployed. And that was helpful for these other cases where the data had to be temporarily persisted. And then there's the third case, which was a stateful set with a claim template: for when you are going to create replicas of your service.
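For the first case above — an existing EBS volume that already holds data — the pre-created PV and the claim that binds to it might look roughly like this (the volume ID and sizes are made up for illustration):

```yaml
# Hypothetical PV representing an existing EBS volume that
# already holds data (e.g. a volume restored from a snapshot).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: platform-data-pv
spec:
  capacity:
    storage: 50Gi
  accessModes: ["ReadWriteOnce"]
  awsElasticBlockStore:
    volumeID: vol-0abc123de456f7890   # illustrative EBS volume ID
    fsType: ext4
---
# The claim a pod references in order to mount that volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: platform-data-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 50Gi
  volumeName: platform-data-pv        # bind to the specific PV above
```

The dynamic-provisioning case is simpler: you create only the PVC (with a storage class) and Kubernetes provisions and manages the backing volume for you.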
So once again, we did this for some cases where we wanted to store logs and stuff, or some kind of debug data, which would then get shipped off periodically. And if we wanted to scale that service, then we would create that service with what is called a claim template inside a stateful set. And so this makes sure that whenever a new replica pod is created, a persistent volume is automatically attached to that new replica. It also makes things easier because you just handle the stateful set YAML. But anyway, that's where we used this kind of thing. So, some of the considerations. We would sometimes get breakages here, because as you can see, there's a lot of referencing that needs to be proper, and you don't want to take a chance with your data volumes and stuff. So you need to make sure that you refer to the right things in the right YAMLs, or YAML sections, I guess. Then, volume resizing: we were using claim templates, and we ran into this problem at some point when we wanted to resize — it is not possible to submit a resize patch to the claim template. So if you're using a claim template, then that patch would error out. So you would want to query the claim template to figure out what claim got created for you by that claim template, and then apply a resize onto it. So yeah, a bit of a technicality, but that was again something that tripped us up, and we had to do that. Finally, there were multi-zone challenges that we faced. So for example, if you're running in two different availability zones, and let's say the pod died and got rescheduled to the second zone, then your volume is stuck in the first zone. Now, at some point after we did this migration, I think we got topology-aware provisioning in Kubernetes; last I checked, that was still beta and supported on the major public cloud providers.
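The claim-template pattern described above, combined with the zone-pinning via node affinity that comes up next, might be sketched roughly like this (names, sizes, image, and the zone label value are all illustrative; older clusters used the failure-domain.beta.kubernetes.io/zone label instead):

```yaml
# Hypothetical StatefulSet with a claim template: each replica pod
# gets its own automatically provisioned PVC (debug-data-<pod-name>),
# and node affinity pins pods to one zone so they stay with their volumes.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: debug-collector
spec:
  serviceName: debug-collector
  replicas: 3
  selector:
    matchLabels:
      app: debug-collector
  template:
    metadata:
      labels:
        app: debug-collector
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: topology.kubernetes.io/zone
                    operator: In
                    values: ["us-east-1a"]   # keep pods in the volumes' zone
      containers:
        - name: collector
          image: example/debug-collector:1.0  # illustrative image
          volumeMounts:
            - name: debug-data
              mountPath: /var/data
  volumeClaimTemplates:
    - metadata:
        name: debug-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

Note the resize caveat from above applies here: to grow the volumes, you patch the generated PVCs, not the template.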
So this might not be a problem anymore, but if you're running some kind of zone setup in your own data center, then you will likely face this problem. So what we did to solve this: we don't want this pod to come back in a different zone when it dies. So when the pod dies, Kubernetes will try to reschedule that pod automatically. We don't want it to come back in a different zone, so you would want to use labels for your zones and set up what is called node affinity, and then set up the node affinity for your pods so that the pod would always get rescheduled onto the same zone where you have your volume. So we solved it with node affinity. So, moving on: we had our own load balancer, and in Kubernetes, there's a sort of native way to do that. How do we configure it? What sort of abstraction does Kubernetes provide? Let's talk a little bit about that. So in the pre-Kubernetes scenario, we had Consul, and then we would have some code which would watch Consul and register all the nodes into the load balancer. In Kubernetes, you will use something like an ingress controller, right? So the ingress controller would sort of control the way it would work, and then you have ingresses, which are basically sets of rules, and these rules specify what context path should go to which service. So if you get traffic for a specific context path like /login, then it should go to the login service. So what's here is in plain English, but obviously you would need to write the YAML for it. So you would want to do this, and then you will create an ingress and deploy that ingress into Kubernetes. And then your ingress controller is watching for new ingresses that you submit, and those particular rules would then get activated. So again, this is, you know, Kubernetes stuff. A couple of things that we did.
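That plain-English routing rule, written as actual Ingress YAML, might look roughly like this (host and service names are illustrative; clusters of that era would use the extensions/v1beta1 or networking.k8s.io/v1beta1 API rather than v1):

```yaml
# Hypothetical Ingress: traffic for /login goes to the login service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: platform-ingress
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /login
            pathType: Prefix
            backend:
              service:
                name: login          # context path /login -> login service
                port:
                  number: 8080
```

Deploying this resource is what "activates" the rule: the ingress controller sees the new Ingress and reconfigures its proxy accordingly.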
So initially we aggregated all of our context paths, pointing to different services, in a single ingress — you can have multiple rules. And so we created a single ingress, and then the ingress controller would pick up that ingress and apply all the rules, which basically meant all the context paths. One of the problems with that is that you can only set a single set of timeouts, or things like headers and so on, for all of it. So things like headers or timeouts would be set at the controller level per ingress. So if you have all the rules, or all the context paths, in a single ingress, then all of them would get the same configuration of, you know, size limits or timeouts and stuff. So what you want to do, then, is actually create a separate ingress for each of your services. So that was one interesting thing we went through. And then the other thing is you would want to do SSL configuration for some of the ingresses, or some of the hosts, that you have. And for that you would need to create a Kubernetes secret — that's the secret store in Kubernetes, right? And then you would push your certificate chains and stuff like that into the secret, and then refer to that inside your ingress rule. So again, we spent a fair bit of time trying to get that right. Some more considerations about ingress. So the context path routing and a couple of things around it, like setting up different host names for a route, et cetera — a few of those context-path-related things are abstracted, but a lot of other configurations are provider-specific. So one thing I missed mentioning, which was on the earlier slide, was that ingress controllers are not native, in the sense that you have different types of ingress controllers. So you've got different providers. For example, you have an NGINX ingress controller or a Traefik ingress controller and so on, right?
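Putting those two points together — a per-service ingress with its own timeout, and TLS via a secret — a sketch might look like this (the annotation shown assumes the NGINX ingress controller; names, hosts, and the timeout value are illustrative):

```yaml
# Hypothetical TLS secret holding the certificate chain and key
# (values must be base64-encoded; elided here).
apiVersion: v1
kind: Secret
metadata:
  name: app-example-com-tls
type: kubernetes.io/tls
data:
  tls.crt: ""   # base64-encoded certificate chain goes here
  tls.key: ""   # base64-encoded private key goes here
---
# A dedicated Ingress for just the login service, so its timeout
# setting doesn't apply to every other service's paths.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: login-ingress
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120"  # NGINX-controller-specific
spec:
  tls:
    - hosts: ["app.example.com"]
      secretName: app-example-com-tls   # the secret defined above
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /login
            pathType: Prefix
            backend:
              service:
                name: login
                port:
                  number: 8080
```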
And so whatever configuration goes into the ingress controller is provider-specific. So, for example, if you use an NGINX ingress controller, then you would need to use config maps and ingress annotations to configure it. If you used, say, a Traefik ingress controller, you might use service annotations to configure it. Things like that. So every ingress controller type has provider-specific configurations, and I guess that's just the way it is; it's probably evolving, and it's something we have to deal with. And again, another thing was that the regex supported in an ingress is a little different from the one that is directly supported by some of the providers like NGINX, et cetera. And one thing to definitely do is to set your ingress controller to watch for ingresses only within relevant namespaces. So let's say you've got a staging namespace and a production namespace, and you have an ingress controller that you want to restrict to watching the staging namespace for ingresses. You don't want a production ingress to be grabbed by that staging ingress controller. So that's one thing to do. I guess that's also one way of using namespaces in Kubernetes — doing different environments in different namespaces. That's how we did it. Moving on, so you've got configuration properties and templates. So there's config maps in Kubernetes. How did we do configuration earlier? You would have application code which would get deployed along with the configuration, either as a file or sometimes an environment property, and the application would read it from there. In Kubernetes, the application gets packaged into your container, and the props go separately into the config store in Kubernetes. And then you can configure that to get injected as an environment variable or get mounted as a file inside your pod, so that the application can continue to read it the way it used to read it before.
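A minimal sketch of both consumption styles just mentioned — the ConfigMap name, keys, and values here are illustrative:

```yaml
# Hypothetical ConfigMap holding the app's properties.
apiVersion: v1
kind: ConfigMap
metadata:
  name: platform-config
data:
  LOG_LEVEL: "info"
  application.properties: |
    db.pool.size=10
    feature.flags=beta
---
# Pod snippet consuming it both ways: one key injected as an env
# var, and the whole map mounted as files under /etc/config.
apiVersion: v1
kind: Pod
metadata:
  name: platform-pod
spec:
  containers:
    - name: platform
      image: example/platform:1.0   # illustrative image
      env:
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              name: platform-config
              key: LOG_LEVEL
      volumeMounts:
        - name: config
          mountPath: /etc/config    # application.properties appears here
  volumes:
    - name: config
      configMap:
        name: platform-config
```

File mounts are eventually refreshed when the ConfigMap changes, while environment variables are fixed at container start — which is exactly the restart caveat discussed next.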
Some things to be aware of: since you're going to be deploying these properties, or changing the values in your config maps, potentially independent of your image changes, you would want to make sure that you revise those config maps in some way. You maintain revisions by yourself somewhere, either with some tagging or some other mechanism, so you know when which value was deployed, and so that if you ever want to roll back some of those configuration value changes, you'll be able to do that. Another important thing: if you're using environment variables for properties and you change a value in a config map, those changes won't reflect until you restart your pod. That problem won't exist if you're consuming it as a file. So those are two important things. Now, you have different configurations for different environments — staging, production, et cetera. And Helm charts, as many of you would be familiar, provide a way to have different configuration properties without having to write a different set of YAMLs for every environment, which you don't want to do. So you create a single set of YAMLs for your service, with placeholders. And then Helm uses Go templating. So in order to understand a Helm chart, you need to understand the YAMLs and Go templating. One of the things with Helm is that sometimes debugging can be hard, because you don't know if the problem is in the Helm chart, or in the application, or in the configuration that you've written. And the Helm chart author is somebody else. And if it's third party, it's all the more challenging, unless you understand the chart. The other thing we found with Helm charts is that they provide a very nice way to group all the YAMLs of a particular service or an application. So, for example, for a given service you might write six or seven different YAML files, and that gets grouped nicely within a Helm chart.
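As a small example of the placeholder approach, a single templated YAML plus per-environment values files might look roughly like this (chart layout and value names are illustrative):

```yaml
# templates/configmap.yaml in a hypothetical chart:
# Go templating fills in the per-environment values at install time.
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  LOG_LEVEL: {{ .Values.logLevel | quote }}

# values-staging.yaml:
#   logLevel: debug
# values-production.yaml:
#   logLevel: warn
#
# Rendered and deployed with, e.g.:
#   helm install platform ./chart -f values-staging.yaml
```

The same template is reused everywhere; only the small values file differs per environment.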
But once you deploy this, that grouping is lost, and it sort of becomes just scattered, independent resources inside Kubernetes. We found the charts to be useful for a lot of off-the-shelf services, and less so for the kind of custom services that we had in that particular application. All right, so the fifth thing I want to cover is the manifests themselves, the YAML files. It's very important to know what goes into the Dockerfile and what goes into the YAML. So, for example, you might have ports in your Dockerfile, but that's not going to make any difference. That won't get honored unless you put the ports in your Kubernetes YAML. So things like that — you need to know what goes where, especially if you're coming from the container world; that becomes important. It's also important to know what types of Kubernetes configuration go into which kinds of Kubernetes resources. So what kind of things can you put inside a stateful set versus a deployment, versus a replica set, versus a job, and so on. So that's important. And then everything — everything that you would ever deploy, whether it's a property, a configuration, some sort of a definition, whatever it is — all of it is now YAML. So it's very important to understand Kubernetes terminologies and concepts, which can get quite daunting very quickly. And then you also need to bind those things together. So you've got all these six, seven YAMLs with different resources configured inside them, and you would use labels to bind all those different things together to work correctly for a given service. All right, so some challenges with troubleshooting. So typically, in order to get logs, you would go figure out your deployment, get the pods of that deployment, then get the containers in the pod, and then go get the logs of each of those containers — and do that for all the replicas that you have. So it's something that gets really tedious and difficult at some point.
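Both of those points — ports declared in the YAML rather than only in the Dockerfile, and labels binding separate resources into one working service — can be sketched in one pair of manifests (all names illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: login
spec:
  replicas: 2
  selector:
    matchLabels:
      app: login            # must match the pod template labels below
  template:
    metadata:
      labels:
        app: login          # the label everything else binds to
    spec:
      containers:
        - name: login
          image: example/login:1.0     # illustrative image
          ports:
            - containerPort: 8080      # declared here; an EXPOSE in the
                                       # Dockerfile alone is not enough
---
apiVersion: v1
kind: Service
metadata:
  name: login
spec:
  selector:
    app: login              # same label: binds this Service to those pods
  ports:
    - port: 8080
      targetPort: 8080
```

A typo in any one of these label references silently breaks the binding, which is why getting the referencing right across six or seven YAMLs per service takes care.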
So you definitely want to consider log aggregation. You would want to use sidecar agents to collect the logs and send them off somewhere. It's a very important thing. We struggled initially with this and then quickly moved to log aggregation with sidecars. Sometimes there's a temptation to get into the pod — sort of like doing an SSH into the pod — maybe to check the process tree that's running inside, et cetera. But this is something you want to do very cautiously, because you don't want to inadvertently change something. Then you would lose immutability, and that will cause a lot of headaches. This was one place where we started to consider observability tools and debugging agents and so on. It's something hard to get away from, and I guess it's also cultural. And finally, the other challenge with troubleshooting was that the error messages you get can be quite cryptic. These are messages that a typical DevOps-skilled person or a typical application developer would not understand. So what do you do with errors like that? What do they mean? How do you then debug them? Et cetera. So those were some of the challenges we faced. So I'm going to take a step back and come back to a bird's-eye view. With all of those kinds of things that we dealt with to do the migration, what we found was that there were a lot of complexities that many people on the team would need to understand, and that is definitely time-consuming. There's a lot of repetitive effort, because we had to create a lot of different YAMLs for different services — and again, do that for other applications as we subsequently started to migrate them, and we found similar challenges in the first couple of migrations. There's some friction, because everybody is working off the same set of YAMLs.
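The sidecar pattern mentioned above — a second container in the pod that ships the logs off the node — can be sketched like this (the images and paths are illustrative, and a real setup would also configure the agent to point at your aggregation backend):

```yaml
# Hypothetical pod with a log-shipping sidecar: the app writes
# logs to a shared emptyDir volume, and the sidecar tails and
# forwards them.
apiVersion: v1
kind: Pod
metadata:
  name: platform-with-logging
spec:
  containers:
    - name: platform
      image: example/platform:1.0      # illustrative app image
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app      # app writes its logs here
    - name: log-shipper                # the sidecar
      image: fluent/fluent-bit:1.9     # example agent; its config is omitted
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
          readOnly: true               # sidecar only reads the logs
  volumes:
    - name: app-logs
      emptyDir: {}                     # shared between the two containers
```

This way the logs survive in the aggregation system even after short-lived containers are gone, instead of requiring the deployment-to-pod-to-container drill-down described above.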
There's some input coming in from developers, some input from DevOps, and as we saw at the very beginning, there's sort of a process overlap that has happened, and then the debugging challenges would cause delivery cycles to slow down. So we thought to find some solutions to all this, to make the migrations easier. One of the things we did was to create an abstraction over Kubernetes, so that developers or application teams could simply specify the needs of their service in terminologies that they already understand, and wouldn't have to learn something new, and then automatically generate all the YAML files and the Dockerfiles needed. This would provide a standard way of doing things, so different service teams wouldn't go away and start reinventing the wheel in different ways. And it would provide — I mean, the goal was to provide — a self-service way for the teams to quickly deliver or deploy. So what is the abstraction that we created? So there are terminologies that application teams would intuitively understand, right? People would be familiar with what a volume or a health check is, and things like that. But in Kubernetes, there's a whole lot of things to know. Obviously, your service or application may not need all of these things, but it's important to figure out which of these things you should know and how those different things fit together. And hence, we felt the need to abstract a lot of this stuff. How did we do that? So we started to create app-centric keywords — I'll show you some examples in the next slide — that are very easy for teams to just put in, or intuitive to understand when you read them, and started writing a tool that would then infer what needs to be done. So, for example, if you specify that your service needs a data path — a particular path where it stores data to be persisted — then we would infer from that.
The tool would then look at that and infer that it needs to generate a StatefulSet YAML, a PVC template, and so on and so forth. And then all of these things would be translated into Kubernetes speak, which means translating to all the relevant kinds and resources, binding them all correctly using the right labels, and then also trying to provide a way to troubleshoot in plain English, I guess. So this is the kind of abstraction that we started to build from whatever we learned through that migration, and also other migrations. So these are some sample snippets of what we called hspec. We made the schema for the spec, hspec, available on GitHub, and you can check it out. So if you want a volume for your service, you pretty much just say that — something like that — and then essentially let it create all the persistent volume claims and stateful sets and so on. Let's say you want to set up autoscaling for your service. So you would just say: I need a min of one, a max of whatever. So something like this would generate the required kinds of YAMLs. And similarly for things like health checks, which would generate a liveness probe or readiness probe, and so on and so forth. So stuff like that. You can check out more in the spec. And so once you've got the spec — essentially you prepare a complete spec of all the things that your service needs, which typically tends to be about 15 to 20 lines for most services, sometimes a little more — then you invoke the tool. So we created a tool which would read the spec, do the right inference, generate the Dockerfile if necessary, generate the Kubernetes manifests, and then talk to the Kubernetes API and deploy. And essentially you get back a URL. So a few months ago we decided to make this available to the community, and so we made it open source, and it's available again at that GitHub URL that you see on the screen right now. So yeah, that's how we did the deployment.
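As a rough illustration of the idea — this is a hypothetical snippet in the spirit of the spec, not the exact hspec syntax, so check the schema on GitHub for the real keywords:

```yaml
# Hypothetical app-centric spec for one service: a few intuitive
# keywords from which the tool infers the Kubernetes resources
# (a stateful deployment plus PVC template for the volume, an
# autoscaler for the replica range, probes for the health check).
name: platform
image: example/platform:1.0
volumes:
  - name: data
    path: /var/data        # data path -> implies stateful, persisted storage
    size: 10Gi
replicas:
  min: 1
  max: 4                   # -> generates the autoscaling configuration
healthChecks:
  httpPath: /healthz       # -> generates liveness/readiness probes
```

A spec like this stays in the vocabulary an application team already has, while the generated YAMLs carry the Kubernetes-specific detail.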
And we've started building out the abstraction for troubleshooting. So if Kubernetes returns an error like CrashLoopBackOff, we try to run through a flowchart inside the tool automatically. It checks various things and then tries to tell you where the problem might be. It might tell you, hey, there's a problem in your start command, like your CMD. Or it'll tell you, hey, your health check is failing, or it's an incorrect health check, and so on. So we try to translate and abstract out the complex or cryptic messages as much as possible. We've made a start on this, and some of it should be out in a day or two. Through this whole talk we focused mostly on the automation and the abstraction required, because that's where you talk to Kubernetes and where a lot of the complexity is. But obviously, to complete the migration we also built a layer on top of this automation that would integrate with your CI, like Jenkins, do container image scanning and other DevSecOps work, do change tracking, and all of that. But those are things you would do anyhow, and they're not very Kubernetes specific. So, just for the sake of completeness, I wanted to mention that we also built out the whole CI/CD pipeline separately for that migration. Now, some of our observations, especially after we employed some of the solutions and abstractions we built. There were some things we were able to measure, like the effort saved. It's hard to quantify exactly, but we saved a lot of lines of YAML, for example, and upgrade times definitely went down between the VM world and the Kubernetes world.
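The troubleshooting flowchart described here can be sketched as a simple decision function. This is a hypothetical reconstruction of the idea, not the tool's actual code; the checks and messages are illustrative.

```python
from typing import Optional


def diagnose(reason: str, exit_code: Optional[int] = None,
             probe_failing: bool = False) -> str:
    """Translate a cryptic pod failure state into a plain-English hint.

    A minimal sketch of the triage flow described in the talk; the
    specific checks and wording here are assumptions for illustration.
    """
    if reason == "CrashLoopBackOff":
        if exit_code in (126, 127):
            # 127 = command not found, 126 = command not executable
            return "Start command failed: check the CMD/ENTRYPOINT in your image."
        if probe_failing:
            return "Health check is failing: verify the probe path, port, and timing."
        return "Container keeps exiting: check application logs for startup errors."
    if reason == "ImagePullBackOff":
        return "Image could not be pulled: check the image name, tag, and registry credentials."
    return "No known pattern matched: inspect `kubectl describe pod` output."
```

In practice, a tool like this would read the reason, exit codes, and probe results from the Kubernetes API before walking the decision tree.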
So there were some things we could measure, and then there were some soft findings. For example, the learning curve went way down: we could bring somebody new onto the team and have them handling deployments in a very short span of time, things like that. So that's pretty much it about our experience. We'd like to invite you to try HyScale. If you like what you see, please star us on GitHub, and you can reach out to us any time with queries or even ideas; Twitter is fine, or if you prefer email, there's the email ID. At this point, I'd like to invite you all to take a short poll, just three questions. It will take a minute or so, and it will help us better understand the needs of the community so we can continue to build out the tool and evolve the spec. So please do take a minute or two for the poll, and then we can do some Q&A if there are any questions. Sure. Thank you very much for the wonderful presentation today. There is a question in the Q&A, but I think it's a broader question we can probably both help answer to an extent. The question is: with pod process namespace sharing, which is new in K8s 1.17, is it possible to decouple tightly coupled legacy apps that communicate through IPC, named pipes, shared memory, and so forth, and migrate them as separate containers within a pod? Do you want to take a stab at that, or should we talk about it together? Yeah, that's more of a Kubernetes question, I guess. Right. So, yeah, Tamal, this is a very Kubernetes-specific question, but I can help you with it a little bit. Running multiple containers in the same pod is fine; you can do that natively right out of the box now. That's basically what sidecars are: just another container in the same pod, sharing that same environment. How you communicate within that pod is entirely up to you at that point, right?
You've got the pod network there, everything self-contained in that pod, so however you want to communicate is fine. I'm not sure about the namespace sharing and memory sharing part, I'm not sure where that comes in, but feel free to contact me afterwards: Chris Short on Twitter, chris at chrisshort.net is my email, and we can talk about that further if you want. Another question came in: am I to assume that you will be able to ingest existing YAML files and create an hspec for use with HyScale? What about pure Docker Swarm deployments? Would you be able to convert those to K8s constructs? Thanks. Yeah. So how would you take somebody from Docker Swarm over to HyScale, basically, is the question. Yeah, and also about the ingestion. So at this point in time, since we started building this abstraction mainly to help move workloads into Kubernetes, it started off with the assumption that there are no existing Kubernetes YAMLs, right? Because that's how it works. But this has come up before, and we're certainly going to consider it. If you think it would be cool to take your existing YAMLs and automatically get an hspec built from them, that's great feedback. We don't have that right now. But we could then make it possible for you to simply use the hspec and continue deployments with it, letting it manage the YAMLs and the lifecycle of whatever YAML changes need to happen. And about Docker Swarm: again, there's no conversion back, right, either from Kubernetes YAMLs or from Docker Swarm. But it's fairly simple to come over from Docker Swarm, because you'd already be intuitively familiar with a lot of the constructs and directives. So it's not a huge amount of effort to make that change. I'd be happy to talk a little more about that; feel free to reach out at the email ID shown.
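On the process namespace sharing part of the question: Kubernetes does support this natively via `shareProcessNamespace` on the pod spec, and shared memory between containers in a pod is commonly done with a memory-backed `emptyDir` mounted at `/dev/shm`. A minimal sketch, with hypothetical image names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
spec:
  shareProcessNamespace: true  # containers in this pod can see each other's processes
  containers:
    - name: main
      image: example/legacy-main:1.0    # hypothetical image
      volumeMounts:
        - name: shm
          mountPath: /dev/shm
    - name: worker
      image: example/legacy-worker:1.0  # hypothetical image
      volumeMounts:
        - name: shm
          mountPath: /dev/shm
  volumes:
    - name: shm
      emptyDir:
        medium: Memory  # tmpfs shared between the containers, usable for shm-based IPC
```

With this in place, signals, named pipes, and shared-memory segments can cross container boundaries within the pod, which is the usual starting point for splitting a tightly coupled legacy app into separate containers.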
Yeah, and going from Docker Swarm YAMLs to Kubernetes YAMLs is possible; there are tools for that out there, and I've used them. But if your Swarm configuration is very long or has a lot of very Docker-specific options, you might have to do your own mucking around. So there's a way to go from Swarm to K8s, but ingesting from K8s into HyScale, that's the interesting point. Yeah, that's pretty interesting. This has only come up once or twice, but it has come up before. We would definitely love to see folks use HyScale and hspec even though they're already on Kubernetes, because maybe it simplifies further deployments or further changes to your application. Then it would make sense to ingest those K8s YAMLs and give you back the hspec. So we'll certainly consider that. Awesome. There was a question about the license; I went ahead and looked that up and put it in the chat and the Q&A. It's Apache 2 licensed, for everybody who didn't see it in text. And I think that covers it. Any other questions from the audience before we log off? Feel free to drop them in the chat or the Q&A box, whatever works for you. Oh, an anonymous attendee asks: what is the roadmap for HyScale? Yeah, so there are some things around creating additional specs, for example for jobs, which is something we've been asked about. And again, we're fairly new in the community; like I said, it's just a couple of months now, and in between we had all the holidays in December. So we'd certainly like to hear more from you. Please feel free to file issues on GitHub, and we'll also make an effort to put the roadmap up on GitHub soon so you can see what's coming. The other thing we want to strengthen is the troubleshooting. We find that incredibly useful, and we hope you will too.
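As an aside on the Swarm-to-Kubernetes conversion mentioned above, one such tool is Kompose, which converts Compose-format stack files into Kubernetes manifests. A sketch; the file and directory names are illustrative.

```shell
# Convert a Compose/Swarm stack file into Kubernetes manifests.
# Kompose writes Deployment and Service YAMLs for each service it finds.
mkdir -p k8s
kompose convert -f docker-compose.yml -o k8s/
```

As noted in the discussion, very Docker-specific options may not translate cleanly, so the generated manifests usually need a manual review pass.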
If I may just add a little here on the roadmap. You can hear me, right? Yes. Yeah. So basically, in this journey of making the whole move to Kubernetes simpler, we started HyScale with the most popular workloads: basically traditional web workloads and custom application workloads. We've now taken this to other kinds of workloads as well. We're looking at data workloads and other newer kinds of workloads. So we're increasing the types of workloads that HyScale can automate. Additionally, as Kubernetes progresses on its journey, we stay more or less in line with it, so we make a lot of improvements in HyScale based on what we learn from Kubernetes. And there are two versions of HyScale: the open source version and an enterprise version. For folks who are looking at larger-scale deployments, there are a lot of capabilities in the enterprise version that we're adding around monitoring, troubleshooting, and the overall operational simplicity of a Kubernetes deployment. So all of that is part of the roadmap. Awesome. All right, we're running out of time very quickly here, but we have two more questions. What is the typical migration process involved, and what kind of timelines are you looking at? Yeah, I guess that really depends on the application itself. Usually, at the beginning of the migration, even before we start, we do an assessment of the application. What kind of application do you have? Is it microservices? Does it have legacy components? What kind of stateful services does it include? Et cetera. And then one of the things that invariably comes up is: hey, is this good for Kubernetes? Should we do Kubernetes at all for this kind of application?
So that assessment usually helps us estimate the time. There are some really small migrations some teams have done, like: I have these three services, can I just migrate them? That's roughly one to two weeks; they get those services running inside Kubernetes because they just write an hspec and deploy. So that's very simple. But a large application that's in production with a large number of users will go through a much bigger cycle. The big one I talked about at the beginning of the presentation took us several months, just to give you an idea. Can HyScale be integrated with any Kubernetes monitoring tools? Great question. Yeah, we do abstract out sidecars, so you can attach a monitoring agent or a logging agent in a very simple way using the abstraction we have. You can check that out on our GitHub page, and we also have a nice tutorial on hyscale.io that will help you. So if you want to do monitoring using agents or sidecars, then yes. Otherwise, if you're doing it separately, say with a DaemonSet on your nodes, then that's separate from application deployment, and that's how you would do it. One interesting point here: if you're doing sidecars through HyScale, right now we just deploy those as plain containers inside the pod. But with 1.18 we definitely plan to take advantage of the proposed sidecar container type, and we hope that comes out; it was originally expected in 1.17. Once it's out, HyScale will automatically generate it, so you automatically get the advantage of new Kubernetes features that way. So, yeah, that's pretty much it on the monitoring side. Okay. Well, Chris had to drop; sorry about that, he had a conflicting meeting. So I'm going to close this up. Thanks, Anup, for a great presentation, and thanks to everyone joining us today.
I really appreciate you coming along with us for this webinar. The recording and slides will be online later today on the CNCF webinars page. We're looking forward to seeing you at a future CNCF webinar. Have a great day, everybody. Thank you, everybody.