So thanks for all the wonderful talks, presented from your personal experiences and from the experiences of your companies. I would like to start with this: we are seeing three main models coming out in the infrastructure space. One is hyper-converged, one is multi-cloud, and one is private-only infrastructure. There used to be a time when most customers or clients had one single vendor for all their needs. But now you see diverse applications, and people are building homegrown software for their own needs. The vendors that dominated the industry over the decade, like Microsoft, and even companies like Red Hat or Apple, don't seem to have a space here, while companies like Netflix and Uber have made a name in the infrastructure space. So do we see that transition going on?

Definitely, there is a lot of innovation happening in the tech space. And at this stage there is a lot of open source coming up, right? Companies are able to share with the community and work across company boundaries to deliver a product. Like I mentioned, Red Hat and others: a product can be built by multiple companies and still be open source, though a lot of companies actually can't do that. At least with these new products, companies are becoming the new innovators in the space.

We can also talk about multi-cloud, right? There are pre-built services in each cloud. Yes, AWS has the widest feature range, but some things are better in the other public clouds. Google, for example, has a good RTA system where we can just modify it and ship it forward. So you shouldn't say you will just go and stick with one public cloud, because another cloud may be offering the same thing at the same cost, or doing it better. So you should just go and look at more options. You can't say, "I'll just use what my cloud has," right? Why wouldn't you use the better one?
You just need to go and do it. When you run comparable workloads and the other guys are giving you a system with better performance, we should definitely go for it.

And I think, personally, one of the things I would have liked to see in the morning sessions, or probably in the next discussions we have, is SpotEx. It has been one of the leading, let's say, myth breakers in this entire multi-cloud story. Essentially, they pioneered making sense of the entire spot system in AWS and said, look, here we are, we can help you make sense of the spot market, right? A lot of people had innovated with, let's say, scripts and so on, just to figure out how to make sense of the spot market and reduce cost. SpotEx essentially turned the game on its head and said, look, we can make sense of this entire spot system. That was the initial pitch. Later on, they also said, we can take the same approach everywhere: you can literally plug SpotEx into any infrastructure. So that is one of the new thrust areas as far as this entire multi-cloud approach is concerned. Part of the reason is stateless containers, and the Docker and Rancher and SpotEx kind of people who are essentially acting as multi-cloud brokers, so to speak. They are building a catalog of services on which you can run your multi-cloud app. That has led to the support for this multi-cloud drive, whereas even a couple of years back you didn't have this many multi-cloud requirements, so to speak.

My experience was exactly the same. Previously, when we were building the product, we were more focused on getting things to work. Then recently, about six months back, we grew the team, and now we have a separate team who can handle the scaling issues. Our load was spiking, so we use SpotEx to handle that as well. So this kind of multi-cloud approach is now possible for us.
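Much of the home-grown spot tooling mentioned here boils down to repeatedly picking the cheapest acceptable instance type from the current spot prices. Here is a minimal sketch of that selection step; the instance specs and prices are made up for illustration, and a real tool would pull them from the provider's pricing API:

```python
# Hedged sketch: choose the cheapest spot instance type that satisfies
# a capacity requirement. Specs and prices below are illustrative only.

INSTANCE_SPECS = {
    # type: (vcpus, memory_gb)
    "m4.large":   (2, 8),
    "m4.xlarge":  (4, 16),
    "c4.2xlarge": (8, 15),
}

def pick_spot_instance(spot_prices, need_vcpus, need_mem_gb, max_price):
    """Return the cheapest instance type meeting the requirements, or None."""
    candidates = []
    for itype, price in spot_prices.items():
        vcpus, mem = INSTANCE_SPECS.get(itype, (0, 0))
        if vcpus >= need_vcpus and mem >= need_mem_gb and price <= max_price:
            candidates.append((price, itype))
    return min(candidates)[1] if candidates else None

prices = {"m4.large": 0.031, "m4.xlarge": 0.065, "c4.2xlarge": 0.058}
print(pick_spot_instance(prices, need_vcpus=4, need_mem_gb=8, max_price=0.10))
# → c4.2xlarge
```

A production tool would add the hard parts the panel alludes to: tracking price history, handling interruption notices, and rebalancing the cluster when an instance is reclaimed.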
And we are heavily into AWS, but we do want to start looking into Google Cloud or Azure for the features they offer. Not just the credits, but the features. For example, Azure has a database offering where you can literally replicate your data across multiple regions. Those kinds of features are more useful for some of the products we build, and when another provider offers them, those features matter more than the pain of shifting. Shifting is a big thing.

So this brings us to a question: what is the cost of this innovation? There is always a cost. Most people in operations do a cost-benefit analysis and weigh the pros and cons. So what is the cost that developers and DevOps take on?

To give an example: I have seen a lot of startups have their own way of handling spot instances. I think Indix has its own method, and back in Bangalore there were others, so some startups have built their own kind of spot instance management. We also said, okay, it makes sense to build such a tool. But when we saw SpotEx, it made sense: this is so easy to set up, why should we even worry about building such a thing? We would have spent a few months to build it, then test it, and then maintain all of it. That is the cost we want to avoid. With a ready-made product, our DevOps guy was able to get spot instances running in less than two or three days, right? That is an advantage for us. So I think it's useful to have ready-made things. But the important thing to note here is where our primary IP lies. We don't use AWS machine learning tools or those kinds of things; we build our own models. Our IP, we make sure we develop it and protect it. But on these kinds of infrastructure things, we are not, at this stage of the company, going to spend a lot of time or resources.

That is a great point. I liked the bit you said in the morning, that the first thing you guys were actually testing in a new data setup was the noisy neighbour problem. And that again brings me to the cost-of-innovation point. I have seen studies and papers and scripts where people have written how to do predictable spot instance bidding, setting out to figure out "what is my ideal instance" and add that to the cluster. To me that can be a lack of focus on the fundamental problem you are trying to solve. It's a cliche, but Elon Musk talks about first principles: what are you trying to solve is the first fundamental question you have to answer. There are very, very simple solutions. We just made a choice to go dedicated for that particular instance; you don't have to innovate for that particular context. And I can point out, not just on AWS but across clouds, people who basically innovate for the sake of innovating without adding any real value. That can take your focus away from the actual problem you want to solve.

Coming back to the cost of innovation, here's one question we asked internally: what is the cost of finding cost? That's food for thought for most people. At what price do you want to save cost? If you are able to answer that, it helps you with your product planning and so on. The cost of innovation is not something you'll find on a billing dashboard. I don't think you need to go and break your app apart, putting some part on one cloud and some part on another, just for its own sake. As a growing company you keep paying interest on that. Whether it's your marketing services or something else, at a certain stage of business we don't have anything big for insights.
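The build-versus-buy trade-off discussed above, a few months to build and maintain an in-house spot tool versus two or three days to adopt a ready-made one, can be made concrete with back-of-the-envelope numbers. All figures below are hypothetical, invented purely for illustration:

```python
# Hypothetical build-vs-buy estimate for an in-house spot-management tool.
# Every figure here is invented for illustration, not real market data.

def build_cost(engineer_month_usd, build_months, maint_fraction, horizon_months):
    """Up-front build cost plus ongoing maintenance over the horizon."""
    upfront = engineer_month_usd * build_months
    maintenance = engineer_month_usd * maint_fraction * horizon_months
    return upfront + maintenance

def buy_cost(monthly_fee_usd, setup_days, engineer_day_usd, horizon_months):
    """Subscription fee plus the few days of integration effort."""
    return monthly_fee_usd * horizon_months + setup_days * engineer_day_usd

build = build_cost(engineer_month_usd=8000, build_months=3,
                   maint_fraction=0.2, horizon_months=12)
buy = buy_cost(monthly_fee_usd=500, setup_days=3,
               engineer_day_usd=400, horizon_months=12)
print(build, buy)  # compare the two totals over one year
```

With these (made-up) numbers the in-house route costs several times more over a year, which is the shape of the argument made on stage; the point of writing it down is that the answer flips only when the tool sits on your core IP.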
So I wouldn't use a very extensive, expensive insights solution where anybody in the company can see how the clusters are growing, how many nodes are created, how many customers there are, those kinds of insights for capacity planning. Those kinds of workloads we can take and try out on the other clouds, so you don't essentially add up extra cost. I would say that when you're trying out a new cloud for a smaller project, it actually runs for free. Instead of running it in your AWS account and putting up another stack doing the same thing, you can very well try Azure or GCP. If it fails, you haven't lost anything, because you can just drop it. It's not a critical service you're running; it's a new service you're trying out.

The next question. In this whole cloud world, at what point do you think companies will move away from cloud services and go to their private setups?

To be frank, when you have the sophistication for R&D and experimentation on the cloud, I would say it will be the reverse. People always want to come to the cloud rather than run their own data centers. Even our competitors who maintained their own data centers are starting to move to the cloud now; maybe they started before AWS or GCP was so popular. For the sophistication we need, it's easier to get there as a company when you are running on cloud, because they take care of all the physical aspects on their side; you don't need to maintain all those things. And if you are building a company from India for the global market, global from day one with no particular region in focus, you would need to maintain two different setups, and building a data center in India is a bit of a challenge. You won't have gigabits of bandwidth available cheaply and reliably. Cheap is one thing, right?
Reliable is a different story. So having two different facilities in two geolocations while being in India, I would never say never, but I still feel public cloud will be the future rather than keeping your own infra.

I think it will not be a one-size-fits-all approach, for sure, because we still see the big players going back to private cloud. Although "private cloud" is actually a misnomer these days, if you ask me. The private cloud is essentially a dedicated public cloud for you, right? It's not like you are racking and stacking servers anymore, buying them and so on. You just talk to a big provider. What are companies doing for private cloud these days? Myntra did this some time back. They go to some provider and say, look, we need a thousand boxes, and they get that done. So it's essentially a dedicated public cloud for you. But I think the good question to answer is: what kind of workloads do you want to be running to justify the hybrid existence of your dedicated public cloud alongside a pure-play public cloud? Because if you go to AWS or Azure or GCP and buy fairly dedicated capacity, essentially an RI, a reserved instance, is a private cloud, right? In the purest commercial sense, an RI is a private cloud; you can't walk away from it. It's almost as if you signed a contract with a physical vendor. So it's almost a private cloud for you. The real question is: what kind of cost curve justifies bursting from that into the public space, and by extension the multi-cloud space? Every organization will have to figure that number out. Like you said, there are customer-facing organizations that face big traffic spikes; Netflix is probably one of the big examples, with spikes on its servers all day.
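The "cost curve" question raised here, what share of a workload is static enough to justify reserved or dedicated capacity, is ultimately a break-even calculation. A toy version in Python; the hourly rates are invented for illustration, not any provider's real rate card:

```python
# Toy break-even between reserved and on-demand capacity.
# Hourly rates are invented; real numbers come from the provider's price list.

HOURS_PER_MONTH = 730

def monthly_cost(avg_instances, reserved_count,
                 reserved_hourly, on_demand_hourly):
    """Reserved instances are paid for 24x7; overflow runs on demand."""
    reserved = reserved_count * reserved_hourly * HOURS_PER_MONTH
    overflow = max(avg_instances - reserved_count, 0)
    on_demand = overflow * on_demand_hourly * HOURS_PER_MONTH
    return reserved + on_demand

# Workload averaging 10 instances: compare running everything on demand
# against reserving the static base (say 8 instances).
all_on_demand = monthly_cost(10, 0, reserved_hourly=0.06, on_demand_hourly=0.10)
mostly_reserved = monthly_cost(10, 8, reserved_hourly=0.06, on_demand_hourly=0.10)
print(all_on_demand, mostly_reserved)
```

The larger the static base, the more of the bill it is safe to lock into reserved capacity; the spiky remainder is what genuinely benefits from on-demand or spot bursting.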
So you never have a dull moment in Netflix's operations life; this is what a lot of people blog about. So it will be different for different customers; it will be different for every company's profile. I think the question is: what percentage of your workload is going to be fairly static, which you can convert into either a private cloud or a semi-private cloud, as in RIs or fairly dedicated instances? That is probably the question for a lot of us.

I also agree. Ninety-odd percent of the time, the public cloud will do what you need. Unless you have very specific needs, some performance requirement where you want something close to bare-metal performance, only then do you need to go elsewhere. The flexibility that Amazon or GCP provides, and the ability to spin up a few hundreds or thousands of nodes immediately, is something you can't get in your private cloud. But on the performance part, there are a few use cases that would need it. We have been benchmarking our media processing on GPUs and so on, and there are a few use cases where the virtualization layer slows things down a lot, and we feel those specific pieces could make sense to move out of the cloud provider onto dedicated boxes. But those use cases are very, very limited, and most other problems can be solved using Amazon or GCP.

Another question: we have seen a lot of change in the culture between developers and operations over the last decade. Now there has been some kind of a truce; before, there used to be fights all over the place.
So personally, how has the whole DevOps culture shift played out within your own organizations, between the developers and the operations folks?

Personally, I have been involved on both sides right from the beginning. For more than a decade I have been in both ops and dev, and a lot of times I see people coming in and saying "I'm a DevOps guy", and I ask them what kind of development they have done, and they say "I just handled the servers". It's kind of a funny thing, right? A lot of people say they are DevOps, but they don't do the dev part of it; they don't develop the tools or the things that are needed. I have been doing both sides, handling all the infrastructure as well as building the products; it's been a mix for me all along, through all the startups.

For a startup, there is no other way: you will be forced to do the ops. You can't say "I will just recruit an ops person"; somebody needs to pick up the ops and start loving it. But as I said in the talk, there is now no difference between your code and your infra. Unless you know how your infra works, you can't really architect anything. You can't say, okay, I would like to architect some key-value store, and then admit you don't know Redis. Before you start using Redis you need to know that Redis will be there, that Redis will run on a cluster, that it has sharding; all these things you need to know before you write your code. So there is no difference between those two anymore. People just start with an AWS instance, but as you architect the code, you can only architect the code
only if you actually know how the infra is lined up for you. And then you feel a sense of achievement, right? Because you are actually taking ownership of what's being shipped to the consumers. It's not like some other guy is going to get caught out; the developers need to take care of what's really happening and own it.

I think Baron Schwartz, one of the people I met, he is doing a startup in this space, recently said that DevOps as a term is dead; you won't need it anymore. To be fair, what I have seen in the last five years is that there has been a tremendous improvement in the understanding of operational concepts from the developer side, not necessarily the other way around. That has been the trend: these days you have developers who are genuinely interested in infra and want to know more about it. Part of it is that they are being forced to, because you essentially have distributed components these days; I don't think any fresh organization is starting out with a monolith anymore, that's all legacy. One of the good positive changes this has brought is that developers have a real, solid understanding of how the ops world runs. But not necessarily the other way: on the other side, people are still calling themselves DevOps without really trying to figure out the programming paradigms, the what-happens-if questions. That has been a slight shortcoming from the operational side; there has been no real effort to understand how the dev world runs, whereas from the other side there has been tremendous effort. So I think the sysadmin world, the typical system admin, needs to step up as well: go deeper into the dev world rather than just calling themselves a dev because they write scripts. That is not a functional role anymore. They have to go deep into the dev world and understand
how the latest stacks work.

So if you were to pick three or four technologies you are really interested in, ones that have solved a lot of the problems you've faced, whether deployments, configuration management, monitoring, anything, tools that people might not have in their stacks yet, what would they be?

Yeah, one thing we use a lot is Ansible. Previously I used Puppet, and I tried to shift for a while, but it was horrible to think in that kind of way, so Ansible made sense for us; it's easy to decide what you build and just write it down. On the monitoring side, we personally use a few different things. The ELK stack is one thing we use a lot; we run our own Elasticsearch cluster, and Elasticsearch and Kibana are the two things we use for monitoring. I have heard a lot of good things coming out about Prometheus, and I do want to dig into that and see how it works, but I haven't had the time. So that's our monitoring side.

And what about container orchestration?

At work we don't use Kubernetes, we use AWS ECS, but personally I have been playing around with Kubernetes, and the project itself is amazing. The community is also great, and it's so easy to start your services and deploy them on the cloud. I think if AWS had something like GKE for Kubernetes it would be a great feature, and maybe that could be one reason why we might just move to GCP. So I think Kubernetes is a great project.

I have to second that. Kubernetes is something that has made a massive difference. If you look at, let's say, the news sources and so on, there is a clear pattern that
Kubernetes is the better choice. Having said that, I have been most impressed with two things. One is Rancher, which is essentially a meta-layer over Kubernetes where you can manage any container platform, make sense of everything you need in a container orchestration platform, and it supports multi-cloud out of the box. That has given a real boost to at least the stateless container deployments I have seen. The other important thing, which has not been talked about as much, is the Cloudera stack for MapReduce and big data requirements. Not much hype, but you see very good designs from people who use it; that's again a very good open source project with a lot of traction. Apart from that, there is a small tool which always goes hand in hand with me: when I speak of ELK, I always say ElastAlert. You can never run an organized ELK setup without ElastAlert. It's one of the most underrated add-ons for any tool: you can do lots of analytics in ElastAlert itself, regular expressions, counting, mathematical functions, and actually make sense of the data rather than just keep storing it.

For us it's Elasticsearch. We use it for many different things: for monitoring, for our own search, and even as a data store from which we generate dashboards for our customers, so it's not just for ops use cases. A relational database will only scale so far, but if you need deep paginations, Elasticsearch can give you whatever you ask for, with ease. And we are very thankful for Redis. We use Redis a lot, to store our state, to store the cache, everything, and we run it in a cluster environment. And I must mention Terraform. Terraform is
actually template-driven, and it's pretty easy for a developer to write Terraform, because it's easy to understand: I want this instance, I want this configuration, this is how I'd like the stack to look. For the past four years, OpsWorks has been our predominant deployment tool; for GCP we use Kubernetes, but OpsWorks is where we run everything. We use it as our configuration service, for our deployments, for management.

With all these questions: how do you guys typically manage your development environment? Do you prefer to host your dev infrastructure locally, or do you prefer a platform setup that mimics the production environment?

So, at RCCM, what we typically do is write wrappers around most of the common APIs that are available, and we keep internal versions of all these things. They don't give the same performance levels, or anywhere close to what the production environment gives us, but at least it helps us save a lot of dev cost. What is the practice that you guys follow for development?

For dev, we don't give every developer separate cloud infrastructure; we try to mimic production as much as possible. But there are a bunch of services where you don't need to mimic anything. Say a Rails deployment: it's like running on bare metal; if it runs on your Mac, there won't be any difference whether it runs on an EC2 instance or on your own machine. For those things you don't need to mimic. But we do mimic things like SQS and DynamoDB; we can use a lot of open source from Netflix and also Spotify, where they have open-sourced equivalents that do the same thing.

I'm slightly opinionated on this. As a principle, nobody should be keeping servers with them; this is what we think, because if you factor in the
cost of power, electricity, and the real estate of running these racks, the economics don't make sense. I once saw a conference room in Bangalore full of servers, two or three racks; the rent on that room for one month could have paid for those servers in the cloud for a year. And you also need cooling and support and so on. You are far better off on the same cloud, or at least on an alternate cloud, but it should be as close to production as possible. I've seen people use a bit of Vagrant and so on on their local laptops; that's the only thing that should run within your office. Anything you want to host and make accessible, my belief is that it should be on the same infra your production runs on. If your production is on AWS using, say, ten managed services from AWS, your dev stack should exactly replicate those ten services. That is what we have seen, because otherwise it ends up complicating dev cycles: so many regressions can happen because somebody assumed something was there which only pre-existed in production. The cost benefit you get, if you look at it as a holistic package, is not worth it.

Yeah, for me, when I say local, it's on your Mac.

Exactly, that's the max; otherwise it should go to the cloud.

Our experience was similar. Previously I used Vagrant; now we have started using Docker Compose to run all our applications and those kinds of things. But when we need to interface with anything on AWS, say SQS or DynamoDB, we put it on the cloud and do our dev runs there. I know there are a few open-source tools which can mimic SQS or DynamoDB, but we haven't yet tried them; putting it on AWS and running it is much easier, and we can just tear it down afterwards, so there's no cost associated
with that. But this is something that depends on your style, right? If you want to have everything local and tested, then you would want those mocks. And there are a few services, even among our own microservices, a few pieces that are very heavyweight and cannot run on a normal local Mac. In those cases we definitely have to go to the cloud.

So, a couple of questions. For those of you who run production workloads in containers: have you had any issues with Docker whatsoever? Hasn't it been causing kernel panics or any sort of issues that bring down your infrastructure, or does it just work out of the box 100% of the time?

I've seen those comments on standalone Docker and even about Kubernetes and so on. In fact, at the last event there was a heated discussion on why Docker monitoring is not as good as it can potentially be. We started from that problem, and that is how we came across Rancher. One of the things it does is treat every container, which is obviously stateless, as immutable. Once you have that abstraction, and once you are able to kill any Docker container at your own will, you solve the problem, right?
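The "kill any container at will" idea can be sketched as a small control loop: replace a container at the first run of failed health checks instead of letting it degrade the host. This is a hedged, stdlib-only sketch of the policy; orchestrators like Rancher or Kubernetes implement it natively, and every function name below is invented:

```python
# Sketch of proactive container replacement on failed health checks.
# The check/kill/launch callables are injected so the policy is testable;
# in practice they would wrap the orchestrator's or Docker's API.

def supervise(containers, is_healthy, kill, launch, max_failures=3):
    """Kill and replace any container that fails its health check
    max_failures times in a row; return the surviving container ids."""
    alive = []
    for cid in containers:
        failures = 0
        while failures < max_failures and not is_healthy(cid):
            failures += 1
        if failures >= max_failures:
            kill(cid)               # cattle, not pets: don't nurse it back
            alive.append(launch())  # replace with a fresh container
        else:
            alive.append(cid)
    return alive

# Tiny demo with fakes: container "c2" never answers its health check.
killed = []
new_ids = iter(["c9"])
result = supervise(
    ["c1", "c2"],
    is_healthy=lambda cid: cid != "c2",
    kill=killed.append,
    launch=lambda: next(new_ids),
)
print(result, killed)  # c2 is killed and replaced by c9
```

The design point is the one made on stage: the check on the Docker API is itself the health signal, so an unresponsive container is replaced before it can drag the whole machine down.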
The first thing that can happen is that your Docker daemon becomes unresponsive, and if you leave it, it will eventually bring the entire machine down. So you want to preempt that, and the first sign of it is Docker no longer responding to your HTTP or whatever API request. If your orchestration solution at the container level can say, "by my health check this container is down", that should be the trigger: you kill that container, or restart it, or kill it and launch one more. If you proactively do this, you will not face the exponential degradation of one bad Docker container bringing down others. That is one big learning we have had. The second thing is the resource limits in the latest kernels: how much a Docker container can use is enforced much more rigorously than when Docker started out, so you can localize the problem to a single container much better than, say, a couple of years back. That has helped.

From our experience, we don't use Docker for very heavy things, so we haven't faced these kinds of problems. But to give an example from the non-Docker side: a memory-intensive process will get into a bad state and stop responding. It has happened a lot and it keeps happening, and the easiest solution is what he said: kill the machine and move on to the next machine. Treat your servers, treat your infrastructure, as cattle, not pets. You have other machines, other servers, other containers ready, and then you don't have to worry about the non-responsive one.

My primary take is: when you have such issues, you can still get the resource-limit benefits by using cgroups; you don't necessarily have to use Docker. That is why I feel, and it is a personal opinion, that Mesos is a very underrated tool, because it helps you run
non-container deployments with all the resource constraints of Docker. That is one of the primary reasons why I am not a huge fan of Kubernetes: I can't run non-Docker workloads on Kubernetes, whereas with Mesos that is not a constraint, I can do that. And in the latest version I have the notion of pods, the notion of rolling deployments, the same fine-grained control; every fancy feature that comes with the container abstraction, I get without the containers. And the other thing is: yes, with containers it's easy to package every deployment together, but as an organization you don't change your stack that often. Practically, a single organization doesn't run 20 or 30 different stacks such that you actually need a separate image, a completely isolated stack, for each one. You are going to use three or four languages at maximum, it gets out of hand beyond that, and the versioning will also be on par for most cases. Something like Heroku buildpacks works most of the time. This whole notion of containers: I'm a huge fan of the cattle model, a huge fan of the resource limits that cgroups offer, but not a huge fan of containers, because they force me to run a system in production that has thousands of open issues, and the common response I get is "upgrade your kernel, upgrade your operating system". As an ops person, that's not a feasible solution for me; every time, you're asking me to upgrade my kernel version. I'm not a huge fan of that. So how do you sort of promote, or how do you accept, using a system like that in production? Of course there are good user stories, use cases where Docker is successful, but I don't know if everybody is going to have the same experience. So do you
guys have any thoughts on that?

Like I said, it's not just about deployment and scaling in itself. Containers, the way I see them, are more of a paradigm for the entire dev cycle to run through. I've seen places with about 70-80 microservices in fairly large organizations; you might have one single language, you don't even need three or four, but I've still seen cross-functional teams get better coordination and better localization of problems with microservices. That is one of the biggest pushers for change in the container world. Take the example of someone running a top-50 Alexa site with potentially 300 developers: to achieve a certain sense of coordination, and to ensure one person's commit does not break another person's code, of course you can track that with project management and so on, but it's essentially the localization of the problem via microservices which lends itself to much better auditability. I think that is probably the reason why Docker in itself is so popular. Apart from that, the newer generation of tools, for example Rancher, essentially doesn't care whether the underlying unit is Docker. It essentially becomes just a package to you: you can even run a big binary, like a Go binary, or run it on your own infrastructure. So Docker, or the container in itself, is just another of those phenomena; five years down the line it might not be what it is today. But the overall concepts, having immutable instances, being able to deploy and package them very proficiently, going from zero to a thousand at a rapid clip, those are the fundamental paradigms that will stay. People will learn from Docker, and like any other technology, five or ten years down the line it might not exist the way we see it today, but the
learnings, immutable instances and rapid provisioning of containers, those are the concepts in a multi-cloud world that will stay, rather than the actual tool. Docker has, let's say, a very scary GitHub issues page if you look at it, but a lot of people do work with it, and we have seen fairly top-10, top-50 sites run on Docker and so on. So no major problems as such; it's not as bad as the comments or the issues page make it look. But it's still a work in progress; nobody can say Docker is fully mature, that's for sure.

One of the things I have observed, and we redistribute Kubernetes as OpenShift, is this: adoption is growing, in that people want to try it out, but if you look at the applications people are trying to run on container orchestration systems, the tough ones are stateful applications like your databases. There have been problems, but the community is pretty open, it's a growing community, and people are trying to solve these problems together. Look at the communities that have grown around Kubernetes with the CNCF: they have been standardizing what these interfaces need to look like, and what the response from each of these API calls should be, so that other companies and communities can write their tools around them, or adapt their tools to them. And yes, it has been a paradigm shift, as was said, because it's becoming more like: you are not going to go provision these systems in this data center in this place; it's more like, "this is my requirement, what do you have, meet this requirement for me." It's changing for operations. So yes, there are problems, but even the bigger companies are throwing their weight behind these technologies and behind adoption, so hopefully it will get sorted out. And we will not recommend the latest kernel on all production systems, but we
do recommend having the most stable interfaces in production systems, if you don't need the newest technology and stability matters more to you. Most governments or financial companies usually say, use the N-2 version, it's more stable, don't use the latest version of anything. But if you look at it, container orchestration systems are giving people the confidence to run the latest version itself. Nobody is packaging software into a Debian package or an RPM package anymore; they are deploying their GitHub source code in containers and taking it to production. That is what I wanted to say.

Any other questions? Thanks, everyone. Thanks a lot. Thanks for sharing your experiences. Thank you.