Hello everyone, my name is Libra and I am here to talk about a few things related to going agnostic in the cloud. One of the biggest challenges everyone faces today is around standardization, lock-in and multi-cloud optimization, as Shehna introduced at the conference, and I would like to pick up the same thread here. Some of the biggest concerns we have today, in 2018, are the same things that were there in the 90s and the 80s. Earlier they showed up with proprietary hardware and servers; today they have manifested on the cloud. The debates of those days were about freedom of choice, the free-software movement and so on, and they have translated into similar challenges on the cloud. Analysts who study this space say that the lack of standards is a very pressing challenge for most people, and a huge number of customers perceive cloud lock-in much the way people talked about the vendor lock-ins of the 90s. So with that as a baseline, let's get started. This is a simple slide to explain why infrastructure should be built in a cloud-agnostic manner. Typically, whenever you go for cloud-native managed services, you are paying for the software, not for the infrastructure. That is a fundamental change in what we are paying for. We are so used to paying for infrastructure on the cloud; with cloud-native services you are essentially paying for software served to you in a SaaS model, your database as a service, your queue as a service, and so on. That has an impact, because pricing for software is not deterministic; it is whatever product managers come up with. Pricing for infrastructure, on the other hand, can be predictable and universal.
That is the fundamental difference in how we build infrastructure for the cloud today. Now, if you look at why people want to go cloud-native, there is the lack of in-house skills, and running everything yourself obviously carries a cost overhead on the cloud. If you do a mix-and-match hybridization of your infrastructure, you can keep a few key components on native services, the ones which cannot really be replaced unless you build a lot of the machinery yourself. So let's say 20% of your infrastructure stays on cloud-native services that cannot easily be swapped for open alternatives, and the rest of it is completely cloud-agnostic infrastructure with no native dependencies. And cloud-agnostic does not mean that you move out of the cloud. You can still run on any of the top three or top five cloud service providers; what we are arguing for is that nothing should depend on the native services of the cloud, so that you can do a multi-cloud deployment and migrate your workloads between public clouds very easily. That is what we are asking for. After that setup, the obvious question is: why not just use the typical cloud-native services? The challenge with cloud-native services is that you are essentially locking yourself onto a particular platform, which is a big deal. There is nothing deeply technical here; the point I am trying to make is, again, an advocacy of the free and open-source approach.
All we are saying is that for every native service there exists an open-source alternative in the cloud-agnostic space, where you can do your own setups and remain completely free to migrate your workloads. This slide is only illustrative; there are multiple other tools available for doing the same things. Let's talk about scale, which is the first thing we want to discuss today. One of the fundamental challenges in scaling is knowing where to scale: you cannot scale your workloads unless you are very sure about what you cannot scale. That is the fundamental understanding to have before embarking on a scaling exercise. Your threshold-based checks, alerting and so on are mostly there to give you a comfort level; we all get a false notion that with these we are safe. But the things that need monitoring are something else. We need to monitor all the 5xx errors that happen on the infrastructure; a 5xx usually points to a back-end capacity issue, and those we can fix. The 4xx errors actually require a bit more effort, because they are related to application timeouts, API calls failing and so on. So that is something you have to monitor very well, and these are the parameters you have to feed into a decision engine before you actually go ahead and scale. Surprisingly, people still do not monitor average page load times as aggressively as they should; very few monitor them at all, which is an irony today. People also still do synchronous lookups everywhere. If you look at major sites today, you will see that time to first byte is a potential slow-loader for you, especially on 3G networks, and TCP slow start increases load times further. Next is a reference architecture for a cloud-agnostic environment.
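To make the decision-engine idea concrete, here is a minimal sketch of deriving 4xx/5xx rates from access-log lines and turning the 5xx rate into a scale-up signal. The log format, the function names and the 2% threshold are assumptions of mine for illustration, not anything the talk prescribes.

```python
import re
from collections import Counter

# Assumed log format: a quoted request followed by status code and size.
STATUS = re.compile(r'" (?P<status>\d{3}) ')

def error_rates(log_lines):
    """Return 4xx and 5xx responses as a fraction of all matched requests."""
    buckets = Counter()
    total = 0
    for line in log_lines:
        m = STATUS.search(line)
        if not m:
            continue
        total += 1
        buckets[m.group("status")[0]] += 1   # bucket by first digit
    if total == 0:
        return {"4xx": 0.0, "5xx": 0.0}
    return {"4xx": buckets["4"] / total, "5xx": buckets["5"] / total}

def should_scale(rates, threshold_5xx=0.02):
    # 5xx errors usually point at back-end capacity: a scale-up signal.
    return rates["5xx"] > threshold_5xx

logs = [
    'GET /api HTTP/1.1" 200 512',
    'GET /api HTTP/1.1" 503 0',
    'GET /api HTTP/1.1" 200 512',
    'GET /missing HTTP/1.1" 404 17',
]
rates = error_rates(logs)
```

In practice the lines would stream from the web tier or a log-analytics pipeline rather than a list, but the shape of the decision is the same.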
Again, it is a microcosm, not a complete architecture; this is just a representative view. Instead of taking your load balancing and similar services from the cloud, you typically run your own HAProxy-based load balancers, your own clusters, and your web servers with health checks; all of this is to ensure that you do not have any single point of failure. Do not rely on a storage mechanism which can act up all of a sudden. So, with no single point of failure, you have your DB masters, your caches, your Mongo cluster, and everything here is completely agnostic, which means there is no dependency on any native service of the cloud. Now this architecture, or something similar to it, is reproducible across clouds. You could be on AWS, Azure, GCP, Alibaba or Tencent; wherever you go, this architecture is exactly reproducible, which is quite impossible in a cloud-native environment, because you will always be held up by some one service which is not available on the other cloud, and that can be a blocker in your migration plans. This is not to say that you should always migrate your infrastructure; it is to have the ability to migrate your infrastructure.
Coming back to the same point: know the critical monitoring metrics. It is critical that you concentrate on things like the 5xx and 4xx rates discussed earlier. Now that we know which parameters drive scaling, it is also important to know the two methods by which you can scale: proactive and reactive. In the proactive method, you keep enough capacity built into your infrastructure, typically 30% to 40% of additional capacity, with the assumption that any immediate scaling can then be absorbed without breaking a sweat. This avoids provisioning delays: for example, if you use some fancy hardware for your databases, with custom SSDs, NVMe and so on, that kind of hardware cannot be had on demand. There is always a finite latency, and even if it is, say, 4 hours, that is still a finite latency; it is not on demand. This also covers cold-start problems. Cold-start problems are applications which cannot scale even if you throw hundreds of servers at them, because the cache has to warm up and everything has to be re-populated. We know of enough architectures at big, major sites where the entire memcache has to be populated on the local machine before it will start serving requests. The proactive method is typically used for systems on the critical path which you cannot scale dynamically, for example your DB masters. The reactive method, in contrast, is about scaling instantly, so it focuses on what you can scale very, very rapidly. Here you assume that your instances, containers, or whatever basic unit you use, can be provisioned at a very fast clip, and containers are the fastest means to do that: you typically have containers up and running in a few seconds.
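As a rough sketch of the two methods, assuming an example 35% headroom figure and a 60-second tolerance window (both numbers are mine, chosen inside the ranges mentioned above):

```python
import math

def proactive_capacity(baseline_instances, headroom=0.35):
    """Proactive: keep 30-40% spare capacity running for systems that
    cannot scale reactively (DB masters, caches that need a warm-up)."""
    return math.ceil(baseline_instances * (1 + headroom))

def can_scale_reactively(provision_seconds, tolerable_seconds=60):
    """Reactive: only viable when the basic unit (ideally a container)
    comes up faster than the spike you need to absorb."""
    return provision_seconds <= tolerable_seconds
```

So a 10-instance baseline would run as 14 instances proactively, while a VM that takes four minutes to boot fails the reactive test.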
Virtual machines, for that matter, take a few minutes no matter which cloud you use; assume that delay at any time. Typically what happens is that during, say, a sale period, or some sudden spike of activity at a particular time, by the time your scaling kicks in there is a few-minute delay while all the machines come up, and you cannot react to that kind of sudden workload. So containers are the fastest route to rapid provisioning, and ideal for things like flash sales or e-commerce events, where a sudden mention of your handle or some exponential burst of traffic is exactly the requirement; that is what we recommend for those cases. In any case, you need new-age data pipelines to help with your scaling decisions; your classical monitoring system is not going to help you scale. I would like to introduce you to one container management platform which has had a lot of success with very rapid provisioning and dynamic scaling. It is open source and supports almost all of the orchestration frameworks, so it does not say that so-and-so container orchestration solution is better than another; all it says is, I support everything and I give an overlay layer on top of all of it. It does service discovery by itself, it takes care of storage, and it can do the load balancing by itself: typically every service has a proxy embedded, so load balancing at the service level is taken care of. A lot of the things you would otherwise have to build alongside containers are abstracted out here as a service. It even watches the performance of the containers and can restart containers if they breach certain parameters. Beyond that there are a lot of operational features, like deployment upgrades, backups of your services and so on.
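The restart-on-breach behaviour can be sketched as a toy decision function; the metric names and limits here are hypothetical, not the platform's actual configuration keys.

```python
def restart_decision(metrics, limits):
    """Return ('restart', breached_metrics) when any watched metric
    exceeds its configured limit, else ('ok', [])."""
    breached = sorted(name for name, value in metrics.items()
                      if name in limits and value > limits[name])
    return ("restart", breached) if breached else ("ok", [])
```

A health-check loop in the orchestration layer would evaluate something like this per container on every tick.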
But overall it provides integration with all sorts of back-ends; in fact Kubernetes, as everyone in this room might know, is a lot more heavily used, and Kubernetes seems to be edging slightly ahead in the container management space. Before you actually say "I am going to launch more instances", it is very important to know what a good box is: if you do not have a good box, having more boxes of the same kind is not going to solve the problem. Let's say your primary requirement is about 6 GB of memory; typically, to be on the safe side, you over-commit and say, look, I am happy with 7 GB for this. Now suppose you instead size the box at 4 GB. The moment you hit that 4 GB, your pipelines and all your alerts will start saying, look, this is bad, you are seeing timed-out requests and so on, and your provisioning solution will fire more and more boxes of the same undersized 4 GB kind behind it. What happens then is an exponential degradation, because every time you spin up a new container and move requests to it there is a dispatching cost and some more requests are lost; a lot of things deteriorate in an exponential manner. So always assume a certain over-commit on a per-workload basis and never size right to the limit. To take the 6 GB example again: if you have to have 6 GB, just go 7 GB, because no matter what kind of provisioning system you use there is still a small finite latency; even with containers you are still a few seconds away. You are better off over-committing on a per-box basis so that sudden surges are absorbed in place, instead of having to fire a new container and re-dispatch all your traffic to it. Capacity planning is therefore again super important: how much throughput and how much I/O you can deliver on a per-box basis matters, because a lot of workloads these days are network- and I/O-heavy.
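The sizing rule above ("if you need 6 GB, go 7 GB") can be written down as a one-liner; the 15% over-commit default is my own illustrative figure.

```python
import math

def box_memory_gb(working_set_gb, overcommit=0.15):
    """Never size a box exactly to the working set: an undersized spec is
    inherited by every replica the scaler fires, and the fleet degrades
    exponentially. Round the over-committed figure up instead."""
    return math.ceil(working_set_gb * (1 + overcommit))
```

A 6 GB working set comes out as a 7 GB box, and a 4 GB working set as a 5 GB box.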
With a lot of containers, everything is typically modeled as a service in your microservices architecture, so the bandwidth required to move such huge amounts of data needs to be ready; otherwise you are just converting your web-scaling challenge into a network-scaling challenge, right? Essentially your compute problem becomes a network problem, because you are not pushing enough data out of your nodes. So whether you require 10G networking, whether you have sufficient IOPS, all of those decisions are very important, because unless you analyze a single box by going through these questions, no scaling algorithm is going to save you. There are similar problems with the data stores and so on, so make sure that you solve, or at least understand, what you are scaling. Do not assume that you can suddenly shard your way out of a crisis situation; you cannot, and sharding again brings a lot of problems of its own to solve. In fact, there are people who say that no matter what happens, don't shard. So make sure you understand the limitations of your scaling. It is very easy to be managerial about this and say "scaling is just another box away for us", but in production, under real traffic, that is not how it works; you have to be absolutely right the first time. DNS lookups are a very simple example that people often get wrong: you cannot scale them away passively. Let's say you are making millions of requests to something like a weather portal, or you run a score-tracking app where you have to look up and forward a lot of API requests; every one of those calls can incur a DNS lookup. So you have to set up a local caching resolver, using any of the competing open-source packages that can do millions of DNS lookups without any visible latency. Those kinds of things have to be ensured.
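A host-level caching resolver daemon is the real fix, but the core idea fits in a few lines. This in-process sketch is purely illustrative; the resolve function is injectable so the example needs no network access.

```python
import socket
import time

class CachingResolver:
    """Resolve a hostname once, then serve repeat lookups from memory
    until the TTL expires, avoiding a network round trip per request."""

    def __init__(self, ttl_seconds=300, resolve=socket.gethostbyname):
        self.ttl = ttl_seconds
        self.resolve = resolve
        self.cache = {}   # host -> (ip, expiry timestamp)

    def lookup(self, host):
        now = time.monotonic()
        hit = self.cache.get(host)
        if hit and hit[1] > now:
            return hit[0]             # cache hit: no network round trip
        ip = self.resolve(host)
        self.cache[host] = (ip, now + self.ttl)
        return ip
```

A real deployment would respect per-record TTLs and handle failures, which is exactly what the dedicated open-source resolvers do for you.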
Getting these little setups right, and making sure that you know what you cannot scale, will ensure that you are not surprised when you actually have to go ahead and scale. In the meanwhile, shoot any questions if you have them, or anything you would like to share. Okay, so I wanted to discuss the hybrid architecture. Essentially, you keep the static workloads, your DBs and your data stores, on the traditional model, your VMs and dedicated servers and so on, and the web-scaling part of it is completely containerized, where you essentially build microservices and launch them at whatever massive scale you want. This is the thing I was speaking about. You have a core network; I am not sure the text is very visible over here. On the core network there is typically a low rate of change: this is your network setup, your firewalls, your databases and so on, where you have a very low rate of change and a high cost of change, because of the criticality of these components, so you spend a very long time making a change, and there is low tolerance for disruptions; people do not want to be reverting their DB schemas or DB changes in the middle of the day. But the app-specific network, your APIs, your web layers and the services you work on on a daily basis, has a very high rate of change; the speakers after me will talk about how you stay happy in spite of this. So: a very high rate of change on a daily basis, and a low cost of change, because in a containerized setup the containers are all expendable, you can just take anything out, and there is a high tolerance for disruptions, because you are doing your A/B testing and releases all the time on this app-specific network.
On the core network, nothing much changes unless you are doing a major upgrade on your core layers, increasing bandwidth on your firewalls, or something of that sort. This setup ties in nicely with your CI/CD pipelines, Jenkins or Bamboo or whatever it is you want, and it supports not just your local bare-metal boxes; you can also use it to orchestrate across all your public clouds. I will give you a small example. This is an example of a multi-cloud setup: you can see that I have my infrastructure in two places, AWS and Google, and I can also do a lot of labeling, where I can say, look, this particular class of service, for example big data, should always be launched on Google, and EMR-style services should always be launched on AWS. That is an example of how you can set custom scheduling policies: essentially horses for courses, right? Wherever you get the best service at the best price, you ensure that you run your workload there. Two nodes on cloud Google, two nodes on cloud AWS; look at your container distributions across them. This is a very, very simple illustrative example. Of course there is an LB in front, which ensures that the entry point to any of the services goes through a proxy load balancer; that is pretty much the gold standard for load balancing, and it can handle millions of requests all the time. This approach essentially means that the same container can be deployed in multiple clouds, so you do not have to change the way you deploy and you do not have to change your CI/CD pipelines; this is just additional infrastructure, and it could be on any cloud.
So, quickly summarizing: if you want to be really performance- and cost-oriented, make sure that you build cloud-agnostic, and if you want to exponentially scale your web services and app services, go for a microservices approach based on containers. You have to make sure that you understand what you cannot scale in an instant, and make sure that you sufficiently over-commit and over-size those systems; this is very important. New-age methods are needed to monitor and to scale, so do not rely on the classical metrics alone to help you out; you need to define new-age pipelines. And this is a prediction for which we have already seen some solutions: any web-scale app of today and tomorrow could be multi-cloud. The recommendation we have for everyone is to expose yourself to enough challenges beyond the native services of your cloud, so that you are prepared for eventual multi-cloud governance. If you evaluate a second public cloud, you will surface enough challenges, and you can then iron out the performance issues ahead of time. Alright, I am done, thanks so much. Any feedback or questions, I would appreciate it.
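The label-based scheduling example from the multi-cloud demo ("horses for courses") could be sketched like this; the label names and cloud identifiers are made up for illustration.

```python
# Hypothetical placement policy: each label maps to the cloud that serves
# that class of workload best or cheapest.
PLACEMENT = {
    "big-data": "gcp",
    "emr": "aws",
}

def schedule(service_labels, default="any"):
    """Return the target cloud for the first label with a placement rule,
    falling back to 'any' so unlabeled services can land anywhere."""
    for label in service_labels:
        if label in PLACEMENT:
            return PLACEMENT[label]
    return default
```

A real scheduler would also weigh capacity and price signals, but the policy lookup itself is this simple.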