Is it okay? Okay. My name is Vikrant Rathod, and today I will be talking about building next-generation APIs using Python, PostgreSQL, JWT and LXD. Hopefully you'll spend your evening here and maybe find something useful in this talk.

My self-introduction is already on the slide, so no need to say a lot about it. Basically, I am running my startup in Singapore, Roka Technology, where I'm trying to build a multi-channel commerce platform. Instead of dwelling on that, I will come straight to the session: how I'm creating this app.

How many of you are aware of Python? All of you; you must have programmed in Python, right? So I will start with why I chose Python for my work. The very first reason, and a very important one, is that it is pseudocode that runs. Python looks like pseudocode, but it runs. Right here (Ctrl-X Ctrl-L, the apicraft code) it's just simple pseudocode; I can just print something and it runs. Pseudocode that runs: that was one feature.

The next reason I chose Python is that it fits my thinking and my philosophy: "beautiful is better than ugly". This is PEP 20, if you are aware of Python, and it fits my brain very clearly. I especially like the one that says "explicit is better than implicit". In 2003 or 2004, when I was starting to build a new app, I had a decision to make about which programming language to choose. At that time Ruby on Rails was just getting started; it was not even at version 1.0, and there was one small book out, written with David Heinemeier Hansson, which I bought and started reading. My problem was the ActiveRecord pattern it used: ActiveRecord automatically picks up the objects based on your MySQL schema, which is a lot of magic, and I did not like it. At that time I got introduced to Django, where explicit is better than implicit: all models are defined explicitly. That was one of the reasons, and since then I picked up Python and never gave it up; I've been working with it ever since. I think all of you are aware of PEP 20, right? That's one of the reasons.

The second reason: a lot of people have problems with performance in Python. I never did, because performance-critical code can be interfaced with C, the way the machine-learning world has done with NumPy arrays and the scientific libraries, which I started using. Another important point for me was "readability counts": looking at the code, I can read it and understand it, because Python has a very explicit rule about whitespace. Many people don't like it, they hate it, but I always liked it. If you follow Go (I don't know how many of you do Go development), they also have gofmt, and nobody questions why the format has to be the same for everybody; in Python, likewise, editors can take care of the four-space rule and format your code for you, so you don't need to do much. That was why I chose Python for my work.
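PEP 20 actually ships inside the interpreter itself, so you can read the lines I just quoted from any Python prompt:

```python
# The Zen of Python (PEP 20) is built into the standard library; importing
# the "this" module prints all of its aphorisms, including "Beautiful is
# better than ugly", "Explicit is better than implicit" and
# "Readability counts".
import this
```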
Then my second question was: why PostgreSQL? The first time I started working with PostgreSQL was 1996, when it was just a pure C library. And when MySQL became very popular while PostgreSQL was still not popular, I chose PostgreSQL anyway, because it was SQL-92 compliant. I could write inner joins at the time; outer joins were not there yet, then left outer joins arrived, and they kept enhancing the features and brought in SQL-92 conformance. The second reason was that it has been constantly developing, the clustering story kept improving, and again, it fits my brain. Those were my reasons for choosing PostgreSQL back then, but today there are a lot of other reasons to choose it: today it has native JSONB support, which is as performant as MongoDB, and I use the built-in UUID type. Instead of an integer ID or auto-increment fields, I now use UUIDs, so that I can give everyone a unique identity in my system.
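As a minimal sketch of those two features (not the actual models from my app; the table and column names here are illustrative), a SQLAlchemy mapping with a UUID primary key and a JSONB column looks roughly like this:

```python
import uuid

from sqlalchemy import Column, String
from sqlalchemy.dialects.postgresql import JSONB, UUID
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Product(Base):
    __tablename__ = "products"

    # A uuid4 default gives every row a globally unique identity, so data
    # created on different servers can later be merged without key collisions.
    id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
    name = Column(String, nullable=False)
    # JSONB stores schemaless attributes natively in PostgreSQL, and it is
    # binary-encoded and indexable, unlike plain-text JSON columns.
    attributes = Column(JSONB, nullable=False, default=dict)
```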
Now I come to what JWT is and why JWT. How many of you are aware of JWT? Okay, so you already know what a JSON Web Token is. The reason I chose JSON Web Tokens instead of relying on OAuth interfaces and other libraries is that with JWT I can make my application stateless, stateless in the sense that if I have 100 API servers, they can work independently. The second reason is that I authenticate a person using encryption keys: I don't do any database lookups, I take all those calls away. I'll show you in the code later on; I have example code for the JWT part. And the payload is very small: when you send a JWT along with HTTP, the payload is very, very small. I send it as a header, though you can send it in the body if you want.

The last part to come back to is LXD. Why LXD and not Docker? I used to work with jails before, FreeBSD jails as they are called, and Linux has long had something similar in chroot: you change the root, and all the processes running inside that changed root stay within its boundaries. In 1999, actually 2001, we launched a server product based on exactly this kind of virtual server, before OpenVZ, just using jails and chroot, so I was familiar with all this. When Docker came up, I started working with Docker. The problem I faced with Docker was that I did not like working with a Dockerfile plus supervisord or runit, because I was relearning the whole process again. An LXC container (has anybody here used LXC containers?) runs the distribution's own init, so the existing service scripts are already supported; I do not need to rewrite my daemons for supervisord, runit or systemd. That was one reason. Another: whenever I needed to troubleshoot or check logs, I always kept my own Docker images with SSH inside, so that I could get into the container and check the logs, because otherwise I would need a centralized logging system, which is another system to maintain.

Now, how many of you are aware of LXD? Only one? Okay. When Docker came up, what Docker provided was an API over the underlying cgroups and kernel namespaces (the Linux analogue of jails) to implement a container. The first versions of Docker also used LXC as the underlying machinery, but later they moved away from it; I still have not understood their reason for bifurcating, and I actually wrote a blog post about it at the time. Then LXC 1.0 was launched, and it brought three main advantages. One: it supported unprivileged containers. Unprivileged containers means you can run a container as a plain user, so the container does not touch kernel resources as root; the container's processes are restricted by ordinary user security. Two: it just worked. I did not need to write any initialization scripts; everything worked seamlessly, as it would on bare metal, on an Ubuntu server, on Red Hat or on any other server. And after LXC matured, they built a daemon on top of it called LXD. LXD provides a REST API, similar to Docker's, to manage your containers using LXC; LXD can help you build your own cloud of containers. What Docker can do, LXD can also do. LXD consists of three different things: a system-wide daemon (lxd), a command-line tool (lxc), and a Nova compute plugin. You are aware of OpenStack already, right? With the Nova plugin, instead of launching KVM-based VMs you can launch LXD containers and manage them directly. And if you do not want to go into OpenStack and all that, you can just use the plain LXD APIs; they are very simple REST APIs you can use to connect.

The beauty of it is this. Suppose I have an application: I created my application, I created my container, and I froze my container. The first step is that I create an image out of that container and put it on a repository, so whoever wants to use it can pull that image directly. In a Docker solution you need some central or outside repository, but LXD has built-in support for serving those images as an image server, and you can restrict it: it is hosted on your own premises, so your images are not accessible to people who are not authorized to use them. Then you can launch as many containers as you want from that image. The time it takes is about the same as Docker, because it is still a container; maybe you run a few more services than in a Docker container, but the overhead is still not heavy. That is why I chose LXD containers.

Now I will come back to the application itself. All of you have worked in Python, so you already know virtualenv and everything, right? (Is the font too small? I don't have mirroring, so I cannot see my own screen. 720p? Okay, perfect, here you go.) What I did is create one Python app using Flask. How many of you are aware of Flask? Flask is a micro-framework, and I'm using it with blueprints and the application factory pattern. I don't know if you know the application factory pattern; in short, it makes the code modular, so you can write your own modules. I go with a microservices architecture in Flask, but I still try to keep conceptually similar APIs in one single module; I do not split them into separate microservices. That is my design; you can prefer your own choices.
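As a minimal sketch of that blueprint-plus-factory structure (the file names and config handling here are illustrative, not the actual repo layout), it looks roughly like this:

```python
from flask import Blueprint, Flask, jsonify

# A blueprint groups conceptually similar endpoints into one module.
auth_bp = Blueprint("auth", __name__, url_prefix="/auth")

@auth_bp.route("/ping")
def ping():
    return jsonify(status="ok")

def create_app(config_object=None):
    # The factory builds a fresh app per call, which keeps modules decoupled
    # and makes it easy to run the same code under different configurations.
    app = Flask(__name__)
    if config_object is not None:
        app.config.from_object(config_object)
    app.register_blueprint(auth_bp)
    return app

if __name__ == "__main__":
    create_app().run(host="127.0.0.1", port=5000)
```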
So I developed this simple app to show how I implement authentication using JWT and how I can use it across my multiple apps. Right now the app is very simple. This is the config file for the Flask app, here. This is the auth API setup code; what it does is very simple, it just sets up the app. I do not write app code there; the app code is in authapi.py, and that is where I actually wrote the logic.

I used bcrypt for the password handling, because before allowing a user to get a token, you still want a login and password from the user, for any app. That is the first thing, so you need to create an API for login, and obviously I created it beforehand. Here I'm running the app at 127.0.0.1, and I have a test user. (Sorry, I hadn't started the server. Right, here.) Once I logged in, I got my access token; this is how the access token looks. My application also needs user information, profile details and so on. Now, once I have this access token, I do not need to do any database lookups.

Here is the create-token code I wrote, precisely the create-token code. Since the next talk was going to be about JWT, I did not cover the token internals in detail; sorry, I remember you said you would cover it, so I did not explain each field's role. But the beauty of the JWT token is this. The basic problems in your application, when you do authentication, are: first, you need to check whether this user is a valid user; and second, whether this user has the authorization to use a given resource. These are the two problems you want to solve in your authentication. What I have done is pass the username and the role in the payload. This signed token goes to the client; the client doesn't need to know which user it is or what the role is, he just needs to send me back the token, and based on that I can determine his level and give him access. Any doubts here?

Next to create_token I wrote is_valid_token; I'm using the PyJWT library directly to validate the token. And here comes the beauty of Python: I wrote one decorator, require_valid_token, and whatever resource I want to restrict, I just apply that decorator. See here, this endpoint has require_valid_token on it. So I call this API, and note that this is the new token. Now, the problem with JWT is that since there is no central authority issuing these tokens or keeping track of them, if somebody gets hold of a token, he has unlimited access to your resources. The way to solve this is timeouts, because as soon as I introduce a centralized authority like OAuth, it becomes a single point of failure again: if I scale my application, I need to scale that central authority too. So for a truly stateless twelve-factor app, I use expiry timeouts. You can restrict things in another way as well. Currently I'm using a symmetric algorithm, HS512, so I do not check public and private keys; but if I want to restrict client usage, I can issue the client a certificate, and only the client encrypting its data with that certificate will get access to my information. So you can also have public and private keys with JWT; that is another way to implement restrictions. I find the symmetric setup much easier to scale and work with. And what happened here is that the token expired: as I said, this is the older token, not the new one.
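Here is a condensed sketch of that create-token, validate-token and decorator flow using PyJWT directly. The secret key, the claim names and the helper names are illustrative; this is the shape of the idea, not my exact code:

```python
import datetime
from functools import wraps

import jwt  # the PyJWT library
from flask import g, jsonify, request

SECRET_KEY = "replace-me-at-deploy-time"  # generated by the deploy scripts
TOKEN_TTL = datetime.timedelta(minutes=5)

def create_token(username: str, role: str) -> str:
    # The username and role travel inside the signed payload, so the
    # authorization level can be decided later without a database lookup.
    payload = {
        "username": username,
        "role": role,
        "exp": datetime.datetime.utcnow() + TOKEN_TTL,  # the timeout
    }
    return jwt.encode(payload, SECRET_KEY, algorithm="HS512")

def require_valid_token(f):
    # Any endpoint wearing this decorator rejects requests that do not
    # carry a valid, unexpired bearer token in the Authorization header.
    @wraps(f)
    def wrapper(*args, **kwargs):
        token = request.headers.get("Authorization", "").replace("Bearer ", "", 1)
        try:
            g.jwt_payload = jwt.decode(token, SECRET_KEY, algorithms=["HS512"])
        except jwt.InvalidTokenError:
            return jsonify(error="invalid or expired token"), 401
        return f(*args, **kwargs)
    return wrapper
```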
Just for your information, the command-line utility I'm using is HTTPie; it's again a Python client, which gives you a very beautiful, readable view of the requests and responses instead of using curl. It's simple.

Now your question will be: this is just running purely on my desktop, so how would I scale it, and how does JWT help me make my application stateless? That would be the question, right? And any other application could serve these requests as well. How I make my application stateless: first, for authenticating the user I'm not relying on any lookups. I'm not maintaining any sessions, I'm not maintaining any cookies. You send a bearer token as part of the HTTP header, and I'm fine with it; I don't need anything else. OAuth can do the same, but if you use OAuth then you need to maintain an OAuth server somewhere down the line (OAuth has its own advantages and disadvantages, whatever you call it), and for every access it needs to connect to that server and check whether the token is valid or not. With JWT, all the code lives within your application itself.

Now suppose I build one microservice which I call authentication, and I create a separate Python app called products, which serves products. The only link between these two is a single thing: the secret key. As long as both of them use the same secret key, they can decode and validate the token, and this key setup is done one time, when I initially set up the application. This is basically simple microservices using Python; this code took me around five hours to write, everything in it.

There is one more service I wrote. The problem is this: for your token, you want to give a timeout of, say, five minutes, or two or three minutes. I have defined a JWT timeout setting here, and I can change it. But before this timeout expires, do you want your client to log in again with username and password to get another token? No. What you want is that the client sends another API request, renew token, carrying the old token, which is still valid. If I receive an old but still valid token on the renew endpoint, I return a new token, which the client can reuse in the application. This way you safeguard your application: if by chance you lose your token, the access somebody can get to your application is limited to five minutes, or whatever timeout you set up.
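A minimal sketch of that renew endpoint, reusing the illustrative create_token and require_valid_token helpers from the JWT sketch above (again, the shape of the idea, not my exact code):

```python
from flask import Flask, g, jsonify

app = Flask(__name__)

@app.route("/auth/renew", methods=["POST"])
@require_valid_token  # an expired or forged token never reaches the handler
def renew_token():
    # The decorator stashed the decoded claims on flask.g, so issuing a
    # fresh token is just re-signing the same identity and role.
    claims = g.jwt_payload
    return jsonify(access_token=create_token(claims["username"], claims["role"]))
```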
And you run a program to generate the secret key on the server; I don't even generate it on my machine, I generate it on the server. So, what I did: I developed this Python app, and I can show you one place here. You can see these are two different apps: one is the product API, where I put all the product-related microservice endpoints, and in the other I put all the auth APIs. Now, if you look at the product API's config.py, the secret key it uses is the same, and since I'm using the same secret key, the JWT works across the apps.

Yes, please, go ahead. [Audience question about keeping the secret key safe.] No, and that is what I was trying to tell you: this key is not kept in version control. When I set up the application, I write Ansible scripts to deploy it, and as part of the deployment I created one utility, create-secret-key, which runs on deploy, creates the secret key and swaps it into the configuration, because the secret key has to be changed and it has to be the same across the apps. You don't do it in your version control; you do it in your deployment scripts. There is a reason I use Ansible, and a reason I use LXD: Ansible is much easier to work with for bringing your machine to a particular state. I did not prepare all those examples here, but it's pretty easy to do. So the secret key which is on the production machine is entirely, completely different from what is here; they are just two completely different things. This was one of the biggest problems: when I was discussing it with my development team, I told them specifically that no keys and no passwords should ever be in your version control, except the test data that you run with.
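The create-secret-key utility itself can be a few lines. Here is a plausible sketch of such a utility (not my actual one): the deploy scripts run it on the server and write its output into the app configs, so the key never touches version control:

```python
# create_secret_key.py (illustrative): print a fresh signing key so the
# deployment tooling can inject it into the application configuration.
import secrets

if __name__ == "__main__":
    # 64 random bytes (128 hex characters) is a comfortable size for
    # HS512 signing keys.
    print(secrets.token_hex(64))
```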
[Audience question: you may have one server that handles login and produces the token, but ten other servers, and you want to check that the token is actually genuine.] Right, let me show you how I do that part.

I'm starting a VM on my system; I'm using Vagrant here because I'm using LXD, and LXD only works on Linux. Now suppose this is my production host. As I said, I'm using pure LXD, a pure LXD server. So: `lxc list`. I have stopped these containers; they can be created automatically by the Ansible scripts, but just for the demo I'm showing this by hand. When I create these containers, I keep a copy, so for the app server I created one container, apicraft-app-1. (Sorry, you cannot see it; I can make it bigger. Is it okay now? Okay.) These are the two containers you see right there. Initially I created these two containers using an Ansible script, and in them I set up the DB, my private keys, all the keys; these containers are our production. Then I created images out of these containers: `lxc image list`. You see the images here, apicraft-app and the other one; this converts the containers into images. Now from an image I can launch new containers, and every container I launch will carry the same key that was in the original container. Even if I run millions of these containers, or deploy to hundreds of different servers, they will still have the same key; but to create the original container, the key was generated dynamically by the Ansible script. So you can do as many as you want.

Now I'm running this setup. Everything that was previously running on one machine I have divided into two containers: one is the DB container and the other is the API container, and these are the two containers already started. Now, a basic problem I always had is that sometimes I like to go inside a container and check things, so I use an SSH jump host (I don't know if you use one, but I do) to log into containers and check my things. So here: I'm running the client locally at 127.0.0.1, I'm sending an API request to my VM, which I'm running here, and the VM gets this information from the API server behind it. So my VM is the front. The physical host, whatever it is, could be bare metal, a Google Cloud VM or an Amazon Cloud VM; that is the front, and I'm running nginx on it. Inside it I created a load balancer, which connects to my API server behind, and the API server in turn connects to the DB, which is over here. The IP address I used is this VM's; you can check, and if you look here it is the same address. You can see here I have defined an upstream called backend, with only one server defined right now, and then I just proxy_pass to that backend. But if you look above, you will notice one more thing: I am still forwarding my request to port 80 of my container, not to any application-specific port, because inside the container I'm running another nginx that forwards to the app. You could point directly at the API server as well; it's up to you. The reason I put nginx there is that I can do throttling, I can put Redis in between, I can do caching and a lot of other things. This was one of the basic constraints of RESTful APIs, layered systems, if you know it, and this setup follows that layered architecture.

Now, currently I'm running one API server. Let's launch a new API server; I haven't tested this myself yet, I just built it at home and came here. So let's create one more API server from the image, run the server, and check whether we can load-balance it and still respond through the upstream. `lxc launch` with the image name apicraft-app, and I give the new container the name app-2. It will take some time... okay, done.

Yes, please go ahead. [Audience question: interesting; when scaling using containers, do you have some way to speed it up if you are making, say, tens of containers?] I'm doing the experiment in front of you: I created one container, and now I'm creating the second one. What I will do is set up nginx on the front end; I have the new IP address now. One thing first: I have not converted my Flask app into a systemd service yet. If I go to production I will convert it into a systemd service, so it starts and stops automatically with the container; since I haven't done that yet, I will start the service manually, but you can assume it can be done automatically.

So now I'm going into app-2. It accepts the keys, and I'm logged in using the same keys you have already seen; I did not change anything, it's the same old key as in the original container. [Audience question: you want it automated with Ansible scripts, and obviously you would put a secret key in the Ansible script; but if you deploy through Ansible scripts, you will not put any secret key there?] No: I set up all the containers using an Ansible script, and I can do that without touching them. Like with Docker, where you create a Docker image, I can create this whole deployment purely from the container: I had one container, I created an image from that ready-made container, and I launched this new container from the image with all the dependencies already built in, so I don't have to change anything. Of course, the dependencies may change over time.
[Audience question, partly inaudible, about creating the image before the secret key is generated.] Okay, my strategy would be this: if I'm deploying an LXD container and I want to use it the way you use Docker, it is very simple. I write an Ansible script to create the complete image, the same way you would build a Docker image, and save it; then, as the next step, I launch the containers from that image using the LXD APIs. Is this clear?

Now suppose my dependencies change. The beauty of LXC here is that before updating anything I can take a snapshot, a pure snapshot, and if you are using the btrfs or ZFS filesystems, I can create delta backups on a regular basis. The performance is really great; I have tried it on a production machine. On one single host you don't need to bother about multiple hosts. [Audience question: what if you want to deploy more than one of each, or you have something in a cloud and you just want to shuffle it around?] Okay, let me give you an example. Say you created one LXD container image for your app, on my computer or on yours. That image can be transferred to any cloud as-is using the LXD APIs, and in fact, using the LXD APIs I can do live migration of a running container from one machine to another. The only thing you need to take care of is that the LXD version on the destination side matches, because migration uses CRIU. But anyway, that is not the main topic; the point is that you can run an app here on your laptop and move it to production, even live-migrate it to your Google Cloud or Amazon Cloud if you want.

[Audience question: on some cloud services you have a pretty old Linux version, so you basically need the same Linux version on all your cloud services?] There is a minimum kernel version required to support CRIU; that kernel version is what you need to worry about. Obviously, I'm a fan of bare metal, not so much of cloud; there are reasons for that, but it's another topic. [Audience question: you said you also run containers on bare metal; isn't the issue the same if somebody asks you to deploy your API on bare-metal machines they don't want to upgrade?] You know, I did not look into Kubernetes or Mesos or Docker Swarm, for one reason. Speaking for myself: using LXD I can already create a cluster of 100 or 200 machines by writing simple Python programs, which I understand very well and which fit my team very well. For a startup like ours, just starting out, I'm not going to 2,000 or 3,000 servers in one night; I was very practical when I was thinking about the system. I would need Mesos or Kubernetes at data-center scale, where I have two, three or four data centers and I create pods and all those things, but I'm not that size.
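That "simple Python programs" part can literally be a few lines with the pylxd client library. A rough sketch, assuming a local LXD daemon and an already published image alias; all of the names here are illustrative, not from my actual scripts:

```python
from pylxd import Client

client = Client()  # connects to the local LXD daemon's REST API

def launch_app_container(name: str) -> None:
    # Every container launched from the image carries the same baked-in
    # secret key, so any of them can validate the JWTs.
    config = {
        "name": name,
        "source": {"type": "image", "alias": "apicraft-app"},
    }
    container = client.containers.create(config, wait=True)
    container.start(wait=True)

# Scale out the API tier by launching a few more identical containers.
for i in range(2, 5):
    launch_app_container("apicraft-app-{}".format(i))
```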
So I thought about it from a practical point of view, and with LXD we can do all our experiments on a laptop; once the image is finalized, we can push that image wherever we want, and as long as we have a backup of that image, we will never lose anything.

Okay, coming back: here I logged into API server 2, so now there are two servers, two containers, running. I go to the example app, the auth API, `source ../bin/activate`, and here I run Python; it's running. Now, for my load balancer, every time I add a server I need to change the load balancer configuration, so I go to my load balancer, which is on the main machine. The new server is 10.1.126; I add it to the upstream and restart nginx. Now when I call this API, which server will it go to? I don't know yet. It is still going to the old server... and now to the new one. You see, it is automatically redistributing, automatically load-balancing: if one goes down, the other will still be up. I scaled my app right here, without doing much work, and I set up the container only one time; I created an image and then I can deploy it to as many places as I want.

I can do this, you know why? Because I use JWT. If I were using sessions, I would have a lot of problems, because I would need sticky sessions: this session is tied to this particular server, so requests can only go to that app server. In this case, since a JWT request can be handled by both servers in the same way, I don't care where the request goes. But you must notice there is still one problem here, which I am trying to solve all the time: it does not take away the need for a clustered database. You still need one.

So: using Python, using JWT, and using PostgreSQL as the back-end database. If you look at my code now (Ctrl-X Ctrl-F, projects, apicraft, example app, auth API, models.py), I will show you one very good part. When you're working with JSON APIs, why would you use protocol buffers? Can anyone answer, if you have worked with any such protocol? Protocol buffers offer a lot of advantages because they offer a type system, and the generated code will always conform, so the client will not send wrong data; and even if you receive wrong data, you can validate it through the library, you don't need to write code for it. But I'm working with JSON schemas, and my biggest problem is that I still need to validate the JSON schema and generate properly formed responses for the customer.

Now, first of all, I obviously need to represent the object in the database. If I were using MongoDB, I could store the object itself; but here, as an example, this is the user model, simple. It has an ID and a username, and I'm only storing the password hash, not the password itself. You know this is a best practice: you only store password hashes, you never store passwords in your databases. And this password hash uses bcrypt with a minimum of 10 rounds, so that even if someone gets the raw data, it is very difficult to reverse-engineer.
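A tiny sketch of that hashing policy with the bcrypt library (illustrative helper names, not the model code itself):

```python
import bcrypt

def hash_password(password: str) -> bytes:
    # gensalt(rounds=10) matches the "10 rounds minimum" policy; the salt
    # it generates is embedded in the resulting hash string itself.
    return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt(rounds=10))

def check_password(password: str, stored_hash: bytes) -> bool:
    # checkpw re-derives the hash using the salt stored inside stored_hash.
    return bcrypt.checkpw(password.encode("utf-8"), stored_hash)
```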
All right, and on bcrypt: all of you are aware of bcrypt, right? Why use bcrypt? Because in bcrypt the salt travels along with the hash, so even if you manage to crack one password, you will not be able to see the other passwords, because each one is salted separately. And, I don't know whether you know this one, I'm using Python's uuid.uuid4() function to create a unique UUID automatically as the default. So where you would normally use an auto-increment field in the database, I use a UUID field instead. This guarantees that even if I spread the users across multiple servers, they won't collide, so I can scale my app. When you want to scale your app, you design it from the ground up that way.

[Audience question about validation.] Yes, this is where the question comes up. This part already makes sure that if I'm using any of the Python APIs to write data to the database, the constraints are checked; the model is what defines the constraints. But that only checks when I'm writing to the DB from Python code. There is another part: when I'm receiving a request, there is JSON coming in to me, and for that I use the Python marshmallow library. It works like this: I define a schema, then I pass my request data to that schema, and it automatically validates it and gives me the errors. I'll show you; I think here is the schema I created. You see here: you get an error, "length must be between 6 and 36", and you know where this comes from, from here. Now, obviously I could generate this schema automatically from the DB; I could use marshmallow to automatically convert the model I defined for the user into a schema. But I don't do it. As I told you, I follow Python: explicit is better than implicit. So I explicitly defined it. And I think that was it for today, what I wanted to discuss. Any questions on this, please?
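A minimal sketch of that kind of marshmallow schema; the field names and bounds mirror the error shown on screen, but this is illustrative rather than the repo's actual schema:

```python
from marshmallow import Schema, fields, validate

class UserSchema(Schema):
    # Validation errors such as "Length must be between 6 and 36." are
    # produced by the validators attached to each field.
    username = fields.Str(required=True,
                          validate=validate.Length(min=6, max=36))
    password = fields.Str(required=True, load_only=True)

errors = UserSchema().validate({"username": "abc"})
print(errors)  # reports problems for both username and password
```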
[Audience question: why not protocol buffers?] The first problem with protocol buffers, for me, was the developer experience when working with them, because in a startup you want productivity: speed to market matters along with accuracy, both things are important. I wanted whatever JSON requests and responses come in to be validated. The second point is that protocol buffers are a binary format, so they offer a space advantage. But, I don't know if you have worked with CORBA; I worked with CORBA in the old days, with the TAO framework, if you have used TAO, back when there was this theory that all software would become assembled from components. All the telecommunication systems were based on the CORBA protocol, and CORBA was a binary protocol: very difficult to debug, very difficult to understand. So we standardized on JSON. We said: don't go with protocol buffers; maybe it's new and shiny, but JSON works. As for the space question: remember I put nginx in the front? When I send the response through nginx, I gzip it, like a web page, and gzip compression is very mature by now; it compresses very compactly, so the difference between compressed JSON and protocol buffers is very small. The third point is important: I was working in a domain where the schema is constantly evolving; it is never fixed. I was working with consumer product data: take one single camera, look at all the parameters of that camera, and go country by country, and the set of features and descriptions required is different in each one. So you need flexibility of schema, and if I use protocol buffers, my problem is that I cannot be flexible. Look here, this is my product schema, very simple: everything is a JSONB column. It is a pure JSONB column; where I need a relationship I use a relationship, but the data is JSONB, and inside it there are JSON schemas. So what I did is create marshmallow models of the schema, so that it still conforms to a standard; but marshmallow is malleable, it can be changed programmatically at runtime. That is the reason I chose this over protocol buffers: the domain of the problem. If I were building some telecom system where the interfaces are very clean and fixed, I might go with protocol buffers. Any questions from anybody, please?

[Audience question about PostgreSQL clustering with Pgpool.] See, with Pgpool you cannot have master-master; you can have read-only replicas, so your read queries can be distributed across the cluster, but the write queries have to go to one place. If you want a read-write, master-master cluster, you would use Citus or Bucardo, and there is one more, Tungsten; so there are three or four solutions available for that. If you go to Citus Data, they have an open-source version where you can spread your cluster into a master-master cluster.

[Audience question about where the database data lives.] No, in this example I keep it in the data container itself, because it's an LXD container. If you can build a clustered database, and you build a container setup like this with a scale-out solution for PostgreSQL, then I would keep the database inside the container and make every container part of the cluster, because you don't have any performance penalty: you are effectively using bare metal. [Audience comment: the idea of the container is that if, for whatever reason, the container blows out of the water, you recreate it from the image, but then you don't have the current data in there and need to resynchronize; typically the volume where the database data goes sits outside the container.] No, because what I'm talking about here is a high-availability cluster, together with your backups; you never give up your backups, I never said you should. You have a scale-out, horizontally clustered database, so even if one instance goes wrong, let it go wrong; the other cluster members can rebalance it. [Audience comment: once your database gets to a reasonable size, the resync time when a member comes back becomes quite substantial, so in large production setups the storage typically sits separately.] This is where I was trying to tell you about the containers... sure, sure. Thanks a lot, thanks a lot for your time; I hope it was useful. To finish what you were asking: I checked the performance, you know; I don't go anywhere without benchmark tests. I have worked with these storage systems, with Gluster and Ceph, and the problem is that the performance, the IOPS,
given by local storage can never be matched by whatever network storage you talk about; it is very hard to get that performance, and containers can use the local IOPS of the underlying host directly.