Thank you very much for coming. I know Rails deployment is not as interesting or exciting or remotely as cool as Kubernetes and containers and Docker nowadays, so I'm going to get that out of the way. Today I thought I'd show you what we've built, what we've got, and how we can help you and your team do Rails deployment. But before I do that, there's a little bit of history around why we did this — why we ended up working on the uninteresting, solved, "we already have Capistrano, what's wrong with that?" kind of problem — and how we got to where we are.

Cloud 66 started as a different company, with a different name, doing a different thing, and we were a Rails shop. We're still a Rails shop. Like everybody else, we started with a PaaS, provided by our friends over at Heroku; we used to be their customers. I've been doing this for a long time — not just at Cloud 66; like you, I've been doing Rails for a long time — and I've worked in large and small companies, and every time, I start on Heroku and I leave Heroku. The question for me was: why? Why do we all love a PaaS like Engine Yard or Heroku — we really like it — but why do we all leave? We started asking this question back when we were that different company, when we started thinking about leaving the PaaS. We started asking other people: why do you leave? I'm just going to risk my whole career right now and ask, by a show of hands: who here has started something on Heroku? You've used Heroku before? Okay. And how many projects that start on Heroku end up never moving off? Are you still on Heroku with what you started?
You are? Okay, so this audience might be slightly biased, because you're here; I'm guessing you wanted to answer that question. But what we see is what I call the PaaS cliff. We all start with a PaaS and we fall off that cliff pretty soon, and the reasons differ. A lot of people say cost: it gets expensive very quickly, which is an odd one, because if you're doing really well, you should be able to pay the bills — but somehow it doesn't work out that way. Sometimes it's something around flexibility, or vendor lock-in. Regardless of what it is — I've even had cases like this: I used to work in a large bank, and I wanted to put a proof of concept together, a feasibility-study sort of project, and I did that on Heroku. And just like you do with a PaaS, I went out to my colleagues and showed it off, and they loved it and said, "Great, let me have it. I want to put salespeople on this; we're going to sell the hell out of this thing." And I'm like, "Well, it's kind of running on a PaaS..." And it's, "Whoa, no, no, no." So the ops guys come in and take it over, because they want to run it in a kosher way, and all sorts of things. And I'm like, okay, I'm never going to start with a PaaS again. So we jumped, and the jump turned out to be a different business — it actually turned out to be our business. This is what it looks like: you jump, and then you realize there's a lot of duct tape and that kind of stuff involved. So today, instead of talking about this, I wanted to show you what we've done and what that jump looked like — what we had to build ourselves to be able to deploy a Rails application in a way that gives us the same experience we really love about a PaaS provider, but where we are in charge.
We are in control of the cost. We benefit from the cost-cutting, the price wars going on between, say, Amazon and Google — they keep cutting the cost of every CPU cycle, and we don't get those benefits when we're running on a PaaS. We also wanted to be able to experiment with new technologies. Say we have time-series data and we want to go and use a time-series database, or some other cool technology our developers want to use, and all of a sudden we can't find a PaaS provider for it, so we have to go and add it as an external entity to the whole deployment — and then it creates these things called snowflakes. And you know when you get an email about Amazon having an issue with Heartbleed or Meltdown or whatever else, and they say, "just shut down your EC2 instance and restart it and it moves onto patched hardware"? And everybody goes, "Oh no — we have some stuff on that." It's supposed to be: just shut it down, start it up, run the Puppet or Chef script again, and it should work. But we all know it doesn't. So let me show you a quick demo of what we've got, what we built.

This is the Cloud 66 dashboard — what it looks like. On the left I have one container-based, Kubernetes, all-the-cool-stuff application that I use for other demos when we're talking about our container products. We have four products: two of them are framework-based, Rails and Node, and the other two are Skycap and Maestro, which are about building a Kubernetes cluster and deploying your application into an existing Kubernetes cluster. Maestro builds your Kubernetes cluster for you, and Skycap deploys your application onto any Kubernetes cluster you might have — whether it's from Maestro or you get it from, say, GKE from Google, from Amazon, or whoever else. But the focus of what I wanted to show today is the Rails product. So how do we start? What we wanted was to build something that
starts from the code and ends up in production. If you think about your typical PaaS experience: on one hand you have your code, and it then goes to production. That's great — every time you commit to git, it builds, does the tarballing and all that sort of stuff, and then rolls it out. What we wanted was to slightly modify that model and say: on one side, we connect your git repository and get the code; on the other side, you connect the cloud provider of your choice, under your own account. We fire up the servers we need, we configure and provision those servers to run your application, and we do the deployment. The catch here was: how do we know what kind of servers we need? It's not like there's one generic server that runs everything, right? If I go and read my Rails application and it says I have Dalli, it means I need memcached; if I have some sort of redis gem, I need Redis; and I need Postgres as well. Then, what kind of servers are we going to create — all different servers, one each, or all of them on one box? How do we go about doing that? So this is how the flow looks, and these are the different products I mentioned. I'll go to Rails and choose one of our sample projects. This is a simple Rails application, as you can see — a typical Rails app we put together. It uses MySQL. You can see there's a Dockerfile here, but it's not used in this demo. You can also see there's a .cloud66 folder with some extensions — things like environment variables you want to set. You don't need to use any of that; it's not a required part of this app, but we'll get to it later. So I'm just going to start deploying this: take the git repository URL, go here, and use that. It's a public repository.
So I'll use the HTTP URL, use the branch master, and call it railsconf-three. I'm running a live demo, so hopefully the demo gods are going to help, but I have some stacks I've already prepared earlier. I don't have a video of it, so if there's no Wi-Fi, I'm toast and we're going to have to improvise.

So what's happening now? We connect to the git repository, we pull the code out, and we look at some pointers: the Gemfile, the Gemfile.lock, database.yml, and some other configuration files we all put in there. The great thing about Rails is that we all know it's convention-based, so you can derive a lot from those conventions. So what did we find out? We are using Rails in production, which is what we specified as the environment. This is Rails 4.2.8 on Ruby 2.1.5, and we've defaulted everything to run on Phusion Passenger here. You can choose to run Unicorn or Puma if you want, but the default setting is NGINX with Phusion Passenger on top of it. The backend is MySQL — obviously that wasn't a super-genius thing to find out.
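As a rough illustration of this convention-based detection — this is a sketch, not Cloud 66's actual implementation, and the gem-to-service mapping is an assumption — you could map well-known gem names in a Gemfile to the backing services they imply:

```ruby
# Sketch: infer backing services from the gem names in a Gemfile.
# The mapping below is illustrative, not Cloud 66's real detection logic.
GEM_SERVICES = {
  "dalli"   => "memcached",
  "redis"   => "redis",
  "pg"      => "postgresql",
  "mysql2"  => "mysql",
  "sidekiq" => "redis"
}.freeze

def detect_services(gemfile_text)
  # Pull out every `gem "name"` declaration, line by line.
  gems = gemfile_text.scan(/^\s*gem\s+['"]([\w-]+)['"]/).flatten
  gems.filter_map { |name| GEM_SERVICES[name] }.uniq
end

gemfile = <<~GEMFILE
  source "https://rubygems.org"
  gem "rails", "4.2.8"
  gem "mysql2"
  gem "dalli"
  gem "sidekiq"
GEMFILE

p detect_services(gemfile)  # => ["mysql", "memcached", "redis"]
```

The same idea extends to database.yml for the chosen adapter, and to the Gemfile.lock for exact versions.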
It was in my Gemfile. Deploy hooks are what we have in the .cloud66 folder, which basically says: I have this custom package I want installed on a certain type of server. The reason for that is, instead of jumping onto a server with bash and just installing the thing — so that the next time, you don't remember what you did on that one specific box — we want to nudge people into doing it the right way and scripting it out, instead of doing it post-deployment. We also support Procfiles, which we all know if we've used Heroku — basically background jobs and that kind of thing.

At Cloud 66 we do a lot of dogfooding: we install and deploy Cloud 66 with Cloud 66. It's kind of an inception sometimes, but the idea is this. Think about our process and how we manage these things: we run Rails — Cloud 66 is itself Rails — and the background jobs are on Sidekiq. Now think about how long it takes to get a server from, say, AWS: about a minute, maybe two. Then you have to install packages on it, reboot it, and all sorts of things, so end-to-end your deployment might take anywhere from five minutes to 20–25 minutes. We have customers in about 110 countries. There are about 12,000 developers, and on a daily basis about 600 of them use Cloud 66 to deploy, about five to six thousand times a day. Do the maths and you realize there's no time when any Sidekiq worker is going to be quiet. So how do we deploy? That's the kind of thing we had to build, and it turned out to be a good product to build the business around as well — solving ops problems. How do you do those things?
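To make the deploy hooks idea concrete, here is a sketch of what a hooks file in the .cloud66 folder might look like — the exact schema and hook names here are assumptions based on the description above, so treat this as illustrative rather than Cloud 66's current format:

```yaml
# .cloud66/deploy_hooks.yml (illustrative sketch)
production:
  first_thing:
    - command: apt-get install -y imagemagick   # scripted, repeatable install
      target: rails                             # run only on Rails servers
      sudo: true
```

The point is that the package install is scripted and versioned with the app, instead of being done by hand on one server after deployment.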
When I mentioned Procfiles, I wanted to tell you that those are among the things we try to take care of automatically for you, because Rails is such a great framework with lots of opinions baked into it. Here you can add environment variables if you want. I've locked my Ruby version in my Gemfile, but if it wasn't locked, I'd be given a drop-down where I could choose the version I want. As you can see, this is very Rails-specific — it has a deep understanding of what Rails is, so it understands things like the Rails asset pipeline, or db:schema:load, and things like that.

Now, on to adding a cloud provider. We support about nine providers, I think, as well as bringing your own servers. If you're running on any of these cloud providers, you're all good; you can just connect your account, depending on the authentication method of that cloud provider — if it's AWS you can just enter the API keys, if it's DigitalOcean it's OAuth, and so on. But if you happen to run on, say, Hetzner or OVH or some bare-metal provider — apart from Packet, which we do support natively — you can bring them in with something we call registered servers: you drop a bash command onto the server, it calls back home and registers itself. You need Ubuntu on it, and that's basically it; it becomes part of what we call a cluster. So I've added my DigitalOcean account here, as you can see, and I get to choose which DigitalOcean region I want to deploy to, and each region has whatever server types it supports — I get to choose those. And here's the part I was telling you about: how do you work out what topology — to use a very big word — how do you actually distribute this app?
For us, the Rails part sits on a server at this point, and on the other side there's MySQL, in this case the only other component I have in this example. You can choose to share it with the Rails server — not good practice for production, but for the purposes of this demo I think it's fine, you'll forgive me — or fire up a new server on the same cloud, same account, same data center, same availability zone if you want, or use an external server, like a registered server somewhere else. Which brings me to hybrid: you can combine multiple clouds with this. As well as that, what you can do here is start with a small skeleton of the application. We don't want to say "tell me everything about this up front": you don't know how many servers you're going to need in production, you don't know how it's actually going to turn out. So we start with one server of each type, if you want, and then you can scale out as you go. All right, so let's just go ahead and click Deploy. What's going to happen now? We're going to connect to DigitalOcean — if my DigitalOcean login is still valid — and in a minute it's going to fire up a DigitalOcean server here. There we go. So that's started. We give it an animal name — you can see "gazelle" here; sometimes they come out as things like "compassionate pig" and other weird names, but I think you're lucky today. I'm just going to give this a different name; I can rename it here.
I'm just going to put "long" in the name, and the reason for that is we have a process in the background — as you can imagine, we fire up a lot of servers on a lot of clouds, so we have something called the janitor that comes and cleans up all the servers we fired up. If I don't put "long" in there, in the middle of the demo the server is going to disappear — which is a cool thing, you'll see how it works.

So what's going to happen now? We fire up the server, or servers, that we agreed on. Then we check for compatibility based on the best base image available on that cloud provider. Then we check the kernel — if there's a need for a kernel update, we do that — then any packages that need updating, all sorts of things: setting up automatic security updates, installing everything and anything that's needed. For example, say your Gemfile pulls in ImageMagick; ImageMagick requires a bunch of things to be installed, and we install all of those. If you have any custom packages you need installed, that's also done. All of that ends up being a server with an IP address, which then calls back and gives us that IP address. If your cloud provider supports private and public networks — for example a private NIC and a public NIC — we get those IP addresses back, IPv6 and all the rest of it. You get a chance to drill down into all the logs of preparing the server; here you can see, for example, that we install ufw as the firewall manager — all the activities that are happening right now. So let me go back to a stack I created earlier with the same setup. This is what it looks like at the end: I've got my server created — at the top you can see when it was deployed, just earlier today — and I have a Rails server, a MySQL server, and a process server which holds my Procfile processes. If I go in there, there's a worker and a scheduler.
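Processes like this worker and scheduler typically come from the app's Procfile; a sketch of what such a file might contain — the exact commands and config paths here are assumptions, not taken from the demo app:

```
worker: bundle exec sidekiq -C config/sidekiq.yml
scheduler: bundle exec rake schedule:run
```

Each named entry becomes a managed background process on the process server.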
I think they're Sidekiq processes running here, and my MySQL server — yeah, I can see my MySQL server with the MySQL processes running on it, but it's obviously sharing a server with my Rails one. Okay, so let's just see what the site looks like. It's just a simple Rails app that dumps whatever it gets in the header. If you notice, there's a DNS record created for this, and this is quite an interesting thing. Obviously every server you fire up has a public IP address; you can access it, or you can block it and say "I don't want this application to be publicly accessible" — you can do that. We also create a DNS record for you, the same way that, for example, Heroku does, and it points to the head of the stack: if you have a single server, it points to that one; if you add a load balancer, it points to the load balancer. It changes accordingly, and the response has a short TTL, pointing to that server. As well as that, you can have a DNS record that points to every single one of those servers, and you can change your own DNS records to point to those. Okay, so now I have my stack. Great — that was easy. Now what else can I do? At this point you have your application up and running.
It's working, but it looks like this. Now, here you can add a load balancer, or add backups for your databases. As you noticed, I deployed this on DigitalOcean, and I can choose to deploy it without the DigitalOcean load balancer and have HAProxy instead — I can do that. If I were to deploy onto, say, Amazon, then the load balancer option would be an ELB or an ALB. That decouples you from the intricacies of that cloud provider, which means you can take the application and move it from one cloud provider to another with the same topology: you can say "I have a load balancer and a MySQL server" and just move it from one place to another. You can also create database backups. Here I can say I want a managed backup, which means backups are taken of your databases — all databases. So if you have, say, MySQL, Postgres, and Redis, snapshots of all of them are taken at the same time. The files are then compressed, encrypted with your passwords, and shipped to the closest S3 region, or the one you specifically want — if, for example, you're running in Germany, there are data-residency requirements which mean you may have to ship it only to the Frankfurt S3 region. All those things are taken care of: how frequently you want it, and whether you want a binary backup, which is much faster, or a text one, which you can then inspect manually. And you can create a backup here.
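The compress-then-encrypt step of that backup pipeline can be sketched like this — a minimal illustration using Ruby's standard library, not Cloud 66's actual code, and it stops short of the real product's final step of shipping the result to S3:

```ruby
require "zlib"
require "openssl"
require "securerandom"

# Sketch of a backup pipeline: compress the database dump, then encrypt it
# with a password-derived key before it would be shipped off-site.
def package_backup(dump, password)
  compressed = Zlib::Deflate.deflate(dump)
  cipher = OpenSSL::Cipher.new("aes-256-cbc")
  cipher.encrypt
  salt = SecureRandom.random_bytes(16)
  cipher.key = OpenSSL::PKCS5.pbkdf2_hmac(password, salt, 20_000, 32, "sha256")
  iv = cipher.random_iv
  { salt: salt, iv: iv, data: cipher.update(compressed) + cipher.final }
end

def unpack_backup(blob, password)
  cipher = OpenSSL::Cipher.new("aes-256-cbc")
  cipher.decrypt
  cipher.key = OpenSSL::PKCS5.pbkdf2_hmac(password, blob[:salt], 20_000, 32, "sha256")
  cipher.iv = blob[:iv]
  Zlib::Inflate.inflate(cipher.update(blob[:data]) + cipher.final)
end

dump = "INSERT INTO users VALUES (1, 'demo');\n" * 100
blob = package_backup(dump, "s3cret")
puts unpack_backup(blob, "s3cret") == dump  # round-trip check
```

The restore path is simply the reverse: fetch, decrypt, decompress, load.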
So this is the stack I showed you earlier — this one has both databases backed up and a load balancer. Here I can download a backup or restore it, and when I say I want to restore, it goes to all the databases I have — Postgres, Redis, everything — and restores them all back. If you combine that with the code and the git ref that was deployed, you have a time machine: you can go back and forth in there. One note is that you can use this with your own database if you want. If you're using, say, Heroku Postgres, or Cloud SQL Postgres, or RDS from Amazon, you can do that; you don't have to use the databases that come here, and the same goes for load balancers. The last one around here is SSL certificates: if you have your own SSL certificate, you can add it here, and it will be added to the load balancer or to the server's NGINX, depending on the topology. You just say "I want SSL", and it will be added to the load balancer as SSL termination if you have one; if you don't, it will be added to NGINX. If you say "I want to start small" and you just want to add an SSL certificate to the only server you have, it goes in at the NGINX level, and it just works. And sometimes your colleague comes along and says, "Why is this thing running without a load balancer?", and goes and adds one. We then detect that, and we move the SSL certificate from NGINX back up to the load balancer, so it doesn't disrupt your service.
So that is not going to break anything. And if you choose to go with Let's Encrypt, we call the Let's Encrypt API, we get the SSL certificate, we put the reminders in place, and we renew it automatically every three months. We're also in the process of supporting version two, which has wildcard certificates as well.

So that gives you the basics of running Rails — how to set it up and all of that. Here you can see that I have two processes and they're both running green; all good. Now, a little bit more on databases. Here we have a single database; it's MySQL. What I can do here is create replication: I can simply say I want another database server, and that essentially scales up the database tier for me. We natively support replication for MySQL, Postgres, Redis, memcached, Elasticsearch, and MongoDB, and we do the replication for each one of them the native way it actually supports. So with MySQL, we do master-slave.
With Postgres, we know about write-ahead log files and how to sync them, and all of that. And not only do we do the replication, but the system also supports replication verification across different data centers. Oftentimes when you set up replication, you have two different availability zones and you want high availability, so you put the master on one side and the read-only slave in another data center — and the connectivity is sometimes not great. If you have a lot of write throughput — a lot of inserts — the slave might need time to catch up, and they go out of sync. So when you set up replication with the Cloud 66 Rails stack, the replication setup is verified and constantly checked for whether it's in sync, and we keep it up to date; or, if we can't — for example, you just lose connectivity between the two servers — we let you know. These are all the things we had to build just to get back to where we were on a PaaS provider. On the other side, if I go to my source code — now, in this case, I had a backup verifier as well.
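A backup-verifier file of this kind might contain nothing more than a sanity query — the query below is hypothetical, not taken from the demo app, but it shows the shape of the idea: after the backup is restored into a scratch database, the query should come back truthy only if the restored data looks sane.

```sql
-- Hypothetical backup-verifier query for the production MySQL backup:
-- returns 1 (true) only if the restored dump actually contains user rows.
SELECT COUNT(*) > 0 FROM users;
```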
So this, as you can see, is a MySQL query, and what happens is: this is a backup verifier. If we find a file under the backup-verifier path, for MySQL, for production — that specific environment — it means that every time I run a backup, I want you to ship that backup to another server, decompress it, decrypt it, load it into that specific database, then run this query, and this query should return true or false. Which means we verify the backups every time, and you get a green tick for the verification of the backup as well. So it's not only about whether you have a backup, but whether the backup is going to work when you need it — because that's the worst thing that could happen if it didn't.

Now, on to deployments. Here's my deployment — this is the one that's live — and every deployment I do is going to show up there, so I can see the commits that have gone into it, which gives me a visual history of what's happened. On to configuration files — now, that's an interesting part. This is not a PaaS, obviously, so lots of things could be hidden, but we didn't want them to be hidden. For example, we install NGINX. We want you to be able to shell into the server we've set up for you, look at things, look around, and feel that you've done it yourself. It's just done for you; it's not an alien black box. As a result — for example, think about deploying NGINX: what would you do if you were to deploy NGINX yourself?
You would write the configuration file for NGINX, deploy NGINX, and apply that configuration — and this is what we do here. If you're familiar with NGINX configuration, you'll recognize this as an NGINX configuration with one small difference: these double curly brackets. Now, if you've used Shopify, you know about Shopify Liquid, for example, which is a safe way of rendering — a markup language, if you will, for files — and this supports that. So you can say the number of worker processes comes from this variable, or you can have an if-statement around this, for example, if you want. What happens is that we take those base templates we have here, render them with your values, generate a real, executable NGINX configuration, and push it to all the servers we've installed NGINX on. What does that give you? One: it gives you consistency across all the components that are installed, so you don't have these snowballs and snowflakes where one NGINX is slightly different. Sometimes it happens — it happens to us — that something goes wrong over the weekend, some ops guy jumps on a server, shells into it and fixes it, and nobody knows what happened; one of the servers goes out of sync. We detect those differences, surface them, and show you that there's an issue — one of them is different. So we not only nudge you into doing it the right way, but if you don't, you still get that notification. The second thing it gives you is a kind of constant update.
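The templated configuration described above might look something like the snippet below — the variable and flag names are invented for illustration, not Cloud 66's actual template:

```nginx
# Liquid-templated NGINX sketch: values in {{ }} are filled in at render time.
worker_processes {{ worker_count }};

http {
  {% if passenger_enabled %}
  passenger_ruby {{ ruby_path }};
  {% endif %}
}
```

Rendering the same template with per-server values is what keeps every NGINX in the stack consistent.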
We did this a long time ago — 2013, I think, if I'm not mistaken — and right after that, the Snowden thing happened, and it came out that not only does the NSA read whatever you say, they record the encrypted traffic, so if they want, they can run something on it afterwards. Which meant that all of a sudden, all of our customers were asking for perfect forward secrecy, which prevents that from happening. What we did with this was: this is backed by git — as you can see, there's a git repository backing this — so you don't need to use the web UI. If you're more comfortable using git, like I am, you go to the command line, clone that git repository, and do your thing. But Cloud 66 acts as yet another developer on your team. So let's say tomorrow Snowden says something else, and we have to add something — say you need to add perfect forward secrecy number two, or there's Heartbleed, or an NGINX patch. We pull that repository, we make the patch, and we try to merge it, like you would do. If it goes in, great — you get to choose whether to deploy it. If there's a conflict, you see the diff, you resolve the conflict, and push it in. That means you get to benefit from all the things that happen across our network. As I said, the Cloud 66 Rails product powers about two and a half thousand unique workloads — something like five to six thousand different applications of different sizes — so oftentimes we find ourselves running ahead of our customers, finding issues like a canary in a coal mine, and bringing fixes back to our customers before they face them. They see that there's a patch for some configuration file.
That's there — but if you want to manage it yourself, feel free, you can do that. And if what you do doesn't conflict with any patches coming in, great, you're going to benefit from both. As you can see, this is available for all the components we install, so it's not just NGINX: MySQL, for example, has its own native configuration, as does HAProxy, so I can customize everything to my heart's content if I want to.

Now, on to the network settings. Here are the firewall settings that we have. From Cloud 66, which has a publicly available set of IP addresses, we open port 22. If you want to have a bastion server in the middle to lock us out when there's no deployment happening, or a bastion server that we do have access to but where your ops people have to give your devs permission to deploy during a certain window of time, you can do that. So those are the statically created rules; then there are the dynamically created ones that come from the set of configuration files — sorry, firewall rules — that you've applied, and here I can create the ones I want: for example, from anywhere to my Rails servers,
I want TCP on a specific port. Now, the best thing here, I find, is this. We had a case of a customer who had a CRM application managed by one team. The CRM application had a database, and the database had a list of customers. The marketing guys had another stack that would generate some spam email and send it to some of those customers. The issue was that two different teams were taking care of those two stacks. Sometimes there was a need for capacity — a campaign going on, for example — and they would scale up the servers on the marketing side. That would fire up new servers, which would get new IP addresses, and those IP addresses wouldn't have access to the MySQL server on the other stack — the one with the list of customers. So every time they scaled, they'd have to go to the other team's ops and say, "Can you add this list of IP addresses?" And worse than that: when they scaled the stack down on the marketing side, there'd be a bunch of IP addresses still open to the MySQL server holding the customer list — which, now, with GDPR, is actually even worse. And then those IP addresses get reused by your cloud provider for a completely different client of theirs, and that's just a security nightmare. So what we had to do was let you create dynamic firewall rules. You can say: from any Rails server of that stack — which I don't have access to and know nothing about — to all my MySQL servers on this side. These are dynamically calculated, so as you scale up your MySQL servers, they inherit the right firewall rules, and as the other team scales up their worker servers on the other stack, whatever rule is needed is dynamically generated.
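The core of that idea — rules expressed over roles, expanded into concrete IP rules whenever either stack scales — can be sketched like this. The stack names, rule format, and IPs below are all made up for illustration:

```ruby
# Sketch: expand a symbolic, role-based firewall rule into concrete IP rules.
# Re-running the expansion after a scale event regenerates the IP list, so
# new servers inherit access and removed servers fall out of it.
STACKS = {
  "marketing" => { "worker" => ["10.0.1.5", "10.0.1.6"] },
  "crm"       => { "mysql"  => ["10.0.2.9"] }
}

def expand_rule(rule)
  sources = STACKS.dig(rule[:from_stack], rule[:from_role]) || []
  targets = STACKS.dig(rule[:to_stack], rule[:to_role]) || []
  sources.product(targets).map do |src, dst|
    { allow: "tcp/#{rule[:port]}", from: src, to: dst }
  end
end

rule = { from_stack: "marketing", from_role: "worker",
         to_stack: "crm", to_role: "mysql", port: 3306 }
p expand_rule(rule)  # one concrete rule per source/target pair
```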
So this is what we call a kind of application-centric firewall. As well as that, I can be naughty and access the servers directly if I want to, with a lease: it opens up access for 10 to 20 minutes based on my IP address and then shuts it down — if I really want to get into the servers themselves. There are also traffic redirection rules and traffic access rules — basic things that you have there.

One of the final things is ActiveProtect — luckily, I don't have anything flagged there. What we do is monitor your servers for you at different levels: at the OS level, and at the package and patch level, for any known vulnerabilities. As well as that, for every deployment you do and every component you add, there are certain files you want to monitor: if you add Docker, for example, there are some files you want to watch; if there's MySQL, there are files you want to monitor for change. Whether a file changes inadvertently or maliciously, you want to be sure you know about it, and those changes show up here. Also, if there's a brute-force attempt against your SSH, or something hitting your web servers repeatedly, they'll be blocked and shown here. And that's the changed-files part I was talking about. Environment variables are pretty much self-explanatory, with one small caveat: you can refer to other stacks' environment variables as well. So, in the example of the two stacks that want to talk to each other: if you need the access credentials for another stack's MySQL, but you don't know the password — and it might change in the future because of password rotation — you can use an environment variable coming from another stack that you don't have access to, and it gets rendered and sent over to your servers without your developers ever having access to the real password itself.
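The render step for those cross-stack references can be sketched like this — note that the `{{stack.VAR}}` syntax here is invented for illustration and is not Cloud 66's actual reference format:

```ruby
# Sketch: resolve env vars that reference another stack's values at render
# time, so the real secret never appears in the referencing stack's config.
STACK_ENVS = {
  "crm" => { "MYSQL_PASSWORD" => "rotated-secret-42" }
}

def render_env(vars)
  vars.transform_values do |value|
    # Replace each {{stack.VAR}} placeholder with the other stack's value.
    value.gsub(/\{\{(\w+)\.(\w+)\}\}/) { STACK_ENVS.dig($1, $2).to_s }
  end
end

rendered = render_env("DB_PASSWORD" => "{{crm.MYSQL_PASSWORD}}")
p rendered["DB_PASSWORD"]  # the developer only ever sees the placeholder
```

When the source stack rotates the password, re-rendering propagates the new value automatically.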
And every time it changes on one stack, it gets changed on the other side. Then there's a basic set of notifications you can get — Slack, email, webhooks — about pretty much anything that happens in the system. And lastly, live logs. We didn't want to build a log shipping and storage facility; there are really good solutions out there, from Loggly to others, that do this. But we found that oftentimes we just wanted to see what's going on right there, as a live logging mechanism. So every time you deploy something new, the log files for your application are added here, and you can just look at them, look at the context around them if you want, or do a basic search around them — and this is real time. We don't store it; it just gets shipped while you have it enabled, and it's shut down after 30 minutes. It's a very good facility we found for debugging.

Now, this stack, I guess, has a connectivity issue, but on the ones I created earlier, as you can see, I have this tiny thing called StackScore, which gives me an A-to-F, green-to-red scale, and if I go in there, it gives me some hints about what's wrong. On code, we do some code quality checks — if you want to connect it to Code Climate, you can do that as well. We don't have any backups on this production stack, so that's bad; that gives me an F score. Connectivity:
On connectivity, I'm sharing a database between production and the front end, so that's also bad. You can fix all of these, or ignore them if you want, and you get emails when the stack score drops or improves as things change. So that's what I wanted to show you.

A very quick run through everything else we support: obviously you have the basics of account management, as well as teams, which are very popular with the agencies we work with. If you look at organizations, you can create multiple organizations and switch between them; if I switch my organization to another one, I see a completely different set of stacks. And you have full access control over every single one of the actions you can take, which you can grant to different team members. We see this especially with agencies: people are assigned to projects and taken off projects, or you hand over the keys to a project, when it's done, to your client, and then the client wants to get one of your engineers back onto their stack because of some maintenance or improvements, so they can invite your engineers in and later take the rights away again. It's got all those kinds of features built in.

As well as that, you have audits, if you want, of every action that has happened: where did it happen, and what was the action, for example a change to an environment variable, so you keep the history of all of this. It's basically built very specifically around Rails, but with some good ops practices around it that allow you to run your operations the way you're used to with a PaaS, just much more smoothly.

Now, I've got about two or so minutes, so if there are any questions, please let me know.

Yes. Okay, so the question is: how do we manage memory spikes, or manage the memory usage of a process that's running on the server?
There are three things here. One is based on the heuristics that we find. We fire up on the order of a thousand-something servers a day; not all of them keep running, people fire them up and tear them down, and from those we collect a lot of metrics about the behavior of each cloud provider with a specific component. So if you run Redis on DigitalOcean, in the Netherlands data center AMS1, with this kind of server, this is the kind of footprint that we see. And through those configuration files that you saw, the custom config, we try to change the variables that end up in the rendering based on those heuristics. So if you deploy MySQL (and I'm talking about these kinds of prepackaged processes, like MySQL) to AWS, and then you take that stack and move it to DigitalOcean, it might be that those variables change. You can influence it, you can fix it to whatever you want, but we apply the heuristics that we've learned to optimize the performance of that component. That's the first one.

The second one is about the spikes and everything else. We collect the vital signs of the server: the memory, the processes, and the disk and CPU usage. Those are collected and you can see them for the past, I think, six months. And on the disk you can also set a threshold and a trigger that sends you an email if something goes wrong, or if you're about to run out of disk, because operating systems don't really handle that gracefully. It's also based on processes, so you can get per-process metrics out. That's the metrics side.

We also allow you to run sidecars and side processes to shut something down. So for example, say you add a huge asset to your Rails application and the npm asset pipeline compilation just blows up, you know, gives you an OOM
and takes the whole thing down. We can take care of those things as well. Some of them are Rails-specific; we see a lot of that around npm, sorry, Node asset pipeline compilation, and we take care of it because we know what it means for your Rails stack. Some of them are generic: you might have, say, a memory leak in your Sidekiq workers, and we take care of that by either killing it, or you can revert back to the native OS behavior around memory management. Does that answer your question? Cool. Any other questions? Sure, yes.

So the question, absolutely, the question is: what's our process for taking a workload from Heroku, for example, onto Cloud 66? There are telltale signs of these things. Just as we would detect, for example, MySQL, we can also detect whether something has been running on Heroku. And if we detect those signs, we're actually more proactive: we show you something that says, it seems that you've been running this on Heroku (wow, that was efficient), and here are the steps that you can take. If we cannot find those telltale signs, then there's still documentation around how to move each one of those things across: code, database, and traffic.

All right, well, thank you very much for your time. I hope you enjoyed it. You can check us out at cloud66.com; we also have a booth at the exhibition, so you can come and ask more questions, or see the product and a demo of these things as well. Thank you.
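As a rough illustration of the sidecar memory guard described in the Q&A: the sketch below watches a worker's resident memory and recycles it once it crosses a threshold, instead of letting the OS OOM-killer take the whole box down. The 512 MB limit, the pid-file path, and the use of `ps` are my own assumptions for the sketch, not the product's actual implementation.

```ruby
# Illustrative sketch of a sidecar memory guard (assumed limit, not
# the real product behavior).
LIMIT_MB = 512

# Decide whether a process should be recycled based on its RSS in MB.
def over_limit?(rss_mb, limit_mb = LIMIT_MB)
  rss_mb > limit_mb
end

# Read a process's resident set size in MB via ps (Linux/macOS; ps
# reports RSS in kilobytes).
def rss_mb(pid)
  `ps -o rss= -p #{pid}`.to_i / 1024.0
end

# A guard loop might look like this (not run here; the pid-file path
# is hypothetical):
# loop do
#   pid = File.read("/var/run/sidekiq.pid").to_i
#   Process.kill("TERM", pid) if over_limit?(rss_mb(pid))
#   sleep 30
# end

puts over_limit?(700)  # a worker leaking up to 700 MB would be recycled
```

Sending TERM rather than KILL gives a well-behaved worker like Sidekiq a chance to finish in-flight jobs before its supervisor restarts it.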