All right. Hello. Cool. I think we're going to get started. Hello, everyone. Thank you very much for joining us today. This talk is From Zero to Cloud in 12 Easy Factors. Before we get started, here's the obligatory fire exit announcement. It will be by far the most important slide you read today. Please take note. My name is Ed King. I'm an engineer working for Pivotal out of the London office, currently working on the Cloud Foundry Garden team. This is my colleague, Jatin, who works with me in London as well, as one of the engineering leads down there. Someone once told me that every good presentation tells a good story. This is really a story of transformation: the transformation of a legacy, old-school .NET application into one that is much more modern and suitable for deployment to modern cloud platforms such as Cloud Foundry. Specifically, what we wanted to talk about today was the 12-factor application manifesto: what it is, where it came from, why you should care about it. We thought that the best way to do this would be to go through each of the factors step by step, take a look at exactly what each factor is, and then demonstrate that factor in practice against an actual application. Jatin and I have developed what we're calling a zero-factor application. This is one that is completely unsuitable for deployment to cloud platforms such as Cloud Foundry. With that in mind, let me hand over to Jatin to describe our example application. All right. The example application that we'll be transforming today is called Slowgram. It's just like Instagram, but it's much slower. The application itself is a very simple ASP.NET application. It uses the .NET Core framework. What the user can do is upload an image; a specific image filter will be applied to that image, and then the image will be displayed back to the user. The user can see two queues: a queue of unprocessed images and a queue of processed images.
The application itself is very simple. It uses a thread for background processing. There's an interesting way it is deployed: it's currently deployed on a shared IIS instance. It uses the file system to do everything. All the images that have been uploaded just go and stay on the file system that the application is running on. It uses a local data store, SQLite, that it connects to for storing its relational data. The way that you update the app is you go to your .NET project, click publish, and get a DLL. You ensure that the version of .NET that you have installed on your local machine is the same as the server's. You email the binary to the operator of the server, and then they will put this DLL in a specific folder in IIS, wait for all in-flight processing to come to a pause, and then restart the server. We have been there. One of the problems with this application is that it is very difficult to update, and the only amount of traffic it can handle is the capacity of the server it is deployed on. Once it starts to get a lot of traffic, it will start to throw 502s. What's happening now is this app is really going viral, and we really need to find an effective way to scale it, and we know that CF is good for that sort of thing, so can we just push it to CF? Yeah, so I think that's really going to be the crux for the rest of this talk: we're going to be trying to take this old-school legacy application and successfully push and scale it on Cloud Foundry. But before we get into that, I think it's probably a good idea just to get a high-level overview of the 12 factors, like what we're actually talking about here. So I think maybe the best way to describe this is that it really is a set of best practices and learnings that developers can apply to their applications as they're developing them, in order to output something that is suitable for deployment and development on cloud platforms such as Cloud Foundry.
So the idea originated from engineers who were working at Heroku. Heroku is another modern cloud platform. It's obviously not quite as good as Cloud Foundry, but that's where this originated. At the time, the engineers there were witnessing and overseeing thousands and thousands of deployments on a daily basis. And over time, they started to notice some of the patterns and best practices that were emerging from those. They had a good insight into the kinds of things that were working, what wasn't working, what was making life difficult. And they essentially gathered all of that knowledge together into what we now call the 12 factors. And I think, at least for me, the key point about the 12-factor application is that it's focused on statelessness. State is bad. State is very difficult to manage, and it generally makes your life very difficult. And so designing your applications in a way which minimises how much state they depend on, or the way in which they depend on that state, is going to leave us in a better situation. So just briefly, that's what the 12 factors are. It might not mean too much to you right now. We're hoping to cover most of these. We probably won't get through all of them, as we haven't got a lot of time here, but we'll certainly cover the main ones. So let's get started. Let's get started with an easy one. The first factor is code base. The code base factor states that every application must have a single corresponding code base, and that that code base must be stored in version control. I don't want to spend too much time on this because I hope everyone is already doing this. If you're not, you really should be. Why do we want to do this? Obviously, to allow for proper versioning of our applications and to support collaboration between teams and developers. How do you do this? Well, just use git, basically. Cool. So, okay, let's actually run through this.
You know, whenever you read any tutorials or look at any documentation online, it all just says that all you need to do is run cf push, right? So we thought that we would actually try this. But just before we do, there's just enough time to mention this other factor here, which is build, release, run. This factor states that there must be a strict separation between the build, release, and run stages. The build stage is where we take the application code and convert it into an executable. The release stage is where we apply the deployment's configuration to that executable. And then the run stage is when we actually run it. The reason I'm mentioning this here is because this is basically what's happening when you run a cf push. Cloud Foundry just does this for you. So let's take a look. Cool. So we're not actually doing this live because it takes quite a lot of time, and we would just spend the whole time watching the screen. But there we go. All right. So as you see, as Ed mentioned, Cloud Foundry is running its buildpack process, which will create a binary for you and attempt to run it. So you don't really have to worry about framework version mismatches between your servers. You just have to give your code to the platform. But we see that the push process has failed. It has failed in the compilation step. It's saying that it cannot find the image processing library. The way the app was deployed before, this image processing library was essentially shared by a lot of other applications that were also deployed on our shared IIS instance. The operator used to go in and install this for us. We cannot really rely on such processes when we are pushing to the cloud, which brings us to the factor of dependencies. This is a key requirement for 12-factor applications.
All of your dependencies should be explicitly declared so that the build process can go in, pick them up, and install them for you. .NET applications are pretty good at this from the get-go because of the DLL experiences of the past. But there are still some administrators who will allow shared dependencies to be installed on the server, just to save on artifact size or whatever. We cannot rely on such optimisations anymore when we are pushing to a cloud platform. The way to fix this is just to use a dependency declaration tool; there are a lot available for .NET. We are going to use NuGet, which is the default now for .NET Core. So this is our current dependency declaration. We just explicitly add our image processing library there. And let's try to cf push again. So, yes. As a side note, while this is happening: if you have some specific OS-level dependencies, like binaries that have to be available on the server for processing, you can create a special folder called ld_library_path, and then the buildpack will take care of making those libraries available when the application is running. Cool. So I think now we have pushed again and we've got a little bit further this time. But now it appears as though the push is hanging, and I think eventually we're going to see that it fails. So what's happening here? As we can see, it asks us to run the cf logs slowgram command in order to get some more output. Incidentally, logging is actually another one of the factors. I'm not going to talk too much about that now; maybe if we've got some time at the end, I might come back to it. But for the purposes of this, let's just run that command and see what it's telling us. All right. So we can see the output there, and there's a fairly obvious error message: failed to make a TCP connection to port 8080, connection refused. In order to understand this, I think we're going to need to talk about two things.
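The dependency declaration being described would look something like this in the app's `.csproj` file. The package names here are illustrative, not the talk's actual ones; `SixLabors.ImageSharp` is just a plausible stand-in for whatever image processing library Slowgram uses:

```xml
<!-- Slowgram.csproj: every dependency declared explicitly so the
     buildpack can restore it with NuGet at staging time -->
<ItemGroup>
  <PackageReference Include="Microsoft.AspNetCore.App" />
  <!-- Stand-in for the image processing library that was previously
       installed globally on the shared IIS server by the operator -->
  <PackageReference Include="SixLabors.ImageSharp" Version="1.0.0" />
</ItemGroup>
```

With the dependency declared here, `cf push` no longer relies on anything being pre-installed on the server.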
Firstly, the Cloud Foundry health check process, and secondly, our next factor, which relates to config. The health check process is something that periodically runs against applications that you deploy. The way it works is it attempts to connect to a port, and if it can't connect to that port, it assumes that your application has crashed, at which point Cloud Foundry will bring up another instance of that application. For this to work, Cloud Foundry needs to know the port. So the reason this is failing is because, as Jatin mentioned earlier, we had some fairly strict networking requirements in our current deployment. This is maybe a fairly contrived example, but bear with us, because I think it demonstrates the point here. We are hard-coding some configuration into our application, right? Right here, it's saying that you must be using port 9000. And that's why the health check is failing: because Cloud Foundry expects to provide a port for you that your application should be listening on. And so that brings us on to the next factor, which is config. The config factor states that you must ensure strict separation between your code and your configuration, and on top of that, you must store the configuration in the environment, usually in environment variables. The reason we want to do this is because it allows you to deploy your application in many different places and change its behaviour without needing to make code changes. So in this instance, we need to remove the hard-coded port, which is part of the configuration, so that we can pass the health check. All we need to do is remove that line that has the UseUrls call, and then, as we're building up the configuration, we can just add this line here, which is the add-environment-variables line. Once we've done that, we can try and push again. So hopefully this time we'll get a little bit further.
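As a sketch, assuming a standard ASP.NET Core 2.x-style `Program.cs` (the talk doesn't show the exact file, so names like `Startup` are assumptions), the change being described looks roughly like this:

```csharp
public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        // .UseUrls("http://0.0.0.0:9000")  // removed: hard-coded port
        .ConfigureAppConfiguration((context, config) =>
        {
            // Pull configuration (including the platform-assigned port)
            // from environment variables instead of from code
            config.AddEnvironmentVariables();
        })
        .UseStartup<Startup>()
        .Build();
```

With the hard-coded `UseUrls` gone, the server binds to whatever port Cloud Foundry assigns, so the health check's TCP connection succeeds.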
As we can see, we're going through the staging process again. It's looking good so far. And there we go. We now actually have a running instance. So this is pretty cool, right? We've covered a few of the factors, and we've already managed to get something working on Cloud Foundry. So now that that's succeeded, let's just give it a test run and make sure that this is actually working as we intended. All we're going to do here is upload a few example images and make sure that the processing is working; we're using pictures of giraffes here. We can see that they're eventually going from the raw queue and getting processed, and everything seems to be looking pretty good, which is great. So just to recap: we've covered config, dependencies, code base, and build, release, run, and that's been enough of the factors to take our application and get it working on Cloud Foundry. So what's next? Scaling. This is the whole reason we wanted to do this in the first place, right? Our app's currently going viral, it's crashing under the pressure, and we have been told that Cloud Foundry is amazing at scaling our applications. And again, whenever you read anything about this, any tutorials or whatever, it says that all you need to do is just run the cf scale command. So let's do that. Let's see how that goes. The actual command here is cf scale -i 2, which means give me two instances rather than one, so in theory, twice as fast. And that's been successful. Pretty easy, pretty quick as well. We can see that we now have two instances of the app. So let's go and check. Let's make sure that this is still working as we expect. And there's a problem. We're now missing some images. This is clearly not working as it should be. So clearly we've done something wrong. So over to you, Jatin, to help explain. All right, so what's going on over here?
As we described before, the application that we had developed was extremely stateful. It used the local file system, and a database which also lived on the local file system. So essentially, the way the application was running before, there was a load balancer and there was a container which had the file system and the database both in it. Once we have scaled up, we have two stateful applications and a load balancer which randomly directs requests to either of these applications. For rendering one web page, there are multiple web requests that need to go through, and those are going randomly to either of these servers, and one of them does not know about the state that we injected into it earlier. That's why we see this page which is incomplete, which brings us to the next factor, and this is probably one of the most important things about 12-factor applications: they have to be stateless processes. The application should make no assumptions about resources or disks that will be present on the operating system where the process is running. This alone has a lot of implications: no sticky sessions, no local databases, no local storage. If you compose an application this way, it's very easy to scale out. But what do you do about the data? The app does need data to serve user requests. That brings us to the next factor, which is backing services. The idea of backing services is very much related to the idea of config. All of the data should be stored in backing services, which are injected as config. This includes stuff like databases, blob stores, and other external resources that you would use. All of these resources should be attachable to the application at runtime. The whole point of backing services is that you should be able to switch backing services without making any code changes.
This creates a mechanism for us to externalise all of our state, to make our application stateless. So what are the scaling concerns in our application right now? The first one is fairly obvious: a local database. We are using SQLite, so it just connects to a file on the local disk. We should not do that. We are using the file system all over the place. When you upload an image, it's just stored on the local machine, and the app just does a path join to reach those images. We probably should not do that either, and we need to move that out. The third one is a bit non-obvious. ASP.NET implicitly uses file storage as well, for storing encryption keys. By default, all forms are protected by cross-site request forgery protection, which means that a secret token will be added into each of those forms, and it will be validated when the request comes back to execute a POST request. So if we don't move these data protection keys to a place where both containers can see them, a request originating from one container won't be processable by the other container. So yeah, we'll have to switch to a different mechanism for all of these use cases. Cool. So we've identified that we basically need to extract away all of our state. And fortunately, this is a solved problem in Cloud Foundry. This is where CF services come into play. In order to update our application to make use of that, we can, first of all, run the cf marketplace command. This is going to list all of the services available, and for us, we're going to use MySQL. Next up, we need to create an instance of that service. All this is going to do is basically go and provision us a new database that we can use. And then finally, we need to bind that service to our application.
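The three steps just described look roughly like this with the CF CLI. The service name, plan, and instance name below are illustrative; `cf marketplace` shows what your particular platform actually offers:

```shell
# List the services available in this Cloud Foundry marketplace
cf marketplace

# Provision a new MySQL database instance
# (service name and plan depend on your marketplace)
cf create-service p-mysql 100mb slowgram-db

# Bind it to the app; connection details are then injected
# into the app's VCAP_SERVICES environment variable
cf bind-service slowgram slowgram-db

# Restage so the running app picks up the new binding
cf restage slowgram
```

These commands require a targeted Cloud Foundry deployment; nothing here runs locally.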
And all that's really happening there is that all of the connection information required to connect to that database is being injected into our application via a special environment variable called VCAP_SERVICES. So this is great. We've now attached that backing service to the application, but obviously we need to update the app to make use of it. First of all, we need to make use of a couple of new libraries. We're going to specify them as explicit dependencies. For the purposes of this, we're going to use the Steeltoe connector along with the Pomelo MySQL adapter. Just as an aside: the Steeltoe project is really interesting, especially if you're developing microservice-style applications. I certainly recommend you check it out. But right now, we're just using the connector, which basically allows us to parse that connection information from the VCAP_SERVICES environment variable. And then we're using the Pomelo MySQL adapter so that we can use MySQL rather than SQLite. In terms of code, all we need to do here is, as we're building up the configuration, add in this AddCloudFoundry call, which is going to go and grab that connection information, and then we just need to pass it through to the UseMySql function. So that solves one of the problems. That solves the database problem, but there's still the problem that we are using the local file system both to store the images and, as Jatin mentioned, those data protection keys. In this case, all we've done is created a little adapter class that basically provides us a connection into the Azure Blob Store. It's perfectly possible that this could have been built as a Cloud Foundry service that we could have just attached to the application as well. Unfortunately, we couldn't find one, so this is why we've done it ourselves. And then all we need to do to make use of that is just call this azure.blob adapter.
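The Steeltoe wiring described above might look roughly like this in `Startup.cs`. This is a sketch, not the speakers' actual code: the context class name is a placeholder, and exact method signatures vary between Steeltoe versions:

```csharp
public class Startup
{
    public Startup(IHostingEnvironment env)
    {
        Configuration = new ConfigurationBuilder()
            .AddEnvironmentVariables()
            // Steeltoe: parse the bound service credentials
            // out of the VCAP_SERVICES environment variable
            .AddCloudFoundry()
            .Build();
    }

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        // Swap SQLite for the bound MySQL service; the Steeltoe
        // connector resolves the connection string from the
        // Cloud Foundry service binding in Configuration
        services.AddDbContext<SlowgramContext>(options =>
            options.UseMySql(Configuration));
    }
}
```

The key point is that no connection string appears in code or config files; rebinding the app to a different MySQL instance requires no code change.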
And then, rather than storing that information on the local file system, it's now in a remote blob store. So after we've done all that, has it worked? And it appears that yes, it has. We're refreshing the page and the pictures are staying there. But there's actually one more problem, or one more thing we need to think about, and that's what happens when we scale down. As we've just seen here, we've uploaded another image, and before it had a chance to process, we scaled our application instances back down to one. And now what's happening is the image is stuck; it's never being processed. So Jatin, what is happening there? All right. So the way the application is written today, to do the background processing it just spawns a thread in the ASP.NET context and processes inside the web server. When we scale down, the web server essentially goes away. So if the web server that goes away happens to be processing the request, that request will never be processed, and this image will forever be in the raw stage. Which brings us to the next factor, which is disposability. 12-factor applications should be designed in a way that they can be stopped and started at any point in time. If we do that, it gives us a lot of benefits around how we can update them and how frequently we can update them. We are not constrained by any application-specific state, and we are not constrained by what the application is doing when we are updating it. One factor which is closely related to this is concurrency. The concurrency aspect of the 12-factor framework talks about treating all of the background processes as first-class entities. Currently, our background processes essentially live with the web server, and this really prevents us from scaling horizontally very efficiently.
We want to get to a world in which we could scale the workers which serve web requests independently of the workers which process images. For example, in our application, it takes much longer to process an image than it takes to serve a web request, so we would like to make those two things independent. The way that we can do that: the current offending line in our code is a Task.Run, which will start a thread. What we would like to do is switch to a background processing library. There are a lot available for .NET, for example Quartz and FluentScheduler. We are using a library called Hangfire. Hangfire actually uses a backing store. Once you call BackgroundJob.Enqueue, it will add something to the Hangfire database, and the Hangfire library gives us at-least-once run guarantees, so it will run that task at least once at some point in time. This also gives us the disposability aspect, which means that this process can be killed at any point in time, and when the server comes back up, Hangfire will ensure that the task is run. The other thing that we want to do is move the processing into another process. What we have done over here is created a new console application which is only for doing the processing of images. The library again helps us a lot with this: all we need to do is connect to the same backing data store and ask it to process the jobs that we put in there through our web server. Cool. So just to show that explicitly: if we run cf apps, we now have two separate apps, the slowgram app and the slowgram worker, and this means that we can scale both of those independently. And this is really nice. Cool. So we can see that we're scaling the web server up to two and the background worker up to four. Cool. So let's just take a quick step back and look at everything that we've covered so far. There are a few factors there that we haven't mentioned.
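The Hangfire change Jatin described can be sketched like this. `ImageProcessor`, `ApplyFilter`, and `connectionString` are our placeholders for the app's actual code, and the MySQL storage provider here comes from a community Hangfire storage package, which we're assuming rather than quoting from the talk:

```csharp
// Web app side: instead of Task.Run(() => processor.ApplyFilter(imageId)),
// enqueue the job into Hangfire's shared backing store.
// Hangfire persists it and guarantees at-least-once execution.
BackgroundJob.Enqueue<ImageProcessor>(p => p.ApplyFilter(imageId));

// Worker console app: point Hangfire at the same backing store the
// web app enqueues into, then process jobs until shut down.
public static void Main()
{
    GlobalConfiguration.Configuration
        .UseStorage(new MySqlStorage(connectionString));

    using (var server = new BackgroundJobServer())
    {
        Console.ReadLine(); // keep the worker process alive
    }
}
```

Because the job lives in the backing store rather than in a web server thread, either the web instance or the worker instance can be killed mid-processing and the job still runs when a worker comes back.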
These are perhaps not as important for getting your application up and running on Cloud Foundry to begin with. And I think that's all we have time for, or all that we have covered. So hopefully that's been of some help to you. Hopefully the examples weren't too contrived. Thank you for listening. I think we've probably got a bit of time for questions if anyone has any. Yes, I will post the slides after this. It's on GitHub: github.com/DiningRossHopper/... what is this thing called? Slowgram. Cool. Thank you very much.