Okay, so let's start another talk. The topic of this talk is 12 Factors to Cloud Success. Dive with us into the 12-factor methodology and check how each one of the factors can be applied, and even empowered, by Linux container technologies such as Docker and Kubernetes. The talk is presented by Rafael Benevides, who is the director of developer experience at Red Hat.

So welcome, everybody, to this session. Let me confirm, yes, the microphone is on. As my friend Joseph just announced, today we will talk about the 12 factors to cloud success. By the way, the slides that I will present to you are available as open source. Everything that we do at Red Hat, we like open source so much that even these slides are freely available. So if you enjoyed the session and you want to reproduce it at your local JUGs, at your company, or for your friends, you can just copy these slides; they are fully available at this URL here. You can also follow me on Twitter, this is my Twitter handle, or if you prefer to send me an email, my address is benevides at redhat.com.

I've been working for Red Hat for almost, well, in fact, more than seven years. I started as a consultant, I was a software engineer producing Red Hat developer materials, and now I'm working in the Red Hat Developers program. How many of you are Red Hat employees? In fact, let me change the question: who is not a Red Hat employee? Wow, thank you, everybody. I'd like to invite all of you to visit and join the Red Hat Developers program. You can join and be part of this program at developers.redhat.com. And what is the developers program? It's a program to empower developers, to help them build software successfully. On that website you can find free access to Red Hat products, free eBooks, webinars, blog posts, and everything else we think might help you become a better developer.

Let me start this session with a story that happened in my life. One day, well, in fact, I'm Brazilian, okay, that's why I have this accent. One day I was in Brazil, walking in the zoo with my niece. She's about four years old. We were walking in the zoo and she saw the animals: the elephants, the snakes, the birds. Then she came to me with a question: uncle, I have noticed that the elephant has its strength, the snake has its poison, the birds can fly. So what are the human superpowers? That question was tricky, and I started to wonder: what is our superpower? The first thing that came to my mind was to say, well, we have the power to love. And she said, no, that's not fair. Four years old, believe it or not, and she said, you know, my mama dog loves her little puppies. And I said, okay, that's fair. I kept wondering what I could say to a four-year-old child, but I realized that our biggest power is thinking, because thinking has taken us to places we never imagined or had been before. With thinking, we became problem solvers. Well, that is, until we decided to become programmers, right? That's why we are here; we are all developers. So let's think about what we do to solve a problem. The first thing that happens when we identify a problem is that we analyze the causes and the effects of the actions that we take.
And once we identify the causes and the effects, we identify the best solution for that problem. And once we find a solution and implement it, we have a kind of feedback loop, a continuous improvement of that solution, because we continuously try to find a better one. We try it, and if it can be better, we try again, and it keeps improving. If you think about your life, everything is based on a feedback loop. For example, when you are a small child learning how to walk, you have the feedback of gravity: you know when you are about to fall, and you can improve the way you walk. When you are learning how to speak, you have the feedback from your ears, because you know the pronunciation can be improved, and you have your parents; your mama says, that's not how you should say it, say it this way. So we have feedback in our lives to keep improving our solutions.

And what should we do when we find a solution? The best answer is to share it. When we share a solution with other people, more people join the feedback loop. More people saying, you could do it that way, that way would be better. You might have a friend asking, have you tried this other solution? Sharing makes us stronger together.

That is what happened with the 12-factor app, which is a methodology by Heroku. Who here has ever heard about this methodology called the 12-factor app? Cool. Let me explain it for those who have never heard of it. Heroku is a cloud company; they are a platform-as-a-service provider. They started to realize that all of their customers who were building cloud applications had more or less the same problems, the same issues with their applications. So once they identified the solutions, they aggregated all of them and released them as a methodology called the 12-factor app. You can find it at this URL, let me show you here: 12factor.net. They consolidated every solution and shared it with the world. And once they shared it with the world, more people could join and try it, and it became a methodology that has helped us build better software. It's based on 12 principles; that's why it's called 12-factor. You don't need to memorize each one of them, because what I will do in this session is go through each one to show how it helps us create cloud-native applications, and we will also see how containers are the perfect fit for the 12 factors.

So let's start with the first one, which is called codebase. The description of codebase says: one codebase tracked in revision control, many deploys. But what does that really mean? It means that every application will be hosted in its own repository. It will be released as a package, and then that package will be deployed to many environments: our development environment, our QA, our staging, until it finally reaches production. If you have another app, you will have another repo, another package, and many deployments. That seems easy, but what we should also know is what not to do. For example, some companies have their application hosted in one repo, which becomes one package that's deployed in development and staging, but for QA or for production they have another repository that creates another package. You should not do that, because you lose control of the synchronization between those two repositories. What else should you not do? Have one big repo with all of your applications in it, because then you have a mixed history for both of them, or for three, or for all of them.

So, as I said, we will use an application to demo these factors. I have an application in a GitHub repo; the URL is github.com/redhat-developer-demos. It's the place where we store all the demos from Red Hat Developers, and there is a repository called 12-factor-app. We have an application there that we will deploy to all of those environments.
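To make the codebase factor concrete, here is a minimal sketch of the idea in shell commands; the project names (dev, qa, production) and the image tag are illustrative, not part of the demo itself:

```bash
# One application, one codebase, one repository:
git clone https://github.com/redhat-developer-demos/12-factor-app.git

# One build produces one release, and that same release is what gets
# deployed everywhere. With OpenShift, you can promote the exact same
# image between the projects that represent your environments:
oc tag dev/12-factor-app:1.0 qa/12-factor-app:1.0
oc tag qa/12-factor-app:1.0 production/12-factor-app:1.0
```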
So let's continue to our second factor. The second factor is about dependencies. It says that you should explicitly declare and isolate dependencies. But what does that mean in practice? It means, for example, that if you are a Java developer, you should not store your JAR files inside the repository. If you are a Node.js developer, you will not store your npm packages inside the repository. Most modern languages, like Perl, which has CPAN, or Ruby, which has gems, allow you to declare dependencies easily: you have a file where you declare each dependency and its version, so you don't need to store the dependencies themselves in your Git repo. In the case of this application, which is a Java application, we have the pom.xml where our dependencies are declared, so we don't need to store the JARs, and it's easy for every developer to work on and compile the project with the same dependencies.

But now, before going to factor number three, we'll take a little detour to factor number five, because it makes much more sense for showing how to build this application. Factor number five says that you should strictly separate the build, release, and run stages. But what does that mean in practice? Well, let's think about it. When we are building our software, what will we release? In the case of a Java application, we release our WAR file or our JAR file. And who should perform that? The continuous integration platform. By the way, who uses Jenkins here? Okay, that's good, because Jenkins provides a way to integrate your software and then release your WAR or JAR file, to avoid that kind of "it works on my machine" problem. That reminds me of another story from my life. When I was a developer, almost 15 years ago, I worked at a company where there was a guy, and every time the phone rang, we would bet among ourselves how many minutes it would take for him to say "it works on my machine." It was always a matter of minutes.

So, to show you that this application can have its build step separated from the other stages, I will run the build here. As you can see, it's just a plain Maven build that generates a JAR file, okay? Now the next step is to release the software. And by release, I mean releasing what? The container image. Why? Because with containers we get faster deployments, rollbacks, and easy updates with different deployment patterns. By the way, I will talk about deployment patterns on Sunday, so you are also invited to join that session; I will cover rolling deployments, canary deployments, A/B testing, and blue-green deployments.
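Before I run the release, here is a rough sketch of the three separated stages as shell commands; the registry host and the image tag are hypothetical, but the flow is the one described above:

```bash
# Build stage: turn the codebase into an executable artifact.
mvn clean package

# Release stage: combine the build with its runtime into an immutable,
# uniquely identified container image, and publish it to a registry.
docker build -t registry.example.com/demos/12-factor-app:1.0-b42 .
docker push registry.example.com/demos/12-factor-app:1.0-b42

# Run stage: start containers from the released image; nothing is rebuilt.
oc new-app registry.example.com/demos/12-factor-app:1.0-b42
```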
So all of those patterns are easy to perform with containers, and this release step is performed by the continuous deployment mechanism, okay? So let's release our image. And I'm releasing it where? I'm using OpenShift here. OpenShift has an internal Docker registry that allows you to push images to it. And something that is really important is that each release has its own release ID; each release should always have a unique release ID. That's easy to guarantee with containers, because every image that we release has its own hash that lets us identify it precisely. For example, we can create a document and say: this is the approved, homologated version; it's the image with hash ID number so-and-so, okay? So I have just created an image and released the software here.

The next step is what? It's run. I don't have anything running here, so: run. That will take the image that I pushed into OpenShift and create the application for me. While it's deploying, let me explain another thing to you. These three stages, the build, the release of the image, and the execution, are three different things, but that doesn't mean they have to be performed manually. They should be driven by a continuous deployment pipeline. So pretend that I'm Jenkins. If I were Jenkins, I would perform the build in one stage, perform the release in another stage, and execute the application in a testing environment in yet another stage; then I would run my tests, my integration tests, all my tests, to make sure the release can go to production; and my final stage would be the release into production. That's why we need to keep all of those steps separate. So let's verify that the application is running. I can check the URL here, and finally, I have my Hello World application working. So let's see our next step to cloud success.

We were at five; now let's go back to factor three. Factor three is one of the factors that I like the most, because it says that you should store the configuration in the environment. But what does that really mean? It means that if you have to repackage your application to change its configuration, you are doing it wrong. That's true, for example, for configuration like the database connection: the connection to the database in production is different from the connection to the database in development, of course. We can also use this factor for feature toggles. Suppose you have a feature in your application and you don't know how stable it is at the moment. You can use a configuration value in the environment that enables that feature, and if the feature doesn't perform the way you'd like, you can disable it by changing the configuration. You don't need to create a new release; you don't need to package your application again.

So what I will do now, now that we have built this software, is change this message here by simply changing a configuration. For example, I will update the environment variable inside OpenShift, and that automatically triggers a rolling deployment. What does that mean? It means that OpenShift will start a new container instance with the environment variable carrying the new message that I want to show, and once it is running, it will replace the old one. I can just retry and see that my configuration has changed, without repackaging my application.
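As a minimal sketch of that demo, assuming a DeploymentConfig named 12-factor-app and an environment variable called MESSAGE that the application reads on startup (both names are illustrative):

```bash
# Change the configuration in the environment; no rebuild, no new package.
oc set env dc/12-factor-app MESSAGE="Hello from the new configuration"

# OpenShift detects the change and triggers a rolling deployment;
# you can confirm the value the new pods will see:
oc set env dc/12-factor-app --list
```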
And what about the backing services? The backing services factor says that you should treat your services as attached resources. For example, if you are using a local MySQL today, you should be able to swap it for a MySQL hosted in the cloud. If you are using a local file system volume, you should be able to switch easily to a volume in cloud storage. Your application should be able to handle that. To show you how you can do it with this demo application, I will attach a MySQL instance to the application. So I will deploy a database here, and you can see that the MySQL instance is running, right? Okay, I don't have any records in the database yet, so let's populate it. Okay. I have attached a MySQL instance to my application, and again, I didn't need to repackage anything; I was able to do it easily with containers. Later I will show how this application was able to find the new MySQL instance; we will talk about that with, if I'm not wrong, factor number seven.

Another thing that we should care about in cloud-native applications is being stateless. If we think about a situation where you can scale, where you can remove a container instance and start a new one, you cannot rely on state shared among those containers. Containers can die, and you can easily replace or update them; that's easy to do when you have a stateless architecture. This application here uses a REST API, and REST is popular partly because of that. Of course, you could still keep state behind REST, but please, avoid sharing state.

Now, the factor about port binding. That's the factor that tells you how to find another resource on the network. Think about Java applications, or even Node applications: most of them will run on port 8080. We can have, for example, MySQL running on its default port 3306. But suppose that I have multiple databases. To avoid port conflicts, I can have one database running on 3306, another one on 3305, and another one on port 5000. So you should not depend on a hardcoded port; you should always do a port binding and tell your application what the port and the address of that resource are, and you can do that using, for example, environment variables. To show how easily you can do that with OpenShift and containers: if you look here at the OpenShift console, you can see that the service exposes port 3306, connected to the container's internal port 3306. But suppose that I want to change that to port 5000, just like in this slide. I will simply change the port configuration. As you can see here, MySQL is still running on 3306 inside the container, but the service is now exposing port 5000. And how does my application know that? Let me open the terminal here. It's through environment variables. Once I changed the configuration and specified where the service is running, OpenShift automatically exposed that the MySQL service port is 5000, so your application can keep executing and will keep its access to the database.
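This is also how the application found the MySQL instance that we attached earlier: Kubernetes and OpenShift inject environment variables for every service into each pod. A quick sketch, with a made-up pod name and illustrative values:

```bash
# From inside any pod in the project, list the injected service variables:
oc rsh 12-factor-app-1-abcde env | grep MYSQL_SERVICE
# MYSQL_SERVICE_HOST=172.30.21.46   <- illustrative value
# MYSQL_SERVICE_PORT=5000           <- follows the service, not the container
```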
Another factor that I like a lot is concurrency, which means that you scale out via the process model. What that means, in practice, is that if, for example, you see an increase in demand on the sales part of your application, you don't need to scale out everything; you can scale just that part of your software. I like to say that this factor is the best advocate for a microservice architecture, because with microservices you can scale up and out exactly what you need. And that is easy to do with OpenShift and containers. For example, this is not a microservice application, right, it just provides an endpoint with my Hello World. But suppose that my demand has increased and I want to scale it to three instances. You can see that right now I have just one container, but containers are so fast to start that as soon as the new ones become available, you will see the replies from the other containers here. By the way, this "from" field is the name of the host, so I now have three hosts replying. If I want to do the same from the console, it's just a click on the arrow, and once it's ready, you will see three or four containers answering requests.

I have another factor that I like a lot, and the way I like to explain it is the comparison between pets and cattle. Who has ever heard the theory about servers being treated as pets versus cattle? That's nice; I like it a lot. For those who have never heard of it: if you think about how we treated servers for many years, we tended to treat them as pets. We gave them names; if they were sick, we took them to the vet, right? It's funny, I worked in a place where the server names were names of philosophers, and in another place where they were names of stars and planets. One day I arrived at work and the operations guy was almost crying, saying, "Plato has died." I thought someone close to him had died, but in fact it was the server, and he was literally crying because that server had crashed. That's how we treated servers for many years. But we should treat them as cattle: if we lose a server, we should easily replace it with another one, with no regrets, no love. If it's not performing the way you want, you just kill it and replace it with a new one.

So, for example, I have here four servers attending my requests, and I want to kill some of them. I destroyed two processes, and OpenShift will automatically create two new processes for me. They are fast to start, as you could see. And it's done: our application is still working, because it's stateless and it's disposable.
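Both of those demos come down to two commands; the DeploymentConfig and pod names here are made up:

```bash
# Concurrency: scale out via the process model by adding identical instances.
oc scale dc/12-factor-app --replicas=3

# Disposability: treat instances as cattle and delete one with no regrets...
oc delete pod 12-factor-app-1-x7k2p

# ...and watch the replication controller start a replacement within seconds.
oc get pods -w
```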
Another thing that we should consider as an important factor is dev/prod parity. How many problems could we avoid if we stopped deploying applications directly to production? We should always deploy to a development area first, then to a staging environment. But if we think about how we actually do it, we have always kept the environments separate: a completely different environment for development, which is normally a small computer; then QA, which is not a great machine either; then staging, which is a little bigger; and finally production, which is a powerful machine. We should try to keep them as close to each other as possible, but it's not always like that. So what I suggest is to use the same container cluster and separate the environments using namespaces, to avoid the problems caused by running in different environments, and of course, to use a continuous integration and continuous deployment pipeline.

For example, I have this application running here in my development stage. It's a cluster; of course, this is a small cluster of one node, but suppose that you have a cluster of 100 machines. If you try it locally and see that it works, it's easy to deploy it to a staging area on the same cluster. What this will do is use the same image that was previously deployed. You can see here that I have the image 1A7B, right? So in the staging environment I will have the same image that was released. I didn't change the release; remember, my release ID, the same number that I specified, stays the same across environments, and the environment stays the same. What changed here is just my namespace, okay? So I have my application running in the staging environment, changing only the namespace, and it behaves exactly the same.

Now, about logs. Logs are important nowadays. They help us make decisions; they help us, for example, with a stack like Elasticsearch, Fluentd, and Kibana, to understand the business behavior, or even the technical behavior if your application is not performing well. But the way to use these stacks is to always treat your logs as a stream of events. Of course, it will be a lot of streamed data, and a stack like Elasticsearch will let you find just what you need. The point is that your application should not be concerned with storing the logs, or with how often to rotate them; it should always just stream them, and something else will take that stream. If it needs to be analyzed, it will go to a stack where you analyze it; if it needs to be stored, it will be collected by something else and stored. Your application should not care about any of that; it should just produce the stream of data. And if you think about it, containers already provide that stream: you can collect the log stream from the container, and you can also get it from inside OpenShift, following the log, for example, and every request that you make will be streamed to the logs.
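Following that stream from OpenShift is a one-liner; the pod name here is illustrative:

```bash
# The application only writes to stdout/stderr; the platform handles the rest.
oc logs -f 12-factor-app-2-9fjq1

# A collector such as Fluentd can then ship this same stream to Elasticsearch,
# where you search and visualize it with Kibana.
```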
And finally, the last factor is about admin processes. It means that if you need to perform a management or admin task, you should avoid, as much as possible, running it inside the same process as the application itself. Containers allow us to do that easily, because every container runs a single main process; when you want to run a batch task or an administrative shell, you start another process, just like a separate bash process, okay? And that can easily be done with containers. So let's see. I have here a script that will open a process in my container, and as you can see, this bash process is separate from my Java process. I'm not connecting to the JMX console directly; I'm trying to avoid, as much as possible, doing things like a thread dump inside the Java process. Of course, now that I'm in this bash process I could do that, but I should avoid it as much as possible.

And those are the 12 factors. If you follow all of these factors, I'm pretty sure you will have successful deployments. And notice that this methodology is not tied to a specific technology: it's not tied to Java, or to Ruby, or to Node.js. It's technology-agnostic, but it adapts very well, and performs very well, when you are using containers or a platform like OpenShift to orchestrate your containers. It's very focused on DevOps, and you can go to 12factor.net, and of course you can ask here if you have any doubt, and read more detailed information about each one of the factors.

If you want access to the scripts that I used, to see what I was doing when I performed, for example, the release or the scaling, you can go to this URL, github.com/redhat-developer-demos/12-factor-app, and feel free to use that demo as well. By the way, don't forget to register at developers.redhat.com, and if you have any questions, it's easy to find me on Twitter.

So it's time for questions. Don't be shy, people, because that would mean you understood everything, or understood nothing. Do you have any questions related to any of these factors? Or a question specifically about the OpenShift platform, something that you have tried to do? Or do you want to share a story where you tried to deploy a cloud application and faced an issue that could be solved by one of these factors?

Cool, so the question was whether you could apply this with RPMs. Well, the methodology came from Heroku, which is a cloud company, but yes, with RPMs you can use most of these factors when you build your RPM. And why is it better, for example, to release your application as a container instead of an RPM? Okay, that's a good question. Let's think about it this way. When you release an RPM, you have to handle the target system: some RPMs are supposed to run on RHEL 7.2, but when you try to run them on RHEL 7.3, you find that the dependencies are not the same, and you have to repackage your application. With containers, you are not stuck with the file system or the Linux version that the host is running, because when you package your application in a container and create an image of it, you are building on a specific version of RHEL, or Fedora, or CentOS. For example, if we take a look at this application, its Dockerfile uses an image provided by JBoss for OpenJDK; if I'm not wrong, it's based on CentOS with OpenJDK. So I can deploy this image on Fedora, on RHEL 7.2 or 7.3, even on another Linux distribution that is capable of running OpenShift and containers, and the version I'm using inside stays the same. For example, I'm using a Mac as my laptop here, but the container, let me see if I can find it, is running on CentOS 7.2. So my container is always running on a well-known distribution, no matter where OpenShift is running. My container carries its own environment; that's the advantage of running containers instead of RPMs.

[Audience:] But if servers are cattle and you don't control where your containers land, a container could end up on a host running some other version, and your application depends on a lot of other stuff that you don't control. Yeah, that's a good question, and that's why I want to show you something. Red Hat, as a provider of supported versions, has its own supported Docker images, where you can find, let's see if the network helps us, blessed images that are checked for CVEs and vulnerabilities. If, for example, Red Hat detects that the image you are using has an issue, it will release another image and tell you: hey, you have the option to update your base image and re-release your application using the new version. So let me see if I can explore the catalog and show you how that works.
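A minimal sketch of what consuming such an updated, blessed base image looks like; the image names and tags here are illustrative, not taken from the demo:

```bash
# Pull the updated base image that carries the CVE fixes...
docker pull registry.access.redhat.com/rhel7:latest

# ...then rebuild and re-release your application image on top of it.
docker build -t 12-factor-app:1.0-rebuild1 .
docker push registry.example.com/demos/12-factor-app:1.0-rebuild1
```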
And while this opens, I'm open to another question. Yeah, I will respond to that question; let me just open this here, and then let's look at this image. Okay, so the question is: his company has a situation where the same application is hosted in two different repositories, and as we saw in the 12-factor methodology, that should be avoided. But how do you handle the source code, how do you handle this in enterprise development, where you have one codebase under active development and another one that has to be supported for many years? For that, I suggest using branches, because it's easy to see the differences between branches. For example, if you are using Git, developers don't create release branches on their own; when you release, you freeze and create a tag. So suppose that you create a tag 1.0, and then you find a bug in 1.0: you create a branch called 1.0.1 from that tag. Think about open source projects. Believe me, it happens, especially in the open source world, because we have people from everywhere. What I'm trying to say is that people from all over the world, from different companies, work on the same codebase and create a release based on a tag. The tag cannot be changed anymore, but if you find an issue with that tag, you create a branch from it, and the release can easily be patched.

And just to use the time we have here: in the Red Hat registry for Docker images, you can see, for example, the most recent tag of RHEL 7.3. This image has been updated 66 times; it was updated 10 days ago, probably because a vulnerability was found. You can see here that it has an image advisory, and you can see the technical details and the package list that has been deployed in this container image. If something is wrong with release 66, it will probably be fixed by a release 67. And this is just one example: if RHEL 7.3 has a problem, you can always create an 01, an 02, an 03, and keep evolving that specific image.

Well, we have run out of time. I'd like to thank you so much for your time here and for your questions. Please feel free to add me on Twitter and send me any questions that you might be too shy to ask here. Thanks so much.

Thank you very much. And I would like to remind you to post your feedback for this session, and whoever is interested, please sign up for the lightning talks this evening. Oh, and one more announcement, I almost forgot: we found a cap, so whoever is missing one, it will be over there.