Thank you for having me here today; we are based in Helsinki. What do we do? We analyze mobile games and the mobile game markets: which features games have and how those features are implemented. The interesting part is at the top of the charts, say the top five in the App Store top grossing ranks. If you reach the top 20 or top 10, you will basically make a lot of money, so one feature change from A to B might give you a lift over your competitor, and you make more money. That's about the company and what we do.

A brief history of the stack that we had. I didn't have time to draw a timeline; I was actually on vacation last week. We started, before I had joined yet, with PHP and MySQL on Google Cloud. It was a demo pilot; no real customers tested it, and it didn't work. Then I came in. I only do Java, so I rewrote it in Java, moved to MongoDB, and we put it on OpenShift Online. So from day one of running user pilots and production workloads, we have been in a kind of sandbox. We were using gears and cartridges, so from day zero we have had the mindset that we are in a sandbox. We are not using VMs; we are using some kind of PaaS, containers, or gears, so our code and installation stayed pretty minimal. And we moved to the new OpenShift Online at the end of last year, because the old one reached end of life. The official end of life was August 2017, but we got some extra time to move our workload.

What do we run? This diagram I actually drew by hand, with an Apple Pencil. We run services one, two, three, and four that we have built ourselves. I cannot say that they are microservices.
They are kind of micro-ish services. We could easily split each of those services into smaller pieces, but we don't have to; it works. We have Keycloak running for SSO, and we have a frontend user interface built with Angular that I don't even fully understand; I mainly do the backend. Nowadays I don't code that much. I mainly concentrate on the operations part: not operations exactly, but how it runs. Do we get the correct alerts, the metrics, APM, and stuff like that. We have several different external services, the red ones in the diagram: SendGrid, Intercom, New Relic, and a lot of other services that we integrate with, because we don't have the time or money to build everything ourselves. And from Amazon S3 we distribute all public static content. We don't want that workload on OpenShift. Any images that we pull from the App Store we serve either directly from the App Store or from S3. And MongoDB is not running on OpenShift; it's running on MongoDB Atlas, their cloud offering. I think moving it outside OpenShift was one of the smartest moves I made. It worked fine on OpenShift, both in the version 2 of Online and in version 3, but I have to say that MongoDB knows better than me how to run MongoDB. And it's a nice replica set: I can increase the machine sizes, I can create read-only replicas in other regions. It's much easier than running it on OpenShift. And we have one virtual machine running on Amazon, running cron jobs. We pull a lot of data, and it is just more effective to run those pulls as cron jobs; each is a plain Java process. We could run them as Jobs on OpenShift Online, and that would be fine, but we like them as cron jobs.

I have a couple of key points that I would like to promote here. First, externalize everything that you can from your application: to Secrets, ConfigMaps, whatever you have in mind. All the certificates and all the configuration properties go either to ConfigMaps or to the database.
That way you have one single container: you build it for dev, you test it, and then you deploy the same container. Everything else comes from the outside. This is the key point for the immutability of your containers. And if you run on OpenShift Online, you have to provide your own certificates, so if you use Let's Encrypt, you will only have a 90-day certificate lifetime. You also have to create a process for how you externalize your certificates and how you redeploy them.

And this is another good point: have a champion in the house. In our company we have four people in the development team. You need to have a champion in the company, in the team, who knows OpenShift. There is no point, from day zero, in telling everybody to go to OpenShift trainings and learn all this stuff. Have a champion, and share that champion across the different teams, so that the champion is not part of one team but spreads the knowledge of how you should do things and how you should not do things. It was easy for us because I work nine-to-five with OpenShift, so it was easy for me to be the champion at the startup, at GameRefinery. That actually meant it took half a day, maybe a day, for our developers to figure out how things work, how they can push and deploy. They didn't have to learn OpenShift up front. They started learning bits and pieces of OpenShift when they needed to create a new service, create a new route, or add a new line to a ConfigMap. They learned OpenShift along the way. They didn't get a crash course in OpenShift; they had a champion basically holding their hand on the road.

Deployment and delivery. This is basically the pipeline that we have. We build everything dynamically, and we copy production data into it. This is what we have. Even though in my Red Hat work I see the nice stuff that companies do.
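To make the externalization point concrete, here is a minimal sketch (not our actual code; the class and variable names are hypothetical) of an application reading all of its configuration from the environment, so the same immutable image can be promoted from dev to staging to production while ConfigMaps and Secrets supply the values:

```java
// Minimal sketch: resolve configuration from the environment at startup,
// falling back to safe defaults, so the same immutable image runs in
// dev, staging and production. Names here are hypothetical.
public final class AppConfig {

    private AppConfig() {}

    /** Returns the environment value, or the fallback when it is unset. */
    static String get(String name, String fallback) {
        String value = System.getenv(name); // injected via ConfigMap/Secret
        return (value == null || value.isEmpty()) ? fallback : value;
    }

    public static void main(String[] args) {
        String mongoUri = get("MONGODB_URI", "mongodb://localhost:27017/dev");
        System.out.println("Connecting to " + mongoUri);
    }
}
```

In an OpenShift deployment, `MONGODB_URI` would be wired in with `envFrom` or `valueFrom` pointing at a ConfigMap or Secret, so nothing environment-specific is baked into the image.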
For example, in our innovation labs there are awesome pipelines and such, and I have a huge eagerness to ask: can I build this, can I do this? But if you don't need a huge automation pipeline for deployment, don't build one. If everything works fine, start from the easy stuff. We just use a plain S2I build with a Git webhook and build from there. For now, we don't need anything else. We deploy to the staging environment automatically on every commit. From there we need manual QA, because the user interface is pretty hard to test automatically: you have to actually read the content and understand what the user interface does. And when that's done, when it's verified, we promote the image to production. You don't have to be Netflix or, let's say, Uber. Just do what you have to do, get your code running, and start from there.

Keep it simple and don't overthink. As I said, we have four micro-ish services. I could split each of those services into ten smaller services, but I don't need to, because it works. It would be nice to have smaller services, but we currently have one active backend developer, plus me doing maybe 10% of the work, managing those four different services, written in Java and running on WildFly. And it's working fine, so there is no need to split. The user interface is a whole other story; it's so weird.

And even though I just said to keep it simple, you have to keep in mind that the internet is global. Currently we run from Europe. Our data is in Europe; it's actually running in the OpenShift Online European region, and our MongoDB Atlas is in Europe. Everything is in Europe. We have customers from the US. They see some latency, but they are not that annoyed yet. But we have to write the code and the application so that we can easily make it more accessible from, let's say, Asia-Pacific and the US.
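Going back to the build setup above, the simple flow of a plain S2I build triggered from Git with manual promotion to production could be wired roughly like this in OpenShift; the repository URL, names, and webhook secret are hypothetical:

```yaml
# Hypothetical S2I BuildConfig: every push to the Git repo triggers a build
# whose output image is tagged for staging; promotion to production is a
# manual retag once QA has verified the staging deployment.
apiVersion: v1
kind: BuildConfig
metadata:
  name: backend
spec:
  source:
    git:
      uri: https://example.com/gamerefinery/backend.git  # hypothetical repo
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: java:8                  # S2I builder image
  output:
    to:
      kind: ImageStreamTag
      name: backend:staging
  triggers:
  - type: GitHub
    github:
      secret: webhook-secret-value   # hypothetical webhook secret
```

Promotion is then just retagging the verified image, for example `oc tag backend:staging backend:production`, so production runs exactly the bits that QA tested.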
Our solution for that is one more reason why moving to MongoDB Atlas was a good choice. We can easily spread the database with read-only replicas across regions. We can easily deploy the same code to the APAC and US OpenShift Online regions and just put one global load balancer in front, and then we have local reads. So we can do that, but we haven't done it yet, because we don't need it. We only do what is necessary to get more customers and make more money.

So one good choice was MongoDB Atlas. The second was to have an APM, application performance monitoring, to know what is happening. It's funny that New Relic comes up here; these are also New Relic screenshots. You need a view of what's happening, of what the application is doing. In the near future, OpenShift Online will probably have a multi-tenant Istio, so that I can get distributed tracing from there, and also Prometheus, but it's not there yet, and the Hawkular/Heapster metrics stack doesn't give that much information. But even when I get Istio, it might still be good to use some APM. There are multiple vendors, say Datadog or New Relic, that go inside your applications and tell you what's happening in them. Even if your containers are stateless, there are still processes running in there, and you need to know what's happening in those processes. Without that information you don't know which actions are slow and what you need to do to make the application work better.

Another point is basically ChatOps. Try to have a single, let's say, pane of glass that displays information, alerts, and notifications. We get a Slack message when one of our services is down, and if Keycloak goes down, I get multiple messages, because authentication stops working. I know a company that does this a lot; they use something similar to Slack.
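A minimal sketch of the Slack side of such a setup, assuming a Slack incoming-webhook URL (the URL, class name, and message text here are hypothetical; in practice the alerts usually come from the monitoring tools themselves):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public final class SlackNotifier {

    /** Builds the JSON payload that Slack incoming webhooks expect. */
    static String payload(String service, String status) {
        return "{\"text\":\"" + service + " is " + status + "\"}";
    }

    /** POSTs the payload to a Slack incoming-webhook URL. */
    static void send(String webhookUrl, String json) throws IOException {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(webhookUrl).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(json.getBytes(StandardCharsets.UTF_8));
        }
        conn.getInputStream().close(); // drain the response
    }

    public static void main(String[] args) {
        // Print the payload only; a real deployment would call
        // send("https://hooks.slack.com/services/...", payload(...)).
        System.out.println(payload("keycloak", "down"));
    }
}
```

The value of funneling everything through one channel is exactly what the talk describes: one place to look, instead of logging into the console to check whether a service is up.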
So there is no point in going to OpenShift Online to check whether my application is running; I will see it from Slack. And sometimes I see from Slack that, okay, OpenShift Online was updated, because a service was down for about 30 seconds. Sometimes it happens that a node goes down with the Keycloak database, and that's bad. So always run at least two replicas. They are usually scheduled on different nodes, so during updates you will have at least one replica running. Never run a single replica.

Sharing is caring. As Diane mentioned, I have written a blog post on this same topic on blog.openshift.com. If you need something, you can always reach your local Red Hat team; we will guide you. You can also contribute there, or you can Google for the post about moving to OpenShift Online; that's easier than the URL. Thank you.