Hi again, Red Hat Developers. This is Jason with the Red Hat Developers Program. I'm here with Marek Jelen. He's a developer advocate on the OpenShift team, and he's here to talk to us about OpenShift on the Google Cloud.

So thanks, Jason, for introducing me. I will just repeat that I'm Marek, I work with the OpenShift team, and I'm going to tell you something about running OpenShift on Google Cloud today. Generally, what do we need to understand? I already expect you to know that you want to use Google Cloud, so I am not going into any details about why you should choose it; that's your decision. I am expecting you to have already decided that you want to do it. The generic way to install OpenShift is that you need to install Ansible, because the official installation method is Ansible. Then you need to create the inventory file for Ansible, so that Ansible knows how to connect to the machines and how to install everything. Then you run Ansible to actually do all the hard work, and then you can profit from that, right? Those are the generic steps that you have to do. And how to do it? There are three different variations, three different paths, for how to actually run it. The first is bring your own infrastructure, where you go to the Google Console or the gcloud tool, and you provision your VMs and all the infrastructure, like the virtual network, the firewalls and their rules, the DNS. Then you gather all the IP addresses of the nodes that you are trying to install on. Then you write your inventory file, where you specify: this is my master, this is my infrastructure node, these are my nodes. And then you run Ansible to actually install it. The problem with this method is that it's quite error-prone, and it requires a lot of manual work, right? That's boring, and if you do it several times a day, you don't want to do it.
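For the bring-your-own-infrastructure path, the hand-written inventory might look roughly like the sketch below. The group names (`OSEv3`, `masters`, `nodes`) and variables follow the openshift-ansible conventions, but the hostnames and label values here are illustrative placeholders, not taken from the talk:

```ini
# Hypothetical static inventory for openshift-ansible
[OSEv3:children]
masters
nodes

[OSEv3:vars]
ansible_ssh_user=cloud-user
openshift_deployment_type=origin

[masters]
master.example.com

[nodes]
master.example.com openshift_node_labels="{'region': 'infra'}"
node1.example.com
node2.example.com
```

Every IP or hostname in this file has to be gathered and typed in by hand after provisioning, which is exactly the error-prone step the speaker is complaining about.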
The other way is to use dynamic inventories, which in Ansible allow you to provision the infrastructure directly from Ansible, right? You need to set up the inventories so that they have access to the Google infrastructure: you have the JSON file with all the keys, and you tell Ansible to actually use it. Then you run Ansible, Ansible connects to the Google Cloud, it sets up all the VMs and all the infrastructure that is needed there, then runs the Ansible playbooks to actually install OpenShift on that infrastructure, and then you can profit again. So in essence, with this method you don't have that much manual work, because everything's done automatically by the Ansible scripts, which is nice, but it actually involves you writing some Python, and you need to understand how it works. So we came up with a tool, because for our workshops we have been creating environments several times a day: if you have two workshops, three workshops, you need to create a new environment for new people. You need to automate a lot of tasks, like create me 100 users with a generic username, then set up a new project for every user, deploy some smoke-test application directly into each project, et cetera. There were a lot of tasks involved, and it would have required me to actually work in Python, and I'm not a big fan of Python, so I wanted to do something else. Yeah, Diane is making jokes here because she is the Python person, right? But yeah, I'm not a Python guy. So we created a small tool, written in Go, that wraps two or three different things. It wraps Terraform, which sets up the infrastructure for you. We generate a Terraform template, either for AWS, GCE, or Azure; possibly other providers in the future, but right now we just focus on these three. Then we run Terraform to actually provision the infrastructure for you.
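The "writing some Python" the speaker mentions comes from Ansible's dynamic-inventory contract: any executable passed to `-i` is run with `--list` and must print a JSON document mapping groups to hosts. The real GCE inventory script queries the Google API; this toy sketch hard-codes hypothetical hosts just to show the shape of the contract:

```python
#!/usr/bin/env python3
"""Toy Ansible dynamic inventory (hosts are hypothetical placeholders)."""
import json
import sys


def build_inventory():
    # Group layout mirroring an OpenShift install: masters, infra, nodes.
    # A real GCE inventory script would discover these hosts via the API.
    return {
        "masters": {"hosts": ["10.0.0.10"]},
        "infra": {"hosts": ["10.0.0.20"]},
        "nodes": {"hosts": ["10.0.0.30", "10.0.0.31"]},
        # _meta.hostvars lets Ansible skip calling the script per-host.
        "_meta": {
            "hostvars": {
                "10.0.0.10": {"openshift_node_labels": {"region": "master"}}
            }
        },
    }


if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "--list":
        print(json.dumps(build_inventory()))
    else:
        # Ansible may also call the script with --host <name>;
        # an empty JSON object is a valid answer when _meta is provided.
        print(json.dumps({}))
```

You would then run something like `ansible-playbook -i ./inventory.py playbook.yml`, and Ansible consumes the printed JSON instead of a static file.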
Then we run Ansible against the infrastructure that has been provisioned, and then we use SSH to connect to the cluster and set up the user accounts, the projects, and everything. So everything can be automated with a single description file, right? Yeah, so I essentially said what you have to do. You create a directory on your file system, and you put in a private and public key for connecting over SSH. Then you write a small YAML file that describes what the infrastructure should look like. I have it on the next slide, so you will see it. And then you mount the directory into a Docker container that has all these tools available, knows how to process the YAML file, and generates all the different stuff for you. And then you can profit from the running cluster. So, can you read it? Is it readable for you? Yeah. This is the YAML file that I used for my workshop yesterday. I am provisioning on GCE. I want to deploy Origin, and the version was 1.5; that was the latest release, right? Then I need to provision the DNS, so I need a DNS zone on Cloud DNS, and I need a suffix, something appended to all the domains that Google is going to use. In my case, it was pixio. So that was that simple. Then for SSH you specify the name of the file with the key that is going to be used. Then you say what components you want to install. We have Cockpit installed; Cockpit gives you basic access to the underlying operating system, at the Kubernetes/RHEL level, so you can see containers, you can see different things. Then you have metrics; metrics give you visibility into how much CPU and memory is being consumed in the cluster. And logging I disabled; it's an Elasticsearch-based component that streams all the logs into a single place, and then you can do some analytics on top of that, which I didn't require, so it was disabled. Then I generated one user called admin with some password, and that user is omnipotent in the cluster.
He can create any projects, he can restart masters, he can do anything, right? So that's my administrative user. And then I generated 75 users with the username user plus a number suffix, user1 up to user75, and the password would be password, right? So that generated my users. Then we had one node. We had only one node, which was highmem-16; that means 16 CPUs and 104 gigabytes of memory, which was quite enough for our cluster. It was only one node, so everything, routing, logging, and all the rest, was on that one node. You can also set infra to true, which means that all the infrastructure parts, like metrics and logging, are going to be on a separate node from the master. And then you can specify the number of nodes that should be used for containers, so it can deploy the whole cluster for you, and everything's dynamically created when you run the tool. Then you specify the GCE JSON file with the keys for the service account. Then there is the region to deploy to, the zone to deploy to, and the project that the service account has access to. And the last thing that you have to do is to execute a container. The container is on Docker Hub, osevg/openshifter. My directory was /root/workshops on my machine, and I mounted it into the container as /data. And the big Z means that I want to avoid all the SELinux access problems, right? When I do this, I actually deploy the whole cluster just by writing that small YAML file. This is probably the simplest way to deploy to GCE right now. It's usable for workshops; it's not usable for big production. If you want to deploy OpenShift for your production environment somewhere, it's not designed for that. It is designed for setting up one-shot environments, when you need to do it again and again for workshops, classes, or something like that. And it can provision on GCE, which is the topic today, as well as AWS.
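Putting the narrated pieces together, the definition file might look roughly like this. The field names below are reconstructed from the description in the talk, not copied from OpenShifter's actual schema, and the region, zone, project, and key-file names are placeholders; check the tool's own documentation for the exact format:

```yaml
# Approximate OpenShifter definition file (field names reconstructed
# from the talk; values marked as placeholders are not from the talk)
provider: gce
installation:
  product: origin
  version: "1.5"
dns:
  zone: my-zone          # Cloud DNS zone name (placeholder)
  suffix: example.com    # suffix appended to the generated domains (placeholder)
ssh:
  key: my-key            # name of the key files in the mounted directory
components:
  cockpit: true          # OS-level view of nodes and containers
  metrics: true          # CPU/memory consumption in the cluster
  logging: false         # Elasticsearch-based log aggregation, disabled here
users:
  - username: admin      # the omnipotent cluster admin
    password: password
    admin: true
  - username: user       # generates user1 ... user75
    password: password
    count: 75
nodes:
  count: 1
  type: n1-highmem-16    # 16 vCPUs, 104 GB of memory
  infra: false           # true puts infra components on a separate node
gce:
  account: account.json  # service-account key file (placeholder name)
  region: us-central1    # placeholder
  zone: us-central1-a    # placeholder
  project: my-project    # placeholder
```

With this file and the SSH keys in, say, /root/workshops, the container from the talk would be run along the lines of `docker run -v /root/workshops:/data:Z osevg/openshifter ...`; the exact subcommand isn't given in the talk, and the :Z suffix is the Docker flag that relabels the volume for SELinux.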
And because Terraform practically has connectors to almost any cloud provider, we can write the templates for anything like DigitalOcean, Linode, Azure, and other providers as well. So how is my time right now? I'm fine. I have no more slides. If you have any questions, please ask; I love questions, actually. So yeah, what do you mean? You can run the Ansible from pretty much any distribution you want, it doesn't matter, but it is hidden in the container. If you use the tool, Ansible and the playbooks are in the container, which is CentOS-based. So you have the environment preset for you, because there is a bug in the playbooks that requires a specific version of Ansible, and you need to pull the specific version of the playbooks for a specific release. There is some tweaking and magic needed to actually get the environment up and running. So we did it and put it into the container, so that people don't have to do it themselves; they just run the container and everything's preset for them. I had Ray from the Google GCE evangelist team setting up their workshop environment using this tool yesterday as well, so the folks running the Google workshops have been using it for some time too. Any more questions? Thank you very much. See you around, and come ask questions if you have any.