Great, so without further ado, let me introduce to you Yong Jie.

Thank you. Hi, everyone. So today I'm going to be talking a bit about Rancher. For the benefit of those of you who haven't heard of it or haven't used it yet, I'll be walking through some of the more introductory parts of Rancher. A little bit about me: I'm 26 this year, so I've been programming for about half my life. I graduated from CS at NUS, and I came here for Carnegie Mellon just last year. I've been with Glints for about three years now, and today I'm going to be talking about these things.

So, an introduction to what Rancher is. A lot of people compare Rancher to things like Kubernetes or Apache Mesos, but that's not exactly what Rancher is. It's not only a container orchestration platform; Rancher Labs, the company behind Rancher, pushes it as a complete container management platform. What do I mean by that? It means you can swap out components within Rancher, because it's divided into multiple layers. In the previous slide I showed you logos of Kubernetes, Docker Swarm and so on. That's because Rancher is a platform that lets you mix and match and pick which container orchestration layer you want to use. It also provides additional functionality on top of that, including provisioning new hosts on the major public cloud providers, and an application catalog of ready-to-deploy applications. On top of that, it gives you access control management right out of the box, so if you are running a big enterprise and want things like Active Directory, LDAP, or Shibboleth, Rancher has all of that too. This is the Rancher stack, which you can probably find online.

So, about Glints, just a little background about us.
We are a funded startup based in Singapore, focused on recruitment and career development. At Glints, we have been using Rancher in production for about three years, and it serves a significant portion of our stack.

If you want to test Rancher out, it's actually quite easy to install: it's packaged as a Docker image, so you just docker run it on whatever provider you want. And you get this nice user interface, which you usually wouldn't get with a stock deploy of Kubernetes or something else.

One of the base concepts in Rancher is the environment. It's a logical container that isolates your resources. Rancher has role-based access control, so you can give different people within the organization permissions to a specific environment, and the environment is where you choose which orchestration layer you want to use. Rancher's default is Cattle, which is kind of a no-frills orchestration layer; you can read more about it online.

As for host registration, Rancher uses Docker Machine to provision hosts on public clouds. And you can also run on VPSs from lesser-known providers. Like us: we operate in Indonesia, and Indonesia has strict data protection laws that require citizens' data to stay inside the country. So what we can do with Rancher is go to any local VPS provider, say hey, just give us a server, register it as a Rancher host, and manage it from Rancher entirely. This is how you register a host: you pick the cloud provider, the image you want to use, the size and so on, and Rancher provisions the hosts you specify entirely on its own, with not much user intervention required.

So how about running applications? Because having hosts that don't run any services is kind of useless, right? There are two main ways.
The first way is to run standalone containers, just like you would with Docker. You just specify the image you want to use; in this case I specified nginx and mapped port 80. You create it, and the container gets created, and you can access it directly through the node's IP address. You can see at the top it's 139.59-something on DigitalOcean. You can also look at some metrics like CPU usage, memory usage, storage, network and so on.

This is fine for one container, but you can also run your entire stack on Rancher, and for that Rancher introduces the concepts of services and stacks. For those of you who are familiar with Kubernetes, a Rancher service is almost the same concept as a service in Kubernetes, except that it kind of combines the replication controller into the same concept. One interesting thing is that if you already use Docker Compose, you can use your Docker Compose file as a template, upload it to Rancher, and it will provision everything based on that Compose YAML file. On top of that, each service gets its own DNS entry, and services can find other services in the same stack just by name, which is also pretty similar to what Kubernetes provides.

Adding a stack is simple; as you can see, you can just provide a Docker Compose file. For services, scaling is handled entirely in Rancher: you choose how many containers you want to run. And depending on which scheduler you are using, Rancher also provides some options for scheduling. You can see I created an nginx service, and the allocation of this service failed because it needs a host with a particular label. I can also attach labels to hosts; in this case I just go to the host screen and edit the label, and there you go, all the services are running once you add the label.
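To make the scheduling-label idea concrete, here is a minimal sketch, not taken from the talk's slides, of building the two Compose files a Rancher 1.x stack uses. The `io.rancher.scheduler.affinity:host_label` label is Rancher's documented convention for host-label affinity; the service name, image tag, and `role=web` label are made up for illustration.

```python
def nginx_service(host_label):
    """A docker-compose service entry (as a dict) for an nginx container.

    The io.rancher.scheduler.affinity:host_label label is Rancher 1.x's
    host-affinity convention: the Cattle scheduler will only place this
    service's containers on hosts carrying the given label, e.g. "role=web".
    """
    return {
        "image": "nginx:1.13",
        "ports": ["80:80"],
        "labels": {
            "io.rancher.scheduler.affinity:host_label": host_label,
        },
    }


def stack_files(scale=3, host_label="role=web"):
    """Return (docker-compose, rancher-compose) dicts for one 'nginx' service.

    Rancher reads the service definition from docker-compose.yml and the
    desired container count from a companion rancher-compose.yml.
    """
    docker_compose = {
        "version": "2",
        "services": {"nginx": nginx_service(host_label)},
    }
    rancher_compose = {
        "version": "2",
        "services": {"nginx": {"scale": scale}},
    }
    return docker_compose, rancher_compose
```

Dumping these dicts to YAML and pasting them into the "Add Stack" screen described above would reproduce the label-constrained nginx service from the demo.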
The nice thing about this is that it gives you the option to scale whenever you want; you can just click plus and minus. You also get access to the shell and the logs while the service is running.

One interesting bit about Rancher is the Rancher catalog, which brings together stacks of services that can be quickly deployed as a unit. Rancher comes with a default set of catalogs that includes things like WordPress, GitLab, and so on. You just click one button, view details, and install, and you can watch Rancher provision the entire stack for you. And voila, that is pretty much the fastest way to get WordPress running. You can also define your own catalog for your own organization; all you need is a Git repo. So if you have tools that your organization usually uses, say some of your teams use Wekan, which is an open-source alternative to Trello, they can deploy it whenever they want, and all you need is a Docker Compose file for that.

As for debugging applications on Rancher: as I mentioned, you get shell access and access to logs, and you can use any of the Docker logging drivers to ship logs to Graylog, Logstash, and more. The web interface gives you a pretty nice terminal emulator that you can use to inspect your container; you can run top and so on.

So here's the meat of the presentation, which is how we use Rancher for review apps. Some background: review apps are a CI/CD practice where you set up dynamic environments for every branch in your version control repository. The benefit of such a platform is that it enables rapid testing and lets different teams work on features in parallel without disrupting, or having to fight over, the staging or pre-production environment.
So you reduce the blockers in your team and increase your developers' productivity. One example: I can push a branch called feature/magic, and I get this nice URL containing the stack name and the service name, something like api.featuremagic.dev.forceasia.org. For those of you who have used Heroku, some platform-as-a-service products provide this as well, but then you are locked into a single vendor.

So how do you implement this? First, you need a version control system somewhere. A feature branch gets pushed to GitLab, and you configure whatever CI system you have to build a Docker image and push it to some registry. After that, you also use the CI system to call the Rancher API, and that's where you create new stacks and services.

In the Docker build step, and this one is for GitLab CI because we use GitLab internally, you can see we build the image on the CI system, tag it with the appropriate version, and push it somewhere. As for the deployment logic, this is not any particular shell language, just pseudocode: if the commit is on either master or staging, we just upgrade the service in the corresponding environment; if it matches a feature branch name, we create a stack, create services for it, and create a load balancer entry for it.

Rancher gives you a console where you can add and remove API keys, so you can use the API. And the API is quite discoverable: there are quite a number of endpoints you can look at, and it's a fully introspectable interface. Without using Postman or curl or anything, you can just click buttons in the API browser and run the relevant actions. In our case there are a few things: I have to create a stack on Rancher, and this is the endpoint it uses.
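The deployment pseudocode just described could be fleshed out roughly like this. The branch routing mirrors the slide; `create_stack` is a hedged sketch of the stack-creation call, where the `api_base` URL and project ID are placeholders, and the `.../environments` path is the v1 API's name for stacks, which may differ in other Rancher versions, so check the introspectable API browser mentioned above.

```python
import base64
import json
import urllib.request


def deployment_action(branch):
    """Mirror the slide's pseudocode: which deploy path does a branch take?"""
    if branch == "master":
        return "upgrade-production"
    if branch == "staging":
        return "upgrade-staging"
    if branch.startswith("feature/"):
        return "create-review-stack"
    return "skip"


def stack_name(branch):
    """Turn 'feature/magic' into a stack name that is safe to reuse in URLs."""
    return branch.split("/", 1)[-1].replace("/", "-").replace("_", "-").lower()


def create_stack(api_base, access_key, secret_key, branch):
    """Hedged sketch of the 'create a stack' API call.

    api_base is a placeholder such as
    'http://rancher.example.com:8080/v1/projects/1a5'. Authentication uses
    a Rancher API keypair over HTTP basic auth, created from the console
    mentioned above.
    """
    body = json.dumps({"name": stack_name(branch)}).encode()
    token = base64.b64encode(f"{access_key}:{secret_key}".encode()).decode()
    req = urllib.request.Request(
        api_base + "/environments",  # v1 calls stacks "environments"
        data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": "Basic " + token},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

In CI, the job would call `deployment_action` on the current branch and either hit the service-upgrade endpoint or call `create_stack` followed by the service and load balancer calls.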
At the end of the day, I have to create a load balancer entry to make sure the public has access to whatever feature branch service I just created. This is an example of a feature stack: in our implementation, one feature branch corresponds to one stack in Rancher. You can see this stack has five different services, and these services are accessible from the Internet through the load balancer.

One of the key caveats of this deployment system is that your services, for example the API, must be able to locate other services by DNS or some other mechanism. So if your app has a hard-coded MySQL server address or host name, you might have to change that for this deployment to work. And you can see here a very handy thing Rancher has: it lets you generate graphs of your stack. As you can see, the balancer leads to every feature branch that we have, and each of them has its own standalone database server and so on.

The last thing remaining is to make everything accessible, which means you need something to point at your load balancer. For us, we use a wildcard DNS record. It's not a best practice, because it can be prone to abuse: anyone can put anything under the wildcard and it appears to come from your site. But for a quick implementation, you can create a wildcard DNS record pointing to the load balancer, and whenever you hit something like api-featurebranch.dev.forceasia.org, it resolves to this IP, which is the load balancer's IP. An alternative is to use an API to create DNS records dynamically: if you use Cloudflare or DigitalOcean DNS, you can create the host names on the fly, which avoids the problems with wildcard DNS.

One other nice thing is that Rancher has a WebSocket events endpoint you can listen on, so you can write a simple server and post a message on Slack when something gets upgraded.
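To show how a branch name ends up under the wildcard record, here is a small sketch of the hostname mapping. The dev.forceasia.org zone comes from the talk's example; the exact slug rules are an assumption, based only on the fact that DNS labels allow just lowercase letters, digits, and hyphens.

```python
import re


def review_hostname(service, branch, zone="dev.forceasia.org"):
    """Map a service + feature branch to one host label under the wildcard
    zone, e.g. ('api', 'feature/magic') -> 'api-feature-magic.dev.forceasia.org'.

    DNS labels allow only [a-z0-9-], so any other character (slashes,
    underscores, uppercase) is normalized away. The zone is a placeholder
    matching the wildcard record *.dev.forceasia.org from the talk.
    """
    slug = re.sub(r"[^a-z0-9-]+", "-", branch.lower()).strip("-")
    return f"{service}-{slug}.{zone}"
```

Because the whole service-plus-branch part is a single DNS label, any such name matches the one wildcard record; the dynamic-DNS alternative would create exactly these names via the provider's API instead.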
Rancher has a blue-green-style upgrade system, where you have one live version and one staged version, and you swap them over when the staged one is ready. So we have this thing on Slack where you can just click Finish Upgrade and the service gets upgraded.

Last thing: just try it for yourself. Rancher is an open-source platform. In our experience they have great documentation, a lot of resources, and a really active forum, and they address issues quite quickly. So I encourage you to give it a try and see how it can benefit your team as it has done for us. Thank you. Questions?

Thank you so much. Does anyone have any questions? Yeah, please, can you use the mic? [Audience question: are you starting to move to Rancher 2, which is currently in technical preview?] I think I can answer that. Everyone can hear me, right? Okay, good. So this is a nice thing about Rancher: they plan to move to Kubernetes as their default orchestration layer. For those of us at Glints, we have been tracking the Rancher updates quite closely because we want to make sure the migration is as seamless as possible, so there is some plan for it. The nice thing is that the Rancher team has commented on the issue saying that if you want to migrate from Cattle, there will be an easy option to do it. When the time comes, I'm not sure how the UI will work yet, but hopefully it's just an easy deploy of the new image and everything gets moved over to the new layer. But the default for existing deployments going from Rancher 1.6 to 2.0 is that if you're on Cattle, you'll still be on Cattle unless you opt in. And I think the word from the Rancher team is that Cattle will still be maintained as a simple orchestration layer that is still suitable for smaller use cases. But because we are growing and going to add many more services, Kubernetes will eventually be something that we would deploy for 2.0.
So I hope that answers your question. Okay, any other questions? If not, thank you so much. A round of applause, please, as we welcome our next speaker.