Thank you for joining our webinar, Managing Production Deployments on Kubernetes with GitLab and Bitnami. I'm going to be your host today; I'm Mavian Ruiz, our partner marketing manager. And with us we have Rick Spencer, VP of Engineering at Bitnami, and Jason Plum, Senior Build Engineer at GitLab. Here's the high-level agenda. On the left you'll see what Rick will be covering today. He'll be explaining a little bit about Bitnami and Bitnami's containers, giving you an introduction to Helm and Monocular, giving you a demo of Monocular, and introducing one of our newest open-source projects, Kubeless. And Jason will be going over the benefits and features of GitLab, the DevOps toolchain within GitLab, and GitLab's Kubernetes integration, along with a demo of how to get that started on Kubernetes. And finally, we'll have live Q&A at the end. So any questions you might have, feel free to put them in the WebEx chat window, and I will field those at the end. So with that, I'm going to go ahead and give presentation permissions to Rick for him to take it away. Okay, so thank you very much, Mavian. I appreciate you getting us organized for this webinar. So my name's Rick, I'm the Vice President of Engineering at Bitnami. So who is Bitnami? What do we do? Sometimes I like to say that we're the least-known company that everybody has used. Bitnami is really known for making awesome software available to everybody. And one of the main ways that we do that is by packaging applications, mostly open-source, but there are some commercial applications that we package as well. We make those titles available to developers and to end users in all kinds of formats and in all kinds of places. For example, on any marketplace, on any public cloud, you can find Bitnami applications to launch and use free of charge, except for the cloud hosting costs, of course. We also offer things like virtual machines that you can use locally.
We even offer native installers so you can install software natively on your laptop. We are popular enough that we have well over a million new deployments every month. Those are users coming to the cloud marketplaces and to our website to download our software, or simply to launch into the cloud the software that we packaged. And you can see some of the logos of the 150 applications that we make available. Why are we so popular? Well, first of all, all our applications are ready to run. We pride ourselves on one-click experiences. Typically, to get an application running, you simply click a button. You might need to fill in a small form for your cloud hosting provider, but everything is configured to run, and we put a lot of effort into pre-configuring things so that as a user, you don't have to figure out configuration options to get up and running. We're also very popular because we keep our application catalog very up-to-date. There are always new versions of software coming out, and we are the best place to get those latest versions, because we combine that ready-to-run configuration with the most up-to-date software. Also, when you use our software, say on a public cloud, or in a particular format like a container versus a VM, you're not getting a lowest-common-denominator experience. We target our software configuration for the environment or platform that it's going to run on. So for example, if you're going to run on a certain public cloud, the software will automatically be configured to use the monitoring solution for that particular public cloud. We're also constantly tuning for performance and for security, to make sure that applications start up quickly and run quickly, and also that the configurations are secure by default. So that's Bitnami. We're everywhere. You've probably seen our logo, and that's primarily what we do. Now, one of the formats that we deliver these applications in is containers.
And we're very involved in the Kubernetes community. We're very involved in the Helm community. And we have a set of very popular and highly functional containers that are free to use. I'm going to talk right now about two of those kinds of containers. One kind is what we call a development container. These are containers that we have packaged and optimized to run locally on your desktop and help you get developing as soon as possible. And another kind of container is what we call a turnkey application. I think the easiest way for me to explain this is to quickly show you our web page for containers. So I'm going to go over here to bitnami.com/containers. If you come here, you have access to all our containers. You can see at the top here these seven containers. These are actually delivered via Docker Compose. So it's very easy to choose one of these frameworks that you want to start with and to start developing as fast as possible. For instance, if you're interested in Rails, you've heard a lot about Rails, you can click the button to get it on GitHub. Just scroll down a little bit in the README and you'll see that there's a command here to simply curl this Docker Compose file and then say docker-compose up. What will happen is it will use the Rails code generator to generate a starter application right on your desktop, and you can just start coding immediately. For example, the Docker Compose file already brings down a MariaDB container that's preconfigured to work with the Rails container. So you don't have to worry about any of that. You can just get started with your code. The other kind of container that I want to talk about is what we call a turnkey application. These are similar, and are also available via Docker Compose. We have all kinds of applications here, all kinds of developer tools.
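To give a sense of what that Compose file contains, here is a rough sketch of a two-service setup like the one the Rails README describes. The image tags and volume paths here are illustrative, not copied from the actual file, so check the README on GitHub for the real one.

```yaml
version: '2'
services:
  mariadb:
    # Database container, pre-configured for the app container
    image: bitnami/mariadb:latest
  myapp:
    # Rails development container; mounts your working directory
    # so generated code lands right on your desktop
    image: bitnami/rails:latest
    depends_on:
      - mariadb
    ports:
      - '3000:3000'
    volumes:
      - '.:/app'
```

With a file like this in place, docker-compose up brings both containers up together, already wired to talk to each other.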
You can download these applications and developer tools via a Docker Compose file, say docker-compose up, and just start kicking the tires and trying these out locally on your desktop via Docker. So that's a bit about Bitnami, with a little bit of focus on the containers that we make. But as I alluded to, Bitnami is extremely active in the Kubernetes community. Among the many activities that we have there is our involvement in the Helm project. So first of all, what is Helm? Well, Helm is a Kubernetes package manager. If you're used to a distro where you can say, for example, apt-get install or yum install or brew install, Helm provides that same experience for applications on a Kubernetes cluster. The package is called a Helm chart. So if you hear me referring to charts, I'm talking about a set of YAML files, which you can use without even looking at them, that collect all the necessary information together to create an instance of a Kubernetes application. You can use commands like helm install, helm upgrade, et cetera, to manage applications from the Helm command line, the Helm CLI. What's Monocular? Monocular is an application that Bitnami has invested a significant amount of effort in, to make it even easier to use Helm. First of all, it creates a representation of a Helm chart repository. A Helm chart repository is exactly what it sounds like: a collection of Helm charts in a certain format, so that the Helm command line can read it, but Monocular can also read those same repositories. So it really helps facilitate the discovery of applications. As I'll show you, you can just browse applications and choose from that browsing process. It also gives you a way to simplify installing charts and managing the applications once they're running. And it provides other nice things, like a way to look at changes over successive releases of a chart.
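Since charts keep coming up, here is the rough shape of one on disk. The file names under templates/ vary from chart to chart; this layout is just a sketch of the convention.

```
wordpress/
  Chart.yaml        # chart metadata: name, version, description
  values.yaml       # default configuration values, overridable at install
  templates/        # Kubernetes manifests written as templates
    deployment.yaml
    service.yaml
```

Helm renders the templates with the values and hands the resulting manifests to the cluster, which is why a single command like helm install can stand in for writing all of that YAML by hand.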
So here's just a bit about how Helm and Monocular work together. You can see on the right here that Monocular can read from one to any number of chart repositories and can combine that information together and present it. Then Monocular talks to the in-cluster part of Helm, which is called Tiller. Tiller runs in the cluster, Monocular talks to it through its APIs, and then Tiller can manage applications. For example, you can have a chart repo that has a WordPress chart in it. Monocular can take that chart and pass it to Tiller inside the cluster. Tiller will launch that application, that chart. You can have another repo that has Drupal in it. Monocular can read that chart, pass it along to Tiller, and Tiller can run it. But at the same time, Monocular is also able to read state from Tiller. So how do you use Monocular? I'm going to show you two ways to use Monocular. The first way is on a website that we manage, that Bitnami manages, called Kubeapps.com. And the second way is that you can use it actually within your cluster. I'm going to show you both of those via a demo. The advantage of using it in cluster is that you can use your own set of chart repositories, and you can also deploy charts with just one click. Furthermore, your end users can deploy charts with just one click. So I'm going to move on to a quick demo now. Does that sound good? Sounds good. Great. Okay. So this is Kubeapps.com. Kubeapps.com is, as I said, a website that we host. It uses the Monocular front end, the Monocular GUI. If I click on charts, this will give me a chance to see what chart repos Kubeapps is looking at. So if I click on charts, I go to the charts page, and you can see I can organize the charts in different ways. One of them is that I can filter by the repository that it's pointing at. By default, Kubeapps.com points to only the official Helm repositories at the moment.
We will probably enhance it in the future to point to other good repositories of well-vetted charts that we trust. But right now it only looks at the upstream Helm official charts, and it looks at the stable and the incubator repositories. So I'm going to click on stable, and this will filter out all the charts that are just in the incubator. You can see there are still quite a few charts. These are all applications that I can learn about here that are all packaged for Kubernetes as charts. Okay, I'm going to pick one: DokuWiki. This is one where a lot of users will launch a DokuWiki at the beginning of a project. And you can see that when I clicked on the DokuWiki link, it gave me a lot of information about the chart. It gives all the metadata and presents it in a very easy-to-read format. It gives you commands that you can use to install it, and it even gives you a little button that you can use to copy this command. Then you can paste it into your command line if you want to install it on your own Kubernetes cluster. And as I said, you can browse here through older versions if you want to look at older versions or if you need another version, because it's a really easy way to browse versions. Okay, so this is Kubeapps.com. It's a great way to discover and browse applications that are available for your Kubernetes cluster, and you can launch these applications directly. Now, however, if you look here, we actually have a Kubernetes cluster running, and you can see that there are quite a few things on it. One of the things that I have running on here is Monocular. Okay, so let me go here. This is the UI for Helm running on the cluster itself. You can see it's being served from the cluster. As I access it from the cluster, these are all applications that are available on that cluster. And you can see it looks like pretty much everything that's in the stable repository.
But because it's our own cluster, or actually a cluster that we set up with GitLab, we actually point to a custom repository. So I'm going to click on charts again, and this time you'll see that there's a GitLab repository in here. GitLab hosts a repository of their own charts, and I configured Monocular to talk to that repository. Now I'm filtering just what's in GitLab's own repository. Now, Jason actually already launched GitLab from here, from Monocular, so I'm not going to do it now. But one interesting thing is, as I said, Monocular is talking directly to Tiller right now. Tiller is the in-cluster component of Helm that manages the state of the applications that are running. So what I'm going to do is click on deployments here. This is new because it's running in cluster: I can actually make deployments. Let me show you. If I wanted to, I could click on GitLab here, and you see there's a button here that says install GitLab. With just one click, you could install this and have it running on the cluster right away. But as I said, Jason already did it, so I'll just go look in deployments. And you can see here, these are applications that we already ran with Helm, and the one up here is the GitLab application that we ran with Helm. So that is how Monocular works, both as a way to browse applications at Kubeapps.com and also as something that you can use in your cluster to launch and manage applications. Okay. So should I go on to Kubeless, or is Jason ready for his part? It looks like Jason is in. I'm not sure if his audio is currently working. Jason, are you able to present? I believe so. Okay. Why don't we turn it over to Jason then? Okay. So I am Jason Plum, Senior Build Engineer at GitLab. And I'll be sharing some of the benefits of GitLab, explaining a few of our features, and then sharing a demo of how we can take you from idea to production in under 10 minutes.
So let's start with some context on industry trends and the software development lifecycle. While software was once something that supported the business, it has actually become the business itself. The big driver behind this change, and the digital transformation that's happened over the last decade, is the need to stay competitive. Companies need to have strong software operations that can deliver value in any climate. This is why at GitLab we have built an integrated product for the entire software development lifecycle. We help organizations reduce the cycle time between having an idea and seeing it in production, which is particularly relevant in today's talk. One example is that cloud and containers are enabling simplified architectures, faster deployments, and reduced costs. The speed and stability of containers are undeniable: an entire runtime environment, the application, all of its dependencies, libraries, and other binaries, in a single package. They provide a solution to the problem of how to get software to run reliably when moved from one environment to another. So how can development teams keep up while prioritizing both speed and quality? Applications today must be agile in their delivery, and teams who are building and deploying software need to collaborate throughout the entire development lifecycle. Organizations must now learn to use tools and be able to optimize for speed and quality to meet these demands. It sounds daunting, but it doesn't have to be. Adapting to these demands simply requires removing unnecessary layers of process from the workflow and automating as much as possible. This is what we aim to achieve with GitLab. It all comes down to your toolchain. GitLab has transformed how development teams get from idea to production, saving them time while increasing development velocity and code quality. Our workflow prioritizes autonomy, collaboration, and automation, all of which are essential for DevOps success.
In just a few minutes, you can deploy GitLab to a container scheduler, add continuous integration and deployment with automatically deployed review applications, utilize ChatOps, and analyze your cycle time. Building on our master plan at GitLab, we truly deliver the entire DevOps toolchain. If you want to design, code, build, deploy, and monitor an application, you can now do all of that straight from one application: GitLab. Development teams will always need project management tools, so why not just bake them into the environment where you work? This naturally unifies project management with the development process. We have made many enhancements to issues, a core part of collaboration and project management within GitLab. This includes weights, linking to merge requests, related issues, boards, and a simple mechanism for workflow management across all stages. By keeping everything in a single tool, you avoid the lack of transparency that can hinder a team. We advocate that everything be done in the open so that managers and developers can easily stay on the same page in terms of what's coming next. To that end, we recently introduced burndown charts to help managers and ICs visualize work remaining versus time. After this come the code, commit, and test stages of the pipeline. During these stages, developers check their code into a central repository, both regularly and frequently, and an integrated private Docker container registry means that every project can have its own space to store its images. You can leverage GitLab CI to automate builds and test every single commit a developer makes, and CI can automatically run tests across multiple operating systems, browsers, et cetera, to ensure that all commits are healthy. Next comes the deployment. We add the ability to deploy your code to a variety of environments, all managed through a simple YAML file.
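That simple YAML file is .gitlab-ci.yml at the root of the repository. The sketch below shows its general shape, with placeholder scripts and a made-up staging URL, not the exact template GitLab ships.

```yaml
stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - bundle install            # placeholder build step

test:
  stage: test
  script:
    - bundle exec rake test     # placeholder test step

deploy_staging:
  stage: deploy
  environment:
    name: staging
    url: https://staging.example.com   # made-up URL for illustration
  script:
    - ./deploy.sh staging       # placeholder deploy script
  only:
    - master
```

Every push runs the stages in order, and the environment block is what makes deployments show up under Environments in the UI.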
You can roll back to a previous state with the click of a button if you need to, and every change can be tested in its own individual review app, which makes collaboration complete and easier. At GitLab, we use our review apps to spin up and delete one-off application environments for branches, allowing us to see changes as they would appear in production before they're actually live. This way, we can measure the impact of every single change immediately, which speeds up the development lifecycle: you need instant feedback to be able to get instant results, and we can do all of this before putting something into production. Once the code is merged, a review app is automatically destroyed, freeing up any resources those containers might be using. We recently also added canary deployments to give you even more immediate production feedback: you can deploy code to a small portion of your fleet, and if anything goes wrong, you can immediately revert it with minimal impact. Monitoring infrastructure is crucial to operating a successful application. It ensures your app is responsive, provides valuable insight into the impact of changes, and enables quick debugging when problems occur. However, setting this infrastructure up is often a lower priority, in particular for non-production environments, and is often not integrated with the rest of your toolchain. With GitLab 9.0, we introduced the first monitoring system that is fully integrated into your CI and CD pipelines and source code repository. Leveraging Prometheus, GitLab now brings the same technology used for production systems to the development environments, like staging and even review apps. Integrated monitoring allows developers to be more autonomous. They don't need to ask the infrastructure team or a sysadmin to do this for them before shipping a new feature, yet again speeding up the development lifecycle because it reduces the number of handoffs.
You can now observe the impact of these changes on your team's cycle time, or the time it takes to go from mentioning an idea to accepting a merge request. Cycle analytics is a tool to track and review the data you need to make better decisions and work better as a team. With metrics to measure how long it takes your team to move from idea to production, you can pinpoint areas of improvement and more accurately predict your releases. Many teams already measure a portion of their workflow, such as how long they spend writing code, but cycle analytics allows you to see the entire flow from end to end, starting at the idea. There are three levels of measurement. We can measure team performance on a given project, track how long it takes the team to complete a set of specific tasks, or measure contributor analytics to see how individual team members are progressing. Before we demonstrate how to get from idea to production, a quick note on our Kubernetes integration. If you need to set up self-hosted tools with auto-scaling, ChatOps, a container registry, and review apps all in Kubernetes, then without GitLab you're spending days to install, connect, and authenticate the various tools. GitLab comes with everything you need to bring your ideas from idea to production in a matter of minutes, out of the box; you can install and integrate everything. You no longer need to maintain separate applications and all of their integrations, so you're not spending time on authentication and authorization, and you have everything in a single interface. We believe our 10-step workflow is more natural and intuitive to the way that people prefer to work, but allow me to show you how to get from idea to production within GitLab. The first step would normally be to start up a new Kubernetes cluster and configure GitLab inside of it. To save some time, though, I've already done that.
So here I have the Kubernetes cluster that I've already spun up, with a GitLab instance fresh out of the box. One thing I'd point out is that when we do this, we automatically integrate GitLab and its continuous deployment features directly with the cluster that you've deployed it into. So I'm going to go ahead and sign in. You can see I've already got a project here, and I'll come back to this one a little bit later with Rick. In the meantime, I'm going to go create a group, and I'm going to go ahead and let it create a Mattermost team for this group. Now, to save a little bit of time, I'm going to create a new project, but I'm going to import it from an existing one. This won't take long. There we are. So the first thing I'm going to do is make sure that my Kubernetes CI is hooked up. This is simple: I go over to Settings, Integrations, and down to Kubernetes. We see that it's already pre-populated. Just to be sure, I'm going to hit Test. Everything worked. Coming back to integrations, we've also got Mattermost integrated, which brings us to our chat. It's as simple as hitting Add to Mattermost. What do I want it to trigger? I'm going to go ahead and use my project name, and I'm going to say I want it as part of the demo team. Next, I'm going to make sure that I have Auto Deploy set up. So I'm going to come back over to my project, and we'll set this up now: I'm going to come over here and set up Auto Deploy. We have a series of templates already in place, and I'm just going to pull the list down here and choose Kubernetes. The only thing I need to do here is to change this out to the URL that I'm using for this demo, and I'm going to say that I want it targeting master. Now I have a complete GitLab CI configuration ready to roll and ready to run with automatic deploy. So from here, I'm going to go over to our instance of Mattermost, say Sign in with GitLab, and this will automatically use OAuth against the GitLab instance that we've deployed.
Coming over to my demo team, as I type in a slash command, you'll see that I now have minimal-ruby. So I can do minimal-ruby help, and it says, hey, can you please connect your GitLab account? We'll pop over, do an OAuth request, authorize, and now the slash command is hooked up so that I can run operations against our minimal-ruby project. I'll just go ahead and create a new issue. Now I have a new issue on the project, and I can move forward from there. Back to the project: one brand new issue made from our chat. Next, our issue boards. Some of you may be familiar with Kanban, and this is what these do out of the box. It's decided it's not going to actually... There we go. There we are. Now that we have our issues here, we can see the backlog in the To Do and Doing columns. If you're familiar with these: To Do holds completely open issues, Doing means somebody's assigned and working on it, and it's closed when it's completed. Now that we have this in place and actually in To Do, I can take the issue and just drag it right over to the Doing column. Go over to the issue, and you'll see that it has now changed labels from To Do to Doing. So now that we have that in place, we can go ahead and get some changes done. I come over to the repository, open our file, and I'm going to edit it online. I'm not going to make a whole lot of changes, and I'm going to make a new branch. I'm going to have it set to automatically create a new merge request with these changes. I'm going to assign this to me and hit Submit merge request. Now this has automatically started a pipeline for me, and I can go and watch these stages happen. This is our initial setup, and here's my new MR. I can go to each stage and make sure that everything is functioning and behaving as expected. Obviously it says that it succeeded, so something went well.
Now we're also at the review step, and in this stage it is actually deploying into a real environment, and I can come in here and go see my actual review app. I'm going to come back, because I missed where the link was. Here we are, right at the top, of course. It's created the environment for me, and it's created a deployment. I can go right here and hit View deployment. We can see that indeed it has the change. I can come to the merge request, and you can see that the pipeline is complete here. We've looked at this, and we've confirmed that we actually like the way it looks; it does actually contain the changes it's supposed to. Now I can hit Accept merge request. Once this is done, it'll run a new pipeline after it completes shutting down the prior one. As we said, once you've merged your branch, it's automatically brought in, and it will shut down any existing resources used for a review app. Now I can go back over to Mattermost, back into ChatOps. We know we've got those pieces in place, and they're doing their job here. So, back in chat, I can actually say to do the deploy from staging to production, and it'll tell me that it's doing the deployment for us, and I can follow its progress. It's running the pipeline to do the actual deployment from our staging environment, after the review app was merged, to push it all the way out into production. Completed and good. Now it's providing me a link over to the environment. From here I can see that this is indeed production, and I can go hit View deployment. And we hit an occasional thing that we have seen: Nginx takes a moment for everything to start back up. There we are. Sure enough, our production deployment now shows more than hello world. Now I can go back to environments over here, and I can look at some of the statistics of everything that's been going on.
Right here on the right, we're going to follow staging over and look at what our monitoring has been showing. You can see the CPU utilization over the past span, and it will continue to track into the future, along with how much memory we're using. Obviously we're not doing much here, so the memory isn't really going to change much. I can go back to production for the same reason. You can see that since we've just deployed into production, suddenly we have a bunch of things going on, but not much. Now, should I want to, for some reason... I wouldn't use production, so I'm going to use staging, and I'm actually going to pop into that actual deployment and... have a classic technical failure. Okay. Well, this does work normally. It's decided to bite me. At least it's not USB. So let's take a look at these analytics I keep talking about. I'm going to go back to the project, and I'm going to come over here to cycle analytics. We have everything all the way through: from an issue being created in ChatOps, to working out the plan of what we're going to actually do in making changes, the tests and how many seconds they took to run, the review process and how long that took, the staging time, and how long it took to get all the way to production. This entire span took us only a couple of minutes, but now we have everything we need to be able to look across the entire cycle and get feedback. We know when our changes are happening. We can review them before we put them into staging as a merged item, and before we deploy them into production, which can be done entirely from our integration with Mattermost. Then people who aren't involved with the project but do need to know how well we're doing can come in and see how well we're keeping up, how long things take, and whether our estimates of how long changes take are actually accurate.
So in less than 10 minutes, we took an idea all the way from the beginning, creating an issue in Mattermost chat, and automated the whole process through. We put it onto an issue board, coded the changes, committed the changes into the repo, tested them with CI and CD, reviewed them, deployed them, and made the necessary changes in production. This is all on top of the container scheduler that is Kubernetes, and Docker, which allows us to run all the components of GitLab and all of the integrations you need for your workflow in one single place. Now, Rick, I will hand this back to you if you want to talk about what Kubeless is. Okay. So now I want to talk about the last piece of the puzzle for production Kubernetes deployment, and this is what we call Kubeless. Kubeless is the Kubernetes-native serverless solution. What Kubernetes-native means is that it uses entirely the Kubernetes primitives and the Kubernetes APIs to offer you serverless computing right on your Kubernetes cluster. What is serverless computing? Well, it is a way to simply write a function, focus on your function, write a function that works, and deploy the function to your cluster. You don't have to create a container or create a Kubernetes deployment as if you're making a whole application; you focus on just creating one useful function and deploying that function itself. And I'll show you in a minute how easy that is. But what do people use serverless for? Well, they use it for things like quick integrations between applications running on a cluster, or for extending features of an application on the cluster, especially if that application offers an API. People even use it to build different ways of monitoring or interacting with the Kubernetes cluster itself. Every function is either triggered by an HTTP call or triggered from a pub/sub topic.
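To make "just write a function" concrete, here is a minimal sketch in the style of Kubeless's Python runtime. The single context argument matches the function described later in this demo, but treat the exact signature as an assumption and check the Kubeless documentation for your version.

```python
# Minimal sketch of a serverless function for Kubeless's Python runtime:
# a plain function that takes the request context and returns the
# response body. No Dockerfile and no Deployment manifest are written
# by hand; Kubeless builds those on deploy.
def hello(context):
    return "Hello from the cluster!"

# Locally it's just a function, so it's trivial to exercise:
print(hello(None))  # -> Hello from the cluster!
```

Once deployed, the same function is invoked by its HTTP trigger or by a pub/sub event, with the request details arriving in that context argument.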
Pub/sub means that applications can publish events to Kafka, and your function can subscribe to those events and then publish other events itself. Kubeless is brought to you by Bitnami, but it's entirely open source. At the end you'll see there's a link where you can go and do pull requests, log issues, and use it. It's totally free to use. As I said, it's very functional now, but it's new, and we have some things coming up on the roadmap. As of now you can try the GUI we have for it, which makes it quite easy to use. And we're going to be focusing on some usability areas, like faster starts, and also decreasing the amount of resources that it uses through pod sharing. That will all be clear in a moment. So I'll show you here: I have the Kubeless CLI installed, and the Kubeless CLI lets me do important things like managing functions. First I'm going to see if there are any functions already on my cluster, and I'm going to use the kubeless function list command. Indeed, it's gone to the cluster and found that there is a function already called tweet, and it tells me the namespace that it's in. It tells me the handler, which means the code file that it's in. It's telling me that it's Python 2.7, and it's telling me that this particular function has an HTTP trigger. The topic is empty because it's not using pub/sub, and it says that there are some dependencies: the Python Twitter and Kubernetes libraries. So how did I get that function up there? Well, first of course I had to write a function, and I have the function right here. What this is is some Python code which imports two libraries that I need, the base64 library and the Twitter library, and then it imports some modules from the Kubernetes library itself. So now that I can program with base64, Twitter, and Kubernetes, I can put it all together. These blocks of code here retrieve secrets from Kubernetes.
So what I've done is created a Twitter account and gotten secrets so the code can interact with that Twitter account. What you don't want is to put those secrets in your code, because then you can't share the code: if you accidentally publish it to your GitLab project in a public way, people can see your tokens and steal them. What Kubernetes lets you do is upload your tokens directly to Kubernetes, and then use function calls like these to get the secrets out of Kubernetes at runtime. And then you can see that I can add those to the Twitter API call here. So that's all setup, right? That gets the tokens and sets up the Twitter API so that I can use it. But for us the interesting part is down here, in what's called the tweet function. Context is the name I gave the web request object that gets passed in. That object has a property called json, which simply exposes the request body as JSON, which makes it very easy to use from a Python file. So what you see here is that I'm actually expecting to be posted form data about a tag in a repo in GitLab, and then to go ahead and tweet information from that. In other words, this function adds an integration between GitLab and Twitter via a very simple function. So the function's already up there. You may be wondering, well, how did I get it up there? That was pretty easy. This shows how to deploy a function, and I split it out so it's a little easier to read. From the command line, you can simply say kubeless function deploy, then the name you want to give the function; in this case, I want to call it tweet. Then you just need to give it some basic information. First I tell it that it's triggered by HTTP; in other words, hitting a URL will cause the function to fire. Secondly, I tell it that it's a Python 2.7 runtime. We support two other runtimes right now.
You can use Node.js and you can use Ruby. It's very simple to add other runtimes, so if your organization has other scripting languages that are important to it, you can make runtimes for those languages or versions. The next line, which says the handler is sendtweet.tweet, is saying: in the sendtweet module, use the tweet function. And the line after that just says that there's a file I'm uploading with this command, called sendtweet.py; if you're familiar with Python, that means sendtweet is the name of the module I'm uploading. And finally, I have a list of requirements, which are those two dependencies I listed before. So that's all it takes for me to create a function, and I can run this one command that uploads the function, and Kubeless takes care of deploying it. And I think the best way to explain this is to turn it back over to Jason so he can show you how to make it work. Thanks for handing it back. So just to save ourselves a little time, I've already created yet another project, this time under the Bitnami group. We've called it just "Kubeless function." In this project we don't have a whole lot; it's all of a readme that says, hey look, it's a Kubeless function demo. What I've done, in order to extend the capabilities of GitLab without a native integration, is make use of webhooks. So we can go over to Settings, then Integrations, and look at the webhooks. There's a documentation page that can be reached right from here on the Integrations page; just click on the webhooks link and it'll take you over there, and it describes all of the JSON that Rick was talking about. Now, what I would normally do is put in a URL, and then give it a secret token if we decided to set that up with a remote service of some kind. And then I would set which triggers I want to actually cause the call of that particular webhook.
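When one of those triggers fires, GitLab POSTs a JSON payload to the webhook URL. Here is a hedged sketch of the step that turns such a payload into tweet text, roughly what the sendtweet.tweet function does; the field names (`ref`, `project.name`) follow GitLab's documented tag push payload, but the helper itself is purely illustrative.

```python
# Sketch of turning a GitLab tag-push webhook payload into a tweet string.
# The payload fields are assumptions based on GitLab's tag push event docs.

def compose_tweet(payload):
    """Build a short announcement from a GitLab tag-push webhook payload."""
    # "ref" arrives as e.g. "refs/tags/v1.0"; strip the prefix to get the tag.
    tag = payload.get("ref", "").replace("refs/tags/", "")
    project = payload.get("project", {}).get("name", "unknown project")
    return "New tag %s pushed to %s!" % (tag, project)
```

Inside the deployed function, the result of `compose_tweet(context.json)` would then be handed to the Twitter API call set up earlier.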
And what'll happen is it will perform a POST request in the background, with all of the JSON data that we need filled in. I've already created one earlier that calls over to the default namespace and calls the service for tweet, thereby calling the Kubeless function. What we've set this one up for, as you can see, is tag push events. So what I'll do is come over and make a very small change, and once I have that change in place, I'll tag it. Okay, now we have the changes in place. If I look at this, we have the readme, and it's now showing the changes. I'm going to come over to Tags for my particular repository, and here's my previous 0.1 tag. I'm just going to do this from the UI, because I don't feel like showing everybody the console. I'm going to call this one version 1.0, which is for master, and say "demo is now complete." What this will do, when I hit Create tag, is trigger that remote webhook call in the background, and in this case it causes a tweet to occur. Now, if I go over to the Twitter account we've set it up with, you can see that through the use of GitLab with webhooks, and then extending that with Kubeless to call our function that's actually tweeting via Python, we now have a public notification to any of our followers that we've pushed a new version tag to the repository. So this shows what we can extend, and how you can extend it in a simple way, without having to worry about all the abstractions below what you want to get done; you can just get it done. Rick, I'll hand it back to you. Okay, I think that is the end of the prepared content. How about I turn it back over to Mayvian and she can emcee any questions? Does that sound good, Mayvian? Sounds great. So, if anyone has questions, feel free to add those to the chat client. I know I saw one or two to get us started, so let me get those up and going.
So, the first one is about Kubernetes in general, so either of you could answer if you'd like. The question is: can Kubernetes run on bare metal? For example, can it replace hypervisors like OpenStack or Proxmox, or does it sit on top of hypervisors? Rick, would you like that one, or me? Why don't you take that one, Jason. Sure thing. So, Kubernetes can be run on top of multiple operating systems; it's a software stack that does the container orchestration and management. You can do this on bare metal; it's not hard. It's actually very easy to do with operating systems such as CoreOS and other container-focused distributions of Linux. You can also use it on top of a hypervisor, similar to the way this demo was actually run today, which is on Google Container Engine, running on top of KVM on their Linux servers. So it runs both ways: you can run it on a bare metal box, you don't require KVM, but it runs perfectly fine in KVM. Okay. So, then, the next question. There are a few questions in regard to Kubeless. So, Rick, could you give some of the attendees real-life examples of Kubeless functions? They say it seems a little fuzzy from a web developer's point of view. So, speaking for serverless in general, people do write applications in a serverless way. For example, you can imagine that I write a function with, say, Node.js, which returns a web page, and then I write another function that presents a REST API call, and then I hook those together, right? So that is a practice. But the more common use cases we're seeing are, for example, extending an API. Let's say there is an API that provides a complete list of everything from a database, but the API itself does not offer a search function. Well, you can imagine how easy it would be to write a function that simply iterates over the list and only returns the things that match your search terms.
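The search example just described can be sketched in a few lines of Python. The filtering logic is the real content here; `fetch_all` is a purely hypothetical stand-in for the existing API call that returns the complete list.

```python
# Sketch of extending a list-only API with a search function, as described
# above. `fetch_all` is a hypothetical placeholder for the real API call.

def search(items, term):
    """Return only the items whose text contains the search term."""
    term = term.lower()
    return [item for item in items if term in item.lower()]

# Wrapped as a serverless function, this might look something like:
#   def handler(context):
#       term = context.json.get("q", "")
#       return search(fetch_all(), term)
```

The point of the example is that the function stays tiny: no container build, no deployment manifest, just the one piece of logic the API was missing.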
So you can extend an application or an API's existing functionality in that way; that's a more usual case. The case we just demonstrated, connecting applications with ChatOps, is also very common. For example, you can imagine that you have an application where users create documents, and those can fire events into Kafka. Then you can subscribe to those events via Kafka and, say, update your Mattermost or your Slack channels with information about the document that was just created. In that way, you can integrate two applications together. Those are very common use cases. Another common use case we've heard of is people actually using it to create Kubernetes operators directly, because you're right on the Kubernetes cluster; it means you have access to the whole Kubernetes API, so you can provide interesting insights into what's happening on your cluster without having to go through a whole application deployment process. Did that answer the question? Yes, they said thanks. David asks: is Kubeless event-driven? Yes, there are two kinds of events. One is a PubSub event via Kafka, and that is typical PubSub semantics with topics and so on. The other is just an HTTP trigger. So what Jason demonstrated was an event occurring in GitLab, which has a webhook that essentially triggers a call to that function. Those are the two kinds of events. Okay. So Scott, one of our attendees, has asked, and you covered this in the very beginning, but just to reiterate: are Bitnami application containers preconfigured to speak to other services, or is that a manual effort? I'm not sure what he means by services in this case. If he means, for example, is the Ruby on Rails container configured to speak to MySQL? Absolutely. So if you look on bitnami.com/containers, you'll see development containers and you'll see application containers.
The best and easiest way to access those is to go to the GitHub repositories (there are links right there for them) and get the Docker Compose files. The Docker Compose files will put all the containers that you need together, and they're configured to just work with the one command, docker-compose. We call that "curl up": you curl a file, and then you say docker-compose up. It's only two commands to get the whole stack working, whether it's a development container or a turnkey application. Okay. Great. And then I think we only have time for one more question. Let's find one; there are a few. This one is just more of a general type question, I think. This is from Vladimir. He asks: when running a VM, the stacks are based on Linux applications that you can install and remove as needed; with containers, we seem to be tied to the software suite. How can you transition from software stacks like that to microservices? They're more interested in running applications written in PHP. That's a good question, and while it might be slightly off topic, I'm willing to field some parts of it. So your familiarity is with individualized systems and having the chunks that you need individually installed, probably prepared through, let's say, Vagrant and Chef, right? When you're using microservices and containers, what you're doing is producing a very minimal system that has only the requirements you need for your code or your application. So in the case of PHP, you would have just PHP, the modules you're going to be using in your particular case, and all of their necessary components: the MySQL client libraries and OpenSSL, a couple of things like that, but absolutely nothing else. Then PHP would be running in a container, and you would hand it your code base to actually run the application.
And then you would provide a second service, which is MySQL in the case of the Bitnami example, and your PHP application would talk through orchestration to that MySQL service. Then you have those two separate pieces: your MySQL is just itself and it's isolated, and your PHP is just itself and it's isolated. This is similar to how you would lay out a PHP server, where you'd have MySQL on a large host and then multiple front-end servers actually doing the PHP responses so you can balance the load. With containers, what you can do is set up multiple instances of MySQL, which can share their data through replication. And then when you need to scale up your PHP, say you don't have enough front-end servers, or you don't have enough to handle both your API calls and your front-end services, you can say: spin me up a new PHP container, but only run the API set, and give me three copies of that. And it will spin up three of those and then route just the API requests to them. So where does this differ from VMs? With Kubernetes and Docker, what you can do is set up nodes and pods. A pod is a group of containers, and a node is a host on which containers run. One of the things Kubernetes does for you is intelligently balance those loads across multiple nodes. So if one node, one server, happens to go down, you won't lose everything. You might lose one or two requests from your PHP application, but everything will keep going, because the load balancers that stand in front will automatically shift over to the pods that are still working. Does that help? He says yes. Thank you, Jason. All right, everyone. Well, just to wrap up the webinar, I want to thank Jason and Rick for presenting on behalf of GitLab and Bitnami today. Thanks for joining, everyone.