Well, good afternoon, good morning, or good evening depending on where you are in the world, and welcome to today's DevOps.com webinar. I'm Charlene O'Hanlon, moderator for today's event. As always, we have a great webinar on tap today with lots of great information, but before we get started we have a few housekeeping items to go over. First of all, today's event is being recorded, so if you miss any or all of the event, you will be able to access it on demand later on. Following today's webinar, we'll be sending out an email that contains a link to access the webinar on demand. We are also taking questions from the audience, so if at any time during today's presentation you have a question for either one of our panelists, please don't hesitate: just use your GoToWebinar control panel and submit your question, and we'll take a few minutes near the end of today's presentation to go through those audience questions. And finally, at the end of today's webinar we are going to be doing a drawing for three $50 Amazon gift cards, so stick around; hopefully your name will be one of the three chosen. Okay, with that we'll go ahead and kick off today's webinar, which is Running Containerized Applications on Modern Serverless Platforms. Our speakers today are William Chia, who is in product marketing at GitLab, and Ahmet Alp Balkan, who is a developer advocate at Google. Welcome, gentlemen. Thank you so much for joining me today. Hey, thanks, Charlene. We would love to kick it off with a poll, just to ask a few questions of the audience, and we'd be happy to share the results as well, to see who's on the call today and how we can shape the content. So the first thing we'd like to know is: how often does your organization deploy to production?
And so, you know: every six months or longer, maybe one production deployment every six months or longer; once per month, or maybe once every few months; once per week or a few times per month; or once per day or multiple times per day. How often does your organization deploy to production? It looks like most folks have entered their choices, and, Charlene, can we show the poll results? Yeah, sure. Let me go ahead and close it out here and show the results. And there they are. So it looks like the majority of folks on today's webinar are deploying anywhere from once a week to a few times per month. We have some folks who are deploying multiple times per day, and we have some folks who are doing one release every six months or longer. So a spread, with most folks doing a production deploy about once per week. The next question we have, if you can pop that one up, Charlene. Sure, happy to. Which cloud providers are you using today? Something we're definitely interested in is: are you doing everything on premises or in a private cloud? Are you using one of the major cloud providers, say Google, AWS, or Azure? Or are you perhaps using another cloud provider? If you're using another provider, feel free to drop into chat which other providers you're making use of. And certainly feel free to check all that apply; we know some folks are using a single cloud provider, and some folks are also doing multicloud, so I think it'll be interesting to see how this data comes out. I have some guesses on how the graph will go based on where the industry is, but it'll be good to see where this webinar's audience is, and if we can show the results here to the folks on the call, we can see who's on the call.
So, AWS at 50 percent. What's very interesting here is that about 39 percent of folks on the call are doing no public cloud at all: everything is either on premises or in a private cloud, plus some other types of cloud providers, so that's definitely interesting. Just a couple more questions and we'll kick into our content. The next question we'd like to ask is about Kubernetes. You may or may not be familiar with Kubernetes; we will be talking about it and giving you a full introduction today on the call. And we're just curious where your organization is at. Perhaps you're not using Kubernetes and not looking to use it; perhaps you have some plans to use it; maybe you have started testing with it today; or maybe you are running Kubernetes in your infrastructure for production deployments. It looks like most folks have answered the poll there, so let's go ahead and show the answers to the crew so they can also see who's on the call. This is pretty interesting to me. It looks like we have several folks, almost a third at 30 percent, using it in experimental or non-production environments, and just a few, about 22 percent, who are not using Kubernetes or not planning to use it; everyone else has some familiarity or is using Kubernetes. It looks like most folks are either planning to evaluate it or evaluating it now, so I think you're on the right webinar.
We'll be talking about Kubernetes. The last question we have is the same question, but in regards to serverless; of course we'll be talking about serverless today on the webinar. So the question is: where is your organization at in terms of serverless adoption? Similar options: either not using it and not planning to, let's say in the next six months, or some level of adoption. Today we'll be talking specifically about using containers with serverless, but this could be any serverless technology, you know, Lambda or Google Cloud Functions. Where is your organization at? It actually looks like votes are still coming in. I was going to say we could show these results, but we'll give it another moment here. I always think it's interesting when I join webcasts as well: you get a makeup of the audience, and sometimes it's a little different than what you expect. So, Charlene, if we can close this one out and show it to the folks: also interesting results. This one is pretty split across the board, with only one percent using serverless in production today. So you're definitely on the right webinar; we'll be talking about serverless, and if you're evaluating or looking to adopt serverless, this is what we're happy to share with you today. So with that, I would love to close down that poll and share a few slides here to chat about what serverless is and give a bit of an introduction. First, just to level set for everyone on the call: what is serverless?
Really, the way Ahmet and I are thinking about this is in terms of two components: an operational model and a programming model. With the operational model, serverless is about not needing to manage any of your own infrastructure. The traditional serverless services are things like AWS Lambda, Google Cloud Functions, and Azure Functions, and the idea here is that you write a single function and upload it to one of these cloud providers. I'll show you what that looks like. This literally can be as small as a single hello-world function. This is an example from Google Cloud Platform, uploading to Cloud Functions. So you write the function, and you can either put it into the web interface, as I'm showing here on the screen, or, very often, you zip up that code and upload the zip file, and then all of the operations are taken care of for you. What the cloud provider does is event driven: when an event comes in, it will spin up the infrastructure to run that function and execute it, and if there's more load, it'll process that load. But after a certain period of time, usually five to fifteen minutes, it will actually shut down that infrastructure. What's really nice about the serverless model is that you pay only for usage. That means if your app is getting hit with a lot of requests, the cloud provider will spin up more and more infrastructure to handle that load. It'll scale for you elastically, so you don't have to worry about load or any operations, and if there's no load on your app, it will scale all the way down to zero. So there are a lot of benefits with serverless technology, and in particular serverless functions. However, serverless is more than just functions. Often when I'm having conversations with various folks, whether I'm talking to customers of GitLab or users or folks at a conference, people tend to think of serverless and equate it with functions, or Functions as a Service, what we
would call FaaS. But really, serverless is so much more than functions, and that's what we'd like to talk about today. One thing that's pretty common is folks will say functions go together great with managed services. So also in the camp of serverless: if you are building a serverless application or serverless architecture, it will almost certainly include some type of managed services, usually from one of the major cloud providers or some type of SaaS provider. That could give you a messaging queue or a database; it could be AI, or it could even be some amount of compute that you're using to augment and manage your other services built on functions. These types of managed services go really well together with functions, usually in a microservices type of architecture. But if you're using functions, they don't apply to every use case, and there are some challenges. When you start using serverless functions, one of the challenges is that if you have an existing application and you want to move it onto a FaaS, or Functions as a Service, platform, your existing app will need to be completely re-architected, and usually pretty dramatically. That's one of the challenges. Another really big challenge, one that I hear extremely often, is that with FaaS there's vendor lock-in.
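To make the earlier hello-world point concrete: on a FaaS platform, the unit of deployment really can be a single function. Here is a minimal sketch in the style of a Google Cloud Functions HTTP handler; the function name and response text are arbitrary, and on the real platform the `request` argument would be a Flask request object supplied by the provider.

```python
# A minimal FaaS-style HTTP handler. On Google Cloud Functions, you would
# upload just this function (or a zip containing it); the provider wires it
# to an HTTP trigger, scales it up under load, and scales it to zero when idle.
def hello_world(request):
    """Respond to an HTTP request with a fixed greeting."""
    return "Hello, World!"
```

There is no server process, port binding, or process management in your code at all; the operational model is entirely the provider's responsibility.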
So if you write all of your functions on Lambda, and then later you want to use Google Cloud Platform or Azure Functions, shifting between those platforms can be very difficult, especially the more managed services you consume: a lot of times they're very integrated, and there's not a lot of portability. It's just one of the things that comes along with using functions from a cloud provider. Another challenge: if you are used to developing in a certain way, or your engineering team is comprised of engineers who have a certain set of development skills and architectural patterns they understand, programming with functions is a whole new paradigm to learn. There are a lot of things you need to do differently from what you're used to. Those new paradigms can be a learning curve if you haven't used them before, and it can be difficult or expensive to hire developers who understand how to program with functions, because it is new. Additionally, local development is very difficult when you're using Functions as a Service. The reason why is that you upload your functions to the cloud service, and then all of your different functions interact with each other there; replicating that entire cloud service in your local development environment is very difficult. It is possible, but it requires a lot of sophisticated engineering, usually setting up some type of stubs or mock services that send you pre-programmed responses, and it's really almost impossible to test locally. You could have some types of proxies where you're funneling production data down to your local development.
You can see it's very difficult and very complicated. One of the challenges of using functions is that local development is just not going to be like what you're used to. And finally, there are new architectural patterns to learn. If you're used to building an application that runs in a virtual machine and has certain components, breaking that out into a set of services is an architectural shift, and then breaking those services into individual functions is another architectural shift. Learning that new architectural paradigm can be a challenge. With that, though: functions are good for a lot of use cases, but so are containers. And so I would like to pass over to Ahmet to share a little bit, and I will pass the screen share over to him as well. Okay, hopefully my slides are coming through. So thanks, William, that was a great introduction to functions. When we think of serverless, we tend to think of pretty much functions and Functions as a Service only. But as William said, containers actually give us the opportunity to do pretty much whatever we want, in a way that we're already used to, right? So, first of all, containers are portable: when you build a container image, it runs everywhere the same way, and you can go from one cloud provider to another. Containers are flexible: you can use any language, framework, or dependency that you like. If you wanted to run, let's say, a Fortran service in containers, you could totally do that. And finally, containers are reproducible. This gives us the flexibility to have something run locally the same way it runs on a cloud provider. So containers are this thing we've already had for a while. So why are we not using containers, right?
Why don't we just put our applications in containers and run them in a serverless fashion? So let's think about that: what is the best place to run containers? This brings us to Kubernetes. If you don't already know it, Kubernetes is the de facto platform for running containers on a set of machines that you own. Kubernetes was initially developed at Google, and it is now one of the most popular open source projects. But Kubernetes is not optimized for the serverless paradigm, and let's talk about that a little bit. Why is Kubernetes not great here? First of all, Kubernetes does not have a notion of fast, request-based autoscaling. Kubernetes has a slower autoscaling mechanism that lets you scale on CPU or memory, but it doesn't really have request-based autoscaling. Another serverless property that we see in Functions as a Service is scale to zero: when you're not handling a request, ideally nothing is running, and we have this notion of request-based activation. If you've ever heard of cold starts, this is what we're referring to: basically, a request comes in to your service, and then your service actually starts, handles that request, and, ideally, goes away after a while. And Kubernetes does not have a notion of events and event triggers. Kubernetes just runs containers; it doesn't really know what is going on inside a container, and it doesn't know anything about events. Serverless platforms like AWS Lambda and Google Cloud Functions, and so on, have this notion of events, which Kubernetes is lacking. And finally, you need to worry about the operational aspects of running applications on Kubernetes. For example, you need to worry about networking and load balancing. What is the process life cycle?
How do I do restarts? How many replicas do I run at a time? How do I set up autoscaling, as I mentioned earlier? How do I do my rollouts? What memory and CPU should my container run with? Ideally, in the serverless world you worry less about the operational aspects and you focus on the code, and once you deploy your code, the rest is pretty much taken care of by the infrastructure provider. So what we're seeing is that, at the end of the day, operators want Kubernetes. Kubernetes is great for orchestrating infrastructure and microservices. When we talk to operators, they just want Kubernetes: "Hey, I have these machines, can you manage them for me?" So, for example, we have Google Kubernetes Engine, which is the managed Kubernetes service we have on Google Cloud. People come to Google Kubernetes Engine and say, "Hey, give me ten machines," and they get those ten machines, fully managed by Google Cloud. The problem with Kubernetes, though, is that it's not the right abstraction for developers. If you're a software engineer writing code, you have to go learn a lot about infrastructure concerns and the Kubernetes API, plus best practices about Kubernetes and so on. So Kubernetes is not the right abstraction for developers, for sure. On the other hand, when we talk to developers, we're seeing that most developers don't actually care about infrastructure and operational issues at all. They just want to write code that directly benefits their business, so they don't want to be in the business of doing operations. They don't want to be constrained by a framework or a language; they want that full flexibility.
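Circling back to the autoscaling point for a moment: in plain Kubernetes, autoscaling is typically expressed as a HorizontalPodAutoscaler that reacts to CPU or memory utilization, not to incoming requests, and that can never scale below one replica. A hypothetical sketch (the deployment name and targets are placeholders, not from the talk):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app               # hypothetical deployment name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 1             # note: plain Kubernetes cannot scale to zero
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # scales on CPU, not on request volume
```

Request-based activation and scale-to-zero are exactly the pieces that are missing here, which is what the rest of this talk addresses.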
They want to say, "I want to use Python 3," "I want to use Python 2," "I want to use Java 8," or maybe a Java version that is from 12 years ago. They want that flexibility, while still having a high-level platform that lets them deploy their applications, right? And finally, they don't want to manage the infrastructure at all. Most serverless platforms give them that: the reason people like platforms like Google App Engine, Google Cloud Functions, AWS Lambda, and so on is that you just write your code and the infrastructure gets out of the way. So why can't we have both? Containers give you the flexibility that you need. What if we had an option to run containers on fully managed infrastructure, but at the same time could also have containers run easily on infrastructure that you own? Because most of you are not running directly in a fully managed environment; you're probably still running virtual machines, or running on bare-metal servers that you have. So what if you had an easy way to run containers, like a Kubernetes cluster, but still have that serverless developer experience? This brings us to the Knative project. I'm going to talk a little bit about Knative; I'm not going to go into too much detail. The Knative project was announced last year, along with partners like IBM, Red Hat, and Pivotal. Knative is a set of open source building blocks for creating serverless experiences on Kubernetes. So what does that mean?
First of all, Knative has an open API that defines what serverless experiences look like on Kubernetes or on any other platform. And secondly, it is an open source implementation, which I'm going to explain in a little bit. If you are interested in Knative, you can find more information at knative.dev. So let's see what Knative brings to the table. First of all, the open API: Knative is trying to standardize serverless containers, just like Kubernetes standardized running containers; Knative wants to standardize the notion of having serverless containers and events on a Kubernetes cluster. So Knative has some properties that Kubernetes lacks, which I mentioned earlier: for example, scale to zero, or zero to N, which is the fast request-based activation and scaling; and it has the notion of event sources, event handling, and event delivery. If you've used Kubernetes before, or roughly know how it works, the power of the declarative APIs in Kubernetes manifests itself in Knative as well, and that gives us a really powerful language to express serverless containers. So let's talk a little bit about Cloud Run. Cloud Run is a new product from Google Cloud. It is a serverless compute platform to run serverless containers. Cloud Run is built from the Knative project that I just described: we took the open source implementation and pretty much re-implemented it internally so that we can run it at Google, and it uses the same API, which is portable. So you can install Knative on your own Kubernetes cluster, and it will work the same way that Cloud Run works on Google Cloud. And we also have an option to run Cloud Run anywhere you want; let's dig a little bit into that. So why is Cloud Run interesting? First of all, it lets you run any language, any binary, any library, because we're using containers, right?
If you wanted to run COBOL in containers in a serverless fashion, you could totally do that, because it's container based, and we know you can install anything in containers. And it still has the pay-per-use model that you have in serverless platforms: you're only paying while you're processing an HTTP request. If you're not getting any requests, you're not actually charged. On top of that, if multiple requests are overlapping, you're not double charged, as opposed to some platforms like AWS Lambda or Google Cloud Functions, which charge you per request. Cloud Run doesn't charge you per request; it only charges you while you're processing HTTP requests. And certainly Cloud Run can go from running nothing to thousands of requests per second really, really quickly. You can be running nothing, and the next minute you'll be handling 10,000 requests or more, within just a few seconds, without dropping any of the requests. This is really powerful, and this is why people really like serverless. If you were managing servers, going from nothing to tens of thousands of requests would require you to provision a lot of infrastructure ahead of time. We don't have that problem with Cloud Run or other serverless platforms. And lastly, as I explained earlier, Cloud Run really gives you the option to choose where you're running. We have two options, and I'm going to talk a little bit about each: one is running fully on Google's infrastructure, and the other is running on a Kubernetes cluster that you already have. So, as I said, on the left you see Cloud Run. Cloud Run runs your containers on Google's private infrastructure: you don't see any virtual machines, there are no instances laying around; you just give us the container, and then we run it for you.
This is great for teams and developers that don't want to do any infrastructure and operations, and you only pay for what you use. And as I said earlier, Cloud Run implements the Knative API, so this is, again, portable. On the right I have Cloud Run on GKE. This is a flavor of Cloud Run that is installed on top of Google Kubernetes Engine clusters. Google Kubernetes Engine is a standard Kubernetes distribution, and we add an add-on called Cloud Run to it, and suddenly your Kubernetes cluster is supercharged with serverless powers. If you're already using Kubernetes on Google Cloud, by enabling this you get that serverless developer experience of simply deploying containers, and you get that fast request-based activation and autoscaling; you get all these superpowers that come from Knative. Cloud Run on GKE actually uses the open source Knative implementation: it is not a re-implementation of the same API, it is just the open source Knative distribution. Both services give you the same developer experience, same API, same image format, and same command-line interface. They're practically the same thing; they just run in different places. Cloud Run on GKE runs on virtual machines that you have in your Kubernetes cluster, so instead of the pay-per-request model, you're paying for the virtual machine instances that you already have. So what is Cloud Run good for? Let's talk a little bit about this. If you have an application that does processing when a request comes in, Cloud Run is a great fit. Cloud Run doesn't support background processing: for example, let's say you have an application that wakes up every five minutes, does something, and goes to sleep again; Cloud Run is not great for that. If you get no requests, we actually throttle your CPU almost to zero, so there is no background processing; there's only processing while you're handling requests.
And Cloud Run is great for going serverless without changing your code. If you already have a Python or Java application out there, you can just put it in a container, and suddenly you're running in a serverless fashion where you don't have to manage any of the infrastructure, and you get all the autoscaling capabilities out of the box. And lastly, let's say you want to move out of a container-based infrastructure that you manage, say a Kubernetes cluster, because you want less cluster management: you don't want to deal with managing virtual machines, Kubernetes upgrades, or operating system upgrades, or with the system dependencies in your environment, and you want faster autoscaling. That's another reason to move from Kubernetes to Cloud Run. So lastly, I'm going to finish my presentation with a demo, and then I'm going to hand it back to William. In this demo, I want to show what we've been discussing. First, I'm going to walk you through a legacy application, and I'm going to explain why this is a legacy application and not a greenfield application. Then I'm going to explain how to put it in a container image using Docker, and I'm going to go serverless from there: I'm going to deploy this container to Cloud Run, and I'm going to show you how this application runs on Cloud Run on an endpoint that is HTTPS, fully managed by Google Cloud, and scales automatically. Then I'm going to take my application from Cloud Run, where Google manages the infrastructure, and put it on a Kubernetes cluster that I manage myself. And then I'm going to take it from there and put it on another cloud provider, just to show you that Knative is portable and vendor neutral.
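The "put it in a container image using Docker" step in that plan usually comes down to a short Dockerfile. Here is a hypothetical sketch for an app like the one in the demo, a Python server that shells out to LibreOffice; the base image, package names, and entry point are all assumptions, not the demo's actual file:

```dockerfile
# Start from a Python base image, as the demo describes.
FROM python:3.7-slim

# Install LibreOffice and other system-level dependencies that a typical
# FaaS platform would not let you install.
RUN apt-get update \
    && apt-get install -y --no-install-recommends libreoffice \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

# Cloud Run sends HTTP traffic to the port named in the PORT environment
# variable (8080 by default), so the server should listen there.
CMD ["python", "server.py"]
```

Because the result is just a container image, the same artifact runs locally with `docker run`, on Cloud Run, or on any Kubernetes cluster, which is the portability the demo is about to exercise.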
So I'm going to exit my presentation right here, and I'm going to walk you through the application that I mentioned earlier. This is the PDF application that I already have. As you can see, this application basically converts doc files to PDF, and let's see how it actually does that. I have this Python application here; let's click into it. The gist of this application is basically down here: I have a function that calls OpenOffice, or LibreOffice. As you know, LibreOffice is something like ten years old, it's written in C++, and I'm invoking it from a Python application. The way I do that is, if I go back, I'll show you this Dockerfile, which is the file we use to build containers. It basically says: hey, I'm going to use a Python image, I'm going to add LibreOffice to it, plus a bunch of other dependencies that you normally wouldn't be able to install in a serverless environment, and then, finally, I'm going to run this server application. So if I go to my demo environment: this is the Google Cloud Platform console, if you're not familiar with it. Let's go to this PDF directory. The way you build this application is you type docker build, you specify where you want to build this image, and then Docker goes ahead and kicks off a build. I'm not going to wait for this to complete; I already built and pushed the image for the sake of this demo. Next, I want to show you how to deploy this to Cloud Run. Again, Cloud Run is that environment where I deploy HTTP-based containers and it's fully managed by Google; I don't have to do any operations whatsoever. To show how that actually works, I'm going to click "create service" here, and I'm going to type the same container name I just described earlier. Once I type that, it basically asks me: where do you want to deploy? Do you want everyone to access it?
I'm going to check these boxes, and I'm just going to hit create. Once I do that, what's going on behind the scenes is that Google's managed infrastructure is downloading my container image, and it's using infrastructure already provisioned by Google to serve this traffic. So in less than, say, 15 seconds, the application is actually deployed, and I get a full HTTPS endpoint from Google. Once I click it, after a few seconds of cold start, hopefully my application is going to load. There it is: this is the application you use to convert a doc file to PDF. There's an example down here; I'm actually going to click it, and it's probably going to download that doc file first and then convert it to PDF. Again, this is a container that runs OpenOffice in it, and normally, if you were running on, say, Google Cloud Functions or AWS Lambda, which is what we traditionally associate with serverless, you wouldn't be able to do that. Now you have the ability to do this on Google Cloud Run. It seems like my document has been converted to PDF just fine. So I'm going to go back, and I'm going to show you that here I have this Kubernetes cluster on Google Cloud; again, it's a GKE cluster. I'm going to click on this cluster, and I want to show the add-on that I mentioned earlier that is already installed on it. If I scroll down here, it says "add-ons," and you see it says Cloud Run on GKE. This is the thing that makes my GKE cluster run these Cloud Run applications the same way I run them on the managed platform. So if I go back to my console again, let's take this demo application, and I'm going to deploy another service to Cloud Run. But this time, instead of deploying to Google's managed infrastructure, I'm going to deploy it to the GKE cluster that I already have.
So if I go ahead here and click on the location, you're going to see that there's us-central1, which is fully managed by Google, but I also have a GKE cluster that I can deploy to. So, the same way, I deploy this application. What's happening right now is that I'm using the same API, the same command-line experience, and the same UI to deploy to both platforms. And again, in a few seconds this application is deployed to GKE. If I click on the URL it gave me, I get the same experience; if I click the same demo, the whole thing works the same way, because it's containers, right? So let's look a little bit behind the scenes at what is actually going on here. If I come to my cloud console here and say gcloud beta run services list (let's type that correctly; a bunch of warnings), you're going to see that I have this PDF service. Then I say gcloud beta run services describe, because I want to see what's going on inside my service, and I specify that I'm running on the managed platform, my location is us-central1, and I have this application called pdf. If I take a look at the output of this command, I'm basically seeing a YAML file which is oddly familiar: it has an API version, it says kind Service, it has metadata. If you've used Kubernetes, you'll recognize this, because this is a Kubernetes-style object running on Google's infrastructure. So what I'm going to do is save this thing to a file called service.yaml. Okay, let's take a look at this service.yaml. What I'm going to do here, first and foremost, is delete a bunch of things that I don't need; I'm going to remove all these lines just to show you what is going on here. All right. At the end of the day, what's going on here is that this is a simple Knative service description. It basically says: I have this image, I have this memory constraint, these are my timeouts, et cetera;
Please go run this application for me, using the Knative API. So lastly, as I said earlier, I'm going to go ahead and deploy this application to IBM's cloud. So if I go to the IBM Cloud console here, you're going to see that I have a cluster here called Knative cluster. Let's click on this. Just like what I've been doing on Google Cloud, I actually installed Knative on IBM Cloud as well. But I didn't do it myself; I didn't go ahead and manage this Knative installation. If I click on the add-ons tab up here, you're going to see that IBM Cloud also has a Knative distribution. Just like Google Cloud does, I actually went and enabled managed Knative, which gives me the serverless capabilities on my Kubernetes cluster. So you're seeing that I have the same functionality on multiple cloud platforms, and it's open source, it is an open API, and it is portable. So if I go back to my console here, and if I point my kubectl command at the IBM cluster, let's run kubectl get nodes here. And you're going to see that it says IKS here, which means IBM Kubernetes Service. I'm actually right now targeting IBM Kubernetes Service. So if I go ahead and kubectl apply the service YAML that I saved earlier, you're going to see that the service is now deployed to IBM Kubernetes Service. So if I do kubectl get services, I'm going to get this domain name that is pretty long, but it is from IBM Cloud, and it also works the same way. So if I go ahead and open a new tab and visit this application, yep, I have the same application again, the same container, this time running on IBM Cloud. So it took me about five minutes to go from something that I have locally, to Google Cloud, then to Kubernetes, and from there to IBM's Kubernetes cluster. So this is pretty much all I wanted to show. Again, Knative is that open API and open implementation that gave me that functionality.
So now I want to hand it back to William. Okay, that is great, Ahmet. Of course, this is a phenomenal developer experience that you've seen: developing locally, I want to have my application deployed. Ahmet showed how to do that through the UI, which is just a few clicks, but there are also really nice command-line tools for Cloud Run that'll allow you to do these same kinds of deployments from your shell with just a few commands. Of course, at some point, when you scale, you want to automate your deployments, and then you really want terrific CI/CD. And so what I would like to share is that most folks are familiar with GitLab as a code hosting service. So a lot of you folks on the call have your code on GitLab. You saw in Ahmet's demo, the code that he was deploying was hosted on GitLab. But what you may not know is that GitLab also has world-class CI/CD. In fact, many analysts and enterprises have written about our CI/CD capabilities as world-class. So when you're ready to automate those deployments, GitLab has tremendous CI/CD. What you may not know is that GitLab also has a whole slew of other capabilities, everything from project management through to release management, configuration, even monitoring and security capabilities. So what you're seeing on the slide right now is actually a screenshot of our homepage that shows all the categories that we have functionality in, and even some that are coming up on our roadmap. So one of the things you will see, within what we call the configure stage of the DevOps lifecycle (our goal is to have a single application that covers the entire lifecycle), is that we have serverless functionality. And so GitLab serverless is a way for you to actually have your own functions as a service, your own FaaS, or even your own container service built into GitLab.
So the advantage here is, whether you are using vanilla Kubernetes that you have provisioned yourself and are running on your own on-premises infrastructure, whether you're using Kubernetes that you've built yourself and are running on cloud infrastructure, or whether you're using a managed service like IBM's or GKE as you've seen, wherever your Kubernetes cluster is, you can connect it to GitLab, and GitLab serverless can actually deploy your code into that Kubernetes cluster and, using Knative, will allow it to scale up and down as you've seen described. So what I would like to do is show you a demo of how that will work. You can think of it like this: your infrastructure can be anything from bare metal to any cloud service, and you are going to have a Kubernetes cluster running on top of that. For the purposes of my demo, I'll show that with GKE, but GitLab can connect to any Kubernetes cluster, and this is one of the really powerful elements of these types of technology: the portability. You can decide where you want it to run, you are even able to shift where that runs, or even have it run in multiple places as multi-cloud. We will be using the Knative APIs for the serving and eventing, and in this case, this is what Cloud Run provides. I'll show you how you can install that via GitLab, or you could use Cloud Run. And then of course, GitLab serverless is going to provide you this concept of functions or applications. Kubernetes doesn't have these types of scaling capabilities; that's why you need Knative. And Knative does not have the concept of a function, but within GitLab, you can actually write a single function and have it scale, or, as you've seen, you can have a container and have GitLab serverless deploy that. And that's what I'd like to share with you here.
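The scale-up and scale-down behavior mentioned here is driven by request load and can be tuned on the Knative Service itself. A sketch of the relevant fields with illustrative values (the service name and image are hypothetical, and annotation availability varies by Knative version):

```yaml
# Concurrency and autoscaling knobs on a Knative Service (illustrative values).
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service                                 # hypothetical name
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"      # allow scale to zero
        autoscaling.knative.dev/maxScale: "10"     # cap the number of pods
        autoscaling.knative.dev/target: "80"       # soft per-pod concurrency target
    spec:
      containerConcurrency: 80   # hard limit on simultaneous requests per container
      containers:
        - image: gcr.io/my-project/my-service      # hypothetical image
```

Setting `minScale` above zero is one way to avoid cold starts for user-facing services, at the cost of always keeping some pods running.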
So here I have a project that I have already set up on GitLab, and you can see this is the same code that you have already seen, in terms of this PDF application, where it has a bit of Python and it's calling out to LibreOffice to do the conversion. This is the same code that Ahmet has running. The only difference is I've made a commit here to add a CI YAML file. And as I mentioned, GitLab has CI/CD to allow you to do this automation, and the way that you configure that CI/CD is via some simple YAML. This is very powerful, because it means you can version your CI/CD configuration, it means you can have multiple people collaborate the same way you would collaborate on code, you can roll forward and roll backward, and you have accountability and auditing, etc. Now, this GitLab CI YAML file is extremely simple, and that's because it is using the GitLab serverless template. So within GitLab CI YAML, you can import a template and then extend it if you'd like, and there are several templates that just come baked into GitLab, and the serverless template is one of those. So you can see all I'm doing here is including this serverless template, and I'm going to create two stages: a build stage of my pipeline and a deploy stage of my pipeline, and both of those are making use of a serverless image that just comes baked into GitLab. In a nutshell, GitLab serverless really is the CI YAML templates and some visualization capabilities that I'll show in a moment. So this is the same code you're familiar with, and in order to deploy this code, I of course want to connect GitLab to my Kubernetes cluster. So I'll go to the Operations tab and go here to Kubernetes. When I go to add a cluster, I have a few options. One is, if you are not familiar with Kubernetes, this is a nice way to get started.
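The CI file being described is short; a sketch along these lines, based on GitLab's documented serverless template (the hidden job names extended here come from that template and may differ between GitLab versions and from the exact file in the demo):

```yaml
# .gitlab-ci.yml: build the container image and deploy it with GitLab serverless.
# A sketch based on GitLab's Serverless.gitlab-ci.yml template, not the
# demo's exact file.
include:
  - template: Serverless.gitlab-ci.yml

stages:
  - build
  - deploy

build:
  stage: build
  extends: .serverless:build:image

deploy:
  stage: deploy
  extends: .serverless:deploy:image
```

Because the template is included rather than copied, upgrades to GitLab pick up template improvements automatically, while the project file stays this small.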
We have a great integration with GKE where you can literally sign in with your Google account and name the cluster, and GitLab will actually go and create the cluster for you on GKE. So if you're new to Kubernetes or want an easy way to spin up a quick cluster, the GKE integration does that nicely. Or, if you have an operations team that is already managing your Kubernetes cluster, you can attach any cluster here. You can see I've already attached a cluster, and I've one-click installed some of these add-ons. So GitLab can also help you manage things like Helm Tiller, Prometheus, which is going to do some monitoring for us, and of course Knative, which we're going to use for those serverless APIs. I've set my own domain here, and what you can see is I can then run a CI/CD pipeline. So when I actually run my pipeline, here is my build and here's my deploy, and we can actually look: this is GitLab CI/CD when it runs that file. You can see here's the build script that it ran; if I had any artifacts, I could download them here; and then of course it is also going to allow me to go and look at the deploy stage. So I've deployed this using GitLab serverless. What that will do is not only deploy it onto my infrastructure and give me an endpoint URL, but here I have some of the visualizations I talked about. So what GitLab is doing now is it's querying my Knative service, and it seems like I have an application; I've named it this service, and here's my endpoint. So if I visit the endpoint, you will see here's the cold start we talked about, where in this case it'll take anywhere up to a few seconds, because there are no pods spun up, there's no container. So from zero it spun up this container, and now here's the same application we've seen a few times.
Of course, now that it's live and running hot, it will spin up very quickly with new requests, and I can go into GitLab and see some of the visualizations on this page. So, for example, it will show me a request graph here of those requests that I just did, and there it is, using the Prometheus monitoring. So this is some of the integration you see between GitLab and Kubernetes. What we have today is, if you want to use all of this together with Cloud Run, you can use Cloud Run on GKE, and Cloud Run will install Knative on your cluster; when you attach that cluster to GitLab, GitLab will actually detect Knative and will be able to use GitLab serverless, all running just the way you've seen right now. But in the future, we do also have some roadmap where we want to enable you to one-click install Knative on GKE. Of course, at GitLab we like to keep everything very transparent, which is kind of unique, so you can go to our homepage, this is literally about.gitlab.com, and you can see here is our roadmap. For each of our stages we link to our roadmap, and you can click here to see what's coming up, and that's how you can learn more about when even more integration between GKE and Cloud Run will be coming. So with that, I would like to stop sharing my screen, and Charlene, I believe that while we've been chatting here, we have some folks that have been asking questions, and we would love to just do some live Q&A to see what folks on the call want to learn about. Perfect, perfect.
Well, yes, we have gotten some great questions in, but there's plenty of time, guys; if you have a question for any of our panelists, please go ahead and use your GoToWebinar control panel and submit your question. Okay, first one is, let's see: really cool presentation and examples. So, let's assume I have a container running some Node.js front end which takes a while to start. That would mean the cold start from serverless would be pretty slow, causing a bad user experience, right? I can probably take that. Yeah, that's true. So that's a property of serverless; the downside is there are cold starts, and that happens mostly because when you're not getting any requests, you're not really running anything, so it actually takes some time to start your application. On Google Cloud Run specifically, we have optimizations such as where we actually cache the image, so that it doesn't have to be redownloaded when a request comes in. As long as your application startup is pretty fast, it'll probably be a couple of seconds before the first request is processed. But again, there are ways to keep applications quote-unquote warm, the same stuff that you're already used to from the serverless land. So it's the same stuff. Yeah, I would say my general advice there is, there are use cases that are really good for serverless. If something is event-based, one use case that we'll talk about is: you sign up a new user and let's say you need to do some processing. They've uploaded their image and you need to go crop that into different sizes for parts of your app, or you need to, let's say, create some other account details. That type of asynchronous, event-based processing is really, really good for serverless workloads. The example we showed was a little bit contrived, because that was a web page that users would visit, and that's not the best experience, because if you have that cold start, it is going to take some time, and
so you don't want your users to see that. So that's kind of my advice there: if it's a long-running service or a user-facing service, just running that in a regular container on your Kubernetes cluster as an always-on service is really, really good for those workloads; and for anything asynchronous and event-based, run that on serverless. Awesome. All right, great, next question, let's see: do I need to use GitLab for everything, or can I use GitLab together with tools like Jira and Jenkins? William, where have you gone? There you are; we lost you for a second. So, we get this question, I get this question pretty often, when folks kind of see that homepage image I showed. Again, they're familiar with GitLab, I host my code there, but wow, you do all of these types of things like project planning and whatnot. And so the question is usually: let's say I'm already using tools for that, let's say I already have Jira for my project tracking, and let's say I already have my pipelines in something like Jenkins; can I use GitLab just for my code repository? And the answer is yes, you can. In fact, at GitLab we try to play well with others, so you can, for the most part, use any component of it standalone. What a lot of folks even do now is they will use our CI/CD tool and they will have their code hosted on GitHub or Bitbucket, and use GitLab just for the CI/CD. So you don't have to replace your entire toolchain, although if you do use GitLab end to end, because it's so tightly integrated, there are some kind of neat meta benefits that you see, and we can chat about that on a different webinar. Awesome. All right, great, next question: can you explain a little about events? What are they for? Yeah, I can probably take that. So events are stuff that is happening outside your application, and it comes to your application. Essentially, you can think of events like: I just uploaded a file to a storage bucket, and then I want my application to get called when a file is
uploaded to a storage bucket. That's the cliche example of serverless events, but another example could be, let's say my build has failed, so I want a webhook so that my application gets called when a build has failed, so I can do something like maybe post it to Slack, or maybe open a ticket, something like that. Or another example of events could be, let's see, maybe IoT is a great example: when something happens on an IoT device, you want to do something in the cloud. So if your IoT device emits an event that says, let's say, light turned on, you might want multiple applications to get triggered when that event happens. So events are basically just asynchronous actions that are happening outside the application, and your application gets invoked as a result of that. William, do you have anything to add? No, I think it's a great description. You can think of any type of time you'd receive an API call or a webhook, as Ahmet described, or, ones that are very, very common, when you have some type of queue processing, for example uploading to an S3 bucket, or you have some amount of things in a queue; each time something pops off, you can go and process that event with a serverless workload. I think an interesting example of events here, I'm going to add a little bit, is that let's say a video is uploaded to a storage bucket, right? Let's say you're getting hundreds of videos. Normally you would implement an application that would dequeue from the event source, but in the serverless case, what's happening is that, let's say 100 videos are uploaded at the same time, you can just spin off 100 containers to process these videos and then put them back to storage again. So basically, instead of scaling that worker application yourself, serverless does that for you, and that's probably the difference between normal queue-based applications and serverless. All right, great, so many good questions here. Okay, next question: how large, as in
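In Knative terms, the eventing side of this wiring, "invoke my service when an event of some type arrives", is expressed as a Trigger on a broker. A hedged sketch; the broker name, event type string, and service name are all hypothetical:

```yaml
# Knative Eventing Trigger: subscribe one service to one event type.
# All names and the event type string are illustrative.
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: on-video-upload
spec:
  broker: default
  filter:
    attributes:
      type: com.example.storage.object.finalized   # hypothetical CloudEvent type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: video-processor                        # hypothetical Knative service
```

Because the subscriber is a Knative Service, the video-processor in the speakers' example scales from zero to however many containers the burst of events needs, then back down.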
gigabytes, can the function script and its dependencies be in order to run on GitLab serverless? That is a great question, and I will admit ignorance: I don't think that there is an upper threshold there. So, for example, there are kind of two components to GitLab serverless: one is what we would call a function, and one is what we call a GitLab serverless application, the only difference being that with the application you're specifying your own container. As you saw in our demo, we had a Dockerfile, and you specify that image, and then GitLab will just run that. And for the purposes of that, I would say anything that you can containerize, you can run and scale. And I'm seeing Ahmet shaking his head that it's going to be the same way on Cloud Run; is that right, Ahmet? Yeah, so basically what you did there was building an application whose resulting container image was actually around 800 megabytes, I think, because we installed LibreOffice and a bunch of other fonts and stuff. So yeah, you're right. So if you think of a typical function, there tend to be size limits on what you can upload in that zip file, and I can't remember them all; they're probably a little bit different for the different platforms. But this is another advantage of containerizing the workload: you get to containerize all of LibreOffice and run a PDF translator. That's kind of one of the advantages there. All right, great. Okay, let's see, we're about eight minutes to the top of the hour, so I think we can get to one or two more questions here before we close things down. Okay, so what version of GKE has the Knative add-on enabled?
I don't see this option in my 1.13.6 GKE cluster. Yeah, I think the person said, sorry, typo, it's 1.13.6. So, this is a beta add-on. If you go to the documentation, it documents which versions it's available on. I know that it's not available on some of the older versions; I don't remember off the top of my head, so I'm just going to say, sorry, please look at the documentation. Excellent, okay, great, next question. Okay: do you need one container for each request? I think this is maybe an add-on to an earlier question. And: what does Knative use to scale? Yeah, so I can quickly answer that. Knative can actually reuse containers, and so does the Google-managed Cloud Run platform. So if your container is, let's say, a Go application or a C++ application that can handle thousands of requests at a time, we will send however many requests you want to your container at a time. So, for example, if you want your container to get only one request at a time, we will basically spin up another container for each request as long as the other one is occupied; but once the other one is free, we're just going to reuse that. So how does Knative scale?
I think the answer to that is, Knative actually looks at request metrics; it internally uses Prometheus and a few other things to keep track of the requests. Because all requests are going through Knative, Knative actually knows what's going on, and it can do really fast scaling based on an immediate need for scaling. So if it realizes it's in a panic mode, it will add a lot of containers for a while, and then it will correct that over time. Okay, great. And some of that is configurable as well, again, maybe for another webinar, but you can actually look at the Knative documentation, and if you are an operations person, or this is your bag of tricks, administering Kubernetes clusters and applying YAML files, then you can configure and set some pretty granular parameters on how and when Knative will do the scaling for you. All right, great. Well, we're about five minutes to the top of the hour, so we're going to go ahead and close out the question-and-answer period of today's webinar. We had a ton of great questions, and if we didn't get to your question, I apologize, but I'm sure somebody from Google or GitLab will be more than happy to follow up with you and get your question answered. So before we close things down, we do want to do the drawing for the three $50 Amazon gift cards. So our first winner is, let's see, Tom Westrick; congratulations, Tom. Next person, big winner: Mari Federo; congratulations, Mari. And finally, drumroll please: Ish Kapila; congratulations, Ish. Thank you all for being on the webinar today, and I'm sorry if you weren't one of our big three winners, but maybe next time, maybe next time. Before we close things down, I do want to remind the audience that today's event has been recorded, so if you missed any or all of today's event, or if you just want to watch it again, you will be able to do so. We are going to be sending out an email a little bit later on today that includes a link to the event on demand, and the webinar is also going to be living on the devops.com
website, so you can always find it there. Just go to devops.com/webinars, look in the on-demand section, and it will be right there waiting for you. So, William and Ahmet, thank you both for giving such a great presentation; judging from the types and the number of questions we got in, I know the audience got a lot out of it, so thanks so much. Okay, thank you. Great, great. I want to thank the audience also for joining me today. This is Charlene O'Hanlon, and I'm signing off. Have a great day, everybody.