All right. It looks like we are broadcasting now. This is our first time trying to live stream this from YouTube. It seems like some folks are having trouble joining the Hangout, so if you give me just a moment here, I am going to see if folks can join. It looks like some folks have been able to view the link via our YouTube live stream. So we're trying something a bit new, where we want to stream this training live to YouTube so anyone anywhere can join and tune in. There should be an ability for folks to also join the Hangout so that they can be on screen, but it looks like that permission model is not set up. So while we're working out the kinks, please do type any questions or interactions into the YouTube chat. I'll also be checking that through the live stream today, and we'll do Q&A that way as well. As we work out the kinks, we'll set that up going forward. So I will share my screen here. Today, we are chatting about what cloud-native is. This will hopefully give you an understanding, at a high level, of the technical bits of what cloud-native is, so that whenever you're talking to a customer or prospect about cloud-native technologies, or about other topics that relate to cloud-native, you have an idea. We do have a website page, about.gitlab.com/cloud-native. These slides are also shared in the description of the YouTube video, so you can go and check out the slides, and all of the links on the slides will be there. Today, the content is in the slide deck, but I am looking to move all of this content onto the web, so in the future that page will be where you can access all of the GitLab content about cloud-native. So to start out, let's talk about what cloud-native is not. Cloud-native is not simply taking an existing app and running it in the cloud. Application architecture has changed over time.
You can kind of think of this like when the first movies came about: really, all they did was take stage plays and record them with a video camera. But you can do so much more. You can shoot from different angles. You can take advantage of the fact that it's a different medium and create something much more complex and different. In the same way, if you just take an application that runs on a server and run it in the cloud, that's not really cloud-native. Cloud-native is about taking advantage of all of the things that you can do when your workloads are in the cloud, and architecting your app to run in that way. So there are three elements that, in a nutshell, describe a cloud-native architecture: they use containers, they are dynamically orchestrated (I'll chat through what orchestration is), and they use microservices. Really, at a high level, these are the things: using a microservices architecture, taking advantage of containers, and then using container orchestration. Applying those elements to an application architecture and how you design your applications is what's referred to as cloud-native, because it takes advantage of cloud computing models and allows you to abstract away the hardware so that you can scale, so that you can have more resiliency, et cetera. The phrase was first coined probably around 2015. You can see other references to the term cloud-native earlier, but really this was the Linux Foundation starting the Cloud Native Computing Foundation to propagate, drive, evangelize, and shepherd this type of architecture that uses containers and orchestration and microservices, and I've linked the initial announcements there. So let's talk about these three things: containers, orchestration, and microservices. Really, when you're talking to somebody, the orchestration part is just Kubernetes, and from one perspective, folks may even just talk about Kubernetes.
So if folks are saying they're using Kubernetes, odds are you can't use Kubernetes without containers, and you probably also have a microservices architecture. So really, if you're using Kubernetes, you're building a cloud-native application. Kubernetes is a bit of a shortcut for saying cloud-native. So let's chat about the containers bit. What are containers, and how does that application architecture work? The idea here is this is the evolution of application architecture. In the early days, you would deploy an application on bare metal. By bare metal, we mean on a server, a physical computer. You need things like your application, your libraries, and the operating system, and that all lives on one server. And if you have multiple applications, they all live on that same server and all share the same OS, so there's no separation, for example, for security or for resource management. They're all on the same server, so there can be contention between applications, and that's a limitation of that model. So then along came virtualization, where you could have a virtual machine, or VM, that was run by a hypervisor on the physical machine, and you could have multiple VMs on one machine. This has a lot of advantages. You can take a copy of a VM, you can version it, and you can roll back. But of course, it's not perfect, because every single virtual machine includes your application, includes your libraries, and also includes the operating system. So every time you're copying that virtual machine, you're copying the entire operating system. The newer technology is containers, and what containers do is provide a very lightweight way to copy your application. They essentially copy only the things you need in order to make more duplicates of an application. All of the containers share a common operating system.
So instead of copying that operating system every time, you can share that operating system, and you have a very lightweight way to distribute your applications. You can fit a lot more containers on the same amount of server resources than you can with virtual machines. This is a visualization of what I'm talking about. Here's a physical server. It has three applications on it, or potentially copies of the same application, or they might need to be on different servers. In order to scale that application, you need to add physical hardware. When you come under load and your server or your application can no longer take that load, you have to add more physical machines to scale. It's a lot of heavy lifting, and you also then need some other type of software to load balance between those machines. It's an inefficient and complex way to run. Virtual machines make this a lot nicer, but of course the operating system is copied. When you want to scale, you can run multiple VMs on the same server, but again, it's more resource intensive. So when you scale, you're still going to need more hardware. Not as much as if you're deploying to physical machines, because virtual machines can share resources on a machine more efficiently, but you still need more servers to scale. So then this comes down to using containers. Here you can see that a container runs inside of a virtual machine on a server, and because the copies of your container are very lightweight, you can run a lot of containers, and you need fewer servers in order to scale. So this is one of the advantages of the container-based architecture: containers are lightweight and easy to copy. In a nutshell, a container runs inside of a virtual machine that eventually runs on a physical server. There's always a physical server somewhere. There are a lot of different types of containers.
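To make the "lightweight copy" idea concrete, here is a minimal, hypothetical Dockerfile sketch of how an application might be packaged into a container image. The file names and base image are assumptions for illustration, not from the training itself.

```dockerfile
# Start from a shared base image; the OS kernel itself is
# provided by the host and is not copied into every container.
FROM python:3.9-slim

WORKDIR /app

# Copy only what the application needs: its libraries and code.
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py .

# The image bundles the app and its dependencies, nothing more,
# which is why making more copies of it is so cheap.
CMD ["python", "app.py"]
```

Because each container image layers only the application and its libraries on top of a shared base, duplicating a container costs far less than duplicating a full VM with its own operating system.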
There are Linux containers, there's rkt (Rocket). I've added some links here on the slide where you can read about the history of containers and more info. In a nutshell, Docker is the most popular type of container, and that is what GitLab supports. GitLab does not today have support for other types of containers; we focus on building Docker support. So that then comes down to: I have a bunch of containers, they're going to help me scale, I'm using that as my application architecture, and I'm deploying via containers instead of deploying via virtual machines. But now I need to schedule, or orchestrate, those containers. For example, if I have a bunch of machines, I need to say which containers are going to run on which machines. And if a container dies, or an instance of an application goes down, I need something to spin it back up. We call that orchestration. It's taking care of where a container should run, keeping a heartbeat on whether that container is running, and deciding whether I need more containers. There's a lot going on there, but that's essentially orchestration. And again, like I said, when you think of orchestration, just think of Kubernetes. In the era of 2015 to 2017, there were many container schedulers out there. You had lots of options. There was Docker Swarm. There was Kubernetes, which was originally a project open sourced by Google, and Google is a large contributor to Kubernetes. And you had things like Marathon and Apache Mesos. What ended up happening (and I've added some links there if you want to read more about container schedulers) is that Docker and Mesosphere both adopted support for Kubernetes. So really, Kubernetes has won the race. Maybe once upon a time there was a discussion around which container scheduler is best, but overwhelmingly the market has spoken, and Kubernetes has won. It's really not a discussion anymore. In 2018, container orchestration is the same thing as Kubernetes.
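The orchestration duties just described, deciding where containers run and restarting them when they die, are expressed declaratively in Kubernetes. A minimal sketch of a Deployment manifest, with a hypothetical app name and image:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # hypothetical name for illustration
spec:
  replicas: 3                 # Kubernetes keeps three copies running,
                              # spinning a new one up if any dies
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0   # hypothetical image
```

You declare the desired state (three replicas of this container), and the scheduler decides which machines they land on and continuously reconciles reality back to that state.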
This is really where the market has shifted. As a side trivia note, if you've seen the abbreviation K8s, that stands for Kubernetes. It's basically because there's a K at the beginning, an S at the end, and eight letters in the middle, somewhat like Andreessen Horowitz (a16z). But of course at GitLab, if you are writing about Kubernetes or blogging in any way, our corporate marketing guidelines say to always spell out Kubernetes and not abbreviate it. But if you've seen the abbreviation, that's what it means. Further evidence that Kubernetes has dominated the landscape is that every single company that could, or that wants to, has spun up a managed Kubernetes service. I've listed a few of them here, but there are actually more. Google has a managed Kubernetes service, Google Kubernetes Engine (GKE). Amazon has released EKS, its managed Kubernetes service. Azure has one, IBM has one. There are many companies that have managed Kubernetes services. The idea here is that Kubernetes is an open source project. You can build and run it yourself, but it takes a lot to manage that infrastructure and manage that Kubernetes instance. There's a lot that goes into that. So taking advantage of a managed service abstracts away a lot of that hard work. With GitLab, we have tight integration into Google Kubernetes Engine, GKE. We have some unique functionality that I really think no one else in the market has, and we work very beautifully with GKE. We also have official GitLab support for EKS. But the reality is, and one of the beauties of Kubernetes is, that if these services are using upstream Kubernetes, which I believe Azure and IBM and many of them do, you can take your application workload that's running in one Kubernetes cluster, and it's very, very easy to port or copy it to another service. So essentially, it's easy to move between Azure or Google or Amazon, or to run in all three, because of Kubernetes.
Kubernetes is what allows you to have that portability. So at GitLab, we have official support for some services, but the reality is GitLab officially supports Kubernetes in general, and if a service is using what we call upstream Kubernetes, or vanilla Kubernetes, then GitLab is just going to work there. There are some places where that's not the case. For example, Red Hat OpenShift does not use vanilla upstream Kubernetes; they make some modifications for security and some other components. Therefore, GitLab doesn't just work on OpenShift today, but on any service that's using vanilla Kubernetes, GitLab just works. This is the stack, to give you a visualization of the different components that you're using. You have your GitLab instance, and that can be installed on and running in your managed GKE service, which of course is running Kubernetes, and Kubernetes is orchestrating your containers. So these are all the components, and hopefully this gives you an understanding of the difference between a container, Kubernetes, and a Kubernetes service. If you want to know more about orchestration or Kubernetes in general, a highly recommended resource is the Children's Illustrated Guide to Kubernetes. This was put out by some folks over at Microsoft Azure. It's a great resource, and I highly recommend it; it's an easy read and will bring you up to speed on what orchestration is, more technically. With that, I'll give a quick overview of microservices, which is the final component of cloud-native, and bring this all together. In the early days, or I suppose even still today, a simple application architecture is what we call the monolithic app, meaning you have an application and everything that runs that application is in one place.
But of course, this becomes really hard to scale, and if a lot of people or multiple teams are working on one application, it can be tricky to do that development. It can also be tricky to scale, because if you need a copy of the application to handle more load, you have to copy all of the application, even if there are parts of it that don't need to be copied. So this is what's called decomposing the monolith. You take a part of the application and you break it out into its own service. Here you have a service A, a service B, and a service C, and this could be something like an entry point, like a web interface that folks log into, which might be used a lot, or this could be a database service, or really any component of the app. And what happens is service A might need to scale more, so you might have four or five or seven copies of service A, but you really only need two copies of service B. By breaking these things out into different services, and breaking your one monolithic application into its component parts, you can do development more easily, because it allows a team to own a service end to end and not have to worry about the innards of another service; they just have an interface to it. And it allows you to scale. So these are the benefits of what's called a microservices architecture. The reality is you don't need to start with a monolith and decompose it. You can actually just start adding services, and once you get to a certain size, folks really are just building new microservices, they're not adding to a monolith anymore; perhaps the monolith doesn't even exist. As an example, this is GitLab's architecture, and this is from our development architecture page.
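The independent scaling described above, more copies of service A than service B, falls out naturally once each service is its own Kubernetes Deployment. A hypothetical sketch, with made-up service names and images:

```yaml
# Each service gets its own Deployment and scales independently.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-a            # e.g. the web entry point, under heavy load
spec:
  replicas: 5                # five copies of service A
  selector:
    matchLabels:
      app: service-a
  template:
    metadata:
      labels:
        app: service-a
    spec:
      containers:
        - name: service-a
          image: registry.example.com/service-a:1.0   # hypothetical
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-b            # a lighter internal service
spec:
  replicas: 2                # only two copies needed
  selector:
    matchLabels:
      app: service-b
  template:
    metadata:
      labels:
        app: service-b
    spec:
      containers:
        - name: service-b
          image: registry.example.com/service-b:1.0   # hypothetical
```

With a monolith, scaling to five copies would mean five copies of everything; here, only the hot service is replicated.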
This doesn't really show what's part of GitLab's monolith or what's broken out into a service, but it gives you an idea of some of the component parts and what could potentially be broken out, and more info will be coming in the future as part of our migration. For example, we are migrating to Google Cloud Platform, and part of that migration is to run GitLab as a cloud-native application. That means microservices, those microservices running in containers, and those containers orchestrated by Kubernetes. We don't have that today, but we're moving towards it, so you'll see more info coming in the future on how GitLab is actually breaking apart its architecture and running this process. Some other good examples of microservices at scale are folks like Amazon, and Netflix is one of the quintessential examples of microservices. Here you can see that once you get to this type of scale, where you have so many microservices, it really just starts to look like a cloud of services. You can see that if this was all one application, it'd be almost impossible for these companies to manage: anytime you committed code, you'd have to worry about conflicting with all of the other code; it'd be a mess. Or if you needed to scale, you'd need to make complete copies of everything every time you scaled, instead of scaling individual services. So you can see why large companies do this, and why microservices can really help out even smaller companies that don't need this level of complexity. A great resource if you want more info on what a microservice is: I highly recommend watching the video Mastering Chaos. This is a technical overview of Netflix's architecture. It's a couple of years old, really only two or three, but it is a great overview of what microservices are and how Netflix designed their system. If you want to know more about microservices, that's a resource I recommend.
So a cloud-native app uses containers; it uses Kubernetes to say where those containers should go and to manage the underlying infrastructure and the virtual machines, so Kubernetes orchestrates all of that; and it uses microservices in its architecture. So what are some of the things that GitLab does together with Kubernetes? I've alluded to a few of these, but I'll share some more of them now. The first is our Kubernetes integration. Like I discussed, this is a generalized Kubernetes integration. It is really nice, and it works with vanilla Kubernetes or any managed Kubernetes service that is running from upstream. This is the best way to run GitLab, literally. Our most advanced features are only available to you if you are using Kubernetes. For example, things like deploy boards, canary deploys, Auto DevOps, Kubernetes monitoring, and our web terminals: you only get access to those features if GitLab is deploying to Kubernetes. And you can run GitLab anywhere. You can run GitLab on bare metal. One of the advantages of GitLab is that it has a lot of flexible installation options, and you can use GitLab CI/CD to deploy your application to bare metal, to virtual machines, to anywhere you really want. But where GitLab really shines is that it is designed to run on Kubernetes. Or I should say, we are designing it to run on Kubernetes, and we're making rapid progress towards that. Today our Helm chart, which is the thing that installs GitLab on Kubernetes, is in beta, and we're moving very quickly towards general availability. Then we would say GitLab is designed to run on Kubernetes, and GitLab is certainly designed to deploy to Kubernetes. So for our users' and customers' applications, if they want to run those applications inside of a Kubernetes cluster, GitLab is the best way to get your software there.
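As an illustration of "deploying to Kubernetes with GitLab CI/CD", here is a minimal, hypothetical `.gitlab-ci.yml` sketch. This is not the actual Auto DevOps pipeline; the deployment name, stages, and kubectl image are assumptions, though `CI_REGISTRY_IMAGE` and `CI_COMMIT_SHA` are standard GitLab CI/CD predefined variables.

```yaml
# .gitlab-ci.yml — a minimal, hypothetical pipeline sketch.
stages:
  - build
  - deploy

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    # Build the app image and push it to the GitLab container registry.
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA

deploy:
  stage: deploy
  image: bitnami/kubectl:latest   # any image that provides kubectl
  script:
    # With a Kubernetes cluster connected to the project, roll the
    # running Deployment forward to the freshly built image.
    - kubectl set image deployment/my-app my-app=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
  environment: production
```

Each commit builds a uniquely tagged image, and the deploy job tells the cluster to roll out that tag, which is the core of the container build-and-deploy loop described above.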
That's where we really shine. These are just some screenshots of deploy boards, and maybe this will make a little more sense now: each of these dots in the deploy board is essentially a container. It's really something called a pod, and if you want to know the difference between a pod and a container, I recommend that children's guide as a resource. But you can just think of these as copies of your application in a container. The deploy board shows you how many of those instances have been deployed to and how much of what's called your fleet is complete. So this is 23% complete here. Things like canary deploys let you deploy to just a few instances, test the traffic there, and see how it behaves before you commit to deploying everywhere. Of course, there's monitoring: GitLab automatically gives you access to things like the memory utilization and CPU utilization of my underlying hardware, so I can see, am I overloaded? Do I need to add more nodes to my Kubernetes cluster, et cetera? And of course, web terminals, which is really an amazing feature. We probably need to do a whole training just on this, because it allows you to introspect and troubleshoot the actual environment that you've deployed to. This is where, if I have a local dev environment, things might work here, but all of a sudden when I get it into the staging environment or into the production environment, things behave differently, because there are different versions and different modules. With this, I can get a terminal straight into that environment, and it can help with troubleshooting and with design. There's a lot of fun things there. So of course, we have a Kubernetes integration. We also have a GKE integration, which I spoke of before. This allows you to create and configure Kubernetes clusters with just a few clicks.
So again, thinking about that stack: GKE is the managed service that helps me manage Kubernetes, and GitLab makes it even easier to use GKE. You can sign in with GitLab, and within a few clicks, we create the Kubernetes cluster for you. It's a really nice experience. I talked about our Helm chart already. And one of the nice case studies that we have is that the CNCF itself, the Cloud Native Computing Foundation, is one of GitLab's customers. I've linked the case study here, and on our Kubernetes page we also have a YouTube video of them describing their cross-project, cross-cloud pipelines. This was the CI working group that's part of the CNCF. They use GitLab to test Kubernetes and CoreDNS and Prometheus. So they test multiple projects, and they deploy those projects to multiple clouds, including Google Cloud Platform, AWS, and Packet. This is a really sophisticated use case, and they're doing it all using GitLab, which is really nice. Some other resources for you: I did a webinar with William Denniss at Google that goes through not only a nice intro to Kubernetes, but also a demo of our Kubernetes integration. This is a great asset to watch to learn more and also to share with customers; the link to the YouTube video is there. And if you want just a five-minute demo of our GKE integration, literally from start to finish, from zero to a deployed application, you can see what that looks like in five minutes using GitLab, our GKE integration, and Auto DevOps. So this is a really nice video to show you can go from zero to running in production in five minutes with GitLab, including the setup time. With that, I will stop the screen share. It looks like we have just a few minutes left, but I want to see if I can jump into the questions in the chat. So it looks like we have a few. John notes that even Cloud Foundry adopted Kubernetes.
Yes. Joe asks about GitLab compatibility with OpenShift. The answer is yes, we are compatible, but it takes a little bit of effort to make that work, again because they're not running just vanilla Kubernetes. So GitLab does run on OpenShift, and it runs nicely, but it takes a little bit of tweaking to get there. On the number of customers using them together today: I'm not sure what the question is there, or what they would be using together. Joe asks: is there feature parity between GKE and other services? That's a great question, and the answer is actually no. There are a lot of things that GKE does that other services don't do, and vice versa. For example, Amazon's EKS has really tight integrations into all of Amazon's other services. So if you're an Amazon customer using their thousand other things besides EC2 (the names slip my mind now), like their database service and their other services, EKS has integrations with those. On the other hand, GKE does some really sophisticated things with Kubernetes that I don't think anyone else does. For example, GKE has an autoscaling capability where it will monitor the number of nodes, essentially how much hardware, or how many VMs, you have running. The way Kubernetes works is it abstracts away all of your hardware, treats it as a pool of resources, and essentially bin-packs your workloads onto it. The asset I recommend here is watching that webinar with myself and William Denniss; he describes that quite nicely. But the idea is that GKE will help you autoscale, and it's a unique feature. So each of these services has different ways of differentiating itself and value-adds it brings on top of Kubernetes. There's also a question here from Lili: is GitLab compatible with Jira or GitHub?
We're just about out of time, so this will be the last one I answer here, but after we close down, I'll try to hop into the chat and answer some more. GitLab does have integrations with other products; you can visit about.gitlab.com and see a lot more of the other tools that we integrate with. So with that, it looks like we're up on time. I'll shut down the video, and I'll continue in the chat, if it lets me, to answer some more questions. Okay, thanks a lot.