Hello everyone. I hope you are all doing great. The topic for today's session is Cloud Native App Development 101. This session is aimed at a beginner-level audience, and it will just touch upon the basics of how we can pursue the app development journey in a cloud native manner. Before we dive into the talk, I would like to introduce myself. My name is Avni Sharma, and I'm a software engineer at Red Hat. I work on the OpenShift Developer Tools team, and I'm really passionate about cloud native technologies. My Twitter handle is 16avnisharma, and you can all connect with me there.

So let us now see the agenda of this talk. We have four major components. The first is discussing cloud native itself: since we are starting from a very basic level, I would first like to demystify what cloud native is, why we need it, and how to actually adopt the cloud native paradigm. The second is the key concepts, or characteristics, of cloud native technology. Out of those key concepts, the one I would like to stress, and demo further, is containerization: everything around a microservice architecture and how we get an app into a containerized format, demonstrated with the help of a small demo.

As the cloud becomes pervasive in the IT field and a lot of digital transformation is going on, we have adopted a new way of building and deploying apps, and that is cloud native. So what is cloud native? It is quite apparent that "cloud native" has two distinct words. When I say cloud, it means apps residing in the cloud instead of traditional data centers. And when I say native, it means apps designed to run on the cloud, designed in such a way that they utilize the characteristics of the cloud, that is, its elasticity and its distributed nature. The cloud is there, it exists, but to leverage it to its full potential and advantage is basically cloud native. It is an approach to building and running applications; a set of systematized techniques and methodologies.

There is a foundation known as CNCF, and if you go to the CNCF GitHub pages, you will find a very crisp definition of cloud native: cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments, which can be public, private, or hybrid clouds. Whenever you hear about cloud native or attend conferences about it, you will often hear about CNCF. So what is CNCF? CNCF stands for Cloud Native Computing Foundation, and I have pasted the link to their site here. CNCF is part of the nonprofit Linux Foundation, and it seeks to drive adoption of this cloud native paradigm we're talking about. How does it do that? By fostering and sustaining an ecosystem of open source, vendor-neutral projects. You might have heard of Kubernetes and many such projects which come under CNCF. All these projects help us drive the cloud native methodology and adopt cloud native in an easy and friendly manner. I encourage you all to visit their site; the community is really friendly. You can also take a look at the projects housed under CNCF. And yeah, this can be pretty overwhelming.
This is the landscape of all the projects that come under the CNCF foundation. You can see Kubernetes here, and you have etcd, CoreDNS, Prometheus, Rook, and many, many more projects; the list is endless. I won't go through all of them. I've pasted the link below the image, and I encourage folks to visit the landscape page once.

The key concepts I would like to discuss in this talk are these four. I know there can be many concepts, and the definition of cloud native can be as diverse as the community, but I feel these four really drive the cloud native methodology and the idea of cloud native. The first is DevOps, which is basically building, testing, and releasing software rapidly and consistently. DevOps is the whole journey of your application, of your code, from development to operations, hence "DevOps". It's not only about building running code; it's also about getting it into production: how you test it, how you release that software, everything comes under DevOps. Continuous delivery is faster and more reliable delivery of products. Then I have containers, which give us isolation of apps in packaged form: one unit, the container, can have the code, the runtime for that code, the dependencies, and so on. And how we achieve a microservice architecture is actually through containers; when we say we want a microservice architecture, it means we want loosely coupled apps.

Now, the benefits of a cloud native app. The first I can name is that a cloud native app is engineered specifically for the elastic and distributed nature of modern cloud computing platforms, which we discussed in the definition of cloud native as well. And since these apps are loosely coupled in the cloud environment, it's really easy to manage and scale them on demand. We can scale them up or down, and thereby influence the cost-performance mix and keep up with changing and growing demand. All of these are benefits of a cloud native app that weren't really possible in the traditional way of dealing with applications on the cloud.

So why adopt cloud native? I have pretty much answered this in the previous slides on what cloud native is and what its benefits are, but to summarize it in a single slide: cloud native provides an umbrella of three things, which are speed, agility, and resilience. With speed, productivity increases as operations get easier: we can adopt agile methods, DevOps, and continuous delivery patterns, so speed goes up, productivity increases, and operations get easier. With agility, we talked about loosely coupled apps, containers, and microservices, which I will cover in more detail in the coming slides; all these loosely coupled services give us the flexibility of app portability, and, as I mentioned, we can scale on demand, which really helps with ease of management. With resilience, we can recover from failures, minimize downtime, and build reliable systems.
So the third point that comes to mind is: we know what cloud native is, we know why we need it and hence its benefits, but how do we go about adopting it? Because that can be really challenging; we know the benefits, but it can sometimes be hard to see how to adopt cloud native. I would summarize it in three broad strokes, without going into minute details; the first two I will discuss in the coming slides. The first is a microservice architecture, and hence relying on containers; and then we have to adopt agile methods. Agile methods basically means rapid and consistent releases of your application across the whole application life cycle, everything on an agile model, which is covered by the DevOps and continuous delivery parts. But my focus for this talk will be on building containers and a microservice architecture.

Whenever I talk about microservice architecture, I always compare it with the traditional way of building and deploying apps, which is the monolithic architecture. So: monolithic versus microservice architecture. On the left you can see a monolithic architecture, where the app is really just one unit. It is so tightly coupled that if one part of it fails, the whole app collapses, and that is something we don't want; if that happens, our day as software developers is doomed. Whereas in the microservice architecture, everything is loosely coupled; the app has been disintegrated, and if one thing fails, the whole system does not collapse and we have time to recover. Similarly, since it is loosely coupled, we can have as many instances of a microservice as we want, depending on the load, and we can also get rid of some: on-demand scaling. All of this can be taken care of in a microservice architecture, and this loosely coupled model is something cloud native is helping us take forward. A monolith can be really cumbersome. There are cases where you may want a monolithic architecture, for example for making a quick POC of your code, but for production, I'm sure you would prefer microservices; microservice architecture is the future, and it is happening.

Elaborating a bit more on microservice architecture, in this example use case we have an account service, a book service, and an order service. Take the order service: it is used by requests coming from my mobile app and from a browser, so one piece of functionality can be called by multiple clients, and microservices communicate with each other through plain REST APIs. You also see that the business capabilities have been demarcated and isolated: the account service manages accounts, the book service manages books; nothing is clubbed into one section. Every business capability can be one microservice, and that is the flexibility we want. And since every piece of business logic is independent, it is highly maintainable and testable, and you don't need to depend on another service's code: if you're building service X and want to use service Y, you just hit Y's API, and that's that. This loosely coupled architecture is what we want to take forward and understand more.
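To make that service-to-service REST call concrete, here is a minimal sketch in Go of one "order service" calling a "book service" over HTTP. The service names, ports, and the /books endpoint are hypothetical illustrations, not the example from the talk's slides:

    // Sketch: two loosely coupled services talking only over REST.
    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // The "order service" exposes its own endpoint...
        http.HandleFunc("/orders", func(w http.ResponseWriter, r *http.Request) {
            // ...and talks to the "book service" only through its REST API,
            // never through shared code or a shared database.
            resp, err := http.Get("http://book-service:8080/books")
            if err != nil {
                // The book service failing does not crash the order service.
                http.Error(w, "book service unavailable", http.StatusBadGateway)
                return
            }
            defer resp.Body.Close()
            body, _ := io.ReadAll(resp.Body)
            fmt.Fprintf(w, "order placed for: %s", body)
        })
        http.ListenAndServe(":8081", nil)
    }

Notice that if book-service is down, the handler degrades gracefully instead of taking the whole app with it, which is exactly the failure isolation the architecture is after.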
Since it is all independent and loosely coupled, we can also deploy it independently. A microservice architecture is an architectural style that develops an application as a suite of small services. We can see the adoption timeline of how microservices came into the picture. First we had mainframes, which were centralized. Then, in the 90s, we had the distributed client-server model. Then the internet arrived, and the internet really changed the game. Then came the cloud, and with the cloud we got infrastructure as a service, platform as a service, and software as a service. And after that, we arrived at cloud native, one of whose pillars is microservices: granular reusability. And we are still using it.

So we read about a microservice architecture having loosely coupled services; one app is bifurcated into different services. Containerization is how we achieve that architectural style. Let us now compare virtual machines and containers, because when I say everything can be loosely coupled, many of you might suggest virtual machines as an option. Well, virtual machines can be really heavy. In the diagram on the left you can see typical machine virtualization: there is a hypervisor, and on top of it we have three VMs. If we want three apps running, we would need three VMs, and that would be really heavy, because each VM carries its own guest operating system: for one app, you are running one whole guest OS. Whereas with containers, there is no guest OS; we have a container engine, which would be something like Docker or Podman, and it's pretty lightweight. All the containers sit on one Linux kernel, and each container has your application code and its related binaries, libraries, and dependencies. So containers are preferred.

How do containers work? With virtual machines we have whole guest operating systems spinning up, but it's fair to say that containers are just Linux processes. A container is a process, and it uses Linux kernel capabilities like namespaces, cgroups, seccomp, and more.

So let us now look at the steps that will help in building our container. The first thing is a container image. A container image is an immutable file which consists of all the executable code and the required runtime environment, libraries, and dependencies for the application we need to run. It is an immutable file, so you cannot change it; you can make a new container image with a new version, but you cannot change an existing one. You can think of the container image as the recipe and the container as the cake: a container is a running instance of a container image. And where do you get a container image from? Container images are stored in a repository-like place known as a container registry. Just as you store your code on GitHub, you can store and push your container images to a container registry, for example Quay, Docker Hub, or Google Container Registry. These registries can be public or private, however you want to configure them.
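Two of the ideas above are easy to verify on any machine with Podman installed; the image name and the sleep command here are arbitrary illustrations:

    # 1. A container really is just a Linux process on the host:
    podman run -d --name napper docker.io/library/alpine:latest sleep 300
    ps aux | grep "sleep 300"    # the container shows up in the host's process table

    # 2. An image is an immutable bundle of layers you can pull from a registry:
    podman pull docker.io/library/alpine:latest
    podman save -o alpine.tar docker.io/library/alpine:latest   # just a tarball of layers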
Now, container runtimes. You want to manage the life cycle of your containers: run a container, delete a container, build a container, or push a container image to a container registry. All of this is handled by your container runtime, which manages your container images. Examples of container runtimes and engines are Docker, Podman, containerd, and CRI-O, and there are many more. For my demo I will be using Podman, but you can use almost all the commands as-is with Docker: just replace the docker command with podman. I have installed Podman on my machine; you can use Docker for the demo as well. Podman, as I discussed earlier, is a tool designed to make it easier to create, deploy, and run your apps using containers.

We discussed the container image and the container, which is a running instance of the image. But how do we make that image? How do we get that immutable file which becomes a running container? For that, we have the Dockerfile. A Dockerfile is basically the bunch of commands a user would have called on the command line to assemble an image. This is a sample Dockerfile: here we take Ubuntu as the base, copy the contents of the current directory into the image at the /app path, run make, and specify that whenever the container runs, app.py should run as well. I would encourage the audience to go and look at a bunch of Dockerfiles, because at the very beginning it may not be really apparent; read about how to write a Dockerfile and about containers, because that is really important for achieving a microservice architecture.

To summarize what I discussed in the previous slides about creating, running, and pushing container images in a bunch of steps: the first is to have a Dockerfile with you; the Dockerfile has all the commands and steps for creating your container image. The container image is the immutable file, or recipe, from which we can make a container. Once we have a container, the running instance of the image, our app is running. If you want, you can also push your container image to a container registry so that anybody can pull your image and use the same recipe to run their own containers.

So it's demo time, and I will be referring to one of the examples I have pushed to GitHub; the audience can go to this link and follow the example with me. This is the demo for today's talk. Here I have mentioned where you can install Podman; it is a daemonless container engine for developing, managing, and running OCI containers on your Linux system. Let us go ahead, start with the example, and see what a container is, how to build one, and how to get one running. So I have Podman installed. You can build the container image with Podman with this command, which builds your image from a Dockerfile. Let's go ahead and see what the Dockerfile contains. If I haven't mentioned any file name and have just given the current directory path, it means it takes whichever file is named Dockerfile by default.
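Reconstructed from the walkthrough that follows, the demo's default Dockerfile looks roughly like this; the exact base image tag, the Go setup commands, and the paths are my assumptions, not the literal file from the repository:

    # Sketch of the demo's default Dockerfile (reconstructed, not verbatim).
    FROM alpine:latest                      # small base OS

    RUN apk add --no-cache go curl          # install Go, plus curl for use inside the container
    ENV GOPATH=/go                          # configure Go paths (exact values are assumptions)
    ENV PATH=$PATH:/usr/lib/go/bin:$GOPATH/bin

    RUN mkdir -p /app                       # commands you would otherwise run on your terminal
    WORKDIR /app
    COPY . /app                             # copy the current directory's contents into the image
    RUN go build -o main .                  # build the app while building the image

    # No ENTRYPOINT in this first version: we will start the app manually
    # from inside the container.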
So you can see I have multiple Dockerfiles here; to specify a particular file I would have to use the -f flag, but currently I'm going with the default Dockerfile. Here you can see I have taken the base OS as alpine:latest, and there are multiple configuration steps: configuring Go, and installing the curl command, because I will be using it inside my container. Here I'm making a directory, basically running the commands I would otherwise have run on my terminal as a user, and I have made a working directory. I copy the current contents of my folder into this path, and then I go build the app.

Let me show the app that I'm going to build in my container. This is a very basic Go app; I will curl localhost to see whether I get "Hello World", and this is the path that returns "Kubernetes" as the output. So this is the app I'm going to build into my container; let's go ahead and run the build command. Let me name this oss-demo, version one, and give it a name. I'm using quay.io as my container registry; you can use any other container registry you want. I'm writing my username here, which is avni16, and I will build my image with -t, which is basically how you give the tag. So the tag my image gets is quay.io/avni16/oss-demo, and the dot means take the Dockerfile from my current directory. I'll just hit enter.

You'll see there are many steps, and for every step you see a different hash. What is happening is that every line in your Dockerfile is being saved as a layer, and every layer gets a SHA. Your image is basically a tarball: you can literally untar it and check the contents inside. Containers are processes, so don't be afraid of them. The image is being built, and it says "STEP 11: COMMIT quay.io/...", and this is the SHA of my image.

So let me check my images. Oh, I should have deleted the previous images, anyway. You can see that one image was created about 16 seconds ago, and this is the tag I gave to my image, set with the -t flag: quay.io/avni16, which is my username, and oss-demo, which you can name anything; this is what I prefer. Now, if you are a Docker user, or for those who use Docker, you'll notice Docker requires sudo, but here I didn't need it, which is pretty cool.

Now let me show you the Dockerfile again, and here you see that the ENTRYPOINT has been commented out; I haven't given an entry point. So I will have to manually go inside the container and run the app from within it. I should have given the entry point, but it's okay. I'll run the container now with this command. Let me take the image SHA; I prefer using the image SHA, but you can use the name as well. So I'm telling Podman to run my container, because now that I have the image, I need to run a container, which is a product of that image: run it in interactive mode with -it, and remove it with --rm after it has run. The app is in this path, as you might have seen; this path was mentioned in the Dockerfile, so you can modify it as well, and all the files from the current directory have been copied into this container. You can see everything is present here. So my container is there, and with this command you can also see that the container is running.
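Putting this first case together as commands, roughly; the username and image name follow the demo's quay.io/avni16/oss-demo example, so treat this as a sketch:

    # Case 1: build from the default Dockerfile, list images, run interactively.
    podman build -t quay.io/avni16/oss-demo .        # "." means use ./Dockerfile
    # podman build -t ... -f Dockerfile.expose .     # -f would pick a specific file

    podman images                                    # list local images (no sudo needed)

    # -i/-t give an interactive terminal, --rm removes the container on exit;
    # with no ENTRYPOINT set, we land in a shell and must start the app ourselves.
    podman run -it --rm quay.io/avni16/oss-demo sh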
So I basically took the container image reference and ran a container from it. Now we have a running container, and we know it's idle. Let us now get inside the container and run our app in this interactive container. So here I am, and I will run my app with go run, the Go command to run it. I had also configured GOPATH and GOROOT in my Dockerfile, so you can see how to configure Go from the Dockerfile I have mentioned. My app is running, and now you can actually curl localhost, and you see "Hello World" has been printed. So my container is running, I curled it, and my app responded; pretty cool. Let me try the other endpoint: "Kubernetes". Okay, cool. It is really awesome that our container is running.

So this was case one, where I did not mention an entry point, which is why I had to go inside my container and manually run the application. But if you want the app to start running as soon as the container starts, you can mention an ENTRYPOINT in your Dockerfile, and that is what I do in case two. Here I will build another image, with the command given here, using my custom file Dockerfile.expose, in which I mention the entry point as well as expose a port number. So let us exit from my container. Okay, let me now build a new container from, oops, sorry. This is my Dockerfile; let me go ahead and show you Dockerfile.expose. Coming to the expose part of the file: here I have mentioned the ENTRYPOINT. If you remember, I was doing the go run manually before; now it happens on its own. Let's go and build it, and then see if my image was built. Okay, I have it here, this time with the v1 tag: this is the tag, this is the image ID, and it says it was built seven seconds ago. Let me take this image ID, and: podman run the container, with -p and the image.

So what is happening with this command? We also had the EXPOSE instruction in our Dockerfile. EXPOSE allows communication between containers in the same network, but it does not allow communication from a container to your host machine. So what do we do in that case? To permit that, you need to publish the port, which is done with -p. What we are saying is that the container port, which is 8080, needs to be published to my host machine port, which is 8081; read it right to left, container port to host machine port. Okay, my app is running, and you can see it got created three seconds ago. So now let's run the curl command without, you know, exec-ing into the container. Let me just curl it, and yeah, here you see "Hello World" was printed. So basically I mentioned the entry point and published the port so that I can access the app from my host machine, and now I can see Hello World. How cool is that? This is really great.
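Case two as commands, roughly; the exact file contents and Go invocation are assumptions, but the 8080/8081 port numbers are the ones from the demo:

    # Case 2: an ENTRYPOINT so the app starts with the container.
    # In Dockerfile.expose, roughly:
    #   EXPOSE 8080                          # the port the app listens on
    #   ENTRYPOINT ["go", "run", "main.go"]  # exact invocation is an assumption

    podman build -t quay.io/avni16/oss-demo:v1 -f Dockerfile.expose .

    # EXPOSE alone only covers container-to-container traffic; -p publishes
    # host port 8081 -> container port 8080 (host on the left, container on the right).
    podman run --rm -p 8081:8080 quay.io/avni16/oss-demo:v1

    # Reachable from the host without entering the container:
    curl http://localhost:8081/    # -> Hello World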
Okay, cool. So the next case, case three, is using Kubernetes. Imagine a scenario where you have many containers, not just one. What happens if several containers start dying? How will you monitor them? What if you need more replicas of one container, and who will watch the current state of the containers and bring it to the desired state? This is not something Podman or Docker provides. They manage your container's lifecycle, that is, building, creating, running, or pushing a container, plus a bit of network management between containers on the same network. But what happens when you need to orchestrate your containers and get multiple containers from a current state to a desired state? To orchestrate them and bring them into harmony, there are various orchestrators, and the one that is very famous, has graduated from CNCF, and you might well have heard of, is Kubernetes.

I haven't prepared a slide on it, but I thought I would just run through an example; in the interest of time, I won't explain each manifest file or artifact used in Kubernetes. I will just show how it works in a demo, and I encourage you all to go look at these example files and go through them. If you have any questions, you can open an issue.

So let me check whether minikube, which gives me a single-node Kubernetes cluster, is running. I already have it running; you can do a minikube start to get it going. Now I will first create a namespace. To create a namespace, we can use the Kubernetes client, kubectl, and create a namespace, say oss-demo. So now my namespace has been created: oss-demo. Okay, my context has been switched to oss-demo, so I'm now in that namespace, and I will apply my deployment YAML. A deployment YAML gets my pods up and running, and the smallest unit inside is the container; a pod can have many containers, but it is advised that one pod run only one container. So now I have my deployment running; I'm using an alias for this big command, the alias k. You can see my demo-server is running, and it has been 18 seconds since it was launched.

Let me show my deployment YAML. Here you can see that it uses containers in the spec, and I just showed how to build an image from a Dockerfile and how to run a container. You can push it as well: there is a command to push your image, podman push, followed by your image name, and you can push it to your registry by giving the path right there, like quay.io, your username, and the image name. What happens is that it gets populated there: suppose my username is avni16; the Quay registry gets populated and you can check your images.

So I will show how to take the same example I was running in containers and get it onto Kubernetes. I have my deployment; now what I will do is apply a service. My service has been created, demo-server, and a node port has been exposed, so now I can run the same curl command. My app is running, and I have mentioned it in this image; this image has been pushed, and my deployment is pulling that image from a public repository. I can show you that it is pushed over here as demo; this is my image, demo, and you can see the tag is v1, since I gave it v1. Okay, so now I get my single-node cluster's IP, which is this, and then I can curl it; my node port would be this, and it is not static (there are ways to make it static, but currently I'm using a Service of type NodePort for this example). I'm not going into the depth of explaining the YAML, but you can all go to the GitHub repository and check the example. So you see, it now returns "Hello World"; the app is working perfectly fine. And if you want to try the other endpoint, it works as well.
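For reference, the two manifests applied here look roughly like this; the names, labels, ports, and image tag are illustrative, reconstructed from the demo rather than copied from the repository:

    # Sketch of the deployment and service (reconstructed, not verbatim).
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: demo-server
      namespace: oss-demo
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: demo-server
      template:
        metadata:
          labels:
            app: demo-server
        spec:
          containers:
          - name: demo
            image: quay.io/avni16/demo:v1   # the image pushed with podman push
            ports:
            - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: demo-server
      namespace: oss-demo
    spec:
      type: NodePort                  # reachable at <minikube ip>:<assigned node port>
      selector:
        app: demo-server
      ports:
      - port: 8080
        targetPort: 8080

Applying both with kubectl apply -f and then curling the minikube IP on the assigned node port should reproduce what you see in the demo.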
So yeah, that was that for the demo, and thank you all for attending the talk. And thank you, Open Source Summit, for having me here. I'm open to questions and feedback, and while I'm addressing questions, I encourage folks to scan the QR code and submit your questions and feedback through the link in it. So thank you all, and have a great day. Bye.