So this talk is about my journey to cloud native, Kubernetes, and trying to secure it. I'll say it's for the newbies, people trying to delve into the whole cloud native bandwagon and its ecosystem, and it's meant to give you a route on how to go about learning these technologies.

For me, it all started with a workshop when I was still in uni, during my final year studying software engineering at Cardiff Met. I think it was six of my friends; we had a friend group, and we tried to attend as many workshops and seminars as possible, preferably ones that would give us some badge to add to our LinkedIn profiles to help with the job search and job hunt. That was how we came across the Kubernetes workshop, and we all jumped on it. It was a good experience, even though everything was so vague and alien to us at the time. The instructor did a very good job of giving us an overview of the different architectures, the monolithic architecture and the microservice architecture. At the time, I got to understand that all this while I had been developing using the monolithic architecture without even knowing it, and after the workshop I went on to do my own research to really understand these different architectures. For me, the question was: if we've been developing using this architecture and everything has been working fine, why the need for the whole microservices and Kubernetes thing? Because everything sounded so difficult.

So I'm just going to touch on some of the advantages and disadvantages of the monolithic architecture, and then we'll move on from there. One of the key advantages of the monolithic architecture is that it's very easy to develop: you can build an entire application and get it to market very quickly compared to microservices, and it's easy for a team to pull together and build an executable app using this architecture. It's also simple to deploy; it's not as complex as your microservice technology. Monolithic applications have fewer parts, so there are fewer components to manage and fix, and because everything is self-contained, it's easier to deploy your application. Then there's uncomplicated testing and debugging. Testing and debugging is a very big part of the monolithic-versus-microservices debate: with microservices, you have to test all the parts of an application separately, test that each works properly, and also test that each service fits together and communicates properly with the others. And then you have the case of caching, dependencies, and data access (forgive my pronunciation, English is not my first language). With the monolithic architecture, because the application is fitted together as a single unit and works as a whole, you can do everything quickly and easily from a central logging system.

But it also has its disadvantages, in the sense that it's less scalable. With the monolithic architecture, because the software is tightly coupled, it can be very difficult to scale. For instance, if you wanted to advance a particular feature or part of your application, you literally have to take down the whole application just to alter that single feature.
It's also difficult to adapt to new technology. As mentioned, it's a tightly coupled architecture. Take a music application, for example, where the catalog is tightly coupled to the purchase and play services: if you wanted to alter just the features of the catalog, or the play or purchase service, like I said earlier, you'd still have to take the whole application down just to be able to alter that feature. And there are very high dependencies between the functionalities of a monolith, so applications can run into downtime and software engineering difficulties. Going back to the music app: because the catalog, play, and purchase functions are so dependent on each other, if one of them goes down, the whole application is affected.

The microservice architecture, in contrast, is basically a style where a large application is built as a collection of small, independently deployable services, as you can see with this example. These services communicate with each other through APIs and are designed to be loosely coupled, so that they can be developed, tested, and deployed independently. Microservice architecture also enables faster development and scalability and makes it easier to evolve and maintain the application over time, because, as you can see with the different services, each service can be developed and managed by a different team, and each team can use whatever technology they're comfortable with.

It comes with its own pros and cons, too. On the plus side, it's very scalable, easier to scale: each microservice can be scaled individually, leading to better resource utilization and improved performance. It's very resilient: if one microservice fails, it doesn't bring down the entire system; the other parts keep functioning as intended. It improves the deployment of your application: microservices can be deployed independently, allowing for faster and more frequent releases. And it's flexible, allowing for greater freedom in choosing technology stacks; like I said earlier, each service can be created using whatever stack the team or developer is comfortable with.

But it also has its disadvantages, which include complexity: microservice architecture adds complexity in terms of communication, testing, and deployment, and dependencies between the microservices need to be managed carefully to avoid errors and delays. It also comes with the issue of network latency: increased network calls between the services can lead to slower performance. Debugging is also more complex in a microservices environment, because issues can span multiple services. And testing a microservice application can be more demanding: you have to test each individual service on its own and also test that the services communicate with each other as intended.

So when you stack them up side by side, you start to understand why microservices became more popular. I'll say they're relatively new within the industry; I can't really tell how far back we could go with microservices. But in summary, a monolithic application is built as a single unified unit, while a microservice application is a collection of smaller, independently deployable services.
So, in the case of the example we gave above, if you wanted to go about developing such a service, you're going to have your REST API CRUD for the account DB or the user DB, whatever the case may be. In this case, I just sketched a quick draft of the code for the account DB using FastAPI. You'd also have your REST API CRUD for the inventory service and for the shipping service too. As you can see here in the project directory, each service or REST API lives on its own, just to show that each service can be developed by a different team and be a full-fledged application in its own right, as long as the design and requirements for the application are followed and the APIs are able to communicate with each other, both in dev and in prod. Each service, in its own directory, has its own requirements.txt file and its own Dockerfile, but you also have a Dockerfile for the whole application and your Docker Compose file; I'll get into those further down the slides.
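To make that concrete, here is a minimal sketch of what the account-service CRUD could look like. This is a hedged example assuming FastAPI, with an in-memory dict standing in for the account DB; all route and field names are illustrative, not taken from the talk's slides:

```python
# Minimal account-service CRUD sketch (illustrative names, in-memory "DB").
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Account(BaseModel):
    id: int
    name: str
    email: str

accounts = {}  # stand-in for the real account DB

@app.post("/accounts")
def create_account(account: Account):
    if account.id in accounts:
        raise HTTPException(status_code=409, detail="account already exists")
    accounts[account.id] = account
    return account

@app.get("/accounts/{account_id}")
def read_account(account_id: int):
    if account_id not in accounts:
        raise HTTPException(status_code=404, detail="account not found")
    return accounts[account_id]

@app.put("/accounts/{account_id}")
def update_account(account_id: int, account: Account):
    if account_id not in accounts:
        raise HTTPException(status_code=404, detail="account not found")
    accounts[account_id] = account
    return account

@app.delete("/accounts/{account_id}")
def delete_account(account_id: int):
    if accounts.pop(account_id, None) is None:
        raise HTTPException(status_code=404, detail="account not found")
    return {"deleted": account_id}
```

Each of the other services (inventory, shipping) would follow the same pattern in its own directory, and you'd run this one locally with something like uvicorn main:app --reload.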
So, where does cloud native come into this whole picture? Cloud native, first of all, is an approach to building and running applications and services that takes advantage of the features and capabilities of cloud computing platforms. The cloud native approach focuses on some principles we've mentioned already, like your microservices, and some other features: containers, which is basically packaging applications and their dependencies into lightweight, portable containers that can run consistently across different environments; automation, automating the deployment, scaling, and management of applications and services; observability, monitoring and collecting data from applications and services to gain insights into their behavior and performance; and resilience, designing applications to be highly available and to withstand failures.

But before we go into the details of containerizing an application, I'd just like to touch on, without going into full detail, the difference between VMs (virtual machines) and containerization, because VMs were the solution before containerization became a thing. A VM is basically a software implementation of a physical machine, which allows multiple operating systems to run on a single host. Each VM runs its own operating system, which provides an isolated environment for your application to run in. A container, meanwhile, is a lightweight, standalone executable package that contains everything an application needs to run, including your code, your runtime, your system tools, your libraries, and your settings. Containers use the host operating system's kernel, which makes them more lightweight and efficient than VMs. So in summary, a VM is a full-fledged virtualized environment, while your container is a lightweight isolated environment that just shares the host's operating system kernel. In this case, from our example microservice application, you can see that each of these services runs in its own separate container, indicated by the black rectangles.

Containerization has really become a popular approach for packaging and deploying applications in recent years, and it offers several benefits. Portability: containers can run on any system that supports containerization technology, which makes it very easy to move applications between different environments, such as from development to production. Isolation: containers provide a level of isolation for applications, meaning they're isolated from each other and from the host system, which reduces the risk of conflicts between different applications and their dependencies. Resource efficiency: containers are lightweight and share the host operating system's kernel, which makes them more resource-efficient than virtual machines. Scalability: they can easily be scaled up and down to meet changing demand. And version control: container images can be versioned and stored in a container registry, so you can easily roll back to a previous version.

With all these things about containers and the features they bring to the table, like the scaling of your containers up and down as demand changes, how do we go about implementing all of it and making sure everything works as intended? This, as you must have guessed, is where Kubernetes comes in. Kubernetes, often written K8s for short, is a container orchestration system for automating the deployment, scaling, and management of containerized applications. (I hope I'm not far beyond time; I think I started this slide too early, so my timing may be off.) It was originally developed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF). Kubernetes provides a platform-agnostic way to manage and orchestrate containers, allowing developers to focus on writing code instead of managing infrastructure.

It also provides some of the following features. Automatic bin packing: it automatically schedules containers to run on the most appropriate nodes available. Self-healing capabilities: Kubernetes can automatically detect and replace failed containers. Service discovery and load balancing: it has a built-in service discovery mechanism that allows containers to automatically discover and communicate with each other, and load balancing capabilities that automatically distribute traffic among multiple replicas of a container. It has automated rollout and rollback features that allow for automated rollouts of new versions of an application, which makes it easy to update applications without downtime. And it has secret and configuration management features that allow for the secure storage of your secrets, passwords, and encryption keys. There are also other popular choices for container orchestration, but Kubernetes is the most popular one and is widely used in production environments, both on-prem and in the cloud, and it can be used with other cloud native technologies like Docker and Prometheus. A few illustrative commands for these features follow below.
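As a hedged illustration of the rollout, rollback, and scaling features just described, here are a few standard kubectl commands; the deployment and image names are made up for the example:

```sh
# Rolling update: swap in a new image version without downtime.
kubectl set image deployment/user-service user-service=user-image:0.2
kubectl rollout status deployment/user-service

# Roll back to the previous version if something goes wrong.
kubectl rollout undo deployment/user-service

# Scale replicas up or down to meet demand.
kubectl scale deployment/user-service --replicas=5
```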
For me, at the time, with this whole knowledge from the workshop and from personal research, it was back to business as usual. Luckily, I started studying for my masters, which was in cybersecurity, and it was a two-year program, so we had the option, in I think the third semester, of either researching a topic or going for an internship. So I went for an internship, and I was lucky enough to get three offers, two for software engineering and one for a cloud native DevSecOps internship, and as you'd have guessed, I went for it.

It was really an opportunity for me to get hands-on experience with cloud native technologies like Kubernetes and Docker and to really delve into them big time, and I mean, it's been cloud native for me since then. And with the luxury of experienced colleagues during my internship, I was able to ask them about the best approach, the best route, the best way to go about learning these tools, and most of them were of the opinion that you should get comfortable with the Linux system first, then delve into containers, and then eventually Kubernetes.

So, yeah, I brushed up on my Linux abilities, because I've always been a Windows guy, and after that I delved head-first into containers. I was able to really understand containers: how to create a container, from writing the Dockerfile for your application, to creating a container image from that Dockerfile using the docker build command, to eventually running a Docker container with the docker run command. I also got to understand how to pull and push container images to container registries. Some of the resources I used to really get hands-on with containers and Linux were, I think mainly, an upskilling channel I found through Reddit, and then some YouTube resources. And after getting to understand Docker images, how to create my own and how to make use of images already available on container registries (this is just an example of a Dockerfile), it was about really understanding the specifications for building containers: how to make use of CIS benchmarks, or just really understanding your application, so you can write a Dockerfile that runs your application properly, the way you want it, or sometimes just following an internal specification, depending on the organization you're working for.
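Since the slide itself isn't reproduced here, the following is a hedged sketch of what a Dockerfile for one of the FastAPI services might look like; the base image tag, port, and module name are all illustrative:

```dockerfile
# Hedged sketch of a Dockerfile for the account service; names are illustrative.
FROM python:3.9-alpine

WORKDIR /app

# Install dependencies first so Docker can cache this layer between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the service code itself.
COPY . .

EXPOSE 8000

# Serve the FastAPI app with Uvicorn.
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```

You'd then build and run it with the commands mentioned above, something like docker build -t account-service:0.1 . and docker run -p 8000:8000 account-service:0.1, and push it to a registry with docker push once it's tagged appropriately.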
So I was really very comfortable with containers at that point, and then went on to really understand Kubernetes. This is just a basic overview of the architecture of Kubernetes, and some of the components you can see here.

The master node is the control plane node of a Kubernetes cluster. It's responsible for maintaining the desired state and ensuring that the actual state matches the desired state. You have your worker nodes, the machines that run your containerized applications; they communicate with the master node to receive instructions on what to run and how to run it. Each worker node runs a container runtime, such as Docker, and a kubelet, which communicates with the master node and ensures that the desired state of the cluster is always maintained.

Then you have the kube-apiserver, one of the key components of the Kubernetes control plane, which exposes RESTful API endpoints (I think you can see them there) that can be used to perform various operations on the cluster, including CRUD operations on resources such as pods, services, and deployments. It communicates with the etcd data store to retrieve and update the state of the cluster, and it communicates with the kubelets on each worker node to ensure that the actual state of the cluster matches the desired state. You also have the Kubernetes controller manager, a component that runs as part of the control plane. It's responsible for running various controllers, like your replication controller, endpoints controller, namespace controller, and service account and token controllers. These controllers are responsible for maintaining the desired state of the cluster, for example ensuring that the desired number of replica pods is running on the cluster.

You also have the Kubernetes scheduler, which also runs as part of the control plane. It's responsible for scheduling pods onto worker nodes in the cluster: it receives pod specifications from the kube-apiserver and assigns them to the appropriate worker nodes based on various factors, such as available resources, constraints, and affinity rules. You have the kubelet, a lightweight agent that runs on each worker node in the cluster and is responsible for ensuring that the desired state of the cluster is reflected on the node it runs on. Then you have kube-proxy, a component that runs on each worker node and is responsible for maintaining the network rules on the node and forwarding traffic to the correct pods. Then etcd, a distributed key-value store that Kubernetes uses as the backing store for all its cluster data; it stores the configuration data for the control plane and all objects in the cluster. You also have your container engine, the software responsible for managing the life cycle of containers, including starting, stopping, and managing their resources. There are so many other Kubernetes resources, including your pods, Deployments, StatefulSets, and DaemonSets, but these are just a few of them.

Then, in our case, we tried to build on the example REST APIs. Like I said, this is for the newbies, people trying to get accustomed to these technologies. In a situation where you're creating microservices like that and you want to test them, you can make use of Docker Compose, a tool for defining and running multi-container Docker applications. It allows you to define the services that make up your application in a single Docker Compose file and then start and manage them together. In our case (I don't know if you can see it), we have the user service, the shipping service, and the inventory service. They've each been built into a Docker image, the inventory image, user image, and shipping image, and we also have a network, to ensure that all the services run within the same network, and our database there.

So you can use a Docker Compose file to test your application, or you can write your own Kubernetes deployment configuration file and use it to deploy and scale your application. In that process, you write a deployment configuration that defines the desired state of the application and the resources it needs to run, and then, using the Kubernetes command line or the API, you create and manage your deployment. The configuration can include information such as the number of replicas you want of a container, the resource limits, and the environment variables your services need to run. Sketches of both options follow below.
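Here are hedged sketches of both options; the image names, ports, and values are illustrative stand-ins, not copied from the talk's slides. First, a docker-compose file wiring the three services and the database onto one shared network:

```yaml
version: "3.8"
services:
  user-service:
    image: user-image:latest
    ports: ["8001:8000"]
    networks: [app-net]
    depends_on: [db]
  inventory-service:
    image: inventory-image:latest
    ports: ["8002:8000"]
    networks: [app-net]
    depends_on: [db]
  shipping-service:
    image: shipping-image:latest
    ports: ["8003:8000"]
    networks: [app-net]
    depends_on: [db]
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example  # for local testing only; use secrets elsewhere
    networks: [app-net]
networks:
  app-net: {}
```

And a minimal Kubernetes Deployment for one of the services, showing the replica count, resource limits, and environment variables just described:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
        - name: user-service
          image: user-image:latest
          ports:
            - containerPort: 8000
          env:
            - name: DB_HOST  # illustrative environment variable
              value: db
          resources:
            limits:
              cpu: "500m"
              memory: 256Mi
```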
You could also use the Kubernetes StatefulSet resource. This is used to manage deployments just like your Deployment file, but in this case for stateful applications. A stateful application is an application that requires a stable host name and persistent storage, like your database, for example. Unlike a Deployment, which uses replication controllers and ReplicaSets to manage the scaling and availability of stateless pods, a StatefulSet gives each pod a unique, stable host name and guarantees that the pods are created and deleted in a specific order. This ensures that the pods maintain the same network identity throughout their life cycle, allowing them to maintain stable network connections, and a StatefulSet also provides a way to provision persistent storage for its pods. This is also an example of a StatefulSet configuration file; it's just like the Deployment file, but better suited to pods and containers that require persistent storage. Further down the configuration file, you can see where the configuration for a Service is specified, which, like I stated in our Docker Compose example, ensures that your containers and pods run within the same network.

With all this information so far, you're well on your way to being able to sit for the CKS certification. And then, like I said, this is basically my journey to cloud native and Kubernetes and how to secure it. So where does security come into all this? I'd like to point out that security shouldn't be an afterthought; it should be something you also think about in the process of designing and implementing your application.

For me, as a newbie, one of the very important places to start with regards to security is your requirements.txt file. This is where, I'll say, you keep track or a record of all the libraries, plugins, and third-party software you've used in the development of your application. In this case, using the FastAPI framework, we used the python:3.4-alpine image, FastAPI, and the Uvicorn server. From this information, you can easily go to the documentation, or research zero-day vulnerability announcements and security updates and patches.

Another thing you can do is scan the image using one of the open-source, freely available image scanners, like Aqua Trivy, for example, which is what I use regularly. After writing your own Dockerfile and creating a container image from it, you can easily scan your own container image, or scan any base image, like in this case where we scanned the Python image itself. And the good thing about Trivy is that it doesn't just give you a vulnerability report: for every vulnerability it finds in your container image, it also gives you the installed version of that library or plugin, or whatever it is, and then the fixed version of it.
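As a hedged example, scanning with the standard Trivy CLI looks something like this; the image names follow the illustrative examples above:

```sh
# Scan a public base image for known CVEs.
trivy image python:3.9-alpine

# Scan your own freshly built service image.
trivy image account-service:0.1

# Only report the most serious findings.
trivy image --severity HIGH,CRITICAL account-service:0.1
```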
From that information, all you have to do is go to the documentation of that library or plugin, look up the fixed version, download it, and implement it in your application. But just to put it out there: when you scan your container image and you don't get a report, it doesn't necessarily mean that your app or product is completely secure. It just means that, at the time of scanning, no vulnerability or security issue had been reported, because the way Trivy works is that it scans your container image, checks all the libraries, plugins, and third-party software you're using, and then goes through the CVE database to see if anything has been reported on those. If anything has been reported, it throws it back at you in the results and tells you whether it's a critical vulnerability, a medium one, or a low one; but whichever it is, it's always good to implement the patch immediately.

Then some additional tips for security. Always use security monitoring and logging tools so you can quickly detect and respond to security threats. Always make it a duty to keep your software and systems up to date with the latest security patches and updates; this will ensure that your cloud resources are protected against known vulnerabilities. Use a cloud native security solution such as a service mesh, which can provide security features like service-to-service authentication and encryption. Make use of role-based access control to limit access to the cluster to only authorized users and roles. Use Kubernetes security contexts to limit the capabilities of the pods and containers running in the cluster; this will help protect the cluster and its resources from malicious or misconfigured containers. Make use of Kubernetes network policies to control traffic between your pods, and make use of Kubernetes security policies to control pod and container security settings. Always use Secrets and ConfigMaps to securely store and manage sensitive data like passwords and encryption keys. Use Kubernetes audit logging to track and monitor activity within your cluster. (Sorry, I know this is a lot of "use, use, use".) Use third-party security solutions, such as Kubernetes network and pod security solutions, to provide additional security features and protections; these can be helpful in providing extra security layers to detect and prevent vulnerabilities. Like I said earlier, always update and patch your Kubernetes cluster. Seek help from experienced professionals, or consult online resources, especially the official documentation of whatever third-party security software, libraries, or plugins you're using. And always be vigilant about new security threats and best practices, to keep your cloud environments safe.

One other tip I would like to add there is to always lint your code, especially your Dockerfile, to make sure you're following industry standards. In the case of a Dockerfile, for example, you can make use of Hadolint. You don't really need to install it: you can just go to their website or documentation page, copy your Dockerfile, paste it there, and it gives you corrections to make, from how to specify your base images to how to structure your commands. That helps a lot; a quick sketch of using it follows below.
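Besides the website just mentioned, Hadolint also runs locally; a hedged sketch of the two common ways to invoke it:

```sh
# Lint a Dockerfile with a locally installed Hadolint binary.
hadolint Dockerfile

# Or run it via its official Docker image without installing anything.
docker run --rm -i hadolint/hadolint < Dockerfile
```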
And then, I guess that's it. But feel free, if you have any questions or anything you'd like to know, or want the resources I used, like the KodeKloud courses and Mumshad's tutorials, to reach out; I don't mind sharing those. Thank you.