are doing things with the cloud; everything is in the cloud, everything is processed inside the cloud. So, let's see what types of clouds we have. We have three types: the public cloud, the private cloud, and the hybrid cloud.

So, what is a private cloud? Let's say you are a government organization: you want higher security, and you don't want public access to your network, so you'll go with a private cloud. For example, an AWS VPC (Virtual Private Cloud), or VMware on premises. That's called a private cloud. Then we have public clouds. Let's say you are a food vendor and you want maximum engagement from the world; you'll go with the public cloud, but it's not as secure as a private cloud. So, I ask you this: is it too much to ask for both? Can we have the privacy and security of a private cloud and the engagement of a public cloud? Let's discuss the hybrid cloud platform from Red Hat, which is OpenShift 4.

So, what is OpenShift 4? OpenShift is the enterprise Kubernetes offering from Red Hat. At the base, we have Red Hat Enterprise Linux and RHEL CoreOS. On top of that, we have Kubernetes. Then we have the automated operations and cluster services, application services, as well as developer services.

Now, let's see the architecture. This time, in OpenShift 4, we are using an immutable operating system, RHEL CoreOS. What immutable means and why we are using it, we'll see in a moment. Then we have our worker nodes, which also run RHEL CoreOS, though you can use RHEL machines as well. Load balancers, routing, logging, and monitoring are all there. Now, the best part about OpenShift 4 is the easy installation method. All you need to do is run the command you can see up top there: ./openshift-install create cluster. We have used Terraform in this version of OpenShift.
Terraform, just like Kubernetes, is an orchestration tool. We also discussed yesterday what orchestration is: it means that your entire infrastructure is taken care of for you. So, you just need to run that simple command, and Terraform will not only provision the nodes on AWS but also install OpenShift on them.

Now, let's talk a little bit about CoreOS. What is CoreOS? You may be using Windows, or Linux distributions like Ubuntu, CentOS, or Red Hat Enterprise Linux. Similarly, there is one more operating system, CoreOS. CoreOS is specifically designed for use with containers. So, if you are in a containerized environment and you want optimal utilization of your resources to deploy your applications, you can go with CoreOS. We'll see that in a moment on my terminal.

Now, when I say immutable, what does that mean? Immutable means that you don't get to be root on your system. This is a good thing, because many times people mistakenly delete /etc or /var or the entire / filesystem, or they run commands which they are not supposed to run. On CoreOS, you don't get to run those commands. Because you are not root, a human error won't let you delete something by mistake. Apart from that, we are using CRI-O and Podman as the container runtimes, because Docker had a couple of vulnerabilities earlier. With Podman, you are able to run your containers rootless, which was not possible earlier.

Now, let's talk a little bit about some of the features of OpenShift 4. One of the top features we are introducing this time is the Istio service mesh. Now, what is a service mesh? Let's take the example of Grab. How many of you use Grab here? Most of us know about Grab or Uber. When you open the application on your phone, there is a container running in the background which takes care of the part that uses the GPS.
Then, when you click the button to book a ride, there is another container running inside a pod. So, there are multiple microservices running, and every microservice takes care of a different part of the application. When these microservices talk to each other, the communication becomes really complex, and previously it needed human intervention to take care of it. But with Istio, we have introduced the concept of a sidecar: alongside the container where your microservice is running, you start another container which acts as a proxy.

We have three control-plane components here: Pilot, Galley, and Citadel. Citadel takes care of security, Galley handles the configuration data, and Pilot pushes that configuration out to the proxies. With the help of the Mixer component, your entire microservice communication, which was previously pretty complex to handle, is taken care of by Istio alone.

Now, let's talk about another highly requested feature in OpenShift 4: operators. When I talk about automation, anything which is not automated is slowing you down, right? When we speak of pods and microservices, a lot of human intervention is often still required. Let's say, for example, you need to install a new database like CouchDB, but you don't know anything about it: you don't know where the configuration files are, you don't know how to upgrade it. What you can do is simply install an operator. An operator is an application-specific controller that extends the Kubernetes API to create, configure, and manage instances of a complex, stateful application on behalf of the Kubernetes user. So, things like upgrades, updates, and configuration management are taken care of by the operator, and you don't need to do that yourself.
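The control loop behind an operator can be pictured in a few lines. This is only a toy sketch, not the Operator SDK API: a real operator watches custom resources through the Kubernetes API, but the reconcile pattern, comparing desired state against observed state and acting on the difference, is the same idea.

```python
# Toy sketch of the operator reconcile loop. Real operators watch Kubernetes
# custom resources; here, desired and observed state are plain dicts.

def reconcile(desired, observed):
    """Return the actions needed to move `observed` toward `desired`,
    mutating `observed` as a stand-in for acting on the cluster."""
    actions = []
    for key, want in desired.items():
        have = observed.get(key)
        if have != want:
            actions.append(f"set {key}: {have!r} -> {want!r}")
            observed[key] = want
    return actions

desired = {"version": "2.3.1", "replicas": 3}
observed = {"version": "2.2.0", "replicas": 3}
print(reconcile(desired, observed))   # one action: upgrade the version
print(reconcile(desired, observed))   # steady state: nothing left to do
```

Run repeatedly, the loop converges: once observed state matches desired state, reconcile returns no actions, which is exactly why an operator can handle upgrades and configuration drift without human intervention.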
Now that OpenShift 4 is launched, we are continuously upgrading the product, and with OpenShift 4.2 we are introducing disconnected, air-gapped installation, which was again requested by our customers. We have introduced the Special Resource Operator for GPUs, and for our developers we have introduced CodeReady Containers. Service mesh will be fully supported.

When it comes to installation, you can use all these methods. If you want full-stack automation, or if you already have pre-existing hardware, you can install OpenShift 4 on your bare-metal nodes, or you can go with the hosted OpenShift offerings like Azure Red Hat OpenShift and OpenShift Dedicated. We have partnered with some of the best and biggest cloud vendors out there, because I'm talking about hybrid cloud: a hybrid cloud has to be installed on top of another cloud or on top of bare-metal infrastructure. So, if you have AWS, Google Cloud, bare metal, or VMware, you can simply go ahead and start up your cluster with a single command.

So, let's take a look at the improved console and the operators. Okay, so this is the latest console, and it is running right now; I just took access to it. This is our OperatorHub, which I previously mentioned. Let's see if we can install an operator here. It's taking a little bit of time. This is the 4.1 version, and 4.2 will be out, I think, in the next quarter or this one. I also have my terminal here. So, this is the terminal for CoreOS. Let's see the version. As you can see, this is Red Hat Enterprise Linux CoreOS, release 4.1. All my nodes right now are on CoreOS. If I run oc get nodes, you can see I have one master and two worker nodes, and all these worker nodes are running CoreOS. Let's go back to our web console.
So, these are the operators which come in the operator catalog, and if you want to create your own operator, you can do that with the Operator SDK. Let's take a look at the Couchbase operator. All you need to do is click on install, and your operator will be installed here; it's that easy. Just click on subscribe. This is just a sample which I'm showing. Okay, so it's launched, and this is the YAML configuration for your operator. If you need to make some changes, you can do it from here and then save it, and it will simply start running.

All right, I'll hand the mic over to my peer, Ruthvik, and he'll speak on topics like Quay and monitoring. Thank you.

Thanks, Ushil. Hello, everyone. Good morning. Hope you're having a good time here. So, we talked a lot about hybrid cloud, private cloud, public cloud, and microservices. But in the end, all these things end up as containers, right? So, you need a container registry which will store your container images in a secure way. Your registry needs to be robust, and you need 24/7 enterprise-level support in case you face any issue. So, how many of you are aware of Docker registries, public or private? Have you seen Docker Hub? Docker Hub is a public registry, right? With Quay v3, we are providing a private registry which is secure and robust, as I mentioned.

This is the high-level architecture of Quay v3. Let's see it as a flow. When a developer or admin pushes an image into your private registry, it gets stored, and while storing, a Postgres database is used to store its metadata, such as image tags and image revisions. Then comes the Clair service. Clair is responsible for scanning your stored containers in a secure way: it scans each and every layer of your container image and tells us if there are any vulnerabilities present inside it.
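That layer-by-layer scan can be pictured as matching each layer's installed packages against a vulnerability database. Here is a minimal sketch of the idea; the database contents, package versions, and image layout below are purely illustrative, not Clair's actual data model.

```python
# Toy sketch of per-layer image scanning, in the spirit of Clair.
# The vulnerability database and the image layers are invented examples.

VULN_DB = {
    ("openssl", "1.0.1"): ["CVE-2014-0160"],   # illustrative entry
    ("bash", "4.2"): ["CVE-2014-6271"],        # illustrative entry
}

def scan_image(layers):
    """Check every package in every layer against the database and
    return (layer index, package, version, CVE) for each match."""
    findings = []
    for i, packages in enumerate(layers):
        for name, version in packages:
            for cve in VULN_DB.get((name, version), []):
                findings.append((i, name, version, cve))
    return findings

image = [
    [("openssl", "1.0.1"), ("zlib", "1.2.11")],  # layer 0
    [("bash", "4.2")],                            # layer 1
]
for layer, name, version, cve in scan_image(image):
    print(f"layer {layer}: {name}-{version} affected by {cve}")
```

Scanning per layer matters because layers are shared between images: one scan of a common base layer covers every image built on top of it.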
Clair also has its own vulnerability database, which keeps updating itself, so we do not have to worry about new vulnerabilities out there; it will automatically detect them. Then there is Redis, an in-memory key-value store, which backs things like your live build logs as well as runtime logs.

These are the features coming in the recent version of Quay v3. It supports multi-architecture manifests, covering things like IoT-based images, Windows containers, and ARM-based containers. Then we are introducing new repository mirroring functionality. What does that mean? Let's say you have an external repository configured somewhere, not on your premises, and you need to set up another instance of your registry on premises. You do not need to go back and configure everything again; you can simply mirror your existing external repository.

Then, as we have discussed the importance of operators, which help us with day-two tasks such as patch management, upgrades, and maintenance: we are introducing the Quay Setup Operator, which will take care of your whole registry's operations, and the Crunchy Data operator, which will manage your Postgres database. That puts you on the safer side: users who are not supposed to work on the Postgres database cannot access or touch its configuration, because it's being managed by the operator.

Then, with Quay v3 you can configure a time machine. What does that mean? Let's say you or your developer accidentally removes an image or one of its tags; you can still retrieve it using the time machine. You can also achieve high availability, to avoid a single point of failure. Then, metrics: when we deploy registries at large scale, it's important to understand their resource utilization and have better judgment about how many resources they will require in future.
For that, Quay supports metrics which are consumable by monitoring systems like Prometheus. Then, it can scan containers for security, as we discussed, using the Clair service. Then, you can use robot accounts: with robot accounts you can provide granular access to your end users as well as your developers. It's simply an access-control mechanism.

Then, let's talk about CI/CD. How many of you are working with CI/CD? Have you heard of Jenkins? Traditionally, there was a Jenkins server which we used to run build jobs in a streamlined way: you could deploy a Jenkins server on your premises and have Jenkins agents on the other side to take care of your build operations. You might want to build and deploy your application across multiple stages such as build, UAT, and production, and you need the Jenkins server to automate all those tasks. But there was a challenge with traditional Jenkins: it consumed resources which were not required at some points, and it came into the picture when there were no cloud-native workloads. So, for cloud-native workloads and containerized systems, we are introducing a new CI/CD pipeline technology: Tekton.

Tekton does not require a traditional CI server. It just needs a Kubernetes controller, known as the pipeline controller; it doesn't require any additional resources or infrastructure. It gives you a standard CI/CD pipeline definition. It can build images with Kubernetes-native tools such as S2I, Buildah, and Buildpacks. It can run on hybrid or multi-cloud, it can easily extend and integrate with existing tools, and we can even scale our pipelines on demand. Those are the features, and this is the high-level architecture, or flow, of the pipelines. In a Task, we define what the containers need to do, for example, mounting a volume on a host.
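The relationship between these building blocks can be sketched as plain data: Tasks wrap container steps, a Pipeline orders the Tasks, and a PipelineRun executes the whole thing. In real Tekton these are Kubernetes custom resources written in YAML; this Python toy, with invented names throughout, only mirrors that structure.

```python
# Toy model of Tekton's building blocks: Task -> Pipeline -> PipelineRun.
# Real Tekton objects are Kubernetes custom resources, not Python classes.

class Task:
    def __init__(self, name, step):
        self.name = name
        self.step = step          # callable standing in for a container step

class Pipeline:
    def __init__(self, name, tasks):
        self.name = name
        self.tasks = tasks        # ordered list of Tasks

class PipelineRun:
    """Executes a Pipeline's Tasks in order, threading a workspace through."""
    def __init__(self, pipeline):
        self.pipeline = pipeline

    def execute(self, workspace):
        log = []
        for task in self.pipeline.tasks:
            workspace = task.step(workspace)
            log.append(f"{task.name}: ok")
        return workspace, log

# A made-up three-stage pipeline: fetch source, build an image, push it.
clone = Task("fetch-source", lambda ws: ws + ["source"])
build = Task("build-image", lambda ws: ws + ["image"])
push  = Task("push-image",  lambda ws: ws + ["pushed"])

run = PipelineRun(Pipeline("build-and-push", [clone, build, push]))
artifacts, log = run.execute([])
print(artifacts)   # ['source', 'image', 'pushed']
```

Because every step is just a container run by a Kubernetes controller, the same pipeline definition runs anywhere the cluster runs, which is what makes Tekton fit the hybrid-cloud story.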
Then, a Pipeline reads and orders those Tasks, and a PipelineRun executes the Pipeline. In your PipelineResources, you provide the input, such as GitHub source code, and it produces the output, such as a Docker image. So, those are the main points about Tekton.

Then, let's see what monitoring is. How many of you are aware of Prometheus? Are you using Prometheus in production? Yeah, that's nice. When we think of hybrid cloud or any private cloud, monitoring is important, right? We need a solution which not only reads the metrics but understands them and scales your workloads accordingly. Let's say you plan to go live with a service on the weekend which is expecting huge traffic, and you are not sure how many resources it's going to consume. In that case, you need an end-to-end monitoring system which will take care of your resources as well as your containerized workloads, and there Prometheus has shown significant results.

Prometheus was originally designed at SoundCloud in 2012. Later, it was adopted by the CNCF community and made open source, and since then it has become very robust; many people have started using Prometheus for their production monitoring. The advantage is that it gives you end-to-end monitoring: it stores the metrics, analyzes them, and raises alerts based on your rule configuration. You can, in fact, add Grafana dashboards to perform more analytics and get a better view of your clusters.

So, let's see the features. It gives you a multi-dimensional data model, with time-series data identified by metric name and key-value pairs. It has PromQL, a flexible and very powerful query language to leverage that dimensionality. There is no reliance on distributed storage; single server nodes are autonomous. Time-series collection happens via a pull model over HTTP, which is a lightweight protocol.
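The pull model can be sketched as a scraper that fetches metrics from each target and appends samples to labeled time series. In the toy below, plain functions stand in for HTTP /metrics endpoints, the metric names are invented, and the parsing is a simplification of Prometheus's real exposition format.

```python
import time

# Toy sketch of Prometheus's pull model: scrape every target, parse each
# exposition line, and append a timestamped sample to a labeled series.
# Targets here are functions standing in for HTTP /metrics endpoints.

def app_metrics():
    return 'http_requests_total{path="/api"} 1027\n'

def node_metrics():
    return 'node_cpu_seconds_total{mode="idle"} 52340.5\n'

storage = {}   # "name{labels}" -> list of (timestamp, value) samples

def scrape(targets):
    now = time.time()
    for target in targets:
        for line in target().strip().splitlines():
            # Simplified parsing: "metric{labels} value" on one line.
            series, value = line.rsplit(" ", 1)
            storage.setdefault(series, []).append((now, float(value)))

scrape([app_metrics, node_metrics])
for series, samples in storage.items():
    print(series, "->", samples[-1][1])
```

Because the server initiates each scrape, targets stay simple (they only expose a text page), and the metric name plus its key-value labels give exactly the multi-dimensional model that PromQL queries over.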
Using that, there are no delays while pulling your metrics from huge clusters. Targets are discovered via service discovery or static configuration. If you're using Kubernetes, you might be aware of what service discovery is: based on your labels, Prometheus detects the endpoints and the targets it is going to monitor.

Then, this is the high-level architecture which we use in OpenShift; it's an optimized architecture, I would say. As we have seen the importance of operators, we have introduced the Prometheus Operator, which takes care of your main Prometheus server as well as your Alertmanager, and there is a Cluster Monitoring Operator on top of all these components, which ensures that each and every component of your monitoring system is up, ready, and up to date. Then we have Grafana, for advanced visualization of your cluster, and the node exporter, which pulls metrics from each and every node present in your cluster. Then kube-state-metrics, which is responsible for converting Kubernetes object metrics into Prometheus-consumable metrics.

Then, the Prometheus Adapter. This is one of the advantages we are going to introduce in 4.2: using the Prometheus Adapter, you can scale workloads not only based on CPU and memory utilization, but also based on custom metrics. What I mean by custom-metric auto-scaling is, let's say I want to auto-scale my workloads based on the number of API requests or HTTP requests they are getting; I would be able to auto-scale based on such custom metrics as well, which was not possible before. Yeah, that's the overview.

So, let's have a quick demo; I will show you how it looks. I'm just going to click on Monitoring, then Dashboard, and I'll get the Grafana dashboard. Let's say I want to monitor my etcd cluster and the read-write operations it's doing at this moment.
So, I will choose etcd monitoring. I can see there is only one etcd server, then the RPC rate, the disk sync duration, the memory consumed by it, the client traffic, and so on. Yeah, so that's about the monitoring systems.

Now, I would like to show you how to get started. If you are a newbie and you are interested in OpenShift, there is one cool website called learn.openshift.com. It is free; you just need credentials to log in, and you can find playgrounds like this. Let's say I want to learn about operators: I'll just click on Start the course, and I'll get all these playgrounds here. I can even learn the fundamentals and multiple operators, like the etcd and Ansible operators. And if you want to try OpenShift 4 on your premises or on any of the cloud providers we mentioned earlier, you can go to try.openshift.com and just click Get Started; it will ask you for credentials, or maybe not. So, yes, you can see you can install OpenShift on AWS, bare metal, Azure, VMware, Google Cloud, even on OpenStack and your own laptop. Yeah, that's it from our side. Thank you. Any more questions?

Yeah, so let me just add something to your comment. You are correct when you say that OpenShift is a cloud, but unlike Azure or AWS, which are IaaS, infrastructure as a service, OpenShift is a PaaS, platform as a service. So, on OpenShift, what you can do is take your Azure instances and install OpenShift on top of Azure, or on top of AWS, or on top of VMware. Then you'll have a cloud on top of another cloud, and that's why it's called a hybrid cloud.

Okay, yeah, so you mean to say that if you are in Thailand and your users are in Australia, you'll have to install in a location which is accessible from both countries, where you get the minimum ping and the least latency, right? Yeah, I understand. So, yes, you can do that.
See, if you're using AWS or Azure, you can install your clusters in the specific regions and availability zones that you require, and AWS and Azure both provide very good security. That is the reason why hybrid clouds are picking up so much today: you are using a private offering from AWS, and at the same time you have your public engagement, so you keep that security. But if you ask whether hybrid cloud is only for government organizations handling really sensitive data: no, for that you have other options. Hybrid cloud has its own market and its own specific set of applications which you can deploy. So, I hope that answers your question. Thank you. Does anyone else have any questions about how to get started, or any confusion about anything? Okay then, thank you very much for attending our session. Thank you.