So, good evening everyone. We are interns in the Fundamental Research group, namely Suraj Goyal, Saurav Vijay, and myself, Anzumbano. We are here to present a very interesting project we have been working on for the past seven weeks: a micro-cloud setup using OpenShift. OpenShift is basically a Platform as a Service, which provides an easier way to develop, create, and deploy containerized applications.

So, the next question is: why are we doing this project? What is its significance? Deployment of apps is a very basic requirement in the IT industry; every now and then we need to deploy some apps. The existing servers make this process complicated, time-consuming, and effort-consuming. So, why not make this process a single click away? You just click and the app gets deployed. That is basically what we are trying to implement using OpenShift. It also provides additional features like auto-scalability and portability. We can deploy by two methods: using the command-line interface or using the OpenShift console, which we will show in the later part of the presentation.

So, good evening all. I am here to discuss why we have used OpenShift and how it has made things easier for us. The objective of our project is basically to set up a micro-cloud architecture with commodity storage and server nodes using OpenShift. As we mentioned earlier, OpenShift is an open-source hybrid cloud application platform, which is actually a Platform as a Service. OpenShift is used here to provide the infrastructure for orchestrating the Dockerized images of various applications. So, what were we lacking earlier, and what features is OpenShift providing? Firstly, by using OpenShift, we can give all the access to one person, the developer.
He can create the application, modify it, and deploy it according to user demand at any time, with just a single command. By using OpenShift, we can make applications more portable and more scalable. An application can be made portable with OpenShift using the Deltacloud API, and it can be scaled as well: if there is higher user traffic on some day, we can scale up our application, and if the user traffic is relatively lower, we can scale it down accordingly. Moreover, OpenShift provides security and other extra features by containerizing code and data in separate containers, and it speeds up the deployment of Dockerized application images.

In our project, we have mainly used these technologies. First is shell scripting, which we use to work with the OpenShift CLI. Second is Git, which has been used to fetch our application source code from a remote repository. Third is Ansible: to configure OpenShift, we need Ansible scripts. Docker and Kubernetes have already been discussed by the previous project team.

Good evening, everyone. Basically, the Open edX platform is an open-source online MOOC platform. These are the few major components of the Open edX platform; as they were discussed by the earlier project, I am skipping them. These were the containers that were deployed during the DevStack installation of Open edX; the DevStack installation is the Docker-based installation of Open edX. No, sir, this was on the local system; these containers were deployed on the local machine.

Next we come to our OpenShift cluster. OpenShift basically uses the Kubernetes master-node architecture, and on top of that architecture it provides various features like the OpenShift registry and the OpenShift routing layer. In our project, we set up a cluster on four machines: a master node, an infra node, and two other nodes.
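As a rough sketch of what "deploying with a single command" looks like through the OpenShift CLI mentioned above (the cluster URL, project name, and Git repository here are illustrative placeholders, not the actual project's values):

```shell
# Log in to the OpenShift cluster (URL and user are placeholders)
oc login https://master.example.com:8443 -u developer

# Create a project to hold the application
oc new-project demo-app

# Deploy an application straight from a Git repository in one command;
# OpenShift builds the image and creates the deployment and service
oc new-app https://github.com/sclorg/ruby-ex.git --name=ruby-ex

# Expose the app to the outside world through the routing layer
oc expose svc/ruby-ex

# Scale up when user traffic is high, and back down when it is low
oc scale dc/ruby-ex --replicas=4
oc scale dc/ruby-ex --replicas=2
```

These commands require a running cluster, so they are shown only as a sketch of the CLI workflow the presentation describes.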
These are the configurations of the nodes in our cluster, and this is the architecture of our deployed cluster. We had one master node and one infra node, which contains the OpenShift registry, the OpenShift router, and persistent storage, and we had two other worker nodes. The applications were exposed to the outside world through the routing layer.

After setting up the nodes and installing OpenShift, we moved on to our next step, which was the Open edX installation. For this, we converted the Docker Compose file of Open edX using Kompose into various YAML files for the LMS service, browsers like Chrome and Firefox, and dependencies like MySQL and MongoDB. Then we tried to deploy these services. For this, we needed persistent storage, which we created using an NFS server. We needed persistent storage because we did not want to lose all our data once the app finished its work; we wanted the data to be persistent. Then we had various images, which we pulled from Docker Hub and pushed into our OpenShift registry, and using deployment configs we deployed these apps.

So, this is how we deployed Open edX, but what we are still lacking here is the provisioning. We tried various methods of provisioning. We also tried our hands on Arnold, which is a tool provided by OpenFUN. Another option is manual provisioning, but that is not a very good one, so we are still working on it. Later, we got many more errors; handshake errors were very common. We also had a system problem: we could not do everything on our own systems because of the hardware requirements. This is how we resolved them; this is the error, "unable to perform provisioning step." As a result, we were able to deploy various services, which we will show in the demo now. We deployed Ruby and PHP apps successfully; for Open edX, only the provisioning step is lacking.
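The conversion and storage steps described above can be sketched roughly as follows (the NFS server address, export path, volume size, registry address, and image tags are assumptions for illustration only):

```shell
# Convert the Open edX docker-compose file into Kubernetes/OpenShift
# YAML files, one per service (lms, mysql, mongo, ...)
kompose convert -f docker-compose.yml

# Define an NFS-backed PersistentVolume so data survives pod restarts
# (the NFS server IP and export path are placeholders)
cat <<'EOF' > mysql-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.10
    path: /exports/mysql
EOF
oc create -f mysql-pv.yml

# Pull images from Docker Hub, retag them, and push them into the
# OpenShift registry (registry address is a placeholder)
docker pull mysql:5.6
docker tag mysql:5.6 registry.example.com:5000/openedx/mysql:5.6
docker push registry.example.com:5000/openedx/mysql:5.6
```

The generated YAML files can then be applied with `oc create -f`, after which deployment configs bring the services up against the persistent volumes.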
And then our future goal is to cover that provisioning step and also do the deployment using Arnold. Thanks.

So, the major remaining step is the provisioning step: the provisioning of the Open edX platform, the last step in DevStack. The provisioning step is done by going into each container, and in OpenShift, each container runs inside a pod. [Q: But the Kubernetes they used, the same Kubernetes is present here?] Yes, sir, it is not different; it is the same version, so we are facing the same problem. [Q: They have been able to deploy it?] Yes, sir, we can deploy it too; provisioning is a problem here as well. Basically, they are also facing the provisioning problem. They have done it by going into each container manually. Network configurations are required, and the networking between containers that are inside pods is very complicated. [Q: Complicated means what?] The architecture does not clearly explain the networking between the containers. [Q: So you have to define a network? What kind of network, what subnet, what masking you want, all those things should be there, right?] Yes. [Q: So, that's an SDN.] Yes, sir.

Yeah, this is the Ruby app that we deployed. Currently, there are three pods, and we can scale up with just a single click: now four pods are deployed and the application is running successfully. We have deployed the LMS, but we were not able to do the provisioning step. This web console is running on the master node. These are some of the services and deployments: devpi, LMS, Mongo, and MySQL. Yes, ma'am, we can show you deployments, running services, status, memory, storage, and persistent volume claims, but we were not able to provision. [Q: So, these are all the PVCs?] Yes. [Q: How much time does it take to do all the remaining work?]
[Q: You said 10% of the work remains, or is it 30%? 10 to 30%?] Yes, ma'am, about 10 to 20% of the work remains. Thank you all.
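For reference, the manual provisioning approach discussed in the Q&A, going into each container by hand, corresponds roughly to commands like the following (the pod name and the exact management command are illustrative assumptions):

```shell
# List the pods backing the Open edX services
oc get pods

# Open a shell inside a running pod to carry out provisioning
# steps by hand -- the pod name here is a placeholder
oc rsh lms-1-abcde

# Alternatively, run a single provisioning command non-interactively,
# e.g. applying Django database migrations inside the LMS pod
oc exec lms-1-abcde -- python manage.py lms migrate
```

This is what "going to each container manually" amounts to in OpenShift terms; it has to be repeated per pod, which is why an automated provisioning step (or a tool like Arnold) is the preferred goal.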