Good evening everyone. Our project is improving SBHS's remote triggered virtual lab server and website. In my team I have Devkanth, Deep, Rupesh, Ram, who unfortunately could not be here with us, and I'm Shreya. These are the objectives we'll be covering through the course of our presentation: admin features, health monitoring, bug fixing, proposing a new architecture with load sharing and scalability, testing both architectures, and a few other features.

Essentially, the Single Board Heater System (SBHS) virtual lab is a laboratory used in instrumentation and control courses. The three essential features of the SBHS virtual lab are: first, the equipment itself is incredibly small and portable, which makes it handy to set up; second, it is open source, which is something the entire program is trying to promote; and third, most importantly, it is remote triggered. Remote triggered means it overcomes the hindrances posed by the two alternatives: (a) a physical laboratory, where I actually have to be present to conduct the experiments, and (b) a pure simulation, which can at best only model the real world, not represent it completely. In a remote triggered virtual laboratory, the experiment is conducted on real equipment, and only the input and output are streamed to the user through the web browser.

We'll now present a brief demonstration of how a user connects to an experiment on our virtual labs. First, he logs in through his already created account. Once logged in, he has already been allotted an MID (machine ID), so he can book a slot on that particular board. After booking a slot, he goes into the directory containing the user files with the code needed to run the client. This is the client interface, where he logs in again, then goes to the iLab directory and executes the file that lets him conduct the experiment.
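(Editor's sketch, to fix ideas: the client flow just described, log in, use the allotted MID, book a slot, then drive the experiment, might look roughly like the following. The function names, endpoints, and the `transport` object are all hypothetical, not the actual vlabs client code.)

```python
# Hypothetical sketch of the client flow; endpoints and names are
# illustrative only, not the real SBHS client API.

def login(transport, username, password):
    """Authenticate and return a session token."""
    return transport.post("/login", {"username": username, "password": password})

def book_slot(transport, token, mid, hour):
    """Book a one-hour slot on the board identified by `mid`."""
    return transport.post("/book", {"token": token, "mid": mid, "hour": hour})

def run_step(transport, token, mid, heater, fan):
    """Send one heater/fan setting and read back the temperature."""
    return transport.post("/experiment",
                          {"token": token, "mid": mid,
                           "heater": heater, "fan": fan})
```

In the demo itself, the user simply executes the provided iLab file; the sketch above only illustrates the steps that file performs on the user's behalf.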
On executing that file, the user can actually see the video. He sees the board he is currently allotted, so he can verify that the values being returned to him are actually live. He can also download the user logs generated during his experiment; these are the logs generated whenever a user conducts an experiment.

So, what we have incorporated in this project are some admin features which were not previously present. The first is a feature for making a board temporarily offline. Why was this needed? Suppose some users have been allocated a board, but its sensor is not working; obviously we must take it offline. Temporarily offline means the board is as good as offline to a user, but it is still online for the admin, so he can run tests on it while it is shown as offline to the other users. He can toggle the state back when he is done testing.

The second is testing boards. For example, if an experiment is ongoing right now, the admin can actually monitor it: the iteration, heater, fan, and temperature values, in real time. For boards with no experiment ongoing, he can check the state of the board, get its current temperature, set parameters, or reset it. And in case the admin wants to check the state of all the boards at once, he can click on "show all images": live webcam feeds of all the boards currently online in the lab, auto-refreshed every minute.

The next feature we have is update MID. Suppose a user has been allocated a board which turns out to be faulty for some reason; the user still has to conduct the experiment.
So in that case, the admin can change the currently allocated MID to the machine currently bearing the least load. In the view below, we can check which machine has what load, and the admin can allocate the least loaded machine to the user. The next one is fetching logs. This is just a feature to make it easier for the admin to fetch the log files of previously conducted experiments within a stipulated time period. Here we just gave today's date, so there are no results.

Okay, good evening all. The next feature is health monitoring, which means we are trying to monitor the condition of the boards being used. Previously, the health monitoring script only checked connectivity, that is, whether the board was connected or not, and nothing about the status of the board. We have now modified it: it not only checks connectivity but also tries to analyze whether the sensor, the fan, and the heater are working. We send specific heater and fan values over a period of time and collect the temperature readings from the sensor. Analyzing those values, we expect to see an inverted-V shape. If we do not get such a curve, there is some error: the board is marked temporarily offline, and a mail is sent to the admin saying that these MIDs have these probable problems.

Okay, the next feature is one we incorporated to automate server maintenance and server setup as much as possible. When webcams are mounted onto a system, they are allocated a /dev/video path, which actually changes on reboot. So what we did was extract some information about each device, for example the kernel ID or the product ID; these values stay the same in the Linux kernel even after a reboot.
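(Editor's sketch: persistent device naming of this kind is typically done with a udev rule keyed on such stable attributes; the vendor/product IDs and the symlink name below are made-up illustrative values, not the project's actual rule.)

```
# /etc/udev/rules.d/99-vlab-webcams.rules  (illustrative values)
# Match a specific webcam by its stable USB vendor/product IDs and
# always expose it at /dev/webcam0, whichever /dev/videoN it gets.
SUBSYSTEM=="video4linux", ATTRS{idVendor}=="046d", ATTRS{idProduct}=="0825", SYMLINK+="webcam0"
```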
So we wrote a script which extracts those attributes and forces the devices to mount at specific /dev paths, so that even after a reboot we don't need to manually set up the boards again.

One more problem we encountered: the new architecture we designed is a master-slave architecture involving Raspberry Pis, and Raspberry Pis do not have an RTC, a real-time clock. So syncing time with the master server was an issue; there is now an automated script which runs every hour to sync the time of all the Raspberry Pis with that of the master server. And to set up the server code on a Raspberry Pi, all you have to do is wget a script and run it; it will set up all the code dependencies and the Pi will be ready for production.

The existing architecture basically had a single server connected to a bunch of SBHS devices through USB hubs. There were two problems in that scenario: (a) the USB hubs were unreliable, and (b) if some error happened on the master server, the entire system collapsed. So we came up with a new architecture with a master-slave configuration: one master connected to a bunch of Raspberry Pis acting as slaves, each of which has a set of SBHS devices connected below it. How does it overcome the defects of the former architecture? (a) It is load sharing: the load is shared between the master and the slaves, and the master can no longer be overburdened. (b) The Raspberry Pis have turned out, through experimentation, to be more reliable than the USB hubs. (c) It is scalable: the number of SBHS devices connected can be extended as much as we want. And if one of the Raspberry Pis fails, only the SBHS devices connected to that particular slave fail, not the entire system.

Next I'll explain how this master-slave setup actually works. This is the master server with the database and the Django server.
And an Apache gateway acting as a proxy. These are the Raspberry Pis acting as slave servers, with four SBHS boards connected to each of them, and these are the clients. All of the Raspberry Pis are connected over LAN through a switch. All requests concerning the SBHS itself, like setting the heater, setting a fan value, or getting the temperature, are proxied directly to the Raspberry Pis without touching the Django server on the master system, while all the UI and website requests are directed to the Django server. From this we can infer that during an experiment the master server remains unloaded, and each Raspberry Pi handles only four SBHS boards, so they also prove quite reliable. The database is centralized in order to prevent inconsistency of data: there is only one database, and each Raspberry Pi stores its data in that one. And for security purposes, no one other than the IP of the master system can access these Raspberry Pis.

I just have a very small question: are you using the Raspberry Pis just to turn your systems on and off? No, sir, they collect data. And the vision is to replace these boards: right now they are SBHS devices, but more complex devices will be used in the future, where some computation will happen at the slave end. The Raspberry Pis currently act as DAQ (data acquisition) devices, and in the future they can also act as node devices which actually compute some of the stuff that needs to be done on time. So essentially the master server just acts as a task sharing machine, and the tasks are allocated to the Raspberry Pis. Task sharing, if I'm not wrong? If you just want turning on and off, you could use a simple Arduino there. It is not turning on and off: it is setting data, it is communicating with the board.
And also storing the log files and everything. The log files are stored on the local Raspberry Pis initially; every hour, a cron job pulls everything to the master server. Why is it not sent directly to the master server? Because one line is written every second, and one slot runs for one hour, so one experiment is conducted in one hour; it is better to transfer the whole log file at once, since sending it line by line would just create a mess of data.

These were the bugs which cropped up during our project. The first was the zero temperature problem, where the board sometimes returned a temperature value of zero, which we rectified by opening a new serial port in the middle of the experiment. The second was a "cannot connect to server" message which cropped up in the client app due to unhandled exceptions, which we obviously handled. The third: when an initial slot was allocated to a user, there was a foreign key mismatch between the accounts and boards tables, which we rectified. Another bug we rectified was an index out of range exception which arose when we were allocating a board to the initial user and there were no other users online; essentially the code was trying to access an empty list, so we handled that exception too. We also corrected the order in which bookings appear, from the most recent booking to the earliest booking the user had made. For empty experiment log files, we removed the discrepancy that was created while downloading an empty log file. And we discarded a result that the Arduino used to send which caused spurious data to be displayed.

Now, coming to the hardware developments in this project. In the SBHS setup, on the client side there is a script which runs in the background.
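(Editor's sketch of the zero-temperature workaround mentioned above: on a zero reading, close the serial port, open a fresh one, and retry. The `read_temperature` method and the `port_factory` callable are hypothetical stand-ins for the real pyserial-based code.)

```python
def read_temperature(port_factory, port, retries=3):
    """Read the board temperature; on a zero reading (a known glitch),
    reopen the serial port mid-experiment and try again.

    `port_factory` builds a fresh port object (e.g. a pyserial
    serial.Serial in real code); this is an illustration, not the
    project's actual fix.
    """
    value = 0
    for _ in range(retries):
        value = port.read_temperature()
        if value != 0:
            return value
        port.close()
        port = port_factory()  # open a new serial port and retry
    return value
```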
There's a Python script which sends the data, the heater and fan values, to the SBHS devices. Communication from the server to the SBHS devices happens over USB. There was a problem with the SBHS board setup: on the USB channel there are five pins, power plus, power minus, data plus, data minus, and a shield pin. What happens if you don't connect the shield pin to ground? If the shield pin is not directly shorted to ground, all the noise coming in over the channel is induced directly onto the data signals, corrupting them, and the SBHS devices go out of order. In the layout diagram of the ATmega16 board used in the SBHS, on the top right you can see the USB type B female connector; we identified the shield pin and the ground pins and shorted them together to avoid the problem. The symptom earlier was a serial exception: the SBHS devices would return no data while data was being sent to them.

Another system that was implemented was Arduino-based relay board control. The Arduino controls the relay boards, which switch off the devices when they are not in use. A script was written in the Arduino code which accepts a command string, say N01 or F01, and turns the devices on or off based on the logic that if the next slot is empty, the device will be turned off.

It was required that our code be pushed to production, because it would be used by the students of the Institute in the upcoming semester. So before pushing the code to production, we tested it for around one week with a script which automatically books all the slots for all the devices and performs experiments in parallel on all the devices. This script ran for around a week, with intervals in between.
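(Editor's sketch of such a parallel test driver: run one experiment per board concurrently and record per-board results. `run_experiment` is a hypothetical stand-in for whatever function drives a full experiment on one board in the actual test script.)

```python
from concurrent.futures import ThreadPoolExecutor

def stress_test(boards, run_experiment):
    """Run one experiment per board in parallel and collect results.

    `boards` is a list of machine IDs; `run_experiment` drives a full
    experiment on one board (hypothetical here). Returns a dict of
    {mid: result or exception}, so failures are visible per board
    instead of aborting the whole run.
    """
    results = {}
    with ThreadPoolExecutor(max_workers=len(boards)) as pool:
        futures = {pool.submit(run_experiment, mid): mid for mid in boards}
        for fut, mid in futures.items():
            try:
                results[mid] = fut.result()
            except Exception as exc:  # record the failure, keep going
                results[mid] = exc
    return results
```

Recording exceptions per board, rather than letting one raise abort everything, is what lets a week-long unattended run keep exercising the remaining boards.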
After that, almost every feature we built was tested through scripts, and everything is automated with minimum admin intervention.

These are a few challenges that we faced. The code we initially had was on Django 1.6, with a lot of third-party dependencies, including for database interactions: it used South for handling database migrations. We updated the entire project to Django 1.11. The next thing was that there was no proper documentation, so for future maintainers we have written developer documentation: all the functions, methods, and classes we have are properly documented, and it is available on the VLab site. The GitHub repos have READMEs with all the setup instructions, plus the setup script; you can just run the setup script and the server will be up.

What we learned here was web development with an MVC-based framework, namely Django; how to configure Apache web servers; and bash scripting, because detecting the SBHS devices and the webcams relied on kernel events fired when they were plugged in and out. That was an effective learning experience. And we learned the importance of extensive testing before pushing anything to production.

Essentially, the most important thing that can come out of this is extending the entire system to new laboratories. Here we created and set up the Single Board Heater System virtual laboratory; the architecture we designed is load-shared and very reliable, so it can be extended to other laboratories with minor changes in the server code. Thank you.