We're going to get started with the demos from the hackathon. We have eight projects that will demonstrate one after another. Please make a note of them, because we will be doing public voting, and based on that we will decide the winners of this hackathon. And if you don't mind, please move towards the front, because some of the things these teams will be showing are really tiny parts you might not be able to see from the back. The closer you sit, the better you'll be able to see some of the small parts here.

Good evening, everyone. I am Yogesh, working as a junior embedded engineer on IoT products. And I'm Surya Sundaraj, also an embedded engineer. We're here to present the smart LPG stand. Before going into the topic: why is this a smart LPG stand? We are building a stand that senses the weight of the LPG cylinder and can also detect gas leakage. And here's the other thing: in the present scenario, if you want to book a gas cylinder, you have to call your gas agency, which runs an IVRS system on its back end. Every month, when the gas runs out, you have to make that call. We want to close that loop completely, eliminating user involvement in booking the cylinder. So we are building a smart stand that books the gas cylinder for us automatically, and we'll let you customize it to your wish: if you want a refill booked when 2 kg of gas remains in your cylinder, you can configure it that way.

So why do we need a smart LPG stand? Every week, people do a gym workout at home by lifting the gas cylinder to check how much is remaining. It's a tedious thing, and for someone built like me, whatever I lift feels full.
It's a joke. Okay, fine. Now imagine you're in the middle of cooking something delicious and the gas goes off. Imagine the embarrassment — the chicken goes the wrong way, right? And what if the gas leaks? That's a major disaster at home. Often the people inside don't even know what's happening, and it's the neighbors who come over and say there's a gas leak. What if a device handled that? It detects whatever is leaking, sends us a notification, or sounds an alarm, so people are aware of what is happening. The other benefit is that we no longer need to track the date of the previous refill, or the booking number — that's also a tedious thing.

So here's the block diagram. The leftmost part is the cylinder stand we are going to design. It will sit under the cylinder, and the diagram shows everything inside it. We are using an AVR chip, an 8-bit controller, which manages all the power and sensor parts. A load sensor senses the weight of the gas, and an MQ-5 gas sensor detects leakage. We also have an ESP8266 Wi-Fi module and a SIM900 module, to cover the different user cases: it can be a rural-area system or an urban-area system. In rural areas we can't expect everyone to have an Android or iOS phone, so for people who need a lower-end option, there's an SMS-based system that notifies them. And here, the ESP Wi-Fi module uploads the data to the web server; from the web server we can pull it into an app or whatever we need in the future.
Just a minute. We also have the SIM module, which is mainly used for booking the gas automatically, without user interruption. It can book completely without the user's involvement, or it can first send an SMS to the user: "In five days we are going to book your gas automatically, so keep your money ready and be at home."

Next, the technologies we are using in this project: Internet of Things, ThingSpeak, Arduino, an AVR controller, the ESP and GSM modules, and a load cell. ThingSpeak is used here mainly for demo purposes, as we'll show.

And what is the present market for this device? The only devices we see in the market today either weigh the cylinder or sense gas leakage — nobody is automatically booking the gas cylinder. Our main aim is to reduce the cost: we estimate we can sell it for less than 400 rupees, and we estimate a battery life of at least eight months. It's also easy to install. Usually when people buy a product they have to learn what it is and how it works; with ours, you just buy it, place it where it's needed, and use it. And it is safe, and the booking system is automatic.

So, the demo. This is an FSR — we are using an FSR just for demo purposes; in the real product we'll use a load sensor. What's streaming here is the load. For the demo we've inverted the logic: it senses how much pressure we apply, and when I put pressure on it, it calls this mobile, and at the same time it updates thingspeak.com with a graph of how much gas is left.
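A minimal sketch of the auto-booking threshold logic described above. Everything here is illustrative and assumed, not the team's firmware: the tare weight, the threshold, and the `should_book` trigger simply stand in for the stand's decision to dial the agency's IVRS.

```python
# Sketch of the smart LPG stand's booking decision (all values hypothetical).
BOOKING_THRESHOLD_KG = 2.0   # user-customizable: book when this much gas remains
TARE_KG = 15.3               # empty-cylinder weight, calibrated once per cylinder

def remaining_gas_kg(raw_weight_kg: float) -> float:
    """Net gas left = measured weight on the stand minus the empty cylinder."""
    return max(raw_weight_kg - TARE_KG, 0.0)

def should_book(raw_weight_kg: float, already_booked: bool) -> bool:
    """Trigger a booking exactly once when gas drops below the threshold."""
    return (not already_booked) and remaining_gas_kg(raw_weight_kg) < BOOKING_THRESHOLD_KG

# Example: 16.8 kg on the scale -> 1.5 kg of gas left -> time to book a refill.
print(should_book(16.8, already_booked=False))  # -> True
print(should_book(20.0, already_booked=False))  # -> False (4.7 kg still left)
```

The `already_booked` flag matters in practice: without it, the stand would re-dial the agency on every sensor reading below the threshold.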
Actually, this is for an IVR system. The device places the call, and the IVR system answers with a recorded message asking you to press one or two to book. The device does that automatically — it presses one, the gas is booked, and the user gets a notification of it. Any comments?

I'll start my demo by asking you a question. I'd say every person here has faced the worst side of a traffic jam, right? And you've seen ambulances sirening behind you, waiting for the signal to clear. What do you feel at that moment? If only there were some way to clear a path for the ambulance — people do want to help the person inside. Every second matters when it's getting late. So: when an ambulance is stuck in the middle of traffic, what can we do? Our project is a real-time emergency traffic corridor. My name is Rajiv Brahma. And I'm Arvind.

What are we going to do in this project? Suppose an ambulance is approaching a signal. When it comes within 300 meters, the signal automatically turns green, irrespective of the signal's current state. With that, we reduce the time the ambulance takes to reach its destination, so we can save a person's life.

The technologies we have used: Mapbox, an open-source tool for creating maps, in which we can design any feature we need; PubNub as the communication layer, streaming real-time data; and a Raspberry Pi for the hardware. For a full deployment, we'd move to an SoC. So how are we going to solve this problem?
Well, when it comes to real time, we can set a 500-meter or 300-meter trigger distance before the signal. When the ambulance reaches that distance, the signal automatically turns green. We'll run the demo first, then go over the challenges. We've taken one predefined route in Bangalore, from the IAAP signal to the Domelo signal, and we'll see how the model works. Consider this the ambulance, and these the signals. Just a minute — we're running the server code as well. The ambulance is approaching the first signal, and you can see "you reached signal one" and the changes: the first signal is green and the second signal is red. Once it has crossed the signal, it reverts back to its previous state. On the Raspberry Pi, you can see the changes that occurred for the first signal. Then "you reached signal two": when the ambulance reaches signal two, that one automatically turns green, and you can see signals two and three there. In this way we give the ambulance a free path to move along. We can't reduce today's traffic, but we can at least let the ambulance through and cut the time — and save a person's life, because when someone is in a crisis, every second matters, right? So that's our demo. For the first signal we had it set up — I didn't show you, but it happened at the signals; it's kind of hard to see, actually. When it comes to real time, we'd have an app or an attendant in the ambulance that keeps sending updated GPS values every second.
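The proximity trigger in the demo can be sketched as a simple geofence check: the haversine distance between the ambulance's streamed GPS fix and the signal's location, compared against the trigger radius. The radius and the coordinates below are illustrative, not the team's actual values.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

TRIGGER_RADIUS_M = 500  # force the signal green inside this radius

def signal_state(ambulance, signal):
    """Force green while the ambulance is inside the geofence, else leave normal."""
    dist = haversine_m(*ambulance, *signal)
    return "GREEN" if dist <= TRIGGER_RADIUS_M else "NORMAL"

# Illustrative coordinates near Bangalore, roughly 270 m apart -> forced green.
print(signal_state((12.9716, 77.5946), (12.9740, 77.5950)))  # -> GREEN
```

The server would run this check on every GPS update streamed from the ambulance, and revert the signal to its normal cycle once the ambulance leaves the radius.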
On the server side, we calculate the distance and the theta angle — the direction the ambulance is coming from. Assume a four-road junction. From the distance calculation alone we can say the ambulance is approaching that junction, but not from which direction, and every junction has four approaches: east, west, north, south. By combining the theta calculation with the distance calculation, the server can decide which approach the ambulance is on and which side's signal to open. That's how we'd develop it for real time. Right now we're using a Raspberry Pi, but for a real deployment we'd build an SoC to bring the cost down.

All right, imagine this scenario. You're at a coffee shop with about fifty different coffees. You're trying to decide what to buy and you can't think of anything. Your friend told you something the other day, and you can't remember that either. Now the cashier is giving you dirty looks, the people behind you are giving you dirty looks, everybody's getting impatient, your heart rate is rising — it becomes a huge issue. It's a real-world problem, and we have a real-world solution. I'm Vaishak, and this is Invenio.

Invenio is a recommendation system — an Internet of Things project built on recommender systems. It finds your location, and based on your location and past interests it builds recommendations using our custom recommendation engine and interacts with real-world displays to give you real-world suggestions. The problem with traditional recommendation systems is that they live on your laptop or your phone. They give you good recommendations, but you don't tend to remember them when you're standing in a shop wanting to buy something.
You're flustered in the moment; you don't know what you want to buy. So we've combined hardcore recommendation with hardcore IoT to create a responsive environment — one that responds to what you're doing and what your preferences are. The basic workflow: whenever a user enters a café or outlet, their low-powered Bluetooth device transmits a unique identifier, which is picked up by our device. Our device then connects to the Facebook Graph API, checks the user's likes and dislikes, and produces a complete list of what they like and what the system would recommend. For the stack, we used Flask running as the server, with Jinja templating and jQuery for the front end.

We'll show you a demo of how it works. Imagine this is a bookstore. All of our phones have Bluetooth, right? And they tend to be turned on quite a lot of the time. So we can use the Bluetooth in our phones to respond to real-world environments. So here, we're in a bookstore, and I'm here to buy a book. This is my phone — a standard phone with Bluetooth enabled, sitting in my pocket. I'm just a clueless customer wandering in, so I walk near the bookstore... and, well, blame the Wi-Fi. This is my phone, and this here is a low-energy Bluetooth device. It doesn't have to be a phone — it could be a small chip you can get for around 50 rupees. All you need to do is embed it in your shop, near your information kiosk, anywhere. What it does is detect me: it detects my MAC address and finds out who I am. After that, our back end collects data from all kinds of social media — right now we're using the Facebook Graph API. So these are all the books that I have.
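A minimal sketch of that flow — detected MAC address to personalized suggestions. The user table, MAC, and catalog below are hypothetical stand-ins for what the real system would fetch from the Facebook Graph API and serve through Flask.

```python
# Hypothetical stand-ins for the BLE scan result and the Graph API data;
# in the real system, a Flask server would serve this over HTTP to the display.
KNOWN_USERS = {"aa:bb:cc:dd:ee:ff": {"name": "Vaishak", "likes": ["thrillers"]}}
CATALOG = {"thrillers": ["Kane and Abel", "Not a Penny More"]}

def recommend_for_mac(mac: str) -> list:
    """Map a detected MAC address to book suggestions for that visitor."""
    user = KNOWN_USERS.get(mac.lower())
    if user is None:
        return []  # unknown visitor: show nothing personal
    return [book for genre in user["likes"] for book in CATALOG.get(genre, [])]

print(recommend_for_mac("AA:BB:CC:DD:EE:FF"))
# -> ['Kane and Abel', 'Not a Penny More']
```

The lookup is case-normalized because BLE scanners report MAC addresses in varying letter cases; an unknown MAC degrades gracefully to an empty recommendation list rather than an error on the kiosk display.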
These are the kinds of books I usually read — if you go to my Facebook right now, you'll see I've read quite a few Jeffrey Archers — so the system is recommending books I would like. Now, the thing about this is that it's scalable. We want this to be a platform, not just a one-off. You could use it in a restaurant, if you like. For example, you walk into McDonald's and there's a huge menu in front of you, and you don't know what to order. But the system could look into your Twitter history, figure out what you like to eat, and customize the menu based on what you usually order. So you walk in, what you usually like is up there, it takes you two minutes to decide, you order, and you get the hell out of there. No delay, no line, nothing to worry about. You could use this system practically anywhere that has a Bluetooth low-energy chip. In a taxi, for example: somewhere in the cloud it's stored what temperature you like your AC at, so you get into the car and the thermostat sets itself. Sorry — lunch now.

Okay, let's come to the next team. This is Harish, he's Nikhil, and this guy is Arshad. We are B-perceivers. He has a BS in astrophysics. Actually, there's no link between our backgrounds and what we've built, but we've learned every part of this and built it with the experience we picked up from what we learned. What we've built is called Socosys — it's short for social-media-based control system. What it does is use social media to get our daily work done.

Good afternoon, all. Before I start my demo, I'd like to ask the audience an open question: is there anybody here who is not on any social network — Facebook, Twitter?
Practically, there's nobody here who isn't using a social network. That shows how much we're addicted to social networking — it's a part of our lives. We like to know what's happening in our neighbor's house, and we like to showcase what's happening in our own. We've used that idea for control: an automation system driven by social sites. The reason for building automation on top of social media is that it's user-friendly. People already use these sites, so they find it easy, and we don't need to build any extra app. It's also a cool factor — people would rather be on Facebook or other social networks than actually socializing.

The concept, taking Twitter as the example: we watch for specific keywords on Twitter with a Python program, and when one of those keywords appears in a tweet, we can open or close our door. And it's not just opening and closing doors — we can access our home appliances too: switching on lights, fans, everything. It's basically a complete home automation solution. You name it, we can hook it up.

So, the demo. We have a dedicated Twitter account for this, named SoCosys 2015. Now my friend will tweet a code with the hashtag "open door", and let's see what it does. It actually opens the door. It's so cool — we don't need any app, and we don't need to spend energy learning how a new app works; we already know how to use Twitter. Just tweet, and it opens the door. Now let's close it with the next code: hashtag "close door". Okay, this is just a demo. Now, say you're coming back from the office, you're tired, you want somebody to cook, but there's nobody home.
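A minimal sketch of the keyword dispatch described above. The hashtags, the command bytes, and the `send_to_arduino` stub (standing in for a serial write to the Arduino) are all illustrative — the real system polls Twitter from Python and drives the door over serial.

```python
# Map control hashtags to single-byte commands for the Arduino (illustrative).
COMMANDS = {"#opendoor": b"O", "#closedoor": b"C", "#lighton": b"L"}

sent = []  # stand-in for the serial port, e.g. pyserial's Serial.write

def send_to_arduino(cmd: bytes) -> None:
    """In the real system this would write one command byte over serial."""
    sent.append(cmd)

def handle_tweet(text: str) -> None:
    """Scan an incoming tweet for known control hashtags and dispatch them."""
    for word in text.lower().split():
        if word in COMMANDS:
            send_to_arduino(COMMANDS[word])

handle_tweet("Heading home now #OpenDoor please")
print(sent)  # -> [b'O']
```

Lower-casing the tweet first makes the hashtags case-insensitive, which matters because Twitter preserves whatever casing the user typed. A real deployment would also restrict commands to tweets from authorized accounts, since anyone can mention a public hashtag.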
So, you can hook this system up to your microwave or something: keep food in it in the morning, tweet on the way home, and your food is ready before you arrive. That's so cool, no? And you'll love it already, because you use social media every day.

Let me come to the technology used here. We used an Arduino for communicating between our devices and the Python code. The Python program — running on a computer here — is actually a substitute for an Ethernet shield or a Wi-Fi module; we could use a Raspberry Pi instead, and we can adapt that choice for use case and cost. That's the next step of our automation; for this demonstration we've used a computer. So that's all about our project. I think this is a technology of the future, and it can change the world. Thank you all. Thank you.

Good afternoon, guys. This is Sonika, and I'm Tarun. This is our app, Log My Day — it's more than a diary. Coming to journaling and writing diaries: it's a habit, and at this point in time you have mobile apps for it, so it's digitized. People who already write tend to use these apps because they're convenient on the fly, and you also have new users coming into this field because it isn't as hard as being disciplined enough to sit down and write. So you have a set of user groups who are actively continuing this activity, and those are the people we're targeting. The existing apps — for example Flava Notes, Private Diary, and Diary Notes — have each been rated by many users with high average ratings. What they tend to provide is security with a lock, plus various input media: audio, video, photo. And various other features.
I'm not saying they aren't good at what they do — the point is there's a lot more that could be done, and that's what we're leveraging. Where do we fit in? We do textual analysis — NLP, natural language processing — using CoreNLP, the NLP toolkit from Stanford. The features we use: the POS tagger (parts-of-speech tagger), the dependency parser, and lemmatization and stemming — which cut words like "swimming" and "running" down to their root forms. And the final piece, the core sentiment analysis, is what we actually provide to the user. Another thing: we wanted to produce a score between zero and five, where two means neutral, indicating whether the user had a good day or a bad day — basically the user's sentiment, or mood.

I'll go quickly through how it works. We did a top-down parse initially, because the information the tool provides is at the sentence and sub-sentence level — the phrasal and token level. We kept those scores, because when we climbed back up — when we did the bottom-up pass — we had to merge the sentences together to get a better score. For the bottom-up scoring, we used our own heuristic: a weighted mean over what we call the critical words, each carrying a sentiment value, weighted also by sentence length. From those parameters we compute a weighted mean and arrive at a final score. And now I'll show you the demo. This is the input; let me give you a sample sentence, which I'll read out.
It says: "She said she was sad that I've been hanging out with Jane more lately and thinks that I don't want to be her friend anymore. I can't believe she thinks that, especially after talking with her on the phone for hours and hours last month while she was going through her breakup with Nick." Let's see the score the system generates. It comes up with a score of one. It's lucky that it's exactly one — normally a fractional value comes out. That figure comes from merging the sentence-level and sub-sentence-level scores with the heuristic mean. You can check out the other examples given here. A direct example like "this is a good evening" gets a high score of three. And this bigger passage gets a four, because the token-level and sentence-level scores are aggregated on top of each other at the paragraph level. From here, what we can do is generate a pattern on the temporal axis: these are the days that were good for me, these were the bad ones — like spotting a rough patch.

That's the app at the moment. What we intend to do next: we also have other information at hand, such as coreference analysis, which, for a given entity in the text, gives me the related words; and the dependency-parse output, which tells me how two words depend on each other. So with this bag of tools, I could extract highs and lows, likes and dislikes, and changes in life patterns over time, and actually tell the user "you're doing this wrong, in this way" — and give the reasons, thanks to the dependency-parse output. We actually experimented with it, and it turned out pretty good — obviously not as good as a human.
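The length-weighted mean they describe might be sketched like this. The 0-5 per-sentence scores below are illustrative stand-ins for what CoreNLP's sentiment annotator would emit, and the choice of word count as the weight is an assumption, one plausible reading of "depending on the sentence length."

```python
def paragraph_score(sentences):
    """Weighted mean of per-sentence sentiment scores (0-5, where 2 = neutral),
    weighting each sentence by its word count so longer sentences count more."""
    weights = [len(text.split()) for text, _ in sentences]
    scores = [score for _, score in sentences]
    total = sum(weights)
    if total == 0:
        return 2.0  # empty entry: call it neutral
    return sum(w * s for w, s in zip(weights, scores)) / total

# Toy diary entry: (sentence, sentiment score the annotator assigned).
entry = [
    ("She said she was sad that I have been hanging out with Jane", 1),
    ("I cannot believe she thinks that", 1),
    ("This is a good evening", 3),
]
print(round(paragraph_score(entry), 2))  # -> 1.42, a fractional paragraph score
```

This shows why the demo's whole-number result was "lucky": the weighted mean of integer sentence scores is generically fractional, and only rounds to an integer by coincidence.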
But yeah, it's pretty good for our app. The challenges we faced: the coordinating conjunction — that's a POS tag — was hard to parse, because those words end up very sparse, very far apart in the parse tree. And second, the training corpora that exist are very domain-specific: the research going on in this field at the moment covers ads, customer reviews, restaurants, movie reviews — not the personal, emotional sentiment level. That's the gap we're pitching into. And that's our app. Thank you.

Hi, everyone. I'm David, this is Aparna, and this is Sumit. So, show of hands: how many of you have seen Iron Man 3? Please, show of hands. How this project started was that the three of us were watching Iron Man 3 a couple of days ago, on our way to the hackathon, and we were watching this exact scene — this is what inspired us. Let's have a short look at it. Here we go.

Okay, so that's really far-fetched — don't worry, Iron Man is not going to show up here. But what we wanted to do was implement gesture control, what Tony Stark is doing there, for the web. There were a lot of options we tried to figure out. We thought we could go with the Xbox Kinect, which is used to play games, or with the Leap Motion, but what we figured out at the end of the day is that those are too expensive for our taste — frankly, we can't afford them. So what we needed was a solution that was very simple, that required no additional hardware attached to your system, and that was scalable and extendable — usable for a variety of applications, not just limited to what we develop, so we can explore and attach it to anything we want.
Something very simply usable — something with a thousand "-ables" like that. But the key thing we wanted was for it to be free. So we came up with something known as Puppeteer. What Puppeteer does is give you the ability to access the entire web using just gestures. It's a very simple, very lightweight Chrome extension: you just download it and attach it to your browser. All you need is Chrome — no limitations on what kind of OS you have. And once you have it, you can control the web using the predefined set of gestures we provide. If you're on YouTube and you want to move to the next video or pause it, you can use the gestures we have. Using SlideShare and want the next slide? Reading an eBook and want the next page? The number of use cases is simply endless. But enough talking — let's go into a demo and see how this really works.

Okay, for this demo I'd like a volunteer — maybe Peter, if you want to come up. Actually, I'd prefer someone without specs, because it works better without them. So this is the application; we'll explain later exactly what it does. So that was just a simple demo of what Puppeteer can do. Now we'd like to tell you more about Puppeteer — the different modes it supports. Currently, we support three modes. One is eye-tracking, which you just saw, and another is media; we'll give you a demo of that as well. Devrat, go ahead and show them. This is normal bespoke.js, a micro-framework for slides, and I'll try to use Puppeteer with it. Let's see how it goes. So, basically, I'm controlling the slides via hand gestures — I can also go to the previous slide. And we can also control SlideShare, as of now.
So, this is another portal — SlideShare, which everyone uses. We integrated it with SlideShare just five minutes ago, so I'm not sure whether it will work. By the way, any guesses how many lines of code it took to integrate SlideShare with Puppeteer? Any guesses? Come on, Peter, any guess? Actually, it's one. Just one line of code. That's how easy it is to integrate your application with Puppeteer. So, do you guys want to see more demos? Okay — maybe later; we're sitting right here, so you can get demos from us.

You might be wondering how it's working, right? Simple: science. But on a serious note, we've used a few libraries. What Puppeteer basically does is detect events from the camera: a swipe left is an event, a swipe right is an event. We've made a library around that — basically a jQuery for the developers out here. You can use it as an event-based library, detect gestures with it, and integrate it with anything. It also uses a few other libraries, like objectdetect to track the eyes, and jQuery, of course.

When we started a couple of days ago, we weren't even familiar with how to make a Chrome extension. The reason we were able to hack this together in two days was the open-source libraries out there — and, of course, help from Stack Overflow. But we didn't want to keep this to ourselves. We wanted to share it, give it to people, so they can build on it, explore it, and tell us how to make it better. Because at the end of the day, this is image processing, and there are a ton of ways to enhance it and a ton of gestures to add. A simple example: right now it works on the X and Y axes; you could add depth just by taking the surface area.
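The "jQuery for gestures" idea — register a handler for a named gesture event, and the camera pipeline fires it on detection — is just an event bus. Puppeteer itself is JavaScript and its real API isn't shown in the talk, so this Python sketch only illustrates the registration pattern; every name here is hypothetical.

```python
# Illustrative event-dispatch pattern, not Puppeteer's actual (JavaScript) API.
class GestureBus:
    """Register handlers for named gesture events; fire them on detection."""
    def __init__(self):
        self.handlers = {}  # gesture name -> list of callbacks

    def on(self, event, handler):
        """Subscribe a handler, jQuery-style: bus.on('swipe_left', fn)."""
        self.handlers.setdefault(event, []).append(handler)

    def emit(self, event):
        """Called by the camera pipeline when it classifies a gesture."""
        for handler in self.handlers.get(event, []):
            handler()

bus = GestureBus()
fired = []
bus.on("swipe_left", lambda: fired.append("next_slide"))
bus.on("swipe_right", lambda: fired.append("prev_slide"))

bus.emit("swipe_left")
print(fired)  # -> ['next_slide']
```

This separation is what makes the claimed one-line SlideShare integration plausible: a site only binds its existing "next slide" action to a gesture name, while all the image processing stays inside the library.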
We got a ton of ideas in just a couple of days, and we want to find out what else can be done with this. These are the download sites — you can go and download it from here. We'll also be posting the Git repository soon and making it open source, so watch out for that. The last thing I want to talk about: since everything is open source, we thought, let's provide people and developers an ecosystem they can develop on. Our vision doesn't stop here — we'll provide more gestures and tons of other features, and we need the help of other developers to achieve this. Just to add to what Swamiji said: what we wanted was not to make a Chrome extension that you add to your browser, that works for a couple of sites, that you use and then forget. What we wanted was to create a library. People can come, take interactions from their websites, associate them with the gestures in our library — which we'll keep adding to — and integrate it with anything they want. So our user base is everybody who has a Chrome browser, and anybody who owns a website and wants to integrate it with Puppeteer. Thank you.

How many of you can see this? I hope somebody got that right. I'm Rajat. I'll start with a question: what do you think are the greatest inventions of human beings? Any answers? Huh? Not cheese? Okay. According to me, these are two of the greatest: travel, and the digital age. Travel lets you go to an information point and gather information; the digital age lets the information come to you. These have really added to the evolutionary path of human beings. So it's all about information, right?
So, if information is such a big game changer in our lives, then people with power, institutions, and anyone who can modify it are already playing around with our lives. The idea is to democratize access to information. Facebook, Google, and YouTube have already done a lot towards that; my idea is to take it to another level: asking questions through location-based polling. If you're at a conference, in a new place, or buying a house, you can post a question to the community there with some options, and they can start polling on it. This way you have access to information about that place, and you can easily be sure enough before you buy something or take a decision in your life. Nobody can fake around with your life.

I'll start with the demo. It's a simple app on Android and iOS, built on Ionic. These are some of the questions that have been posted, and you can see the numbers here. You can post a new question and add options here — say the question is "how are the prices here?", with an option "high" — and you can reorder the options. (I guess the screen resolution is screwed up.) So that's how you put a question up to the community, people start polling, and if you want to respond to a question, you vote on it.

It's a simple app, but it can be a great way to democratize this access to information. It could be a game changer in choosing your government officials for a local area: you can start polling before the election and see the wave around each politician. You can go to a hospital and see how people have voted, yes or no, on the quality of service it provides; you can go to an educational institution and see how people are responding to its service. A simple app, but a potential game changer.
The business model is not decided yet; that can be decided later, depending on how people respond. It could become an organization — a non-profit one — or a paid service later, where people pay for premium features like asking more questions or seeing the results around a location. Thank you.

The project we've done here is wholly meant for a particular group of people who use prosthetics. I mean, think of a person who doesn't have fingers. He can't write — but he wants to. In that case, he can use our project, which I, or we, call the Invisible Pencil. Let's go to the demonstration. I'm sorry. What is our next step of implementation? The Kinect is not just a sensor; it's a cluster of sensors — it has microphones and many other things. In this project, we used gestures to get colors and to get letters. Next, if I say "A", it should print "A". And what we've done in this application: standing here, when I point towards the Kinect, I make my fingertip the closest point, and I have the Kinect track that closest point. Taking the location of that pixel, I draw a circle there, and as I move the point, I draw circles over those points. If I'm writing an "A", I make the circles follow that path, thereby getting an "A". In this way, you can write anything. And think about this: whoever sees the Kinect will say this is a very costly project — the Kinect sensor is expensive — but what I want to say is that the Kinect is just a cluster of sensors, and in this case I'm using only the depth sensor. To be more specific, I'm just using the IR sensor, and since I don't have a separate IR source here, I'm using the IR projector built into it.
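The closest-point tracking described above can be sketched as a per-frame minimum over the depth image: find the nearest valid pixel (the pointing fingertip) and append a circle at that location to the stroke. The toy depth frame below is illustrative, standing in for the Kinect's depth stream; zeros mean the sensor returned no reading.

```python
def closest_pixel(depth):
    """Return (row, col) of the nearest valid depth reading in the frame.
    A value of 0 means the sensor got no reading for that pixel, so skip it."""
    best, best_px = float("inf"), None
    for r, row in enumerate(depth):
        for c, d in enumerate(row):
            if 0 < d < best:
                best, best_px = d, (r, c)
    return best_px

# Toy 3x4 depth frame in millimeters; the fingertip is the nearest point.
frame = [
    [0,    1800, 1750, 1800],
    [1700,  620, 1680, 1720],   # 620 mm: the pointing fingertip
    [1690, 1650, 1660, 1700],
]

stroke = []  # one circle center per tracked frame; together they trace a letter
stroke.append(closest_pixel(frame))
print(stroke)  # -> [(1, 1)]
```

Running this on every incoming depth frame and accumulating the returned pixels is what turns the moving fingertip into a drawn path — the trail of circles that forms the letter.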
And let's face the fact that not everybody is as lucky as those of us sitting here. Many have deformities; many don't have fingers; but they are just as determined as we are. So let's give them a chance to write as well — to write, and to stop. Thank you. As I showed you how to write letters: from letters we get more letters, we concatenate them to get words, from words we get a sentence, and from sentences a paragraph. In this way, people with prosthetics can also write like us. Thank you, everyone.